[ { "msg_contents": "Ok, looks like the FreeBSD community is interested in PostgreSQL\nperformance, or at least enough to investigate it.\n\nAnyone here have experience hacking on FreeBSD?\n\n----- Forwarded message from Kris Kennaway <[email protected]> -----\n\nX-Spam-Checker-Version: SpamAssassin 3.1.6 (2006-10-03) on noel.decibel.org\nX-Spam-Level: \nX-Spam-Status: No, score=-0.9 required=5.0 tests=AWL,BAYES_50,\n\tFORGED_RCVD_HELO,SPF_PASS autolearn=no version=3.1.6\nDate: Sun, 25 Feb 2007 19:22:35 -0500\nFrom: Kris Kennaway <[email protected]>\nTo: [email protected]\nUser-Agent: Mutt/1.4.2.2i\nCc: [email protected]\nSubject: Anyone interested in improving postgresql scaling?\nPrecedence: list\nErrors-To: [email protected]\n\nIf so, then your task is the following:\n\nMake SYSV semaphores less dumb about process wakeups. Currently\nwhenever the semaphore state changes, all processes sleeping on the\nsemaphore are woken, even if we only have released enough resources\nfor one waiting process to claim. i.e. there is a thundering herd\nwakeup situation which destroys performance at high loads. Fixing\nthis will involve replacing the wakeup() calls with appropriate\namounts of wakeup_one().\n\nKris\n\n_______________________________________________\[email protected] mailing list\nhttp://lists.freebsd.org/mailman/listinfo/freebsd-current\nTo unsubscribe, send any mail to \"[email protected]\"\n\n\n----- End forwarded message -----\n\n-- \nJim C. Nasby, Database Architect [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Tue, 27 Feb 2007 12:32:20 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": true, "msg_subject": "[[email protected]: Anyone interested in improving postgresql\n\tscaling?]" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> Ok, looks like the FreeBSD community is interested in PostgreSQL\n> performance, or at least enough to investigate it.\n\nI think this guy is barking up the wrong tree though, because we don't\never have multiple processes sleeping on the same sema. Not that\nthere's anything wrong with his proposal, but it doesn't affect Postgres.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Feb 2007 13:54:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [[email protected]: Anyone interested in improving postgresql\n\tscaling?]" } ]
[ { "msg_contents": "Hi all,\n\nDo someone already had some problem with performances using a pentium\nD (64 bits) and postgres 8.2.3 on a redhat enterprise update 2 ?\nI did the install from sources and nothing change... I have a RAID 0\nfor data and 3Gb of RAM and my inserts rate is quite low, 8333 inserts/\nsec (lower than on my laptop which is 10526 inserts/sec).\nI suspect a problem with the CPU because using gkrellm, the use of 1\nCPU is always quite low... Is it normal ?\n\nMany thanks for your help,\n\n\nJoël\n\n", "msg_date": "28 Feb 2007 07:58:14 -0800", "msg_from": "\"hatman\" <[email protected]>", "msg_from_op": true, "msg_subject": "performances with Pentium D" }, { "msg_contents": "I suspect the difference is your disk subsystem. IDE disks (in your laptop \nI assume) quite often (almost always!!) ignore fsync calls and return as \nsoon as the data gets to the disk cache, not the physical disk. SCSI disks \nare almost always more correct, and wait until the data gets to the physical \ndisk before they return from an fsync call.\n\nI hope this helps.\n\nRegards,\n\nBen\n\"hatman\" <[email protected]> wrote in message \nnews:[email protected]...\nHi all,\n\nDo someone already had some problem with performances using a pentium\nD (64 bits) and postgres 8.2.3 on a redhat enterprise update 2 ?\nI did the install from sources and nothing change... I have a RAID 0\nfor data and 3Gb of RAM and my inserts rate is quite low, 8333 inserts/\nsec (lower than on my laptop which is 10526 inserts/sec).\nI suspect a problem with the CPU because using gkrellm, the use of 1\nCPU is always quite low... Is it normal ?\n\nMany thanks for your help,\n\n\nJo�l\n\n\n", "msg_date": "Thu, 1 Mar 2007 11:00:38 -0000", "msg_from": "\"Ben Trewern\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performances with Pentium D" }, { "msg_contents": "Hi Ben,\n\nThanks for you answer.\n\nI was thinking about CPU speed bottleneck because one of the CPU load\nwas always quite low on the server (Pentium D, dual core) and on the\nother hand, the CPU load on my laptop was always very high. After some\nmore testing (using a threaded client software which does the same\ninserts using 10 parallel connections), i was able to have the other\nCPU arround 90% too :-)\nThen the INSERT speed reached 18'181 inserts/sec !!\n\nSo now i'm wondering if i reached the disk limit or again the CPU\nlimit... Anyway, thanks a lot for your advice, i didn't know this\ndifference between fsync implementation on SCSI and IDE !\n\nJoël\n\nOn Mar 1, 12:00 pm, \"Ben Trewern\" <[email protected]> wrote:\n> I suspect the difference is your disk subsystem. IDE disks (in your laptop\n> I assume) quite often (almost always!!) ignore fsync calls and return as\n> soon as the data gets to the disk cache, not the physical disk. SCSI disks\n> are almost always more correct, and wait until the data gets to the physical\n> disk before they return from an fsync call.\n>\n> I hope this helps.\n>\n> Regards,\n>\n> Ben\"hatman\" <[email protected]> wrote in message\n>\n> news:[email protected]...\n> Hi all,\n>\n> Do someone already had some problem with performances using a pentium\n> D (64 bits) and postgres 8.2.3 on a redhat enterprise update 2 ?\n> I did the install from sources and nothing change... 
I have a RAID 0\n> for data and 3Gb of RAM and my inserts rate is quite low, 8333 inserts/\n> sec (lower than on my laptop which is 10526 inserts/sec).\n> I suspect a problem with the CPU because using gkrellm, the use of 1\n> CPU is always quite low... Is it normal ?\n>\n> Many thanks for your help,\n>\n> Joël\n\n\n", "msg_date": "1 Mar 2007 14:28:09 -0800", "msg_from": "\"hatman\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performances with Pentium D" } ]
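A minimal sketch of the batching idea implicit in this thread: on a disk that honors fsync, every single-row commit waits for the platter, so grouping many rows per transaction (or using COPY) usually moves the insert rate far more than CPU differences do. The table name below is hypothetical.

BEGIN;
INSERT INTO measurements (id, val) VALUES (1, 'a');
INSERT INTO measurements (id, val) VALUES (2, 'b');
-- ... many more rows ...
COMMIT;   -- one fsync for the whole batch instead of one per row

-- Bulk path, reading a server-side file (COPY ... WITH CSV is available since 8.0):
COPY measurements (id, val) FROM '/tmp/measurements.csv' WITH CSV;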
[ { "msg_contents": "\nAs the subject says. A quite puzzling situation: we not only upgraded the\nsoftware, but also the hardware:\n\nOld system:\n\nPG 7.4.x on Red Hat 9 (yes, it's not a mistake!!!)\nP4 HT 3GHz with 1GB of RAM and IDE hard disk (120GB, I believe)\n\nNew system:\nPG 8.2.3 on Fedora Core 4\nAthlon64 X2 4200+ with 2GB of RAM and SATA hard disk (250GB)\n\nI would have expected a mind-blowing increase in responsiveness and\noverall performance. However, that's not the case --- if I didn't know\nbetter, I'd probably tend to say that it is indeed the opposite \n(performance\nseems to have deteriorated)\n\nI wonder if some configuration parameters have somewhat different\nmeaning, or the considerations around them are different? Here's what\nI have in postgresql.conf (the ones I believe are relevant) :\n\nmax_connections = 100\nshared_buffers = 1024MB\n#temp_buffers = 8MB\n#max_prepared_transactions = 5\n#work_mem = 1MB\n#maintenance_work_mem = 16MB\n#max_stack_depth = 2MB\nmax_fsm_pages = 204800\ncheckpoint_segments = 10\n\nHere's my eternal confusion --- the kernel settings for shmmax and shmall:\nI did the following in /ec/rc.local, before starting postgres:\n\necho -n \"1342177280\" > /proc/sys/kernel/shmmax\necho -n \"83886080\" > /proc/sys/kernel/shmall\n\nI still haevn't found any docs that clarify this issue I know it's not \nPG-specific,\nbut Linux kernel specific, or maybe even distro-specific??)\n\nFor shmall, I read \"if in bytes, then ...., if in pages, then ....\", and \nI see\na reference to PAGE_SIZE (if memory serves --- no pun intended!);\nHow would I know if the spec has to be given in bytes or in pages? \nAnd if in pages, how can I know the page size?? I put it like this to\nmaintain the ratio between the numbers that were by default. But I'm\nstill puzzled by this.\n\nPostgreSQL does start (which it wouldn't if I put shmmax too low),\nwhich suggests to me that the setting is ok ... Somehow, I'm extremely\nuncomfortable with having to settle for a \"seems like it's fine\".\n\nThe system does very frequent insertions and updates --- the longest\ntable has, perhaps, some 20 million rows, and it's indexed (the primary\nkey is the combination of two integer fields). This longest table only\nhas inserts (and much less frequent selects), at a peak rate of maybe\none or a few insertions per second.\n\nThe commands top and ps seem to indicate that postgres is quite\ncomfortable in terms of CPU (CPU idle time rarely goes below 95%).\nvmstat indicates activity, but it all looks quite smooth (si and so are\nalways 0 --- without exception).\n\nHowever, I'm seeing the logs of my application, and right now the\napp. is inserting records from last night around midnight (that's a\n12 hours delay).\n\nAny help/tips/guidance in troubleshooting this issue? It will be\nmuch appreciated!\n\nThanks,\n\nCarlos\n--\n\n", "msg_date": "Wed, 28 Feb 2007 12:11:46 -0500", "msg_from": "Carlos Moreno <[email protected]>", "msg_from_op": true, "msg_subject": "Upgraded to 8.2.3 --- still having performance issues" }, { "msg_contents": "Carlos Moreno <[email protected]> writes:\n> I would have expected a mind-blowing increase in responsiveness and\n> overall performance. However, that's not the case --- if I didn't know\n> better, I'd probably tend to say that it is indeed the opposite \n> (performance seems to have deteriorated)\n\nDid you remember to re-ANALYZE everything after loading up the new\ndatabase? 
That's a frequent gotcha ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Feb 2007 12:20:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgraded to 8.2.3 --- still having performance issues " }, { "msg_contents": "Tom Lane wrote:\n\n>Carlos Moreno <[email protected]> writes:\n> \n>\n>>I would have expected a mind-blowing increase in responsiveness and\n>>overall performance. However, that's not the case --- if I didn't know\n>>better, I'd probably tend to say that it is indeed the opposite \n>>(performance seems to have deteriorated)\n>> \n>>\n>\n>Did you remember to re-ANALYZE everything after loading up the new\n>database? That's a frequent gotcha ...\n> \n>\n\nI did.\n\nI didn't think it would be necessary, but being paranoid as I am, I figured\nlet's do it just in case.\n\nAfter a few hours of operation, I did a vacuumdb -z also. But it seems to\ncontinue downhill :-(\n\nThanks,\n\nCarlos\n--\n\n", "msg_date": "Wed, 28 Feb 2007 13:37:25 -0500", "msg_from": "Carlos Moreno <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgraded to 8.2.3 --- still having performance issues" }, { "msg_contents": "Tom Lane wrote:\n\n>Carlos Moreno <[email protected]> writes:\n> \n>\n>>I would have expected a mind-blowing increase in responsiveness and\n>>overall performance. However, that's not the case --- if I didn't know\n>>better, I'd probably tend to say that it is indeed the opposite \n>>(performance seems to have deteriorated)\n>> \n>>\n>\n>Did you remember to re-ANALYZE everything after loading up the new\n>database? That's a frequent gotcha ...\n> \n>\n\nI had done it, even though I was under the impression that it wouldn't be\nnecessary with 8.2.x (I still chose to do it just in case).\n\nI've since discovered a problem that *may* be related to the deterioration\nof the performance *now* --- but that still does not explain the machine\nchoking since last night, so any comments or tips are still most welcome.\n\nThanks,\n\nCarlos\n--\n\n", "msg_date": "Wed, 28 Feb 2007 14:36:55 -0500", "msg_from": "Carlos Moreno <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Upgraded to 8.2.3 --- still having performance issues" }, { "msg_contents": "Carlos Moreno wrote:\n> Tom Lane wrote:\n> \n>> Carlos Moreno <[email protected]> writes:\n>> \n>>\n>>> I would have expected a mind-blowing increase in responsiveness and\n>>> overall performance. However, that's not the case --- if I didn't know\n>>> better, I'd probably tend to say that it is indeed the opposite \n>>> (performance seems to have deteriorated)\n>>> \n>>\n>> Did you remember to re-ANALYZE everything after loading up the new\n>> database? 
That's a frequent gotcha ...\n>> \n>>\n> \n> I had done it, even though I was under the impression that it wouldn't be\n> necessary with 8.2.x (I still chose to do it just in case).\n> \n> I've since discovered a problem that *may* be related to the deterioration\n> of the performance *now* --- but that still does not explain the machine\n> choking since last night, so any comments or tips are still most welcome.\n> \n> Thanks,\n> \n> Carlos\n> -- \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\nAnd the problem that *may* be related is....?\n\nAll the information is required so someone can give you good information...\n", "msg_date": "Wed, 28 Feb 2007 17:20:41 -0300", "msg_from": "Rodrigo Gonzalez <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgraded to 8.2.3 --- still having performance issues" }, { "msg_contents": "Rodrigo Gonzalez wrote:\n\n>\n>> I've since discovered a problem that *may* be related to the \n>> deterioration\n>> of the performance *now* --- but that still does not explain the machine\n>> choking since last night, so any comments or tips are still most \n>> welcome.\n>> [...]\n>>\n> And the problem that *may* be related is....?\n>\n> All the information is required so someone can give you good \n> information...\n\n\nYou are absolutely right, of course --- it was an instance of \"making a \nlong\nstory short\" for everyone's benefit :-)\n\nTo make the story as short as possible: I was running a program that does\nclean up on the database (move records older than 60 days). That program\ncreates log files, and it exhausted the available space on the /home \npartition\n(don't ask! :-)).\n\nThe thing is, all of postgres's data is below the /var partition (which \nhas a\ntotal of 200GB, and still around 150GB available) --- in particular, the\npostgres' home directory is /var/users/postgres, and the database cluster's\ndata directory is /var/users/postgres/data --- that tells me that this \nissue with\nthe /home partition should not make postgres itself choke; the clean up\nprogram was totally choking, of course.\n\nAnd yes, after realizing that, I moved the cleanup program to some place\nbelow the /var directory, and /home now has 3.5GB available.\n\nThanks,\n\nCarlos\n--\n\n", "msg_date": "Wed, 28 Feb 2007 15:53:32 -0500", "msg_from": "Carlos Moreno <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Upgraded to 8.2.3 --- still having performance issues" }, { "msg_contents": "\nAre there any issues with client libraries version mismatching\nbackend version?\n\nI'm just realizing that the client software is still running on the\nsame machine (not the same machine where PG is running) that\nhas PG 7.4 installed on it, and so it is using the client libraries 7.4\n\nAny chance that this may be causing trouble on the performance\nside? (I had been monitoring the logs to watch for SQLs now\nfailing when they worked before... But I was thinking rather\nincompatibilities on the backend side ... )\n\nThanks,\n\nCarlos\n--\n\n", "msg_date": "Wed, 28 Feb 2007 16:33:23 -0500", "msg_from": "Carlos Moreno <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Upgraded to 8.2.3 --- still having performance issues" }, { "msg_contents": "Carlos Moreno skrev:\n> The system does very frequent insertions and updates --- the longest\n> table has, perhaps, some 20 million rows, and it's indexed (the primary\n> key is the combination of two integer fields). 
This longest table only\n> has inserts (and much less frequent selects), at a peak rate of maybe\n> one or a few insertions per second.\n\nOne or a few inserts per second doesn't sound like that much. I would \nhave expected it to work. If you can you might want to group several \ninserts into a single transaction.\n\nA standard hint is also to move the WAL onto its own disk. Or get a disk \ncontroller with battery backed up ram.\n\nBut it's hard to say from your description what the bottleneck is and \nthus hard to give any advice.\n\n> Any help/tips/guidance in troubleshooting this issue? It will be\n> much appreciated!\n\nYou could try to find specific queries that are slow. Pg can for example \nlog queries for you that run for longer than X seconds.\n\n/Dennis\n", "msg_date": "Sat, 03 Mar 2007 07:41:51 +0100", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgraded to 8.2.3 --- still having performance issues" } ]
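Two hedged follow-ups to the open questions in this thread. On Linux, SHMMAX is specified in bytes while SHMALL is counted in kernel pages (typically 4096 bytes; getconf PAGE_SIZE reports the actual value), so the SHMALL matching the 1280 MB SHMMAX above can be computed directly. Dennis' slow-statement logging can also be tried per session before touching postgresql.conf; the value is in milliseconds and changing it requires superuser rights.

-- Assuming a 4 kB kernel page size:
SELECT 1342177280 / 4096 AS shmall_pages;   -- 327680 pages

-- Log any statement that runs longer than one second
-- (equivalent to log_min_duration_statement = 1000 in postgresql.conf):
SET log_min_duration_statement = 1000;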
[ { "msg_contents": "Hi,\n\nI am sorry if it is a repeat question but I want to know if database performance will decrease if I increase the max-connections to 2000. At present it is 100.\n\nI have a requirement where the clent want 2000 simultaneous users and the only option we have now is to in crease the database connection but I am unale to find any document which indicates that this is a good or a bad practise.\n\nthanks for your help and time.\n\nregards\n\nshiva\n\n \t\t\t\t\n---------------------------------\n Here�s a new way to find what you're looking for - Yahoo! Answers \nHi,I am sorry if it is a repeat question but I want to know if database performance will decrease if I increase the max-connections to 2000. At present it is 100.I have a requirement where the clent want 2000 simultaneous users and the only option we have now is to in crease the database connection but I am unale to find any document which indicates that this is a good or a bad practise.thanks for your help and time.regardsshiva\n \nHere�s a new way to find what you're looking for - Yahoo! Answers", "msg_date": "Thu, 1 Mar 2007 05:18:59 +0000 (GMT)", "msg_from": "Shiva Sarna <[email protected]>", "msg_from_op": true, "msg_subject": "increasing database connections" }, { "msg_contents": "On 3/1/07, Shiva Sarna <[email protected]> wrote:\n> I am sorry if it is a repeat question but I want to know if database\n> performance will decrease if I increase the max-connections to 2000. At\n> present it is 100.\n\nMost certainly. Adding connections over 200 will degrade performance\ndramatically. You should look into pgpool or connection pooling from\nthe application.\n\n-- \nJonah H. Harris, Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n33 Wood Ave S, 3rd Floor | [email protected]\nIselin, New Jersey 08830 | http://www.enterprisedb.com/\n", "msg_date": "Thu, 1 Mar 2007 00:49:14 -0500", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: increasing database connections" }, { "msg_contents": "Jonah H. Harris wrote:\n> On 3/1/07, Shiva Sarna <[email protected]> wrote:\n>> I am sorry if it is a repeat question but I want to know if database\n>> performance will decrease if I increase the max-connections to 2000. At\n>> present it is 100.\n> \n> Most certainly. Adding connections over 200 will degrade performance\n> dramatically. You should look into pgpool or connection pooling from\n> the application.\n\nhuh? That is certainly not my experience. I have systems that show no\ndepreciable performance hit on even 1000+ connections. To be fair to the\ndiscussion, these are on systems with 4+ cores. Usually 8+ and\nsignificant ram 16/32 gig fo ram.\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Wed, 28 Feb 2007 22:18:18 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: increasing database connections" }, { "msg_contents": "Joshua D. Drake wrote:\n> Jonah H. 
Harris wrote:\n>> On 3/1/07, Shiva Sarna <[email protected]> wrote:\n>>> I am sorry if it is a repeat question but I want to know if database\n>>> performance will decrease if I increase the max-connections to 2000. At\n>>> present it is 100.\n>> Most certainly. Adding connections over 200 will degrade performance\n>> dramatically. You should look into pgpool or connection pooling from\n>> the application.\n> \n> huh? That is certainly not my experience. I have systems that show no\n> depreciable performance hit on even 1000+ connections. To be fair to the\n> discussion, these are on systems with 4+ cores. Usually 8+ and\n> significant ram 16/32 gig fo ram.\n\n\nYeah - I thought that somewhere closer to 10000 connections is where you \nget hit with socket management related performance issues.\n\nCheers\n\nMark\n", "msg_date": "Thu, 01 Mar 2007 20:07:06 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: increasing database connections" }, { "msg_contents": "On Thu, Mar 01, 2007 at 12:49:14AM -0500, Jonah H. Harris wrote:\n> On 3/1/07, Shiva Sarna <[email protected]> wrote:\n> >I am sorry if it is a repeat question but I want to know if database\n> >performance will decrease if I increase the max-connections to 2000. At\n> >present it is 100.\n> \n> Most certainly. Adding connections over 200 will degrade performance\n> dramatically. You should look into pgpool or connection pooling from\n> the application.\n\nAre you sure? I've heard of at least one installation which runs with\n5000+ connections, and it works fine. (you know who you are - I don't\nknow if it's public info, so I can't put out the details - but feel free\nto fill in :P)\n\nThat said, there's certainly some overhead, and using pgpool if possible\nis good advice (depending on workload). I'm just wondering about\nthe \"dramatically\" part.\n\n//Magnus\n", "msg_date": "Thu, 1 Mar 2007 09:58:24 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: increasing database connections" }, { "msg_contents": "* Mark Kirkwood:\n\n> Yeah - I thought that somewhere closer to 10000 connections is where\n> you get hit with socket management related performance issues.\n\nHuh? These sockets aren't handled in a single process, are they?\nNowadays, this number of sockets does not pose any problem for most\nsystems, especially if you don't do I/O multiplexing. Of course, if\nyou've got 10,000 connections which are active in parallel, most users\nwon't be content with 1/10,000th of your database performance. 8-/ (I\ndon't see why idle connections should be a problem from a socket\nmanagement POV, though.)\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n", "msg_date": "Thu, 01 Mar 2007 12:04:03 +0100", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: increasing database connections" }, { "msg_contents": "At 01:18 AM 3/1/2007, Joshua D. Drake wrote:\n>Jonah H. Harris wrote:\n> > On 3/1/07, Shiva Sarna <[email protected]> wrote:\n> >> I am sorry if it is a repeat question but I want to know if database\n> >> performance will decrease if I increase the max-connections to 2000. At\n> >> present it is 100.\n> >\n> > Most certainly. Adding connections over 200 will degrade performance\n> > dramatically. You should look into pgpool or connection pooling from\n> > the application.\n>\n>huh? 
That is certainly not my experience. I have systems that show no\n>depreciable performance hit on even 1000+ connections. To be fair to the\n>discussion, these are on systems with 4+ cores. Usually 8+ and\n>significant ram 16/32 gig fo ram.\n>\n>Sincerely,\n>\n>Joshua D. Drake\n\nSome caveats.\n\nKeeping a DB connection around is relatively inexpensive.\nOTOH, building and tearing down a DB connection =can be= expensive.\nExpensive or not, connection build and tear down are pure overhead \nactivities. Any overhead you remove from the system is extra \ncapacity that the system can use in actually answering DB queries \n(...at least until the physical IO system is running flat out...)\n\nSo having 1000+ DB connections open should not be a problem in and of \nitself (but you probably do not want 1000+ queries worth of \nsimultaneous HD IO!...).\n\nOTOH, you probably do !not! want to be constantly creating and \ndestroying 1000+ DB connections.\nBetter to open 1000+ DB connections once at system start up time and \nuse them as a connection pool.\n\nThe potential =really= big performance hit in having lots of \nconnections around is in lots of connections doing simultaneous \nheavy, especially seek heavy, HD IO.\n\nOnce you have enough open connections that your physical IO subsystem \ntends to be maxed out performance wise on the typical workload being \nhandled, it is counter productive to allow any more concurrent DB connections.\n\nSo the issue is not \"how high a max-connections is too high?\". It's \n\"how high a max connections is too high for =my= HW running =my= query mix?\"\n\nThe traditional advice is to be conservative and start with a \nrelatively small number of connections and increase that number only \nas long as doing so results in increased system performance on your \njob mix. Once you hit the performance plateau, stop increasing \nmax-connections and let connection caching and pooling handle things.\nIf that does not result in enough performance, it's time to initiate \nthe traditional optimization hunt.\n\nAlso, note Josh's deployed HW for systems that can handle 1000+ \nconnections. ...and you can bet the IO subsystems on those boxes are \nsimilarly \"beefy\". Don't expect miracles out of modest HW.\nRon \n\n", "msg_date": "Thu, 01 Mar 2007 07:20:55 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: increasing database connections" }, { "msg_contents": "Magnus Hagander wrote:\n> On Thu, Mar 01, 2007 at 12:49:14AM -0500, Jonah H. Harris wrote:\n>> On 3/1/07, Shiva Sarna <[email protected]> wrote:\n>>> I am sorry if it is a repeat question but I want to know if database\n>>> performance will decrease if I increase the max-connections to 2000. At\n>>> present it is 100.\n>> Most certainly. Adding connections over 200 will degrade performance\n>> dramatically. You should look into pgpool or connection pooling from\n>> the application.\n> \n> Are you sure? I've heard of at least one installation which runs with\n> 5000+ connections, and it works fine.\n\nWe have one that high as well and it does fine. Although I wouldn't\nsuggest it on less than 8.1 ;). 8.2 handles it even better since 8.2\nhandles >8 cores better than 8.1.\n\nJoshua D. Drake\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. 
===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Thu, 01 Mar 2007 07:47:28 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: increasing database connections" } ]
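Whichever side of the pooling argument one takes, it helps to know how many of the allowed connections are actually busy at peak. A rough check against the statistics views, assuming the 8.1/8.2-era pg_stat_activity layout where idle backends report '<IDLE>' in current_query (stats_command_string must be enabled for that column to be populated):

SHOW max_connections;

SELECT count(*) AS backends,
       sum(CASE WHEN current_query = '<IDLE>' THEN 1 ELSE 0 END) AS idle,
       sum(CASE WHEN current_query <> '<IDLE>' THEN 1 ELSE 0 END) AS active
FROM pg_stat_activity;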
[ { "msg_contents": "1. The function:\n \n SELECT a.birth_date FROM (\n SELECT indiv_fkey, birth_dt as birth_date, intern_last_update::date as last_update, 'fed' as source\n \n FROM cdm.cdm_fedcustomer\n WHERE birth_dt IS NOT NULL\n AND indiv_fkey = $1\n UNION \n SELECT indiv_fkey, birthdate as birth_date, last_update::date as last_update, 'reg' as source\n FROM cdm.cdm_reg_customer\n WHERE birthdate IS NOT NULL\n AND indiv_fkey = $1\nORDER BY source asc, last_update desc limit 1\n) a \n \n 2. The query:\n \n INSERT INTO indiv_mast.staging_birthdate\n SELECT * FROM (\n SELECT im.indiv_key, indiv_mast.getbest_bday(im.indiv_key::integer) AS birth_date\n FROM indiv_mast.indiv_mast Im\n WHERE im.indiv_key >= 2000000 AND im.indiv_key < 4000000\n ) b\n WHERE b.birth_date IS NOT NULL\n ;\n \n 3. The query plan:\n \n Bitmap Heap Scan on indiv_mast im (cost=28700.91..2098919.14 rows=1937250 width=8)\n Recheck Cond: ((indiv_key >= 2000000) AND (indiv_key < 4000000))\n Filter: (indiv_mast.getbest_bday((indiv_key)::integer) IS NOT NULL)\n -> Bitmap Index Scan on indiv_mast_pkey_idx (cost=0.00..28700.91 rows=1946985 width=0)\n Index Cond: ((indiv_key >= 2000000) AND (indiv_key < 4000000))\n\n 4. Number of records in the tables:\n \n indiv_mast.indiv_mast : 15Million\n cdm.cdm_fedcustomer: 18Million\n cdm.cdm_reg_customer: 9 Million\n \n The query (2) runs for hours. It started at 2:00Am last night and it is still running (6:00Am).\n \n Some of the postgresql.conf file parameters are below:\n \n shared_buffers = 20000 #60000\nwork_mem = 65536 #131072 #65536\nmaintenance_work_mem = 524288 #131072\nmax_fsm_pages = 8000000 \nmax_fsm_relations = 32768\nwal_buffers = 128\ncheckpoint_segments = 256 # in logfile segments, min 1, 16MB each\ncheckpoint_timeout = 3600\ncheckpoint_warning = 300\neffective_cache_size = 20000\nrandom_page_cost = 2 # (same)\n \n I really do not know how to find out what the query is waiting on, unlike oracle db provides some of the information through its dynamic performance views.\n \n Please help in understanding how I can find out what the system is waiting for or why is it taking the query so long.\n \n I will really appreciate some help.\n \n Thanks\n Abu\n\n \n---------------------------------\nSucker-punch spam with award-winning protection.\n Try the free Yahoo! Mail Beta.\n1. The function:   SELECT a.birth_date FROM ( SELECT  indiv_fkey, birth_dt as birth_date, intern_last_update::date as last_update, 'fed' as source      FROM cdm.cdm_fedcustomer  WHERE birth_dt IS NOT NULL  AND indiv_fkey = $1  UNION   SELECT  indiv_fkey, birthdate as birth_date, last_update::date as last_update, 'reg' as source  FROM cdm.cdm_reg_customer  WHERE birthdate IS NOT NULL  AND indiv_fkey = $1ORDER BY source asc, last_update desc limit 1)   a   2. The query:   INSERT INTO  indiv_mast.staging_birthdate        SELECT * FROM (        SELECT im.indiv_key,   indiv_mast.getbest_bday(im.indiv_key::integer) AS\n birth_date        FROM indiv_mast.indiv_mast Im        WHERE im.indiv_key >= 2000000 AND im.indiv_key < 4000000        ) b        WHERE b.birth_date IS NOT NULL        ;   3. The query plan:   Bitmap Heap Scan on indiv_mast im  (cost=28700.91..2098919.14 rows=1937250 width=8)  Recheck Cond: ((indiv_key >= 2000000) AND (indiv_key < 4000000))  Filter: (indiv_mast.getbest_bday((indiv_key)::integer) IS NOT NULL)  ->  Bitmap Index Scan on indiv_mast_pkey_idx  (cost=0.00..28700.91 rows=1946985 width=0)        Index Cond: ((indiv_key >= 2000000) AND (indiv_key < 4000000)) 4. 
Number of records in the tables:\n  indiv_mast.indiv_mast : 15Million cdm.cdm_fedcustomer: 18Million cdm.cdm_reg_customer: 9 Million   The query (2) runs for hours. It started at 2:00Am last night and it is still running (6:00Am).   Some of the postgresql.conf file parameters are below:   shared_buffers = 20000 #60000work_mem = 65536 #131072 #65536maintenance_work_mem = 524288 #131072max_fsm_pages = 8000000 max_fsm_relations = 32768wal_buffers = 128checkpoint_segments = 256               # in logfile segments, min 1, 16MB eachcheckpoint_timeout = 3600checkpoint_warning = 300effective_cache_size = 20000random_page_cost = 2        # (same)   I really do not know how to find out what the query is waiting on,\n unlike oracle db provides some of the information through its dynamic performance views.   Please help in understanding how I can find out what the system is waiting for or why is it taking the query so long.   I will really appreciate some help.   Thanks Abu\nSucker-punch spam with award-winning protection. Try the free Yahoo! Mail Beta.", "msg_date": "Thu, 1 Mar 2007 06:13:45 -0800 (PST)", "msg_from": "Abu Mushayeed <[email protected]>", "msg_from_op": true, "msg_subject": "Performance Query" }, { "msg_contents": "Abu,\n\n> I really do not know how to find out what the query is waiting on,\n> unlike oracle db provides some of the information through its dynamic\n> performance views.\n\nYeah, we don't have that yet.\n\n> Please help in understanding how I can find out what the system is\n> waiting for or why is it taking the query so long.\n\nFirst guess would be I/O bound. The planner, at least, thinks you're \ninserting 2 million records. What kind of disk support do you have?\n\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Fri, 2 Mar 2007 16:52:22 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Query" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n>> Please help in understanding how I can find out what the system is\n>> waiting for or why is it taking the query so long.\n\n> First guess would be I/O bound. The planner, at least, thinks you're \n> inserting 2 million records. What kind of disk support do you have?\n\nI don't see any need to guess. iostat or vmstat or local equivalent\nwill show you quick enough if you are maxing out the disk or the CPU.\n\nIt seems at least somewhat possible that the thing is blocked on a lock,\nin which case the pg_locks view would tell you about it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 02 Mar 2007 21:12:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Query " } ]
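Two things Tom's reply points at can be checked directly from SQL, plus a set-based alternative: the INSERT calls getbest_bday() once per candidate row, re-running the UNION-and-LIMIT query roughly two million times, so folding it into a single DISTINCT ON pass is usually far cheaper. The rewrite is only a sketch assembled from the column names shown in the function; data types, and whether it returns exactly the same rows, should be verified before relying on it.

-- Is anything blocked on a lock?
SELECT locktype, relation::regclass AS relation, mode, pid
FROM pg_locks
WHERE NOT granted;

-- Set-based equivalent of the per-row function call ('fed' sorts before 'reg',
-- matching ORDER BY source ASC, last_update DESC LIMIT 1 inside the function):
INSERT INTO indiv_mast.staging_birthdate
SELECT DISTINCT ON (im.indiv_key) im.indiv_key, u.birth_date
FROM indiv_mast.indiv_mast im
JOIN (
    SELECT indiv_fkey, birth_dt AS birth_date,
           intern_last_update::date AS last_update, 'fed' AS source
    FROM cdm.cdm_fedcustomer
    WHERE birth_dt IS NOT NULL
    UNION ALL
    SELECT indiv_fkey, birthdate, last_update::date, 'reg'
    FROM cdm.cdm_reg_customer
    WHERE birthdate IS NOT NULL
) u ON u.indiv_fkey = im.indiv_key
WHERE im.indiv_key >= 2000000 AND im.indiv_key < 4000000
ORDER BY im.indiv_key, u.source, u.last_update DESC;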
[ { "msg_contents": "Question for anyone...\n\nI tried posting to the bugs, and they said this is a better question for here.\nI have to queries. One runs in about 2 seconds. The other takes upwards\nof 2 minutes. I have a temp table that is created with 2 columns. This\ntable is joined with the larger database of call detail records.\nHowever, these 2 queries are handled very differently.\n\nThe queries:\nFirst----\n\ncalldetail=> EXPLAIN SELECT current.* FROM current JOIN anitmp ON\ncurrent.destnum=anitmp.ani AND istf=true;\n QUERY PLAN\n--------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..2026113.09 rows=500908 width=108)\n -> Seq Scan on anitmp (cost=0.00..33.62 rows=945 width=8)\n Filter: (istf = true)\n -> Index Scan using i_destnum on current (cost=0.00..2137.36\nrows=531 width=108)\n Index Cond: (current.destnum = \"outer\".ani)\n(5 rows)\n\nSecond----\ncalldetail=> EXPLAIN SELECT current.* FROM current JOIN anitmp ON\ncurrent.orignum=anitmp.ani AND istf=false;\n QUERY PLAN\n---------------------------------------------------------------------------\n Hash Join (cost=35.99..3402035.53 rows=5381529 width=108)\n Hash Cond: (\"outer\".orignum = \"inner\".ani)\n -> Seq Scan on current (cost=0.00..907191.05 rows=10170805 width=108)\n -> Hash (cost=33.62..33.62 rows=945 width=8)\n -> Seq Scan on anitmp (cost=0.00..33.62 rows=945 width=8)\n Filter: (istf = false)\n(6 rows)\n\n\nThe tables:\n Table \"public.current\"\n Column | Type | Modifiers\n----------+-----------------------------+-----------\n datetime | timestamp without time zone |\n orignum | bigint |\n destnum | bigint |\n billto | bigint |\n cost | numeric(6,4) |\n duration | numeric(8,1) |\n origcity | character(12) |\n destcity | character(12) |\n file | character varying(30) |\n linenum | integer |\n carrier | character(1) |\nIndexes:\n \"i_destnum\" btree (destnum)\n \"i_orignum\" btree (orignum)\n\n\n Table \"public.anitmp\"\n Column | Type | Modifiers\n--------+---------+-----------\n ani | bigint |\n istf | boolean |\n\n\nI was also asked to post the EXPLAIN ANALYZE for both:\n\ncalldetail=> EXPLAIN ANALYZE SELECT current.* FROM anitmp JOIN current ON istf=false AND current.orignum=anitmp.ani;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=35.99..3427123.39 rows=5421215 width=108) (actual time=1994.164..157443.544 rows=157 loops=1)\n Hash Cond: (\"outer\".orignum = \"inner\".ani)\n -> Seq Scan on current (cost=0.00..913881.09 rows=10245809 width=108) (actual time=710.986..137963.320 rows=10893541 loops=1)\n -> Hash (cost=33.62..33.62 rows=945 width=8) (actual time=10.948..10.948 rows=0 loops=1)\n -> Seq Scan on anitmp (cost=0.00..33.62 rows=945 width=8) (actual time=10.934..10.939 rows=2 loops=1)\n Filter: (istf = false)\n Total runtime: 157443.900 ms\n(7 rows)\n\ncalldetail=> EXPLAIN ANALYZE SELECT current.* FROM current JOIN anitmp ON current.destnum=anitmp.ani AND istf=true;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..2037526.69 rows=504602 width=108) (actual time=88.752..1050.295 rows=1445 loops=1)\n -> Seq Scan on anitmp (cost=0.00..33.62 rows=945 width=8) (actual time=8.189..8.202 rows=2 loops=1)\n Filter: (istf = true)\n -> Index Scan using i_destnum on current (cost=0.00..2149.40 rows=534 width=108) (actual 
time=62.365..517.454 rows=722 loops=2)\n Index Cond: (current.destnum = \"outer\".ani)\n Total runtime: 1052.862 ms\n(6 rows)\n\n\nAnyone have any ideas for me? I have indexes on each of the necessary\ncolumns.\n\nRob\n\n\n", "msg_date": "Thu, 01 Mar 2007 09:43:25 -0600", "msg_from": "Rob Schall <[email protected]>", "msg_from_op": true, "msg_subject": "Identical Queries" }, { "msg_contents": "On Thu, 1 Mar 2007, Rob Schall wrote:\n\n> Question for anyone...\n>\n> I tried posting to the bugs, and they said this is a better question for here.\n> I have to queries. One runs in about 2 seconds. The other takes upwards\n> of 2 minutes. I have a temp table that is created with 2 columns. This\n> table is joined with the larger database of call detail records.\n> However, these 2 queries are handled very differently.\n\nHow many rows are there in anitmp and how many rows in anitmp have\nistf=true and how many have istf=false? If you don't currently analyze the\ntemp table after adding the rows, you might find that doing an analyze\nhelps, or at least makes the row estimates better.\n", "msg_date": "Thu, 1 Mar 2007 11:39:09 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Identical Queries" }, { "msg_contents": "There are 4 entries (wanted to make the playing field level for this\ntest). There are 2 with true for istf and 2 with false.\n\nRob\n\n\nStephan Szabo wrote:\n> On Thu, 1 Mar 2007, Rob Schall wrote:\n>\n> \n>> Question for anyone...\n>>\n>> I tried posting to the bugs, and they said this is a better question for here.\n>> I have to queries. One runs in about 2 seconds. The other takes upwards\n>> of 2 minutes. I have a temp table that is created with 2 columns. This\n>> table is joined with the larger database of call detail records.\n>> However, these 2 queries are handled very differently.\n>> \n>\n> How many rows are there in anitmp and how many rows in anitmp have\n> istf=true and how many have istf=false? If you don't currently analyze the\n> temp table after adding the rows, you might find that doing an analyze\n> helps, or at least makes the row estimates better.\n> \n\n", "msg_date": "Thu, 01 Mar 2007 14:55:01 -0600", "msg_from": "Rob Schall <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Identical Queries" }, { "msg_contents": "On 3/1/07, Rob Schall <[email protected]> wrote:\n>\n> There are 4 entries (wanted to make the playing field level for this\n> test). There are 2 with true for istf and 2 with false.\n>\n\nThen the difference here has to do with using orignum vs destnum as the join\ncriteria. There must be more intersections for orignum than destnum, or\nyour statistics are so far out of whack. It appears to be estimating 5M vs\n500K for a result set, and naturally it chose a different plan.\n\nOn 3/1/07, Rob Schall <[email protected]> wrote:\nThere are 4 entries (wanted to make the playing field level for thistest). There are 2 with true for istf and 2 with false.Then the difference here has to do with using orignum vs destnum as the join criteria.  There must be more intersections for orignum than destnum, or your statistics are so far out of whack.  
It appears to be estimating 5M vs 500K for a result set, and naturally it chose a different plan.", "msg_date": "Thu, 1 Mar 2007 16:09:17 -0500", "msg_from": "\"Chad Wagner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Identical Queries" }, { "msg_contents": "On Thu, 1 Mar 2007, Rob Schall wrote:\n\n> There are 4 entries (wanted to make the playing field level for this\n> test). There are 2 with true for istf and 2 with false.\n\nThen analyzing might help, because I think it's estimating many more rows\nfor both cases, and with 2 rows estimated to be returned the nested loop\nshould seem a lot more attractive than at 900+.\n", "msg_date": "Thu, 1 Mar 2007 14:23:08 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Identical Queries" }, { "msg_contents": "Stephan Szabo wrote:\n> I tried posting to the bugs, and they said this is a better question for here.\n> I have to queries. One runs in about 2 seconds. The other takes upwards\n> of 2 minutes. I have a temp table that is created with 2 columns. This\n> table is joined with the larger database of call detail records.\n> However, these 2 queries are handled very differently.\n\nEven for a temporary table, you should run ANALYZE on it after you fill it but before you query or join to it. I found out (the hard way) that a temporary table of just 100 rows will generate dramatically different plans before and after ANALYZE.\n\nCraig\n\n", "msg_date": "Thu, 01 Mar 2007 14:41:39 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Identical Queries" } ]
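Both suggestions in this thread reduce to the same concrete step; with only four rows in anitmp, the planner's default estimate of ~945 rows is what tips it into the sequential-scan hash join. Roughly:

ANALYZE anitmp;

EXPLAIN ANALYZE
SELECT current.* FROM current
JOIN anitmp ON current.orignum = anitmp.ani AND istf = false;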
[ { "msg_contents": "Hello,\n\nI have noticed a strange performance regression and I'm at a loss as\nto what's happening. We have a fairly large database (~16 GB). The\noriginal postgres 7.4 was running on a sun v880 with 4 CPUs and 8 GB\nof ram running Solaris on local scsi discs. The new server is a sun\nOpteron box with 4 cores, 8 GB of ram running postgres 8.1.4 on Linux\n(AMD64) on a 4 Gbps FC SAN volume. When we created the new database\nit was created from scratch rather than copying over the old one,\nhowever the table structure is almost identical (UTF8 on the new one\nvs. C on the old). The problem is queries are ~10x slower on the new\nhardware. I read several places that the SAN might be to blame, but\ntesting with bonnie and dd indicates that the SAN is actually almost\ntwice as fast as the scsi discs in the old sun server. I've tried\nadjusting just about every option in the postgres config file, but\nperformance remains the same. Any ideas?\n\nThanks,\n\nAlex\n", "msg_date": "Thu, 1 Mar 2007 15:40:28 -0500", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "strange performance regression between 7.4 and 8.1" }, { "msg_contents": "On 01.03.2007, at 13:40, Alex Deucher wrote:\n\n> I read several places that the SAN might be to blame, but\n> testing with bonnie and dd indicates that the SAN is actually almost\n> twice as fast as the scsi discs in the old sun server. I've tried\n> adjusting just about every option in the postgres config file, but\n> performance remains the same. Any ideas?\n\nAs mentioned last week:\n\nDid you actually try to use the local drives for speed testing? It \nmight be that the SAN introduces latency especially for random access \nyou don't see on local drives.\n\ncug\n", "msg_date": "Mon, 5 Mar 2007 19:14:48 -0700", "msg_from": "Guido Neitzer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "On 3/5/07, Guido Neitzer <[email protected]> wrote:\n> On 01.03.2007, at 13:40, Alex Deucher wrote:\n>\n> > I read several places that the SAN might be to blame, but\n> > testing with bonnie and dd indicates that the SAN is actually almost\n> > twice as fast as the scsi discs in the old sun server. I've tried\n> > adjusting just about every option in the postgres config file, but\n> > performance remains the same. Any ideas?\n>\n> As mentioned last week:\n>\n> Did you actually try to use the local drives for speed testing? It\n> might be that the SAN introduces latency especially for random access\n> you don't see on local drives.\n\nYes, I started setting that up this afternoon. I'm going to test that\ntomorrow and post the results.\n\nAlex\n\n>\n> cug\n>\n", "msg_date": "Mon, 5 Mar 2007 21:56:24 -0500", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "On 05.03.2007, at 19:56, Alex Deucher wrote:\n\n> Yes, I started setting that up this afternoon. I'm going to test that\n> tomorrow and post the results.\n\nGood - that may or may not give some insight in the actual \nbottleneck. 
You never know but it seems to be one of the easiest to \nfind out ...\n\ncug\n", "msg_date": "Mon, 5 Mar 2007 19:58:29 -0700", "msg_from": "Guido Neitzer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "On 3/5/07, Guido Neitzer <[email protected]> wrote:\n> On 05.03.2007, at 19:56, Alex Deucher wrote:\n>\n> > Yes, I started setting that up this afternoon. I'm going to test that\n> > tomorrow and post the results.\n>\n> Good - that may or may not give some insight in the actual\n> bottleneck. You never know but it seems to be one of the easiest to\n> find out ...\n>\n\nWell, the SAN appears to be the limiting factor. I set up the DB on\nthe local scsi discs (software RAID 1) and performance is excellent\n(better than the old server). Thanks for everyone's help.\n\nAlex\n", "msg_date": "Tue, 6 Mar 2007 10:25:30 -0500", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "At 10:25 AM 3/6/2007, Alex Deucher wrote:\n>On 3/5/07, Guido Neitzer <[email protected]> wrote:\n>>On 05.03.2007, at 19:56, Alex Deucher wrote:\n>>\n>> > Yes, I started setting that up this afternoon. I'm going to test that\n>> > tomorrow and post the results.\n>>\n>>Good - that may or may not give some insight in the actual\n>>bottleneck. You never know but it seems to be one of the easiest to\n>>find out ...\n>\n>Well, the SAN appears to be the limiting factor. I set up the DB on\n>the local scsi discs (software RAID 1) and performance is excellent\n>(better than the old server). Thanks for everyone's help.\n>\n>Alex\n\nWhat kind of SAN is it and how many + what kind of HDs are in it?\nAssuming the answers are reasonable...\n\nProfile the table IO pattern your workload generates and start \nallocating RAID sets to tables or groups of tables based on IO pattern.\n\nFor any table or group of tables that has a significant level of \nwrite IO, say >= ~25% of the IO mix, try RAID 5 or 6 first, but be \nprepared to go RAID 10 if performance is not acceptable.\n\nDon't believe any of the standard \"lore\" regarding what tables to put \nwhere or what tables to give dedicated spindles to.\nProfile, benchmark, and only then start allocating dedicated resources.\nFor instance, I've seen situations where putting pg_xlog on its own \nspindles was !not! the right thing to do.\n\nBest Wishes,\nRon Peacetree\n\n", "msg_date": "Tue, 06 Mar 2007 11:58:15 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange performance regression between 7.4 and\n 8.1" }, { "msg_contents": "On 3/6/07, Ron <[email protected]> wrote:\n> At 10:25 AM 3/6/2007, Alex Deucher wrote:\n> >On 3/5/07, Guido Neitzer <[email protected]> wrote:\n> >>On 05.03.2007, at 19:56, Alex Deucher wrote:\n> >>\n> >> > Yes, I started setting that up this afternoon. I'm going to test that\n> >> > tomorrow and post the results.\n> >>\n> >>Good - that may or may not give some insight in the actual\n> >>bottleneck. You never know but it seems to be one of the easiest to\n> >>find out ...\n> >\n> >Well, the SAN appears to be the limiting factor. I set up the DB on\n> >the local scsi discs (software RAID 1) and performance is excellent\n> >(better than the old server). 
Thanks for everyone's help.\n> >\n> >Alex\n>\n> What kind of SAN is it and how many + what kind of HDs are in it?\n> Assuming the answers are reasonable...\n>\n\nIt's a Hitachi WMS/Tagmastore. 105 hitachi SATA drives; 4 Gbps FC.\n\n> Profile the table IO pattern your workload generates and start\n> allocating RAID sets to tables or groups of tables based on IO pattern.\n>\n> For any table or group of tables that has a significant level of\n> write IO, say >= ~25% of the IO mix, try RAID 5 or 6 first, but be\n> prepared to go RAID 10 if performance is not acceptable.\n>\n\nRight now it's designed for max capacity: big RAID 5 groups. I expect\nI'll probably need RAID 10 for decent performance.\n\n> Don't believe any of the standard \"lore\" regarding what tables to put\n> where or what tables to give dedicated spindles to.\n> Profile, benchmark, and only then start allocating dedicated resources.\n> For instance, I've seen situations where putting pg_xlog on its own\n> spindles was !not! the right thing to do.\n>\n\nRight. Thanks for the advice. I'll post my results when I get around\nto testing some new SAN configurations.\n\nAlex\n", "msg_date": "Tue, 6 Mar 2007 13:11:24 -0500", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "I would just like to note here that this is an example of inefficient\nstrategy.\n\nWe could all agree (up to a certain economical point) that Alex saved the\nmost expensive one thousand dollars of his life.\n\nI don't know the financial status nor the size of your organization, but I'm\nsure that you have selected the path that has cost you more.\n\nIn the future, an investment on memory for a (let's say) rather small\ndatabase should be your first attempt.\n\nYours,\nRodrigo Madera\n\nOn 3/6/07, Alex Deucher <[email protected]> wrote:\n>\n> On 3/6/07, Ron <[email protected]> wrote:\n> > At 10:25 AM 3/6/2007, Alex Deucher wrote:\n> > >On 3/5/07, Guido Neitzer <[email protected]> wrote:\n> > >>On 05.03.2007, at 19:56, Alex Deucher wrote:\n> > >>\n> > >> > Yes, I started setting that up this afternoon. I'm going to test\n> that\n> > >> > tomorrow and post the results.\n> > >>\n> > >>Good - that may or may not give some insight in the actual\n> > >>bottleneck. You never know but it seems to be one of the easiest to\n> > >>find out ...\n> > >\n> > >Well, the SAN appears to be the limiting factor. I set up the DB on\n> > >the local scsi discs (software RAID 1) and performance is excellent\n> > >(better than the old server). Thanks for everyone's help.\n> > >\n> > >Alex\n> >\n> > What kind of SAN is it and how many + what kind of HDs are in it?\n> > Assuming the answers are reasonable...\n> >\n>\n> It's a Hitachi WMS/Tagmastore. 105 hitachi SATA drives; 4 Gbps FC.\n>\n> > Profile the table IO pattern your workload generates and start\n> > allocating RAID sets to tables or groups of tables based on IO pattern.\n> >\n> > For any table or group of tables that has a significant level of\n> > write IO, say >= ~25% of the IO mix, try RAID 5 or 6 first, but be\n> > prepared to go RAID 10 if performance is not acceptable.\n> >\n>\n> Right now it's designed for max capacity: big RAID 5 groups. 
I expect\n> I'll probably need RAID 10 for decent performance.\n>\n> > Don't believe any of the standard \"lore\" regarding what tables to put\n> > where or what tables to give dedicated spindles to.\n> > Profile, benchmark, and only then start allocating dedicated resources.\n> > For instance, I've seen situations where putting pg_xlog on its own\n> > spindles was !not! the right thing to do.\n> >\n>\n> Right. Thanks for the advice. I'll post my results when I get around\n> to testing some new SAN configurations.\n>\n> Alex\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\nI would just like to note here that this is an example of inefficient strategy.We could all agree (up to a certain economical point) that Alex saved the most expensive one thousand dollars of his life.I don't know the financial status nor the size of your organization, but I'm sure that you have selected the path that has cost you more.\nIn the future, an investment on memory for a (let's say) rather small database should be your first attempt.Yours,Rodrigo MaderaOn 3/6/07, \nAlex Deucher <[email protected]> wrote:\nOn 3/6/07, Ron <[email protected]> wrote:> At 10:25 AM 3/6/2007, Alex Deucher wrote:> >On 3/5/07, Guido Neitzer <\[email protected]> wrote:> >>On 05.03.2007, at 19:56, Alex Deucher wrote:> >>> >> > Yes, I started setting that up this afternoon.  I'm going to test that> >> > tomorrow and post the results.\n> >>> >>Good - that may or may not give some insight in the actual> >>bottleneck. You never know but it seems to be one of the easiest to> >>find out ...> >\n> >Well, the SAN appears to be the limiting factor.  I set up the DB on> >the local scsi discs (software RAID 1) and performance is excellent> >(better than the old server).  Thanks for everyone's help.\n> >> >Alex>> What kind of SAN is it and how many + what kind of HDs are in it?> Assuming the answers are reasonable...>It's a Hitachi WMS/Tagmastore.  105 hitachi SATA drives; 4 Gbps FC.\n> Profile the table IO pattern your workload generates and start> allocating RAID sets to tables or groups of tables based on IO pattern.>> For any table or group of tables that has a significant level of\n> write IO, say >= ~25% of the IO mix, try RAID 5 or 6 first, but be> prepared to go RAID 10 if performance is not acceptable.>Right now it's designed for max capacity: big RAID 5 groups.  I expect\nI'll probably need RAID 10 for decent performance.> Don't believe any of the standard \"lore\" regarding what tables to put> where or what tables to give dedicated spindles to.> Profile, benchmark, and only then start allocating dedicated resources.\n> For instance, I've seen situations where putting pg_xlog on its own> spindles was !not! the right thing to do.>Right.  Thanks for the advice.  
I'll post my results when I get around\nto testing some new SAN configurations.Alex---------------------------(end of broadcast)---------------------------TIP 9: In versions below 8.0, the planner will ignore your desire to       choose an index scan if your joining column's datatypes do not\n       match", "msg_date": "Thu, 8 Mar 2007 09:23:49 -0300", "msg_from": "\"Rodrigo Madera\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "Rodrigo Madera wrote:\n> I would just like to note here that this is an example of inefficient \n> strategy.\n> \n> We could all agree (up to a certain economical point) that Alex saved \n> the most expensive one thousand dollars of his life.\n> \n> I don't know the financial status nor the size of your organization, but \n> I'm sure that you have selected the path that has cost you more.\n> \n> In the future, an investment on memory for a (let's say) rather small \n> database should be your first attempt.\n\nAlex may have made the correct, rational choice, given the state of accounting at most corporations. Corporate accounting practices and the budgetary process give different weights to cash and labor. Labor is fixed, and can be grossly wasted without (apparently) affecting the quarterly bottom line. Cash expenditures come directly off profits.\n\nIt's shortsighted and irrational, but nearly 100% of corporations operate this way. You can waste a week of your time and nobody complains, but spend a thousand dollars, and the company president is breathing down your neck.\n\nWhen we answer a question on this forum, we need to understand that the person who needs help may be under irrational, but real, constraints, and offer appropriate advice. Sure, it's good to fight corporate stupidity, but sometimes you just want to get the system back online.\n\nCraig\n", "msg_date": "Thu, 08 Mar 2007 10:34:06 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "At 01:34 PM 3/8/2007, Craig A. James wrote:\n>Rodrigo Madera wrote:\n>>I would just like to note here that this is an example of \n>>inefficient strategy.\n>>We could all agree (up to a certain economical point) that Alex \n>>saved the most expensive one thousand dollars of his life.\n>>I don't know the financial status nor the size of your \n>>organization, but I'm sure that you have selected the path that has \n>>cost you more.\n>>In the future, an investment on memory for a (let's say) rather \n>>small database should be your first attempt.\n>\n>Alex may have made the correct, rational choice, given the state of \n>accounting at most corporations. Corporate accounting practices and \n>the budgetary process give different weights to cash and \n>labor. Labor is fixed, and can be grossly wasted without \n>(apparently) affecting the quarterly bottom line. Cash expenditures \n>come directly off profits.\n>\n>It's shortsighted and irrational, but nearly 100% of corporations \n>operate this way. You can waste a week of your time and nobody \n>complains, but spend a thousand dollars, and the company president \n>is breathing down your neck.\n>\n>When we answer a question on this forum, we need to understand that \n>the person who needs help may be under irrational, but real, \n>constraints, and offer appropriate advice. 
Sure, it's good to fight \n>corporate stupidity, but sometimes you just want to get the system back online.\n>\n>Craig\nAll good points.\n\nHowever, when we allow or help (even tacitly by \"looking the other \nway\") our organizations to waste IT dollars we increase the risk that \nwe are going to be paid less because there's less money. Or even \nthat we will be unemployed because there's less money (as in \"we \nwasted enough money we went out of business\").\n\nThe correct strategy is to Speak Their Language (tm) to the \naccounting and management folks and give them the information needed \nto Do The Right Thing (tm) (or at least authorize you doing it ;-) \n). They may still not be / act sane, but at that point your hands are clean.\n(...and if your organization has a habit of Not Listening to Reason \n(tm), strongly consider finding a new job before you are forced to by \ntheir fiscal or managerial irresponsibility.)\n\nCap Ex may not be the same as Discretionary Expenses, but at the end \nof the day dollars are dollars.\nAny we spend in one place can't be spent in any other place; and \nthere's a finite pile of them.\n\nSpending 10x as much in labor and opportunity costs (you can only do \none thing at a time...) as you would on CapEx to address a problem is \nsimply not smart money management nor good business. Even spending \n2x as much in that fashion is probably not.\n\n Cheers,\nRon Peacetree\n\n\n\n\n\n", "msg_date": "Thu, 08 Mar 2007 13:59:50 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange performance regression between 7.4 and\n 8.1" }, { "msg_contents": "\n>> I would just like to note here that this is an example of inefficient \n>> strategy.\n>> [ ... ]\n>\n>\n> Alex may have made the correct, rational choice, given the state of \n> accounting at most corporations. Corporate accounting practices and \n> the budgetary process give different weights to cash and labor. Labor \n> is fixed, and can be grossly wasted without (apparently) affecting the \n> quarterly bottom line. Cash expenditures come directly off profits.\n>\n> It's shortsighted and irrational, but nearly 100% of corporations \n> operate this way. You can waste a week of your time and nobody \n> complains, but spend a thousand dollars, and the company president is \n> breathing down your neck.\n>\n> When we answer a question on this forum, we need to understand that \n> the person who needs help may be under irrational, but real, \n> constraints, and offer appropriate advice. Sure, it's good to fight \n> corporate stupidity, but sometimes you just want to get the system \n> back online.\n\n\nAnother thing --- which may or may not apply to Alex's case and to the\nparticular state of the thread, but it's still related and IMHO \nimportant to\ntake into account:\n\nThere may be other consrtaints that makes it impossible to even consider\na memory upgrade --- for example, us (our project). We *rent* the servers\nfrom a Web hoster (dedicated servers). This particular hoster does not\neven offer the possibility of upgrading the hardware --- 2GB of RAM,\ntake it r leave it. Period.\n\nIn other cases, the memory upgrade has a *monthly* cost (and quite\noften I find it excessive --- granted, that may be just me). 
So, $50 or\n$100 per month *additional* expenses may be considerable.\n\nNow, yet another thing that you (Craig) seem to be missing: you're\nsimply putting the expense of all this time under the expenses column\nin exchange for solving the particular problem --- gaining the insight\non the internals and performance tuning techniques for PG may well\nbe worth tens of thousands of dollars for his company in the future.\n\nThe \"quick and dirty\" solution is not giving a damn about knowledge\nbut to the ability to solve the problem at hand *now*, at whatever\n\"petty cash cost\" because it looks more cost effective (when seen\nfrom the non-irrational accounting point of view, that is) --- but isn't\ngoing for the \"quick and dirty\" solution without learning anything\nfrom the experience also shortsighted ???\n\nCarlos\n--\n\n", "msg_date": "Thu, 08 Mar 2007 14:34:37 -0500", "msg_from": "Carlos Moreno <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "Ron wrote:\n\n>\n> Speak Their Language (tm) [ ... ] Do The Right Thing (tm)\n> [...] Not Listening to Reason (tm),\n> [...]\n>\n> fiscal or managerial irresponsibility.)\n\n\nAnd *here*, of all the instances, you don't put a (TM) sign ??!!!!\nTsk-tsk-tsk\n\n:-)\n\nCarlos\n--\n\n", "msg_date": "Thu, 08 Mar 2007 14:38:26 -0500", "msg_from": "Carlos Moreno <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "Carlos,\n> Now, yet another thing that you (Craig) seem to be missing: you're\n> simply putting the expense of all this time under the expenses column\n> in exchange for solving the particular problem...\n\nMore like I was trying to keep my response short ;-). I think we're all in agreement on pretty much everything:\n\n 1. Understand your problem\n 2. Find potential solutions\n 3. Find the technical, economic AND situational tradeoffs\n 4. Choose the best course of action\n\nMy original comment was directed at item #3. I was trying to remind everyone that a simple cost analysis may point to solutions that simply aren't possible, given business constraints.\n\nI know we also agree that we should constantly fight corporate stupidity and short-sighted budgetary oversight. But that's a second battle, one that goes on forever. Sometimes you just have to get the job done within the current constraints.\n\n'Nuff said.\n\nCraig\n", "msg_date": "Thu, 08 Mar 2007 13:07:20 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" } ]
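A rough way to act on the 'profile the table IO pattern' advice quoted in the thread above, sketched here as an illustration rather than as a recipe taken from the thread: the statistics views that already ship with PostgreSQL 8.1 expose per-table activity, so a first pass can be done from psql before any RAID sets are re-carved. The view and column names below are standard catalog views; the write-share threshold is only Ron's rule of thumb, and the counters are populated only if the stats collector options (stats_block_level, stats_row_level) are turned on in postgresql.conf.

  -- rough write vs. read activity per table (row-level counters)
  SELECT relname,
         n_tup_ins + n_tup_upd + n_tup_del AS row_writes,
         seq_tup_read + idx_tup_fetch      AS row_reads
    FROM pg_stat_user_tables
   ORDER BY row_writes DESC
   LIMIT 20;

  -- block-level IO per table: how much was read from disk vs. found in cache
  SELECT relname, heap_blks_read, heap_blks_hit, idx_blks_read, idx_blks_hit
    FROM pg_statio_user_tables
   ORDER BY heap_blks_read DESC
   LIMIT 20;

Under that rule of thumb, tables that float to the top of the first query are the candidates for faster RAID 10 sets, while mostly-read tables can usually stay on the larger RAID 5 groups.
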
[ { "msg_contents": "Hello,\n\nI have noticed a strange performance regression and I'm at a loss as\nto what's happening. We have a fairly large database (~16 GB). The\noriginal postgres 7.4 was running on a sun v880 with 4 CPUs and 8 GB\nof ram running Solaris on local scsi discs. The new server is a sun\nOpteron box with 4 cores, 8 GB of ram running postgres 8.1.4 on Linux\n(AMD64) on a 4 Gbps FC SAN volume. When we created the new database\nit was created from scratch rather than copying over the old one,\nhowever the table structure is almost identical (UTF8 on the new one\nvs. C on the old). The problem is queries are ~10x slower on the new\nhardware. I read several places that the SAN might be to blame, but\ntesting with bonnie and dd indicates that the SAN is actually almost\ntwice as fast as the scsi discs in the old sun server. I've tried\nadjusting just about every option in the postgres config file, but\nperformance remains the same. Any ideas?\n\nThanks,\n\nAlex\n", "msg_date": "Thu, 1 Mar 2007 15:44:03 -0500", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "strange performance regression between 7.4 and 8.1" }, { "msg_contents": "Alex Deucher wrote:\n> Hello,\n> \n> I have noticed a strange performance regression and I'm at a loss as\n> to what's happening. We have a fairly large database (~16 GB). The\n> original postgres 7.4 was running on a sun v880 with 4 CPUs and 8 GB\n> of ram running Solaris on local scsi discs. The new server is a sun\n> Opteron box with 4 cores, 8 GB of ram running postgres 8.1.4 on Linux\n> (AMD64) on a 4 Gbps FC SAN volume. When we created the new database\n> it was created from scratch rather than copying over the old one,\n> however the table structure is almost identical (UTF8 on the new one\n> vs. C on the old). The problem is queries are ~10x slower on the new\n> hardware. I read several places that the SAN might be to blame, but\n> testing with bonnie and dd indicates that the SAN is actually almost\n> twice as fast as the scsi discs in the old sun server. I've tried\n> adjusting just about every option in the postgres config file, but\n> performance remains the same. Any ideas?\n\nVacuum? Analayze? default_statistics_target? How many shared_buffers?\neffective_cache_size? work_mem?\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n> Thanks,\n> \n> Alex\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Thu, 01 Mar 2007 12:50:24 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "On Thu, 1 Mar 2007, Joshua D. Drake wrote:\n\n> Alex Deucher wrote:\n>> Hello,\n>>\n>> I have noticed a strange performance regression and I'm at a loss as\n>> to what's happening. We have a fairly large database (~16 GB). The\n>> original postgres 7.4 was running on a sun v880 with 4 CPUs and 8 GB\n>> of ram running Solaris on local scsi discs. 
The new server is a sun\n>> Opteron box with 4 cores, 8 GB of ram running postgres 8.1.4 on Linux\n>> (AMD64) on a 4 Gbps FC SAN volume. When we created the new database\n>> it was created from scratch rather than copying over the old one,\n>> however the table structure is almost identical (UTF8 on the new one\n>> vs. C on the old). The problem is queries are ~10x slower on the new\n>> hardware. I read several places that the SAN might be to blame, but\n>> testing with bonnie and dd indicates that the SAN is actually almost\n>> twice as fast as the scsi discs in the old sun server. I've tried\n>> adjusting just about every option in the postgres config file, but\n>> performance remains the same. Any ideas?\n>\n> Vacuum? Analayze? default_statistics_target? How many shared_buffers?\n> effective_cache_size? work_mem?\n\nAlso, an explain analyze from both the 7.4 and 8.1 systems with one of the \n10x slower queries would probably be handy.\n\nWhat do you mean by \"created from scratch rather than copying over the old \none\"? How did you put the data in? Did you run analyze after loading it? \nIs autovacuum enabled and if so, what are the thresholds?\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n", "msg_date": "Thu, 1 Mar 2007 12:56:08 -0800 (PST)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "On 3/1/07, Joshua D. Drake <[email protected]> wrote:\n> Alex Deucher wrote:\n> > Hello,\n> >\n> > I have noticed a strange performance regression and I'm at a loss as\n> > to what's happening. We have a fairly large database (~16 GB). The\n> > original postgres 7.4 was running on a sun v880 with 4 CPUs and 8 GB\n> > of ram running Solaris on local scsi discs. The new server is a sun\n> > Opteron box with 4 cores, 8 GB of ram running postgres 8.1.4 on Linux\n> > (AMD64) on a 4 Gbps FC SAN volume. When we created the new database\n> > it was created from scratch rather than copying over the old one,\n> > however the table structure is almost identical (UTF8 on the new one\n> > vs. C on the old). The problem is queries are ~10x slower on the new\n> > hardware. I read several places that the SAN might be to blame, but\n> > testing with bonnie and dd indicates that the SAN is actually almost\n> > twice as fast as the scsi discs in the old sun server. I've tried\n> > adjusting just about every option in the postgres config file, but\n> > performance remains the same. Any ideas?\n>\n> Vacuum? Analayze? default_statistics_target? How many shared_buffers?\n> effective_cache_size? work_mem?\n>\n\nI'm running the autovacuum process on the 8.1 server. 
vacuuming on\nthe old server was done manually.\n\ndefault_statistics_target and effective_cache_size are set to the the\ndefaults on both.\n\npostgres 7.4 server:\n# - Memory -\nshared_buffers = 82000 # 1000 min 16, at least\nmax_connections*2, 8KB each\nsort_mem = 8000 # 1024 min 64, size in KB\nvacuum_mem = 32000 # 8192 min 1024, size in KB\n# - Free Space Map -\n#max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each\n#max_fsm_relations = 1000 # min 100, ~50 bytes each\n# - Kernel Resource Usage -\n#max_files_per_process = 1000 # min 25\n\npostgres 8.1 server:\n# - Memory -\nshared_buffers = 100000 # min 16 or max_connections*2, 8KB each\ntemp_buffers = 2000 #1000 # min 100, 8KB each\nmax_prepared_transactions = 100 #5 # can be 0 or more\n# note: increasing max_prepared_transactions costs ~600 bytes of shared memory\n# per transaction slot, plus lock space (see max_locks_per_transaction).\nwork_mem = 10000 #1024 # min 64, size in KB\nmaintenance_work_mem = 524288 #16384 # min 1024, size in KB\n#max_stack_depth = 2048 # min 100, size in KB\n\nI've also tried using the same settings from the old server on the new\none; same performance issues.\n\nThanks,\n\nAlex\n\n> Sincerely,\n>\n> Joshua D. Drake\n>\n>\n> >\n> > Thanks,\n> >\n> > Alex\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/docs/faq\n> >\n>\n>\n> --\n>\n> === The PostgreSQL Company: Command Prompt, Inc. ===\n> Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n> Providing the most comprehensive PostgreSQL solutions since 1997\n> http://www.commandprompt.com/\n>\n> Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n> PostgreSQL Replication: http://www.commandprompt.com/products/\n>\n>\n", "msg_date": "Thu, 1 Mar 2007 16:06:34 -0500", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "On 3/1/07, Jeff Frost <[email protected]> wrote:\n> On Thu, 1 Mar 2007, Joshua D. Drake wrote:\n>\n> > Alex Deucher wrote:\n> >> Hello,\n> >>\n> >> I have noticed a strange performance regression and I'm at a loss as\n> >> to what's happening. We have a fairly large database (~16 GB). The\n> >> original postgres 7.4 was running on a sun v880 with 4 CPUs and 8 GB\n> >> of ram running Solaris on local scsi discs. The new server is a sun\n> >> Opteron box with 4 cores, 8 GB of ram running postgres 8.1.4 on Linux\n> >> (AMD64) on a 4 Gbps FC SAN volume. When we created the new database\n> >> it was created from scratch rather than copying over the old one,\n> >> however the table structure is almost identical (UTF8 on the new one\n> >> vs. C on the old). The problem is queries are ~10x slower on the new\n> >> hardware. I read several places that the SAN might be to blame, but\n> >> testing with bonnie and dd indicates that the SAN is actually almost\n> >> twice as fast as the scsi discs in the old sun server. I've tried\n> >> adjusting just about every option in the postgres config file, but\n> >> performance remains the same. Any ideas?\n> >\n> > Vacuum? Analayze? default_statistics_target? How many shared_buffers?\n> > effective_cache_size? 
work_mem?\n>\n> Also, an explain analyze from both the 7.4 and 8.1 systems with one of the\n> 10x slower queries would probably be handy.\n>\n\nI'll run some and get back to you.\n\n> What do you mean by \"created from scratch rather than copying over the old\n> one\"? How did you put the data in? Did you run analyze after loading it?\n> Is autovacuum enabled and if so, what are the thresholds?\n\nBoth the databases were originally created from xml files. We just\nre-created the new one from the xml rather than copying the old\ndatabase over. I didn't manually run analyze on it, but we are\nrunning the autovacuum process:\n\nautovacuum = on #off # enable autovacuum subprocess?\nautovacuum_naptime = 360 #60 # time between autovacuum runs, in secs\nautovacuum_vacuum_threshold = 10000 #1000 # min # of tuple updates before\n # vacuum\nautovacuum_analyze_threshold = 5000 #500 # min # of tuple updates before\n\nThanks,\n\nAlex\n\n>\n> --\n> Jeff Frost, Owner <[email protected]>\n> Frost Consulting, LLC http://www.frostconsultingllc.com/\n> Phone: 650-780-7908 FAX: 650-649-1954\n>\n", "msg_date": "Thu, 1 Mar 2007 16:11:25 -0500", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "On Thu, 1 Mar 2007, Alex Deucher wrote:\n\n> On 3/1/07, Jeff Frost <[email protected]> wrote:\n>> On Thu, 1 Mar 2007, Joshua D. Drake wrote:\n>> \n>> > Alex Deucher wrote:\n>> >> Hello,\n>> >>\n>> >> I have noticed a strange performance regression and I'm at a loss as\n>> >> to what's happening. We have a fairly large database (~16 GB). The\n>> >> original postgres 7.4 was running on a sun v880 with 4 CPUs and 8 GB\n>> >> of ram running Solaris on local scsi discs. The new server is a sun\n>> >> Opteron box with 4 cores, 8 GB of ram running postgres 8.1.4 on Linux\n>> >> (AMD64) on a 4 Gbps FC SAN volume. When we created the new database\n>> >> it was created from scratch rather than copying over the old one,\n>> >> however the table structure is almost identical (UTF8 on the new one\n>> >> vs. C on the old). The problem is queries are ~10x slower on the new\n>> >> hardware. I read several places that the SAN might be to blame, but\n>> >> testing with bonnie and dd indicates that the SAN is actually almost\n>> >> twice as fast as the scsi discs in the old sun server. I've tried\n>> >> adjusting just about every option in the postgres config file, but\n>> >> performance remains the same. Any ideas?\n>> >\n>> > Vacuum? Analayze? default_statistics_target? How many shared_buffers?\n>> > effective_cache_size? work_mem?\n>> \n>> Also, an explain analyze from both the 7.4 and 8.1 systems with one of the\n>> 10x slower queries would probably be handy.\n>> \n>\n> I'll run some and get back to you.\n>\n>> What do you mean by \"created from scratch rather than copying over the old\n>> one\"? How did you put the data in? Did you run analyze after loading it?\n>> Is autovacuum enabled and if so, what are the thresholds?\n>\n> Both the databases were originally created from xml files. We just\n> re-created the new one from the xml rather than copying the old\n> database over. 
I didn't manually run analyze on it, but we are\n> running the autovacuum process:\n\nYou should probably manually run analyze and see if that resolves your \nproblem.\n\n>\n> autovacuum = on #off # enable autovacuum subprocess?\n> autovacuum_naptime = 360 #60 # time between autovacuum runs, in \n> secs\n> autovacuum_vacuum_threshold = 10000 #1000 # min # of tuple updates \n> before\n> # vacuum\n> autovacuum_analyze_threshold = 5000 #500 # min # of tuple updates \n> before\n\nMost people make autovacuum more aggressive and not less aggressive. In fact, \nthe new defaults in 8.2 are:\n\n#autovacuum_vacuum_threshold = 500 # min # of tuple updates before\n # vacuum\n#autovacuum_analyze_threshold = 250 # min # of tuple updates before\n # analyze\n#autovacuum_vacuum_scale_factor = 0.2 # fraction of rel size before\n # vacuum\n#autovacuum_analyze_scale_factor = 0.1 # fraction of rel size before\n\nI'd recommend trying those, otherwise you might not vacuum enough.\n\nIt'll be interesting to see the explain analyze output after you've run \nanalyze by hand.\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n", "msg_date": "Thu, 1 Mar 2007 13:34:04 -0800 (PST)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "On Thu, 1 Mar 2007, Alex Deucher wrote:\n\n>> Vacuum? Analayze? default_statistics_target? How many shared_buffers?\n>> effective_cache_size? work_mem?\n>> \n>\n> I'm running the autovacuum process on the 8.1 server. vacuuming on\n> the old server was done manually.\n>\n> default_statistics_target and effective_cache_size are set to the the\n> defaults on both.\n>\n> postgres 7.4 server:\n> # - Memory -\n> shared_buffers = 82000 # 1000 min 16, at least\n> max_connections*2, 8KB each\n> sort_mem = 8000 # 1024 min 64, size in KB\n> vacuum_mem = 32000 # 8192 min 1024, size in KB\n> # - Free Space Map -\n> #max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each\n> #max_fsm_relations = 1000 # min 100, ~50 bytes each\n> # - Kernel Resource Usage -\n> #max_files_per_process = 1000 # min 25\n>\n> postgres 8.1 server:\n> # - Memory -\n> shared_buffers = 100000 # min 16 or max_connections*2, 8KB \n> each\n> temp_buffers = 2000 #1000 # min 100, 8KB each\n> max_prepared_transactions = 100 #5 # can be 0 or more\n> # note: increasing max_prepared_transactions costs ~600 bytes of shared \n> memory\n> # per transaction slot, plus lock space (see max_locks_per_transaction).\n> work_mem = 10000 #1024 # min 64, size in KB\n> maintenance_work_mem = 524288 #16384 # min 1024, size in KB\n> #max_stack_depth = 2048 # min 100, size in KB\n>\n> I've also tried using the same settings from the old server on the new\n> one; same performance issues.\n>\n\nIf this is a linux system, could you give us the output of the 'free' command? \nPostgresql might be choosing a bad plan because your effective_cache_size is \nway off (it's the default now right?). Also, what was the block read/write \nspeed of the SAN from your bonnie tests? 
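(Illustrative sketch of the run-analyze-by-hand step being suggested here, using the anonymized table and column names from earlier in the thread; the statistics target of 100 is an arbitrary example value, not something taken from either server:

  -- refresh planner statistics for the table behind the slow selects
  ANALYZE VERBOSE t1;

  -- optionally gather finer-grained stats on the filtered column, then re-analyze it
  ALTER TABLE t1 ALTER COLUMN c1 SET STATISTICS 100;
  ANALYZE t1 (c1);

  -- and re-check whether the row estimate now matches the actual row count
  EXPLAIN ANALYZE SELECT c1, c2 FROM t1 WHERE c1 = '6258261';

After this, the rows= estimate in the EXPLAIN output should normally land much closer to the handful of rows actually returned.)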
Probably want to tune \nrandom_page_cost as well if it's also at the default.\n\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n", "msg_date": "Thu, 1 Mar 2007 13:36:37 -0800 (PST)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "On 3/1/07, Jeff Frost <[email protected]> wrote:\n> On Thu, 1 Mar 2007, Alex Deucher wrote:\n>\n> >> Vacuum? Analayze? default_statistics_target? How many shared_buffers?\n> >> effective_cache_size? work_mem?\n> >>\n> >\n> > I'm running the autovacuum process on the 8.1 server. vacuuming on\n> > the old server was done manually.\n> >\n> > default_statistics_target and effective_cache_size are set to the the\n> > defaults on both.\n> >\n> > postgres 7.4 server:\n> > # - Memory -\n> > shared_buffers = 82000 # 1000 min 16, at least\n> > max_connections*2, 8KB each\n> > sort_mem = 8000 # 1024 min 64, size in KB\n> > vacuum_mem = 32000 # 8192 min 1024, size in KB\n> > # - Free Space Map -\n> > #max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each\n> > #max_fsm_relations = 1000 # min 100, ~50 bytes each\n> > # - Kernel Resource Usage -\n> > #max_files_per_process = 1000 # min 25\n> >\n> > postgres 8.1 server:\n> > # - Memory -\n> > shared_buffers = 100000 # min 16 or max_connections*2, 8KB\n> > each\n> > temp_buffers = 2000 #1000 # min 100, 8KB each\n> > max_prepared_transactions = 100 #5 # can be 0 or more\n> > # note: increasing max_prepared_transactions costs ~600 bytes of shared\n> > memory\n> > # per transaction slot, plus lock space (see max_locks_per_transaction).\n> > work_mem = 10000 #1024 # min 64, size in KB\n> > maintenance_work_mem = 524288 #16384 # min 1024, size in KB\n> > #max_stack_depth = 2048 # min 100, size in KB\n> >\n> > I've also tried using the same settings from the old server on the new\n> > one; same performance issues.\n> >\n>\n> If this is a linux system, could you give us the output of the 'free' command?\n\n total used free shared buffers cached\nMem: 8059852 8042868 16984 0 228 7888648\n-/+ buffers/cache: 153992 7905860\nSwap: 15631224 2164 15629060\n\n\n> Postgresql might be choosing a bad plan because your effective_cache_size is\n> way off (it's the default now right?). Also, what was the block read/write\n\nyes it's set to the default.\n\n> speed of the SAN from your bonnie tests? Probably want to tune\n> random_page_cost as well if it's also at the default.\n>\n\n\t ------Sequential Output------ --Sequential Input- --Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\nluna12-san 16000M 58896 91 62931 9 35870 5 54869 82 145504 13 397.7 0\n\neffective_cache_size is the default.\n\nAlex\n", "msg_date": "Thu, 1 Mar 2007 16:50:36 -0500", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "On Thu, 1 Mar 2007, Alex Deucher wrote:\n\n> On 3/1/07, Jeff Frost <[email protected]> wrote:\n>> On Thu, 1 Mar 2007, Alex Deucher wrote:\n>> \n>> >> Vacuum? Analayze? default_statistics_target? How many shared_buffers?\n>> >> effective_cache_size? work_mem?\n>> >>\n>> >\n>> > I'm running the autovacuum process on the 8.1 server. 
vacuuming on\n>> > the old server was done manually.\n>> >\n>> > default_statistics_target and effective_cache_size are set to the the\n>> > defaults on both.\n>> >\n>> > postgres 7.4 server:\n>> > # - Memory -\n>> > shared_buffers = 82000 # 1000 min 16, at least\n>> > max_connections*2, 8KB each\n>> > sort_mem = 8000 # 1024 min 64, size in KB\n>> > vacuum_mem = 32000 # 8192 min 1024, size in KB\n>> > # - Free Space Map -\n>> > #max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each\n>> > #max_fsm_relations = 1000 # min 100, ~50 bytes each\n>> > # - Kernel Resource Usage -\n>> > #max_files_per_process = 1000 # min 25\n>> >\n>> > postgres 8.1 server:\n>> > # - Memory -\n>> > shared_buffers = 100000 # min 16 or max_connections*2, \n>> 8KB\n>> > each\n>> > temp_buffers = 2000 #1000 # min 100, 8KB each\n>> > max_prepared_transactions = 100 #5 # can be 0 or more\n>> > # note: increasing max_prepared_transactions costs ~600 bytes of shared\n>> > memory\n>> > # per transaction slot, plus lock space (see max_locks_per_transaction).\n>> > work_mem = 10000 #1024 # min 64, size in KB\n>> > maintenance_work_mem = 524288 #16384 # min 1024, size in KB\n>> > #max_stack_depth = 2048 # min 100, size in KB\n>> >\n>> > I've also tried using the same settings from the old server on the new\n>> > one; same performance issues.\n>> >\n>> \n>> If this is a linux system, could you give us the output of the 'free' \n>> command?\n>\n> total used free shared buffers cached\n> Mem: 8059852 8042868 16984 0 228 7888648\n> -/+ buffers/cache: 153992 7905860\n> Swap: 15631224 2164 15629060\n\nSo, I would set effective_cache_size = 988232 (7905860/8).\n\n>\n>> Postgresql might be choosing a bad plan because your effective_cache_size \n>> is\n>> way off (it's the default now right?). Also, what was the block read/write\n>\n> yes it's set to the default.\n>\n>> speed of the SAN from your bonnie tests? Probably want to tune\n>> random_page_cost as well if it's also at the default.\n>> \n>\n> \t ------Sequential Output------ --Sequential Input- \n> --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- \n> --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec \n> %CP\n> luna12-san 16000M 58896 91 62931 9 35870 5 54869 82 145504 13 397.7 \n> 0\n>\n\nSo, you're getting 62MB/s writes and 145MB/s reads. Just FYI, that write \nspeed is about the same as my single SATA drive write speed on my workstation, \nso not that great. The read speed is decent, though and with that sort of \nread performance, you might want to lower random_page_cost to something like \n2.5 or 2 so the planner will tend to prefer index scans.\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n", "msg_date": "Thu, 1 Mar 2007 14:01:28 -0800 (PST)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "On 3/1/07, Jeff Frost <[email protected]> wrote:\n> On Thu, 1 Mar 2007, Alex Deucher wrote:\n>\n> > On 3/1/07, Jeff Frost <[email protected]> wrote:\n> >> On Thu, 1 Mar 2007, Alex Deucher wrote:\n> >>\n> >> >> Vacuum? Analayze? default_statistics_target? How many shared_buffers?\n> >> >> effective_cache_size? work_mem?\n> >> >>\n> >> >\n> >> > I'm running the autovacuum process on the 8.1 server. 
vacuuming on\n> >> > the old server was done manually.\n> >> >\n> >> > default_statistics_target and effective_cache_size are set to the the\n> >> > defaults on both.\n> >> >\n> >> > postgres 7.4 server:\n> >> > # - Memory -\n> >> > shared_buffers = 82000 # 1000 min 16, at least\n> >> > max_connections*2, 8KB each\n> >> > sort_mem = 8000 # 1024 min 64, size in KB\n> >> > vacuum_mem = 32000 # 8192 min 1024, size in KB\n> >> > # - Free Space Map -\n> >> > #max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each\n> >> > #max_fsm_relations = 1000 # min 100, ~50 bytes each\n> >> > # - Kernel Resource Usage -\n> >> > #max_files_per_process = 1000 # min 25\n> >> >\n> >> > postgres 8.1 server:\n> >> > # - Memory -\n> >> > shared_buffers = 100000 # min 16 or max_connections*2,\n> >> 8KB\n> >> > each\n> >> > temp_buffers = 2000 #1000 # min 100, 8KB each\n> >> > max_prepared_transactions = 100 #5 # can be 0 or more\n> >> > # note: increasing max_prepared_transactions costs ~600 bytes of shared\n> >> > memory\n> >> > # per transaction slot, plus lock space (see max_locks_per_transaction).\n> >> > work_mem = 10000 #1024 # min 64, size in KB\n> >> > maintenance_work_mem = 524288 #16384 # min 1024, size in KB\n> >> > #max_stack_depth = 2048 # min 100, size in KB\n> >> >\n> >> > I've also tried using the same settings from the old server on the new\n> >> > one; same performance issues.\n> >> >\n> >>\n> >> If this is a linux system, could you give us the output of the 'free'\n> >> command?\n> >\n> > total used free shared buffers cached\n> > Mem: 8059852 8042868 16984 0 228 7888648\n> > -/+ buffers/cache: 153992 7905860\n> > Swap: 15631224 2164 15629060\n>\n> So, I would set effective_cache_size = 988232 (7905860/8).\n>\n> >\n> >> Postgresql might be choosing a bad plan because your effective_cache_size\n> >> is\n> >> way off (it's the default now right?). Also, what was the block read/write\n> >\n> > yes it's set to the default.\n> >\n> >> speed of the SAN from your bonnie tests? Probably want to tune\n> >> random_page_cost as well if it's also at the default.\n> >>\n> >\n> > ------Sequential Output------ --Sequential Input-\n> > --Random-\n> > -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> > --Seeks--\n> > Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec\n> > %CP\n> > luna12-san 16000M 58896 91 62931 9 35870 5 54869 82 145504 13 397.7\n> > 0\n> >\n>\n> So, you're getting 62MB/s writes and 145MB/s reads. Just FYI, that write\n> speed is about the same as my single SATA drive write speed on my workstation,\n> so not that great. The read speed is decent, though and with that sort of\n> read performance, you might want to lower random_page_cost to something like\n> 2.5 or 2 so the planner will tend to prefer index scans.\n>\n\nRight, but the old box was getting ~45MBps on both reads and writes,\nso it's an improvement for me :) Thanks for the advice, I'll let you\nknow how it goes.\n\nAlex\n", "msg_date": "Thu, 1 Mar 2007 17:12:02 -0500", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "On 3/1/07, Jeff Frost <[email protected]> wrote:\n> On Thu, 1 Mar 2007, Joshua D. Drake wrote:\n>\n> > Alex Deucher wrote:\n> >> Hello,\n> >>\n> >> I have noticed a strange performance regression and I'm at a loss as\n> >> to what's happening. We have a fairly large database (~16 GB). 
The\n> >> original postgres 7.4 was running on a sun v880 with 4 CPUs and 8 GB\n> >> of ram running Solaris on local scsi discs. The new server is a sun\n> >> Opteron box with 4 cores, 8 GB of ram running postgres 8.1.4 on Linux\n> >> (AMD64) on a 4 Gbps FC SAN volume. When we created the new database\n> >> it was created from scratch rather than copying over the old one,\n> >> however the table structure is almost identical (UTF8 on the new one\n> >> vs. C on the old). The problem is queries are ~10x slower on the new\n> >> hardware. I read several places that the SAN might be to blame, but\n> >> testing with bonnie and dd indicates that the SAN is actually almost\n> >> twice as fast as the scsi discs in the old sun server. I've tried\n> >> adjusting just about every option in the postgres config file, but\n> >> performance remains the same. Any ideas?\n> >\n> > Vacuum? Analayze? default_statistics_target? How many shared_buffers?\n> > effective_cache_size? work_mem?\n>\n> Also, an explain analyze from both the 7.4 and 8.1 systems with one of the\n> 10x slower queries would probably be handy.\n\nhere are some examples. Analyze is still running on the new db, I'll\npost results when that is done. Mostly what our apps do is prepared\nrow selects from different tables:\nselect c1,c2,c3,c4,c5 from t1 where c1='XXX';\n\nold server:\ndb=# EXPLAIN ANALYZE select c1,c2 from t1 where c2='6258261';\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------\n Index Scan using t1_c2_index on t1 (cost=0.00..166.89 rows=42\nwidth=26) (actual time=5.722..5.809 rows=2 loops=1)\n Index Cond: ((c2)::text = '6258261'::text)\n Total runtime: 5.912 ms\n(3 rows)\n\ndb=# EXPLAIN ANALYZE select c1,c2 from t1 where c1='6258261';\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n Index Scan using t1_c1_key on t1 (cost=0.00..286.08 rows=72\nwidth=26) (actual time=12.423..12.475 rows=12 loops=1)\n Index Cond: ((c1)::text = '6258261'::text)\n Total runtime: 12.538 ms\n(3 rows)\n\n\nnew server:\ndb=# EXPLAIN ANALYZE select c1,c2 from t1 where c2='6258261';\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n Index Scan using t1_c2_index on t1 (cost=0.00..37.63 rows=11\nwidth=26) (actual time=33.461..51.377 rows=2 loops=1)\n Index Cond: ((c2)::text = '6258261'::text)\n Total runtime: 51.419 ms\n(3 rows)\n\ndb=# EXPLAIN ANALYZE select c1,c2 from t1 where c1='6258261';\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------\n Index Scan using t1_c1_index on t1 (cost=0.00..630.45 rows=2907\nwidth=26) (actual time=45.733..46.271 rows=12 loops=1)\n Index Cond: ((c1)::text = '6258261'::text)\n Total runtime: 46.325 ms\n(3 rows)\n\n\nAlex\n", "msg_date": "Thu, 1 Mar 2007 18:11:21 -0500", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "On Thu, 1 Mar 2007, Alex Deucher wrote:\n\n> here are some examples. Analyze is still running on the new db, I'll\n> post results when that is done. 
Mostly what our apps do is prepared\n> row selects from different tables:\n> select c1,c2,c3,c4,c5 from t1 where c1='XXX';\n>\n> old server:\n> db=# EXPLAIN ANALYZE select c1,c2 from t1 where c2='6258261';\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------\n> Index Scan using t1_c2_index on t1 (cost=0.00..166.89 rows=42\n> width=26) (actual time=5.722..5.809 rows=2 loops=1)\n> Index Cond: ((c2)::text = '6258261'::text)\n> Total runtime: 5.912 ms\n> (3 rows)\n>\n> db=# EXPLAIN ANALYZE select c1,c2 from t1 where c1='6258261';\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------\n> Index Scan using t1_c1_key on t1 (cost=0.00..286.08 rows=72\n> width=26) (actual time=12.423..12.475 rows=12 loops=1)\n> Index Cond: ((c1)::text = '6258261'::text)\n> Total runtime: 12.538 ms\n> (3 rows)\n>\n>\n> new server:\n> db=# EXPLAIN ANALYZE select c1,c2 from t1 where c2='6258261';\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------\n> Index Scan using t1_c2_index on t1 (cost=0.00..37.63 rows=11\n> width=26) (actual time=33.461..51.377 rows=2 loops=1)\n> Index Cond: ((c2)::text = '6258261'::text)\n> Total runtime: 51.419 ms\n> (3 rows)\n>\n> db=# EXPLAIN ANALYZE select c1,c2 from t1 where c1='6258261';\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using t1_c1_index on t1 (cost=0.00..630.45 rows=2907\n> width=26) (actual time=45.733..46.271 rows=12 loops=1)\n> Index Cond: ((c1)::text = '6258261'::text)\n> Total runtime: 46.325 ms\n> (3 rows)\n\nNotice the huge disparity here betwen the expected number of rows (2907) and \nthe actual rows? That's indicative of needing to run analyze. The time is \nonly about 4x the 7.4 runtime and that's with the analyze running merrily \nalong in the background. It's probably not as bad off as you think. At least \nthis query isn't 10x. :-)\n\nRun these again for us after analyze is complete.\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n", "msg_date": "Thu, 1 Mar 2007 16:21:32 -0800 (PST)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "On Thu, 1 Mar 2007, Alex Deucher wrote:\n\n>> >> Postgresql might be choosing a bad plan because your \n>> effective_cache_size\n>> >> is\n>> >> way off (it's the default now right?). Also, what was the block \n>> read/write\n>> >\n>> > yes it's set to the default.\n>> >\n>> >> speed of the SAN from your bonnie tests? Probably want to tune\n>> >> random_page_cost as well if it's also at the default.\n>> >>\n>> >\n>> > ------Sequential Output------ --Sequential Input-\n>> > --Random-\n>> > -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n>> > --Seeks--\n>> > Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP \n>> /sec\n>> > %CP\n>> > luna12-san 16000M 58896 91 62931 9 35870 5 54869 82 145504 13 \n>> 397.7\n>> > 0\n>> >\n>> \n>> So, you're getting 62MB/s writes and 145MB/s reads. Just FYI, that write\n>> speed is about the same as my single SATA drive write speed on my \n>> workstation,\n>> so not that great. 
The read speed is decent, though and with that sort of\n>> read performance, you might want to lower random_page_cost to something \n>> like\n>> 2.5 or 2 so the planner will tend to prefer index scans.\n>> \n>\n> Right, but the old box was getting ~45MBps on both reads and writes,\n> so it's an improvement for me :) Thanks for the advice, I'll let you\n> know how it goes.\n\nDo you think that is because you have a different interface between you and \nthe SAN? ~45MBps is pretty slow - your average 7200RPM ATA133 drive can do \nthat and costs quite a bit less than a SAN.\n\nIs the SAN being shared between the database servers and other servers? Maybe \nit was just random timing that gave you the poor write performance on the old \nserver which might be also yielding occassional poor performance on the new \none.\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n", "msg_date": "Thu, 1 Mar 2007 16:36:44 -0800 (PST)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "At 07:36 PM 3/1/2007, Jeff Frost wrote:\n>On Thu, 1 Mar 2007, Alex Deucher wrote:\n>\n>>> >> Postgresql might be choosing a bad plan because your \n>>> effective_cache_size\n>>> >> is\n>>> >> way off (it's the default now right?). Also, what was the \n>>> block read/write\n>>> >\n>>> > yes it's set to the default.\n>>> >\n>>> >> speed of the SAN from your bonnie tests? Probably want to tune\n>>> >> random_page_cost as well if it's also at the default.\n>>> >>\n>>> >\n>>> > ------Sequential Output------ --Sequential Input-\n>>> > --Random-\n>>> > -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n>>> > --Seeks--\n>>> > Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP \n>>> K/sec %CP /sec\n>>> > %CP\n>>> > luna12-san 16000M 58896 91 62931 9 35870 5 54869 82 \n>>> 145504 13 397.7\n>>> > 0\n>>> >\n>>>So, you're getting 62MB/s writes and 145MB/s reads. Just FYI, that write\n>>>speed is about the same as my single SATA drive write speed on my \n>>>workstation,\n>>>so not that great. The read speed is decent, though and with that sort of\n>>>read performance, you might want to lower random_page_cost to something like\n>>>2.5 or 2 so the planner will tend to prefer index scans.\n>>\n>>Right, but the old box was getting ~45MBps on both reads and writes,\n>>so it's an improvement for me :) Thanks for the advice, I'll let you\n>>know how it goes.\n>\n>Do you think that is because you have a different interface between \n>you and the SAN? ~45MBps is pretty slow - your average 7200RPM \n>ATA133 drive can do that and costs quite a bit less than a SAN.\n>\n>Is the SAN being shared between the database servers and other \n>servers? Maybe it was just random timing that gave you the poor \n>write performance on the old server which might be also yielding \n>occassional poor performance on the new one.\n\nRemember that pg, even pg 8.2.3, has a known history of very poor \ninsert speed (see comments on this point by Josh Berkus, Luke Lonergan, etc)\n\nFor some reason, the code changes that have resulted in dramatic \nimprovements in pg's read speed have not had nearly the same efficacy \nfor writes.\n\nBottom line: pg presently has a fairly low and fairly harsh upper \nbound on write performance. 
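(As a generic illustration of where that per-row write overhead shows up, and not a measurement from either server in this thread: committing every INSERT separately pays a WAL flush per row, while batching rows into one transaction, or loading with COPY, amortizes that cost. The t1/c1/c2 names are the anonymized ones used elsewhere in the thread and the values and file path are placeholders:

  -- many single-row inserts, one commit: one WAL flush for the whole batch
  BEGIN;
  INSERT INTO t1 (c1, c2) VALUES ('val-0001', 'val-0002');
  INSERT INTO t1 (c1, c2) VALUES ('val-0003', 'val-0004');
  -- ...thousands more rows...
  COMMIT;

  -- bulk loads are faster still via server-side COPY (needs superuser)
  COPY t1 (c1, c2) FROM '/tmp/t1_bulk.dat';
)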
What exactly that bound is has been the \nsubject of some discussion, but IIUC the fact of its existence is \nwell established.\n\nVarious proposals for improving the situation exist, I've even made \nsome of them, but AFAIK this is currently considered one of the \n\"tough pg problems\".\n\nCheers,\nRon Peacetree \n\n", "msg_date": "Thu, 01 Mar 2007 20:01:11 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange performance regression between 7.4 and\n 8.1" }, { "msg_contents": "On 3/1/07, Jeff Frost <[email protected]> wrote:\n> On Thu, 1 Mar 2007, Alex Deucher wrote:\n>\n> >> >> Postgresql might be choosing a bad plan because your\n> >> effective_cache_size\n> >> >> is\n> >> >> way off (it's the default now right?). Also, what was the block\n> >> read/write\n> >> >\n> >> > yes it's set to the default.\n> >> >\n> >> >> speed of the SAN from your bonnie tests? Probably want to tune\n> >> >> random_page_cost as well if it's also at the default.\n> >> >>\n> >> >\n> >> > ------Sequential Output------ --Sequential Input-\n> >> > --Random-\n> >> > -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> >> > --Seeks--\n> >> > Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n> >> /sec\n> >> > %CP\n> >> > luna12-san 16000M 58896 91 62931 9 35870 5 54869 82 145504 13\n> >> 397.7\n> >> > 0\n> >> >\n> >>\n> >> So, you're getting 62MB/s writes and 145MB/s reads. Just FYI, that write\n> >> speed is about the same as my single SATA drive write speed on my\n> >> workstation,\n> >> so not that great. The read speed is decent, though and with that sort of\n> >> read performance, you might want to lower random_page_cost to something\n> >> like\n> >> 2.5 or 2 so the planner will tend to prefer index scans.\n> >>\n> >\n> > Right, but the old box was getting ~45MBps on both reads and writes,\n> > so it's an improvement for me :) Thanks for the advice, I'll let you\n> > know how it goes.\n>\n> Do you think that is because you have a different interface between you and\n> the SAN? ~45MBps is pretty slow - your average 7200RPM ATA133 drive can do\n> that and costs quite a bit less than a SAN.\n>\n> Is the SAN being shared between the database servers and other servers? Maybe\n> it was just random timing that gave you the poor write performance on the old\n> server which might be also yielding occassional poor performance on the new\n> one.\n>\n\nThe direct attached scsi discs on the old database server we getting\n45MBps not the SAN. The SAN got 62/145Mbps, which is not as bad. We\nhave 4 servers on the SAN each with it's own 4 GBps FC link via an FC\nswitch. I'll try and re-run the numbers when the servers are idle\nthis weekend.\n\nAlex\n", "msg_date": "Thu, 1 Mar 2007 20:44:33 -0500", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "\\\n>> Is the SAN being shared between the database servers and other\n>> servers? Maybe\n>> it was just random timing that gave you the poor write performance on\n>> the old\n>> server which might be also yielding occassional poor performance on\n>> the new\n>> one.\n>>\n> \n> The direct attached scsi discs on the old database server we getting\n> 45MBps not the SAN. The SAN got 62/145Mbps, which is not as bad. \n\nHow many spindles you got in that SAN?\n\n We\n> have 4 servers on the SAN each with it's own 4 GBps FC link via an FC\n> switch. 
I'll try and re-run the numbers when the servers are idle\n> this weekend.\n> \n> Alex\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Thu, 01 Mar 2007 17:46:56 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "On 3/1/07, Joshua D. Drake <[email protected]> wrote:\n> \\\n> >> Is the SAN being shared between the database servers and other\n> >> servers? Maybe\n> >> it was just random timing that gave you the poor write performance on\n> >> the old\n> >> server which might be also yielding occassional poor performance on\n> >> the new\n> >> one.\n> >>\n> >\n> > The direct attached scsi discs on the old database server we getting\n> > 45MBps not the SAN. The SAN got 62/145Mbps, which is not as bad.\n>\n> How many spindles you got in that SAN?\n\n105 IIRC.\n\nAlex\n", "msg_date": "Thu, 1 Mar 2007 20:47:19 -0500", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "On Thu, 1 Mar 2007, Alex Deucher wrote:\n\n> On 3/1/07, Jeff Frost <[email protected]> wrote:\n>> On Thu, 1 Mar 2007, Alex Deucher wrote:\n>> \n>> >> >> Postgresql might be choosing a bad plan because your\n>> >> effective_cache_size\n>> >> >> is\n>> >> >> way off (it's the default now right?). Also, what was the block\n>> >> read/write\n>> >> >\n>> >> > yes it's set to the default.\n>> >> >\n>> >> >> speed of the SAN from your bonnie tests? Probably want to tune\n>> >> >> random_page_cost as well if it's also at the default.\n>> >> >>\n>> >> >\n>> >> > ------Sequential Output------ --Sequential Input-\n>> >> > --Random-\n>> >> > -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n>> >> > --Seeks--\n>> >> > Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n>> >> /sec\n>> >> > %CP\n>> >> > luna12-san 16000M 58896 91 62931 9 35870 5 54869 82 145504 13\n>> >> 397.7\n>> >> > 0\n>> >> >\n>> >>\n>> >> So, you're getting 62MB/s writes and 145MB/s reads. Just FYI, that \n>> write\n>> >> speed is about the same as my single SATA drive write speed on my\n>> >> workstation,\n>> >> so not that great. The read speed is decent, though and with that sort \n>> of\n>> >> read performance, you might want to lower random_page_cost to something\n>> >> like\n>> >> 2.5 or 2 so the planner will tend to prefer index scans.\n>> >>\n>> >\n>> > Right, but the old box was getting ~45MBps on both reads and writes,\n>> > so it's an improvement for me :) Thanks for the advice, I'll let you\n>> > know how it goes.\n>> \n>> Do you think that is because you have a different interface between you and\n>> the SAN? ~45MBps is pretty slow - your average 7200RPM ATA133 drive can do\n>> that and costs quite a bit less than a SAN.\n>> \n>> Is the SAN being shared between the database servers and other servers? 
\n>> Maybe\n>> it was just random timing that gave you the poor write performance on the \n>> old\n>> server which might be also yielding occassional poor performance on the new\n>> one.\n>> \n>\n> The direct attached scsi discs on the old database server we getting\n> 45MBps not the SAN. The SAN got 62/145Mbps, which is not as bad. We\n> have 4 servers on the SAN each with it's own 4 GBps FC link via an FC\n> switch. I'll try and re-run the numbers when the servers are idle\n> this weekend.\n\nSorry, I thought the old server was also attached to the SAN. My fault for \nnot hanging onto the entire email thread.\n\nI think you're mixing and matching your capitol and lower case Bs in your \nsentence above though. :-)\n\nI suspect what you really mean is The SAN got 62/145MBps (megabytes/sec) and \nteh FC link is 4Gbps (gigabits/sec) or 500MBps. Is that correct? If so, and \nseeing that you think there are 105 spindles on the SAN, I'd say you're either \nmaxxing out the switch fabric of the SAN with your servers or you have a \nreally poorly performing SAN in general, or you just misunderstood the .\n\nAs a comparison With 8 WD Raptors configured in a RAID10 with normal ext3 I \nget about 160MB/s write and 305MB/s read performance. Hopefully the SAN has \nlots of other super nifty features that make up for the poor performance. :-(\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n", "msg_date": "Thu, 1 Mar 2007 17:56:38 -0800 (PST)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "On 3/1/07, Jeff Frost <[email protected]> wrote:\n> On Thu, 1 Mar 2007, Alex Deucher wrote:\n>\n> > On 3/1/07, Jeff Frost <[email protected]> wrote:\n> >> On Thu, 1 Mar 2007, Alex Deucher wrote:\n> >>\n> >> >> >> Postgresql might be choosing a bad plan because your\n> >> >> effective_cache_size\n> >> >> >> is\n> >> >> >> way off (it's the default now right?). Also, what was the block\n> >> >> read/write\n> >> >> >\n> >> >> > yes it's set to the default.\n> >> >> >\n> >> >> >> speed of the SAN from your bonnie tests? Probably want to tune\n> >> >> >> random_page_cost as well if it's also at the default.\n> >> >> >>\n> >> >> >\n> >> >> > ------Sequential Output------ --Sequential Input-\n> >> >> > --Random-\n> >> >> > -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> >> >> > --Seeks--\n> >> >> > Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n> >> >> /sec\n> >> >> > %CP\n> >> >> > luna12-san 16000M 58896 91 62931 9 35870 5 54869 82 145504 13\n> >> >> 397.7\n> >> >> > 0\n> >> >> >\n> >> >>\n> >> >> So, you're getting 62MB/s writes and 145MB/s reads. Just FYI, that\n> >> write\n> >> >> speed is about the same as my single SATA drive write speed on my\n> >> >> workstation,\n> >> >> so not that great. The read speed is decent, though and with that sort\n> >> of\n> >> >> read performance, you might want to lower random_page_cost to something\n> >> >> like\n> >> >> 2.5 or 2 so the planner will tend to prefer index scans.\n> >> >>\n> >> >\n> >> > Right, but the old box was getting ~45MBps on both reads and writes,\n> >> > so it's an improvement for me :) Thanks for the advice, I'll let you\n> >> > know how it goes.\n> >>\n> >> Do you think that is because you have a different interface between you and\n> >> the SAN? 
~45MBps is pretty slow - your average 7200RPM ATA133 drive can do\n> >> that and costs quite a bit less than a SAN.\n> >>\n> >> Is the SAN being shared between the database servers and other servers?\n> >> Maybe\n> >> it was just random timing that gave you the poor write performance on the\n> >> old\n> >> server which might be also yielding occassional poor performance on the new\n> >> one.\n> >>\n> >\n> > The direct attached scsi discs on the old database server we getting\n> > 45MBps not the SAN. The SAN got 62/145Mbps, which is not as bad. We\n> > have 4 servers on the SAN each with it's own 4 GBps FC link via an FC\n> > switch. I'll try and re-run the numbers when the servers are idle\n> > this weekend.\n>\n> Sorry, I thought the old server was also attached to the SAN. My fault for\n> not hanging onto the entire email thread.\n>\n> I think you're mixing and matching your capitol and lower case Bs in your\n> sentence above though. :-)\n\nwhoops :)\n\n>\n> I suspect what you really mean is The SAN got 62/145MBps (megabytes/sec) and\n> teh FC link is 4Gbps (gigabits/sec) or 500MBps. Is that correct? If so, and\n> seeing that you think there are 105 spindles on the SAN, I'd say you're either\n> maxxing out the switch fabric of the SAN with your servers or you have a\n> really poorly performing SAN in general, or you just misunderstood the .\n>\n> As a comparison With 8 WD Raptors configured in a RAID10 with normal ext3 I\n> get about 160MB/s write and 305MB/s read performance. Hopefully the SAN has\n> lots of other super nifty features that make up for the poor performance. :-(\n>\n\nIt's big and reliable (and compared to lots of others, relatively\ninexpensive) which is why we bought it. We bought it mostly as a huge\nfile store. The RAID groups on the SAN were set up for maximum\ncapacity rather than for performance. Using it for the databases just\ncame up recently.\n\nAlex\n", "msg_date": "Thu, 1 Mar 2007 21:06:54 -0500", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "Alex Deucher wrote:\n> On 3/1/07, Joshua D. Drake <[email protected]> wrote:\n>> \\\n>> >> Is the SAN being shared between the database servers and other\n>> >> servers? Maybe\n>> >> it was just random timing that gave you the poor write performance on\n>> >> the old\n>> >> server which might be also yielding occassional poor performance on\n>> >> the new\n>> >> one.\n>> >>\n>> >\n>> > The direct attached scsi discs on the old database server we getting\n>> > 45MBps not the SAN. The SAN got 62/145Mbps, which is not as bad.\n>>\n>> How many spindles you got in that SAN?\n> \n> 105 IIRC.\n\nYou have 105 spindles are you are only get 62megs on writes? That seems\nabout half what you should be getting. (at least).\n\nJoshua D. Drake\n\n\n> \n> Alex\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Thu, 01 Mar 2007 18:21:36 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "On 3/1/07, Joshua D. 
Drake <[email protected]> wrote:\n> Alex Deucher wrote:\n> > On 3/1/07, Joshua D. Drake <[email protected]> wrote:\n> >> \\\n> >> >> Is the SAN being shared between the database servers and other\n> >> >> servers? Maybe\n> >> >> it was just random timing that gave you the poor write performance on\n> >> >> the old\n> >> >> server which might be also yielding occassional poor performance on\n> >> >> the new\n> >> >> one.\n> >> >>\n> >> >\n> >> > The direct attached scsi discs on the old database server we getting\n> >> > 45MBps not the SAN. The SAN got 62/145Mbps, which is not as bad.\n> >>\n> >> How many spindles you got in that SAN?\n> >\n> > 105 IIRC.\n>\n> You have 105 spindles are you are only get 62megs on writes? That seems\n> about half what you should be getting. (at least).\n>\n\nTake the numbers with grain of salt. They are by no means a thorough\nevaluation. I just ran bonnie a couple times to get a rough reference\npoint. I can do a more thorough analysis.\n\nAlex\n\n> Joshua D. Drake\n>\n>\n> >\n> > Alex\n> >\n>\n", "msg_date": "Thu, 1 Mar 2007 22:16:37 -0500", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "* Alex Deucher:\n\n> I have noticed a strange performance regression and I'm at a loss as\n> to what's happening. We have a fairly large database (~16 GB).\n\nSorry for asking, but is this a typo? Do you mean 16 *TB* instead of\n16 *GB*?\n\nIf it's really 16 GB, you should check if it's cheaper to buy more RAM\nthan to fiddle with the existing infrastructure.\n\n> however the table structure is almost identical (UTF8 on the new one\n> vs. C on the old).\n\nLocale settings make a huge difference for sorting and LIKE queries.\nWe usually use the C locale and SQL_ASCII encoding, mostly for\nperformance reasons. (Proper UTF-8 can be enforced through\nconstraints if necessary.)\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n", "msg_date": "Fri, 02 Mar 2007 09:51:58 +0100", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "Florian Weimer wrote:\n> * Alex Deucher:\n>\n> \n>> I have noticed a strange performance regression and I'm at a loss as\n>> to what's happening. We have a fairly large database (~16 GB).\n>> \n>\n> Sorry for asking, but is this a typo? Do you mean 16 *TB* instead of\n> 16 *GB*?\n>\n> If it's really 16 GB, you should check if it's cheaper to buy more RAM\n> than to fiddle with the existing infrastructure.\n> \n\nThis brings me to a related question:\n\nDo I need to specifically configure something to take advantage of\nsuch increase of RAM?\n\nIn particular, is the amount of things that postgres can do with RAM\nlimited by the amount of shared_buffers or some other parameter?\nShould shared_buffers be a fixed fraction of the total amount of\nphysical RAM, or should it be the total amount minus half a gigabyte\nor so?\n\nAs an example, if one upgrades a host from 1GB to 4GB, what would\nbe the right thing to do in the configuration, assuming 8.1 or 8.2? 
(at\nleast what would be the critical aspects?)\n\nThanks,\n\nCarlos\n--\n\n", "msg_date": "Fri, 02 Mar 2007 08:56:59 -0500", "msg_from": "Carlos Moreno <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "On 3/2/07, Florian Weimer <[email protected]> wrote:\n> * Alex Deucher:\n>\n> > I have noticed a strange performance regression and I'm at a loss as\n> > to what's happening. We have a fairly large database (~16 GB).\n>\n> Sorry for asking, but is this a typo? Do you mean 16 *TB* instead of\n> 16 *GB*?\n>\n> If it's really 16 GB, you should check if it's cheaper to buy more RAM\n> than to fiddle with the existing infrastructure.\n>\n\nYes, 16 GB. I'd rather not shell out for more ram, if I'm not even\nsure that will help. The new system should be faster, or at least as\nfast, so I'd like to sort out what's going on before I buy more ram.\n\n\n> > however the table structure is almost identical (UTF8 on the new one\n> > vs. C on the old).\n>\n> Locale settings make a huge difference for sorting and LIKE queries.\n> We usually use the C locale and SQL_ASCII encoding, mostly for\n> performance reasons. (Proper UTF-8 can be enforced through\n> constraints if necessary.)\n>\n\nI suppose that might be a factor. How much of a performance\ndifference do you see between utf-8 and C?\n\n\nAlex\n", "msg_date": "Fri, 2 Mar 2007 10:16:37 -0500", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "At 08:56 AM 3/2/2007, Carlos Moreno wrote:\n>Florian Weimer wrote:\n>>* Alex Deucher:\n>>\n>>\n>>>I have noticed a strange performance regression and I'm at a loss as\n>>>to what's happening. We have a fairly large database (~16 GB).\n>>>\n>>\n>>Sorry for asking, but is this a typo? Do you mean 16 *TB* instead of\n>>16 *GB*?\n>>\n>>If it's really 16 GB, you should check if it's cheaper to buy more RAM\n>>than to fiddle with the existing infrastructure.\n>>\n>\n>This brings me to a related question:\n>\n>Do I need to specifically configure something to take advantage of\n>such increase of RAM?\n>\n>In particular, is the amount of things that postgres can do with RAM\n>limited by the amount of shared_buffers or some other parameter?\n>Should shared_buffers be a fixed fraction of the total amount of\n>physical RAM, or should it be the total amount minus half a gigabyte\n>or so?\n>\n>As an example, if one upgrades a host from 1GB to 4GB, what would\n>be the right thing to do in the configuration, assuming 8.1 or 8.2? (at\n>least what would be the critical aspects?)\n>\n>Thanks,\n>\n>Carlos\n\nUnfortunately, pg does not (yet! 
;-) ) treat all available RAM as a \ncommon pool and dynamically allocate it intelligently to each of the \nvarious memory data structures.\n\nSo if you increase your RAM, you will have to manually change the \nentries in the pg config file to take advantage of it.\n(and start pg after changing it for the new config values to take effect)\n\nThe pertinent values are all those listed under \"Memory\" in the \nannotated pg conf file: shared_buffers, work_mem, maintenance_work_mem, etc.\nhttp://www.powerpostgresql.com/Downloads/annotated_conf_80.html\n\nCheers,\nRon Peacetree\n\n", "msg_date": "Fri, 02 Mar 2007 10:21:17 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange performance regression between 7.4 and\n 8.1" }, { "msg_contents": "On 3/1/07, Jeff Frost <[email protected]> wrote:\n> On Thu, 1 Mar 2007, Alex Deucher wrote:\n>\n> > here are some examples. Analyze is still running on the new db, I'll\n> > post results when that is done. Mostly what our apps do is prepared\n> > row selects from different tables:\n> > select c1,c2,c3,c4,c5 from t1 where c1='XXX';\n> >\n> > old server:\n> > db=# EXPLAIN ANALYZE select c1,c2 from t1 where c2='6258261';\n> > QUERY PLAN\n> > ---------------------------------------------------------------------------------------------------------------------------\n> > Index Scan using t1_c2_index on t1 (cost=0.00..166.89 rows=42\n> > width=26) (actual time=5.722..5.809 rows=2 loops=1)\n> > Index Cond: ((c2)::text = '6258261'::text)\n> > Total runtime: 5.912 ms\n> > (3 rows)\n> >\n> > db=# EXPLAIN ANALYZE select c1,c2 from t1 where c1='6258261';\n> > QUERY PLAN\n> > ----------------------------------------------------------------------------------------------------------------------------\n> > Index Scan using t1_c1_key on t1 (cost=0.00..286.08 rows=72\n> > width=26) (actual time=12.423..12.475 rows=12 loops=1)\n> > Index Cond: ((c1)::text = '6258261'::text)\n> > Total runtime: 12.538 ms\n> > (3 rows)\n> >\n> >\n> > new server:\n> > db=# EXPLAIN ANALYZE select c1,c2 from t1 where c2='6258261';\n> > QUERY PLAN\n> > ----------------------------------------------------------------------------------------------------------------------------\n> > Index Scan using t1_c2_index on t1 (cost=0.00..37.63 rows=11\n> > width=26) (actual time=33.461..51.377 rows=2 loops=1)\n> > Index Cond: ((c2)::text = '6258261'::text)\n> > Total runtime: 51.419 ms\n> > (3 rows)\n> >\n> > db=# EXPLAIN ANALYZE select c1,c2 from t1 where c1='6258261';\n> > QUERY PLAN\n> > --------------------------------------------------------------------------------------------------------------------------------\n> > Index Scan using t1_c1_index on t1 (cost=0.00..630.45 rows=2907\n> > width=26) (actual time=45.733..46.271 rows=12 loops=1)\n> > Index Cond: ((c1)::text = '6258261'::text)\n> > Total runtime: 46.325 ms\n> > (3 rows)\n>\n> Notice the huge disparity here betwen the expected number of rows (2907) and\n> the actual rows? That's indicative of needing to run analyze. The time is\n> only about 4x the 7.4 runtime and that's with the analyze running merrily\n> along in the background. It's probably not as bad off as you think. At least\n> this query isn't 10x. :-)\n>\n> Run these again for us after analyze is complete.\n\nwell, while the DB isn't 10x, the application using the DB shoes a 10x\ndecrease in performance. Pages that used to take 5 seconds to load\ntake 50 secs (I supposed the problem is compounded as there are\nseveral queries per page.). 
Anyway, new numbers after the analyze.\nUnfortunately, they are improved, but still not great:\n\nold server:\ndb=# EXPLAIN ANALYZE select c1,c2 from t1 where c2='6258261';\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------\n Index Scan using t1_c2_index on t1 (cost=0.00..166.89 rows=42\nwidth=26) (actual time=0.204..0.284 rows=2 loops=1)\n Index Cond: ((c2)::text = '6258261'::text)\n Total runtime: 0.421 ms\n(3 rows)\n\ndb=# EXPLAIN ANALYZE select c1,c2 from t1 where c1='6258261';\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------\n Index Scan using t1_c1_key on t1 (cost=0.00..286.08 rows=72\nwidth=26) (actual time=0.299..0.354 rows=12 loops=1)\n Index Cond: ((c1)::text = '6258261'::text)\n Total runtime: 0.451 ms\n(3 rows)\n\n\n\nnew server:\ndb=# EXPLAIN ANALYZE select c1,c2 from t1 where c2='6258261';\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------\n Index Scan using t1_c2_index on t1 (cost=0.00..37.63 rows=11\nwidth=26) (actual time=0.126..0.134 rows=2 loops=1)\n Index Cond: ((c2)::text = '6258261'::text)\n Total runtime: 0.197 ms\n(3 rows)\n\ndb=# EXPLAIN ANALYZE select c1,c2 from t1 where c1='6258261';\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------\n Index Scan using t1_c1_index on t1 (cost=0.00..630.45 rows=2907\nwidth=26) (actual time=5.820..5.848 rows=12 loops=1)\n Index Cond: ((c1)::text = '6258261'::text)\n Total runtime: 5.899 ms\n(3 rows)\n\nHere's another example:\nold server:\ndb=# EXPLAIN ANALYZE select c1,c2 from t1 where c2='6000001';\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n Index Scan using t1_c2_index on t1 (cost=0.00..166.89 rows=42\nwidth=26) (actual time=4.031..55.349 rows=8 loops=1)\n Index Cond: ((c2)::text = '6000001'::text)\n Total runtime: 55.459 ms\n(3 rows)\n\ndb=# EXPLAIN ANALYZE select c1,c2 from t1 where c1='6000001';\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n Index Scan using t1_c1_key on t1 (cost=0.00..286.08 rows=72\nwidth=26) (actual time=0.183..0.203 rows=4 loops=1)\n Index Cond: ((c1)::text = '6000001'::text)\n Total runtime: 0.289 ms\n(3 rows)\n\n\nnew server:\ndb=# EXPLAIN ANALYZE select c1,c2 from t1 where c2='6000001';\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------\n Index Scan using t1_c2_index on t1 (cost=0.00..37.63 rows=11\nwidth=26) (actual time=115.412..202.151 rows=8 loops=1)\n Index Cond: ((c2)::text = '6000001'::text)\n Total runtime: 202.234 ms\n(3 rows)\n\ndb=# EXPLAIN ANALYZE select c1,c2 from t1 where c1='6000001';\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------\n Index Scan using t1_c1_index on t1 (cost=0.00..630.45 rows=2907\nwidth=26) (actual time=99.811..99.820 rows=4 loops=1)\n Index Cond: ((c1)::text = '6000001'::text)\n Total runtime: 99.861 ms\n(3 rows)\n\nI haven't gotten a chance to restart postgres this the config changes\nyou suggested yet. 
The rows have improved for some but not all and\nthe times are still slow. Any ideas?\n\nAlex\n", "msg_date": "Fri, 2 Mar 2007 10:28:12 -0500", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "At 10:16 AM 3/2/2007, Alex Deucher wrote:\n>On 3/2/07, Florian Weimer <[email protected]> wrote:\n>>* Alex Deucher:\n>>\n>> > I have noticed a strange performance regression and I'm at a loss as\n>> > to what's happening. We have a fairly large database (~16 GB).\n>>\n>>Sorry for asking, but is this a typo? Do you mean 16 *TB* instead of\n>>16 *GB*?\n>>\n>>If it's really 16 GB, you should check if it's cheaper to buy more RAM\n>>than to fiddle with the existing infrastructure.\n>\n>Yes, 16 GB. I'd rather not shell out for more ram, if I'm not even\n>sure that will help. The new system should be faster, or at least as\n>fast, so I'd like to sort out what's going on before I buy more ram.\n>\nOK. You\na= went from pg 7.4.x to 8.1.4 AND\n\nb= you changed from 4 SPARC CPUs (how many cores? If this is > 4...) \nto 2 2C Opterons AND\n(SPEC and TPC bench differences between these CPUs?)\n\nc= you went from a Sun box to a \"white box\" AND\n(memory subsystem differences? other differences?)\n\nd= you went from local HD IO to a SAN\n(many differences hidden in that one line... ...and is the physical \nlayout of tables and things like pg_xlog sane on the SAN?)\n\n\n...and you did this by just pulling over the old DB onto the new HW?\n\nMay I suggest that it is possible that your schema, queries, etc were \nall optimized for pg 7.x running on the old HW?\n(explain analyze shows the old system taking ~1/10 the time per row \nas well as estimating the number of rows more accurately)\n\nRAM is =cheap=. Much cheaper than the cost of a detective hunt \nfollowed by rework to queries, schema, etc.\nFitting the entire DB into RAM is guaranteed to help unless this is \nan OLTP like application where HD IO is required to be synchronous..\nIf you can fit the entire DB comfortably into RAM, do it and buy \nyourself the time to figure out the rest of the story w/o impacting \non production performance.\n\nCheers,\nRon Peacetree \n\n", "msg_date": "Fri, 02 Mar 2007 10:48:02 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange performance regression between 7.4 and\n 8.1" }, { "msg_contents": "On 3/2/07, Ron <[email protected]> wrote:\n> At 10:16 AM 3/2/2007, Alex Deucher wrote:\n> >On 3/2/07, Florian Weimer <[email protected]> wrote:\n> >>* Alex Deucher:\n> >>\n> >> > I have noticed a strange performance regression and I'm at a loss as\n> >> > to what's happening. We have a fairly large database (~16 GB).\n> >>\n> >>Sorry for asking, but is this a typo? Do you mean 16 *TB* instead of\n> >>16 *GB*?\n> >>\n> >>If it's really 16 GB, you should check if it's cheaper to buy more RAM\n> >>than to fiddle with the existing infrastructure.\n> >\n> >Yes, 16 GB. I'd rather not shell out for more ram, if I'm not even\n> >sure that will help. The new system should be faster, or at least as\n> >fast, so I'd like to sort out what's going on before I buy more ram.\n> >\n> OK. You\n> a= went from pg 7.4.x to 8.1.4 AND\n>\n\nyes.\n\n> b= you changed from 4 SPARC CPUs (how many cores? 
If this is > 4...)\n> to 2 2C Opterons AND\n> (SPEC and TPC bench differences between these CPUs?)\n>\n\n4 single core 800 Mhz sparcs to 2 dual core 2.2 Ghz opterons.\n\n> c= you went from a Sun box to a \"white box\" AND\n> (memory subsystem differences? other differences?)\n>\n\nThe new hardware is Sun as well. X4100s running Linux. It should be\nfaster all around because the old server is 5 years old.\n\n> d= you went from local HD IO to a SAN\n> (many differences hidden in that one line... ...and is the physical\n> layout of tables and things like pg_xlog sane on the SAN?)\n>\n>\n> ...and you did this by just pulling over the old DB onto the new HW?\n>\n\nWe rebuild the DB from scratch on the new server. Same table\nstructure though. We reloaded from the source material directly.\n\n> May I suggest that it is possible that your schema, queries, etc were\n> all optimized for pg 7.x running on the old HW?\n> (explain analyze shows the old system taking ~1/10 the time per row\n> as well as estimating the number of rows more accurately)\n>\n> RAM is =cheap=. Much cheaper than the cost of a detective hunt\n> followed by rework to queries, schema, etc.\n> Fitting the entire DB into RAM is guaranteed to help unless this is\n> an OLTP like application where HD IO is required to be synchronous..\n> If you can fit the entire DB comfortably into RAM, do it and buy\n> yourself the time to figure out the rest of the story w/o impacting\n> on production performance.\n\nPerhaps so. I just don't want to spend $1000 on ram and have it only\nmarginally improve performance if at all. The old DB works, so we can\nkeep using that until we sort this out.\n\nAlex\n\n>\n> Cheers,\n> Ron Peacetree\n>\n>\n", "msg_date": "Fri, 2 Mar 2007 11:03:06 -0500", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "\"Alex Deucher\" <[email protected]> writes:\n> Anyway, new numbers after the analyze.\n> Unfortunately, they are improved, but still not great:\n\nWhy are the index names different between the old and new servers?\nIs that just cosmetic, or is 8.2 actually picking a different\n(and less suitable) index for the c1 queries?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 02 Mar 2007 11:10:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange performance regression between 7.4 and 8.1 " }, { "msg_contents": "On 3/2/07, Tom Lane <[email protected]> wrote:\n> \"Alex Deucher\" <[email protected]> writes:\n> > Anyway, new numbers after the analyze.\n> > Unfortunately, they are improved, but still not great:\n>\n> Why are the index names different between the old and new servers?\n> Is that just cosmetic, or is 8.2 actually picking a different\n> (and less suitable) index for the c1 queries?\n>\n\nThat's just cosmetic. They are the same.\n\nAlex\n", "msg_date": "Fri, 2 Mar 2007 11:14:15 -0500", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "On Fri, 2007-03-02 at 10:03, Alex Deucher wrote:\n> On 3/2/07, Ron <[email protected]> wrote:\n> > At 10:16 AM 3/2/2007, Alex Deucher wrote:\n\n> > d= you went from local HD IO to a SAN\n> > (many differences hidden in that one line... 
...and is the physical\n> > layout of tables and things like pg_xlog sane on the SAN?)\n> >\n> >\n> > ...and you did this by just pulling over the old DB onto the new HW?\n> >\n> \n> We rebuild the DB from scratch on the new server. Same table\n> structure though. We reloaded from the source material directly.\n\nI would REALLY recommend testing this machine out with a simple software\nRAID-1 pair of SCSI or SATA drives just to eliminate or confirm the SAN\nas the root problem.\n\n", "msg_date": "Fri, 02 Mar 2007 10:50:46 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "Am Donnerstag 01 März 2007 21:44 schrieb Alex Deucher:\n> Hello,\n>\n> I have noticed a strange performance regression and I'm at a loss as\n> to what's happening. We have a fairly large database (~16 GB). The\n> original postgres 7.4 was running on a sun v880 with 4 CPUs and 8 GB\n> of ram running Solaris on local scsi discs. The new server is a sun\n> Opteron box with 4 cores, 8 GB of ram running postgres 8.1.4 on Linux\n> (AMD64) on a 4 Gbps FC SAN volume. When we created the new database\n> it was created from scratch rather than copying over the old one,\n> however the table structure is almost identical (UTF8 on the new one\n> vs. C on the old). The problem is queries are ~10x slower on the new\n> hardware. I read several places that the SAN might be to blame, but\n> testing with bonnie and dd indicates that the SAN is actually almost\n> twice as fast as the scsi discs in the old sun server. I've tried\n> adjusting just about every option in the postgres config file, but\n> performance remains the same. Any ideas?\n>\n\n\n1. Do you use NUMA ctl for locking the db on one node ?\n\n2. do you use bios to interleave memeory ?\n\n3. do you expand cache over mor than one numa node ?\n\n> Thanks,\n>\n> Alex\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n\n-- \n\nATRSoft GmbH\nRosellstrasse 9\nD 50354 Hürth\nDeutschland\nTel .: +49(0)2233 691324\n\nGeschäftsführer Anton Rommerskirchen\n\nKöln HRB 44927\nSTNR 224/5701 - 1010\n", "msg_date": "Fri, 2 Mar 2007 18:45:44 +0100", "msg_from": "Anton Rommerskirchen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "At 11:03 AM 3/2/2007, Alex Deucher wrote:\n>On 3/2/07, Ron <[email protected]> wrote:\n>\n>>May I suggest that it is possible that your schema, queries, etc were\n>>all optimized for pg 7.x running on the old HW?\n>>(explain analyze shows the old system taking ~1/10 the time per row\n>>as well as estimating the number of rows more accurately)\n>>\n>>RAM is =cheap=. Much cheaper than the cost of a detective hunt\n>>followed by rework to queries, schema, etc.\n>>Fitting the entire DB into RAM is guaranteed to help unless this is\n>>an OLTP like application where HD IO is required to be synchronous..\n>>If you can fit the entire DB comfortably into RAM, do it and buy\n>>yourself the time to figure out the rest of the story w/o impacting\n>>on production performance.\n>\n>Perhaps so. I just don't want to spend $1000 on ram and have it only\n>marginally improve performance if at all. 
The old DB works, so we can\n>keep using that until we sort this out.\n>\n>Alex\n1= $1000 worth of RAM is very likely less than the $ worth of, say, \n10 hours of your time to your company. Perhaps much less.\n(Your =worth=, not your pay or even your fully loaded cost. This \nnumber tends to be >= 4x what you are paid unless the organization \nyou are working for is in imminent financial danger.)\nYou've already put more considerably more than 10 hours of your time \ninto this...\n\n2= If the DB goes from not fitting completely into RAM to being \ncompletely RAM resident, you are almost 100% guaranteed a big \nperformance boost.\nThe exception is an OLTP like app where DB writes can't be done \na-synchronously (doing financial transactions, real time control systems, etc).\nData mines should never have this issue.\n\n3= Whether adding enough RAM to make the DB RAM resident (and \nre-configuring conf, etc, appropriately) solves the problem or not, \nyou will have gotten a serious lead as to what's wrong.\n\n...and I still think looking closely at the actual physical layout of \nthe tables in the SAN is likely to be worth it.\n\nCheers,\nRon Peacetree\n \n\n", "msg_date": "Fri, 02 Mar 2007 12:59:37 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange performance regression between 7.4 and\n 8.1" }, { "msg_contents": "Florian Weimer escribi�:\n\n> Locale settings make a huge difference for sorting and LIKE queries.\n> We usually use the C locale and SQL_ASCII encoding, mostly for\n> performance reasons. (Proper UTF-8 can be enforced through\n> constraints if necessary.)\n\nHmm, you are aware of varchar_pattern_ops and related opclasses, right?\nThat helps for LIKE queries in non-C locales (though you do have to keep\nalmost-duplicate indexes).\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Fri, 2 Mar 2007 16:33:19 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "On 3/2/07, Ron <[email protected]> wrote:\n> At 11:03 AM 3/2/2007, Alex Deucher wrote:\n> >On 3/2/07, Ron <[email protected]> wrote:\n> >\n> >>May I suggest that it is possible that your schema, queries, etc were\n> >>all optimized for pg 7.x running on the old HW?\n> >>(explain analyze shows the old system taking ~1/10 the time per row\n> >>as well as estimating the number of rows more accurately)\n> >>\n> >>RAM is =cheap=. Much cheaper than the cost of a detective hunt\n> >>followed by rework to queries, schema, etc.\n> >>Fitting the entire DB into RAM is guaranteed to help unless this is\n> >>an OLTP like application where HD IO is required to be synchronous..\n> >>If you can fit the entire DB comfortably into RAM, do it and buy\n> >>yourself the time to figure out the rest of the story w/o impacting\n> >>on production performance.\n> >\n> >Perhaps so. I just don't want to spend $1000 on ram and have it only\n> >marginally improve performance if at all. The old DB works, so we can\n> >keep using that until we sort this out.\n> >\n> >Alex\n> 1= $1000 worth of RAM is very likely less than the $ worth of, say,\n> 10 hours of your time to your company. Perhaps much less.\n> (Your =worth=, not your pay or even your fully loaded cost. 
This\n> number tends to be >= 4x what you are paid unless the organization\n> you are working for is in imminent financial danger.)\n> You've already put more considerably more than 10 hours of your time\n> into this...\n>\n> 2= If the DB goes from not fitting completely into RAM to being\n> completely RAM resident, you are almost 100% guaranteed a big\n> performance boost.\n> The exception is an OLTP like app where DB writes can't be done\n> a-synchronously (doing financial transactions, real time control systems, etc).\n> Data mines should never have this issue.\n>\n> 3= Whether adding enough RAM to make the DB RAM resident (and\n> re-configuring conf, etc, appropriately) solves the problem or not,\n> you will have gotten a serious lead as to what's wrong.\n>\n> ...and I still think looking closely at the actual physical layout of\n> the tables in the SAN is likely to be worth it.\n>\n\nHow would I go about doing that?\n\nThanks,\n\nAlex\n", "msg_date": "Fri, 2 Mar 2007 14:43:05 -0500", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "At 02:43 PM 3/2/2007, Alex Deucher wrote:\n>On 3/2/07, Ron <[email protected]> wrote:\n>>\n>>...and I still think looking closely at the actual physical layout of\n>>the tables in the SAN is likely to be worth it.\n>\n>How would I go about doing that?\n>\n>Alex\n\nHard for me to give specific advice when I don't know what SAN \nproduct we are talking about nor what kind of HDs are in it nor how \nthose HDs are presently configured...\n\nI quote you in an earlier post:\n\"The RAID groups on the SAN were set up for maximum capacity rather \nthan for performance. Using it for the databases just came up recently.\"\n\nThat implies to me that the SAN is more or less set up as a huge 105 \nHD (assuming this number is correct? We all know how \"assume\" is \nspelled...) JBOD or RAID 5 (or 6, or 5*, or 6*) set.\n\n=IF= that is true, tables are not being given dedicated RAID \ngroups. That implies that traditional lore like having pg_xlog on \ndedicated spindles is being ignored.\nNor is the more general Best Practice of putting the most heavily \nused tables onto dedicated spindles being followed.\n\nIn addition, the most space efficient RAID levels: 5* or 6*, are not \nthe best performing one (RAID 10 striping your mirrors)\n\nIn short, configuring a SAN for maximum capacity is exactly the wrong \nthing to do if one is planning to use it in the best way to support \nDB performance.\n\nI assume (there's that word again...) 
that there is someone in your \norganization who understands how the SAN is configured and administered.\nYou need to talk to them about these issues.\n\nCheers,\nRon\n\n\n", "msg_date": "Fri, 02 Mar 2007 15:28:26 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange performance regression between 7.4 and\n 8.1" }, { "msg_contents": "On 3/2/07, Ron <[email protected]> wrote:\n> At 02:43 PM 3/2/2007, Alex Deucher wrote:\n> >On 3/2/07, Ron <[email protected]> wrote:\n> >>\n> >>...and I still think looking closely at the actual physical layout of\n> >>the tables in the SAN is likely to be worth it.\n> >\n> >How would I go about doing that?\n> >\n> >Alex\n>\n> Hard for me to give specific advice when I don't know what SAN\n> product we are talking about nor what kind of HDs are in it nor how\n> those HDs are presently configured...\n>\n> I quote you in an earlier post:\n> \"The RAID groups on the SAN were set up for maximum capacity rather\n> than for performance. Using it for the databases just came up recently.\"\n>\n> That implies to me that the SAN is more or less set up as a huge 105\n> HD (assuming this number is correct? We all know how \"assume\" is\n> spelled...) JBOD or RAID 5 (or 6, or 5*, or 6*) set.\n>\n> =IF= that is true, tables are not being given dedicated RAID\n> groups. That implies that traditional lore like having pg_xlog on\n> dedicated spindles is being ignored.\n> Nor is the more general Best Practice of putting the most heavily\n> used tables onto dedicated spindles being followed.\n>\n> In addition, the most space efficient RAID levels: 5* or 6*, are not\n> the best performing one (RAID 10 striping your mirrors)\n>\n> In short, configuring a SAN for maximum capacity is exactly the wrong\n> thing to do if one is planning to use it in the best way to support\n> DB performance.\n>\n> I assume (there's that word again...) that there is someone in your\n> organization who understands how the SAN is configured and administered.\n> You need to talk to them about these issues.\n>\n\nAh OK. I see what you are saying; thank you for clarifying. Yes,\nthe SAN is configured for maximum capacity; it has large RAID 5\ngroups. As I said earlier, we never intended to run a DB on the SAN,\nit just happened to come up, hence the configuration.\n\nAlex\n", "msg_date": "Fri, 2 Mar 2007 16:20:41 -0500", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "On 02.03.2007, at 14:20, Alex Deucher wrote:\n\n> Ah OK. I see what you are saying; thank you for clarifying. Yes,\n> the SAN is configured for maximum capacity; it has large RAID 5\n> groups. As I said earlier, we never intended to run a DB on the SAN,\n> it just happened to come up, hence the configuration.\n\nSo why not dumping the stuff ones, importing into a PG configured to \nuse local discs (Or even ONE local disc, you might have the 16GB you \ngave as a size for the db on the local machine, right?) and testing \nwhether the problem is with PG connecting to the SAN. 
So you have one \nfactor less to consider after all your changes.\n\nMaybe it's just that something in the chain from PG to the actual HD \nspindles kills your random access performance for getting the actual \nrows.\n\ncug\n", "msg_date": "Fri, 2 Mar 2007 14:34:19 -0700", "msg_from": "Guido Neitzer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "On Fri, 2 Mar 2007, Guido Neitzer wrote:\n\n> On 02.03.2007, at 14:20, Alex Deucher wrote:\n>\n>> Ah OK. I see what you are saying; thank you for clarifying. Yes,\n>> the SAN is configured for maximum capacity; it has large RAID 5\n>> groups. As I said earlier, we never intended to run a DB on the SAN,\n>> it just happened to come up, hence the configuration.\n>\n> So why not dumping the stuff ones, importing into a PG configured to use \n> local discs (Or even ONE local disc, you might have the 16GB you gave as a \n> size for the db on the local machine, right?) and testing whether the problem \n> is with PG connecting to the SAN. So you have one factor less to consider \n> after all your changes.\n>\n> Maybe it's just that something in the chain from PG to the actual HD spindles \n> kills your random access performance for getting the actual rows.\n\nI am actually starting to think that the SAN may be introducing some amount of \nlatency that is enough to kill your random IO which is what all of the queries \nin question are doing - look up in index - fetch row from table.\n\nIf you have the time, it would be totally worth it to test with a local disk \nand see how that affects the speed.\n\nI would think that even with RAID5, a SAN with that many spindles would be \nquite fast in raw throughput, but perhaps it's just seek latency that's \nkilling you.\n\nWhen you run the bonnie tests again, take note of what the seeks/sec is \ncompared with the old disk. Also, you should run bonnie with the -b switch to \nsee if that causes significant slowdown of the writes...maybe minor synced \nwrite activity to pg_xlog is bogging the entire system down. Is the system \nspending most of its time in IO wait?\n\nAlso, another item of note might be the actual on disk DB size..I wonder if it \nhas changed significantly going from SQL_ASCII to UTF8.\n\nIn 8.1 you can do this:\n\nSELECT datname,\n pg_size_pretty(pg_database_size(datname)) AS size\nFROM pg_database;\n\nIn 7.4, you'll need to install the dbsize contrib module to get the same info.\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n", "msg_date": "Fri, 2 Mar 2007 14:59:29 -0800 (PST)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" }, { "msg_contents": "On 3/2/07, Jeff Frost <[email protected]> wrote:\n> On Fri, 2 Mar 2007, Guido Neitzer wrote:\n>\n> > On 02.03.2007, at 14:20, Alex Deucher wrote:\n> >\n> >> Ah OK. I see what you are saying; thank you for clarifying. Yes,\n> >> the SAN is configured for maximum capacity; it has large RAID 5\n> >> groups. As I said earlier, we never intended to run a DB on the SAN,\n> >> it just happened to come up, hence the configuration.\n> >\n> > So why not dumping the stuff ones, importing into a PG configured to use\n> > local discs (Or even ONE local disc, you might have the 16GB you gave as a\n> > size for the db on the local machine, right?) 
and testing whether the problem\n> > is with PG connecting to the SAN. So you have one factor less to consider\n> > after all your changes.\n> >\n> > Maybe it's just that something in the chain from PG to the actual HD spindles\n> > kills your random access performance for getting the actual rows.\n>\n> I am actually starting to think that the SAN may be introducing some amount of\n> latency that is enough to kill your random IO which is what all of the queries\n> in question are doing - look up in index - fetch row from table.\n>\n> If you have the time, it would be totally worth it to test with a local disk\n> and see how that affects the speed.\n>\n> I would think that even with RAID5, a SAN with that many spindles would be\n> quite fast in raw throughput, but perhaps it's just seek latency that's\n> killing you.\n>\n> When you run the bonnie tests again, take note of what the seeks/sec is\n> compared with the old disk. Also, you should run bonnie with the -b switch to\n> see if that causes significant slowdown of the writes...maybe minor synced\n> write activity to pg_xlog is bogging the entire system down. Is the system\n> spending most of its time in IO wait?\n>\n> Also, another item of note might be the actual on disk DB size..I wonder if it\n> has changed significantly going from SQL_ASCII to UTF8.\n>\n> In 8.1 you can do this:\n>\n> SELECT datname,\n> pg_size_pretty(pg_database_size(datname)) AS size\n> FROM pg_database;\n>\n> In 7.4, you'll need to install the dbsize contrib module to get the same info.\n>\n\nI'm beginning the think the same thing. I'm planning to try the tests\nabove next week. I'll let you know what I find out.\n\nThanks!\n\nAlex\n", "msg_date": "Fri, 2 Mar 2007 18:31:28 -0500", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: strange performance regression between 7.4 and 8.1" } ]
[ { "msg_contents": "I need to cross reference 2 tables. There are O(10M) A's, each has an \nordered set of 10 of the O(100K) B's associated with it. The dominant \nquery will be finding the A's and their count associated with a given \nlist of ~1k B's i.e. if 2 of the listed B's are in A's set of 10, it's \n(A,2), and we should get back ~100K rows. The good news is we only need \nto run this brutal query every couple minutes, but the row updates will \nflow fast.\n\nLuckily this is PostgreSQL, so the simple solution seems to be\n\n CREATE TABLE xref( A bigint, B bigint[10] ); -- A is primary key\n\nwhich cuts down the table overhead. O(10M) rows w/array.\n\nOn the surface, looks like a job for GIN, but GIN seems undocumented, \nspecifically mentions it doesn't support the deletes we'll have many of \nsince it's designed for word searching apparently, the performance \nimplications are undocumented. I searched, I read, and even IRC'd, and \nit seems like GIN is just not used much.\n\nIs GIN right? Will this work at all? Will it run fast enough to function?\n", "msg_date": "Thu, 01 Mar 2007 19:59:04 -0800", "msg_from": "Adam L Beberg <[email protected]>", "msg_from_op": true, "msg_subject": "Array indexes, GIN?" }, { "msg_contents": "Adam,\n\n> On the surface, looks like a job for GIN, but GIN seems undocumented,\n> specifically mentions it doesn't support the deletes we'll have many of\n> since it's designed for word searching apparently, the performance\n> implications are undocumented. I searched, I read, and even IRC'd, and\n> it seems like GIN is just not used much.\n\nIt's new (as of 8.2). And the authors, Oleg and Teodor, are notorious for \nskimpy documetentation.\n\nI'd start with the code in INTARRAY contrib module (also by Teodor) and bug \nthem on pgsql-hackers about helping you implement a GIN index for arrays.\n\n-- \nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Thu, 1 Mar 2007 21:47:38 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Array indexes, GIN?" }, { "msg_contents": "On Thu, 1 Mar 2007, Josh Berkus wrote:\n\n> Adam,\n>\n>> On the surface, looks like a job for GIN, but GIN seems undocumented,\n>> specifically mentions it doesn't support the deletes we'll have many of\n>> since it's designed for word searching apparently, the performance\n>> implications are undocumented. I searched, I read, and even IRC'd, and\n>> it seems like GIN is just not used much.\n>\n> It's new (as of 8.2). And the authors, Oleg and Teodor, are notorious for\n> skimpy documetentation.\n\nWe're getting better, we have 72 pages written about new FTS :)\n\n>\n> I'd start with the code in INTARRAY contrib module (also by Teodor) and bug\n> them on pgsql-hackers about helping you implement a GIN index for arrays.\n\nGIN already has support for one dimensional arrays and intarray, particularly,\ntoo has support of GiN.\n\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Fri, 2 Mar 2007 09:45:03 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Array indexes, GIN?" 
}, { "msg_contents": "Oleg Bartunov wrote on 3/1/2007 10:45 PM:\n> On Thu, 1 Mar 2007, Josh Berkus wrote:\n> \n>> Adam,\n>>\n>>> On the surface, looks like a job for GIN, but GIN seems undocumented,\n>>> specifically mentions it doesn't support the deletes we'll have many of\n>>> since it's designed for word searching apparently, the performance\n>>> implications are undocumented. I searched, I read, and even IRC'd, and\n>>> it seems like GIN is just not used much.\n>>\n>> It's new (as of 8.2). And the authors, Oleg and Teodor, are notorious \n>> for\n>> skimpy documetentation.\n> \n> We're getting better, we have 72 pages written about new FTS :)\n\nI'm guessing FTS is not quite done since you matched 'FTS' to 'GIN' ;)\n\n> GIN already has support for one dimensional arrays and intarray, \n> particularly, too has support of GiN.\n\nGreat, so can GIN handle my situation? I'm a little unsure what to make \nof \"Note: There is no delete operation for ET.\" in particular since I'm \ndealing with large numbers.\n\n-- \nAdam L. Beberg\nhttp://www.mithral.com/~beberg/\n", "msg_date": "Thu, 01 Mar 2007 23:50:56 -0800", "msg_from": "Adam L Beberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Array indexes, GIN?" }, { "msg_contents": "On Thu, 2007-03-01 at 19:59 -0800, Adam L Beberg wrote:\n> On the surface, looks like a job for GIN, but GIN seems undocumented, \n> specifically mentions it doesn't support the deletes we'll have many of \n> since it's designed for word searching apparently, the performance \n\nIt can delete an entry for one of the keys of an index, it just can't\ndelete the key itself when the number of entries goes down to zero.\nBecause you only have O(100K) possible keys, that shouldn't be a\nproblem. The GIN indexes can reclaim space. If they couldn't, they\nwouldn't be nearly as useful. \n\nThe time when you run into problems is when you have a huge, sparsely\npopulated keyspace, with a huge number of keys contained by no tuples in\nthe table.\n\nHowever, for your application, GIN still might not be the right answer.\nGIN can only return tuples which do contain some matching keys, it won't\nreturn the number of matching keys in that tuple (that's not the job of\nan index). \n\nLet's run some numbers:\n\n * percentage of tuples returned = 100K rows out of the 10M = 1%\n * tuples per page = 8192 bytes / 32 (tuple header) + 8 (bigint) + 80\n(10 bigints) = ~70. Let's say it's 50 due to some free space.\n\nBased on those numbers, the GIN index is basically going to say \"get\nevery other page\". PostgreSQL will optimize that into a sequential scan\nbecause it makes no sense to do a random fetch for every other page.\n\nSo, the fastest way you can do this (that I can see) is just fetch every\ntuple and count the number of matches in each array. You know your data\nbetter than I do, so replace those numbers with real ones and see if it\nstill makes sense.\n\nThe basic rule is that an index scan is useful only if it reduces the\nnumber of disk pages you need to fetch enough to make up for the extra\ncost of random I/O.\n\nRegards,\n\tJeff Davis\n\n", "msg_date": "Fri, 02 Mar 2007 11:50:38 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Array indexes, GIN?" } ]
[ { "msg_contents": "\nHello!\n\nI'm new to performance tuning on postgres. I've read the docs on the\nposgtres site, as well as:\n\n http://www.revsys.com/writings/postgresql-performance.html\n http://www.powerpostgresql.com/PerfList\n\nHowever, my query is really slow, and I'm not sure what the main cause\ncould be, as there are so many variables. I'm hoping people with more\nexperience could help out.\n\nMy machine has 8Gb RAM, 2xCPU (2Gz, I think...)\n\nTable has about 1M rows.\n\nThis is my postgres.conf:\n\nlisten_addresses = '*'\nport = 5432\nmax_connections = 100\nshared_buffers = 256000\neffective_cache_size = 1000000\nwork_mem = 5000000\nredirect_stderr = on\nlog_directory = 'pg_log'\nlog_filename = 'postgresql-%a.log'\nlog_truncate_on_rotation = on\nlog_rotation_age = 1440\nlog_rotation_size = 0\nlc_messages = 'en_US.UTF-8'\nlc_monetary = 'en_US.UTF-8'\nlc_numeric = 'en_US.UTF-8'\nlc_time = 'en_US.UTF-8'\n\nThis is the result of \"explain analyze\":\n\n Aggregate (cost=384713.17..384713.18 rows=1 width=4) (actual\ntime=254856.025..254856.025 rows=1 loops=1)\n -> Seq Scan on medline_articles t0 (cost=0.00..382253.00\nrows=984068 width=4) (actual time=511.841..254854.981 rows=788 loops=1)\n Filter: long_ugly_query_here\n\n\n\nAnd this is the actual query:\n\nSELECT COUNT(t0.ID) FROM public.MY_TABLE t0 \nWHERE ((POSITION('adenosine cyclic 35-monophosphate' IN LOWER(t0.TITLE))\n- 1) >=0 OR \n(POSITION('adenosine cyclic 55-monophosphate' IN LOWER(t0.TEXT)) - 1) >=\n0 OR \n(POSITION('cyclic amp, disodium salt' IN LOWER(t0.TITLE)) - 1) >= 0 OR \n(POSITION('cyclic amp, disodium salt' IN LOWER(t0.TEXT)) - 1) >= 0 OR \n(POSITION('cyclic amp, sodium salt' IN LOWER(t0.TEXT)) - 1) >= 0 OR \n(POSITION('cyclic amp, sodium salt' IN LOWER(t0.TITLE)) - 1) >= 0 OR \n(POSITION('cyclic amp' IN LOWER(t0.TEXT)) - 1) >= 0 OR \n(POSITION('cyclic amp' IN LOWER(t0.TITLE)) - 1) >= 0 OR \n(POSITION('cyclic amp, monopotassium salt' IN LOWER(t0.TEXT)) - 1) >= 0\nOR \n(POSITION('cyclic amp, monopotassium salt' IN LOWER(t0.TEXT)) - 1) >= 0\nOR \n(POSITION('adenosine cyclic-35-monophosphate' IN LOWER(t0.TEXT)) - 1) >=\n0 OR \n(POSITION('adenosine cyclic-35-monophosphate' IN LOWER(t0.TITLE)) - 1)\n>= 0 OR \n(POSITION('adenosine cyclic monophosphate' IN LOWER(t0.TEXT)) - 1) >= 0\nOR \n(POSITION('adenosine cyclic monophosphate' IN LOWER(t0.TITLE)) - 1) >= 0\nOR \n(POSITION('cyclic amp, monoammonium salt' IN LOWER(t0.TEXT)) - 1) >= 0\nOR \n(POSITION('cyclic amp, monoammonium salt' IN LOWER(t0.TITLE)) - 1) >= 0\nOR \n(POSITION('adenosine cyclic 3,5 monophosphate' IN LOWER(t0.TEXT)) - 1)\n>= 0 OR \n(POSITION('adenosine cyclic 3,5 monophosphate' IN LOWER(t0.TITLE)) - 1)\n>= 0 OR \n(POSITION('cyclic amp, monosodium salt' IN LOWER(t0.TEXT)) - 1) >= 0 OR \n(POSITION('cyclic amp, monosodium salt' IN LOWER(t0.TITLE)) - 1) >= 0\nOR \n(POSITION('cyclic amp, (r)-isomer' IN LOWER(t0.TEXT)) - 1) >= 0 OR \n(POSITION('cyclic amp, (r)-isomer' IN LOWER(t0.TEXT)) - 1) >= 0)\n\n\nSome more info:\n\npubmed=> SELECT relpages, reltuples FROM pg_class WHERE relname =\n'MY_TABLE';\n relpages | reltuples\n----------+-----------\n 155887 | 984200\n(1 row)\n\n\n\nThanks for any suggestions!\n\nDave\n\n\n\n\nPS - Yes! 
I did run \"vacuum analyze\" :-)\n\n\n\n", "msg_date": "Fri, 02 Mar 2007 13:00:30 +0900", "msg_from": "David Leangen <[email protected]>", "msg_from_op": true, "msg_subject": "Improving query performance" }, { "msg_contents": "David Leangen <[email protected]> writes:\n> And this is the actual query:\n\n> SELECT COUNT(t0.ID) FROM public.MY_TABLE t0 \n> WHERE ((POSITION('adenosine cyclic 35-monophosphate' IN LOWER(t0.TITLE))\n> - 1) >=0 OR \n> (POSITION('adenosine cyclic 55-monophosphate' IN LOWER(t0.TEXT)) - 1) >=\n> 0 OR \n> (POSITION('cyclic amp, disodium salt' IN LOWER(t0.TITLE)) - 1) >= 0 OR \n> (POSITION('cyclic amp, disodium salt' IN LOWER(t0.TEXT)) - 1) >= 0 OR \n> (POSITION('cyclic amp, sodium salt' IN LOWER(t0.TEXT)) - 1) >= 0 OR \n> ...etc...\n\nI think you need to look into full-text indexing (see tsearch2).\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 02 Mar 2007 00:48:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving query performance " }, { "msg_contents": "> > And this is the actual query:\n> \n> I think you need to look into full-text indexing (see tsearch2).\n\n\nThanks, Tom.\n\nYes, we know this.\n\nThis is just a temporary fix that we needed to get up today for biz\nreasons. Implementing full-text searching within a few short hours was\nout of the question.\n\n\nAnyway, we found a temporary solution. We'll be doing this \"properly\"\nlater.\n\n\nThanks for taking the time to suggest this.\n\n\nCheers,\nDave\n\n\n\n", "msg_date": "Fri, 02 Mar 2007 18:06:06 +0900", "msg_from": "David Leangen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving query performance" } ]
[ { "msg_contents": "Hi ,\nI want to know about hibernate left join,\nIs their any way to do left join in hibernate ?.\n\n\nPlease give me an example of hibernate left join with its maaping\n\n\nThanks & Regards\nBholu.\n\n", "msg_date": "1 Mar 2007 22:46:31 -0800", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Hibernate left join" }, { "msg_contents": "[email protected] wrote:\n> Hi ,\n> I want to know about hibernate left join,\n> Is their any way to do left join in hibernate ?.\n> \n> \n> Please give me an example of hibernate left join with its maaping\n\nThis isn't really a performance question, or even a PostgreSQL question.\n\nYou'll do better asking this on the pgsql-general list and even better \nasking on a hibernate-related list.\n\nAnother tip - for these sorts of questions many projects have details in \ntheir documentation. Google can help you here - try searching for \n\"hibernate manual left join\" and see if the results help.\n\nHTH\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 06 Mar 2007 07:37:39 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hibernate left join" } ]
[ { "msg_contents": "Hi All,\n\nI tried posting this last week but it has not come through yet, so please \nexcuse me if there is a double post.\n\nWe're having some issue's with the vacuum times within our database \nenvironment, and would like some input from the guru's out there that could \npotentially suggest a better approach or mechanism.\n\n\n\n------------------------------------------------------------------------------------\n\nProblem Background\n\nFirst off, I'm no PostgreSQL guru, so, please be nice :)\n\n\nOver time we have noticed increased response times from the database which \nhas an adverse affect on our registration times. After doing some research \nit appears that this may have been related to our maintenance regime, and \nhas thus been amended as follows:\n\n\n[1] AutoVacuum runs during the day over the entire PostgreSQL cluster,\n\n[2] A Vacuum Full Verbose is run during our least busy period (generally \n03:30) against the Database,\n\n[3] A Re-Index on the table is performed,\n\n[4] A Cluster on the table is performed against the most used index,\n\n[5] A Vacuum Analyze Verbose is run against the database.\n\n\nThese maintenance steps have been setup to run every 24 hours.\n\n\nThe database in essence, once loaded up and re-index is generally around \n17MB for data and 4.7MB for indexes in size.\n\n\nOver a period of 24 hours the database can grow up to around 250MB and the \nindexes around 33MB (Worst case thus far). When the maintenance kicks in, \nthe vacuum full verbose step can take up to 15 minutes to complete (worst \ncase). The re-index, cluster and vacuum analyze verbose steps complete in \nunder 1 second each. The problem here is the vacuum full verbose, which \nrenders the environment unusable during the vacuum phase. The idea here is \nto try and get the vacuum full verbose step to complete in less than a \nminute. Ideally, if we could get it to complete quicker then that would be \nGREAT, but our minimal requirement is for it to complete at the very most 1 \nminute. Looking at the specifications of our environment below, do you think \nthat this is at all possible?\n\n\n------------------------------------------------------------------------------------\n\nEnvironment Background:\n\n\nWe are running a VoIP service whereby the IP phones perform a registration \nrequest every 113 seconds. These registration requests are verified against \nthe database and the details are updated accordingly. Currently we average \naround 100 - 150 read/write requests per second to this particular database. \nThe requirement here is that the database response is sub 15 milliseconds \nfor both types of requests, which it currently is. 
The database _must_ also \nbe available 24x7.\n\n\n------------------------------------------------------------------------------------\n\nHardware Environment:\n\nSunFire X4200\n\n 2 x Dual Core Opteron 280's\n\n 8GB RAM\n\n 2 x Q-Logic Fibre Channel HBA's\n\n\nSun StorEdge 3511 FC SATA Array\n\n 1 x 1GB RAID Module\n\n 12 x 250GB 7200 RPM SATA disks\n\n\n------------------------------------------------------------------------------------\n\nRAID Environment:\n\n 5 Logical drives, each LD is made up of 2 x 250GB SATA HDD in a RAID 1 \nmirror.\n\n 2 x 250GB SATA HDD allocated as hot spares\n\n\n\nThe logical drives are partitioned and presented to the OS as follows:\n\n LD0 (2 x 250GB SATA HDD's RAID 1)\n\n Partition 0 (120GB)\n\n Partition 1 (120GB)\n\n LD1 (2 x 250GB SATA HDD's RAID 1)\n\n Partition 0 (120GB)\n\n Partition 1 (120GB)\n\n LD2 (2 x 250GB SATA HDD's RAID 1)\n\n Partition 0 (80GB)\n\n Partition 1 (80GB)\n\n Partition 2 (80GB)\n\n LD3 (2 x 250GB SATA HDD's RAID 1)\n\n Partition 0 (80GB)\n\n Partition 1 (80GB)\n\n Partition 2 (80GB)\n\n LD4 (2 x 250GB SATA HDD's RAID 1)\n\n Partition 0 (120GB)\n\n Partition 1 (120GB)\n\n\n\n-------------------------------------------------------------------------------------\n\nOS Environment\n\nSolaris 10 Update 3 (11/06)\n\n Boot disks are 76GB 15000 RPM configure in a RAID 1 mirror.\n\n\n-------------------------------------------------------------------------------------\n\nFilesystem Layout\n\n PostgreSQL Data\n\n 250GB ZFS file-system made up of:\n\n LD0 Partition 0 Mirrored to LD1 Partition 0 (120GB)\n\n LD0 Partition 1 Mirrored to LD1 Partition 1 (120GB)\n\n The above 2 vdevs are then striped across each other\n\n\n\n PostgreSQL WAL\n\n 80GB ZFS filesystem made up of:\n\n LD2 Partition 0 Mirrored to LD3 Partition 0 (80GB)\n\n LD2 partition 1 Mirrored to LD3 Partition 1 (80GB)\n\n The above 2 vdevs are then striped across each other\n\n\n\n-------------------------------------------------------------------------------------\n\nPostgreSQL Configuration\n\n PostgreSQL version 8.2.3\n\n\n#---------------------------------------------------------------------------\n\n# RESOURCE USAGE (except WAL)\n\n#---------------------------------------------------------------------------\n\n\n\n# - Memory -\n\n\n\nshared_buffers = 1024MB # min 128kB or max_connections*16kB\n\n # (change requires restart)\n\ntemp_buffers = 8MB # min 800kB\n\nmax_prepared_transactions = 200 # can be 0 or more\n\n # (change requires restart) # Note: \nincreasing max_prepared_transactions costs ~600 bytes of shared memory # per \ntransaction slot, plus lock space (see max_locks_per_transaction).\n\nwork_mem = 1MB # min 64kB\n\nmaintenance_work_mem = 256MB # min 1MB\n\nmax_stack_depth = 2MB # min 100kB\n\n\n\n# - Free Space Map -\n\n\n\nmax_fsm_pages = 2048000 # min max_fsm_relations*16, 6 bytes \neach\n\n # (change requires restart)\n\nmax_fsm_relations = 10000 # min 100, ~70 bytes each\n\n # (change requires restart)\n\n\n\n# - Kernel Resource Usage -\n\n\n\n#max_files_per_process = 1000 # min 25\n\n # (change requires restart)\n\n#shared_preload_libraries = '' # (change requires restart)\n\n\n\n# - Cost-Based Vacuum Delay -\n\n\n\nvacuum_cost_delay = 200 # 0-1000 milliseconds\n\nvacuum_cost_page_hit = 1 # 0-10000 credits\n\nvacuum_cost_page_miss = 10 # 0-10000 credits\n\nvacuum_cost_page_dirty = 20 # 0-10000 credits\n\nvacuum_cost_limit = 200 # 0-10000 credits\n\n\n\n# - Background writer -\n\n\n\n#bgwriter_delay = 200ms # 10-10000ms between rounds\n\n#bgwriter_lru_percent 
= 1.0 # 0-100% of LRU buffers \nscanned/round\n\n#bgwriter_lru_maxpages = 5 # 0-1000 buffers max written/round\n\n#bgwriter_all_percent = 0.333 # 0-100% of all buffers \nscanned/round\n\n#bgwriter_all_maxpages = 5 # 0-1000 buffers max written/round\n\n\n\n#---------------------------------------------------------------------------\n\n# WRITE AHEAD LOG\n\n#---------------------------------------------------------------------------\n\n\n\n# - Settings -\n\n\n\nfsync = on # turns forced synchronization on or \noff\n\nwal_sync_method = open_datasync # the default is the first option\n\n # supported by the operating system:\n\n # open_datasync\n\n # fdatasync\n\n # fsync\n\n # fsync_writethrough\n\n # open_sync\n\n#full_page_writes = on # recover from partial page writes\n\nwal_buffers = 512kB # min 32kB\n\n # (change requires restart)\n\ncommit_delay = 10000 # range 0-100000, in microseconds\n\ncommit_siblings = 50 # range 1-1000\n\n\n\n# - Checkpoints -\n\n\n\ncheckpoint_segments = 128 # in logfile segments, min 1, 16MB \neach\n\ncheckpoint_timeout = 5min # range 30s-1h\n\ncheckpoint_warning = 5min # 0 is off\n\n\n\n\n\n#---------------------------------------------------------------------------\n\n# QUERY TUNING\n\n#---------------------------------------------------------------------------\n\n\n\n# - Planner Method Configuration -\n\n\n\n#enable_bitmapscan = on\n\n#enable_hashagg = on\n\n#enable_hashjoin = on\n\n#enable_indexscan = on\n\n#enable_mergejoin = on\n\n#enable_nestloop = on\n\n#enable_seqscan = on\n\n#enable_sort = on\n\n#enable_tidscan = on\n\n\n\n# - Planner Cost Constants -\n\n\n\n#seq_page_cost = 1.0 # measured on an arbitrary scale\n\nrandom_page_cost = 2.0 # same scale as above\n\n#cpu_tuple_cost = 0.01 # same scale as above\n\n#cpu_index_tuple_cost = 0.005 # same scale as above\n\n#cpu_operator_cost = 0.0025 # same scale as above\n\neffective_cache_size = 5120MB\n\n\n\n# - Genetic Query Optimizer -\n\n\n\n#geqo = on\n\n#geqo_threshold = 12\n\n#geqo_effort = 5 # range 1-10\n\n#geqo_pool_size = 0 # selects default based on effort\n\n#geqo_generations = 0 # selects default based on effort\n\n#geqo_selection_bias = 2.0 # range 1.5-2.0\n\n\n\n# - Other Planner Options -\n\n\n\n#default_statistics_target = 10 # range 1-1000\n\n#constraint_exclusion = off\n\n#from_collapse_limit = 8\n\n#join_collapse_limit = 8 # 1 disables collapsing of explicit\n\n # JOINs\n\n#---------------------------------------------------------------------------\n\n# RUNTIME STATISTICS\n\n#---------------------------------------------------------------------------\n\n\n\n# - Query/Index Statistics Collector -\n\n\n\nstats_command_string = on\n\nupdate_process_title = on\n\n\n\nstats_start_collector = on # needed for block or row stats\n\n # (change requires restart) \nstats_block_level = on stats_row_level = on\n\nstats_reset_on_server_start = off # (change requires restart)\n\n\n\n\n\n# - Statistics Monitoring -\n\n\n\n#log_parser_stats = off\n\n#log_planner_stats = off\n\n#log_executor_stats = off\n\n#log_statement_stats = off\n\n\n\n\n\n#---------------------------------------------------------------------------\n\n# AUTOVACUUM PARAMETERS\n\n#---------------------------------------------------------------------------\n\n\n\nautovacuum = on # enable autovacuum subprocess?\n\n # 'on' requires \nstats_start_collector\n\n # and stats_row_level to also be on\n\nautovacuum_naptime = 1min # time between autovacuum runs\n\nautovacuum_vacuum_threshold = 500 # min # of tuple updates before\n\n # 
vacuum\n\nautovacuum_analyze_threshold = 250 # min # of tuple updates before\n\n # analyze\n\nautovacuum_vacuum_scale_factor = 0.2 # fraction of rel size before\n\n # vacuum\n\nautovacuum_analyze_scale_factor = 0.1 # fraction of rel size before\n\n # analyze\n\nautovacuum_freeze_max_age = 200000000 # maximum XID age before forced \nvacuum\n\n # (change requires restart)\n\nautovacuum_vacuum_cost_delay = -1 # default vacuum cost delay for\n\n # autovacuum, -1 means use\n\n # vacuum_cost_delay\n\nautovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for\n\n # autovacuum, -1 means use\n\n # vacuum_cost_limit\n\n\n\n------------------------------------------------------------------------------------\n\nDatabase Environment\n\n\n\nThere are 18 databases running within this PostgreSQL cluster, the database \nwe are having the vacuum issue with is described as follows:\n\n\n\n List of relations\n\n Schema | Name | Type | Owner\n\n--------+---------------------+----------+------------\n\n public | dialplans | table |\n\n public | extensions | table |\n\n public | extensions_id_seq | sequence |\n\n public | iaxaccounts | table |\n\n public | numbercategories | table |\n\n public | numbers | table |\n\n public | numbersubcategories | table |\n\n public | portnumbers | table |\n\n public | portrequests | table |\n\n public | sipaccounts | table |\n\n public | voicemailaccounts | table |\n\n\n\nThe table (with indexes) being used the most is described as follows:\n\n\n\n Table \"public.sipaccounts\"\n\n Column | Type | Modifiers\n\n--------------------+--------------------------+------------------------\n\n--------------------+--------------------------+----------------------\n\n id | character varying(36) | not null\n\n name | character varying(80) | not null\n\n accountcode | character varying(20) |\n\n amaflags | character varying(13) |\n\n callgroup | character varying(10) |\n\n callerid | character varying(80) |\n\n canreinvite | character(3) | default 'no'::bpchar\n\n context | character varying(80) |\n\n defaultip | character varying(15) |\n\n dtmfmode | character varying(7) |\n\n fromuser | character varying(80) |\n\n fromdomain | character varying(80) |\n\n fullcontact | character varying(80) |\n\n host | character varying(31) | not null default \n''::character varying\n\n insecure | character varying(11) |\n\n language | character(2) |\n\n mailbox | character varying(50) |\n\n md5secret | character varying(80) |\n\n nat | character varying(5) | not null default \n'no'::character varying\n\n deny | character varying(95) |\n\n permit | character varying(95) |\n\n mask | character varying(95) |\n\n pickupgroup | character varying(10) |\n\n port | character varying(5) | not null default \n''::character varying\n\n qualify | character(4) |\n\n restrictcid | character(1) |\n\n rtptimeout | character(3) |\n\n rtpholdtimeout | character(5) |\n\n secret | character varying(80) |\n\n type | character varying(6) | not null default \n'friend'::character varying\n\n username | character varying(80) | not null default \n''::character varying\n\n disallow | character varying(100) |\n\n allow | character varying(100) |\n\n musiconhold | character varying(100) |\n\n regseconds | integer | not null default 0\n\n ipaddr | character varying(15) | not null default \n''::character varying\n\n regexten | character varying(80) | not null default \n''::character varying\n\n cancallforward | character(3) | default 'yes'::bpchar\n\n setvar | character varying(100) | not null default \n''::character varying\n\n 
inserted | timestamp with time zone | not null default now()\n\n lastregister | timestamp with time zone |\n\n useragent | character varying(128) |\n\n natsendkeepalives | character(1) | default 'n'::bpchar\n\n natconnectionstate | character(1) |\n\n outboundproxyport | character varying(5) | not null default \n''::character varying\n\n outboundproxy | character varying(31) | not null default \n''::character varying\n\n voicemailextension | character varying(128) |\n\n pstncallerid | character varying(24) | default 'Uknown'::character \nvarying\n\n dialplan | character varying(64) |\n\n whitelabelid | character varying(32) |\n\n localcallprefix | character varying(10) |\n\nIndexes:\n\n \"sippeers_pkey\" PRIMARY KEY, btree (id), tablespace \"bf_service_idx\"\n\n \"sippeers_name_key\" UNIQUE, btree (name) CLUSTER, tablespace \n\"bf_service_idx\"\n\n \"accountcode_index\" btree (accountcode), tablespace \"bf_service_idx\"\n\n\n\n\n\nIf anyone has any comments/suggestions please feel free to respond. Any \nresponses are most welcome.\n\n\n\nThanks\n\nBruce\n\n\n\n\n", "msg_date": "Mon, 5 Mar 2007 12:33:18 -0000", "msg_from": "\"Bruce McAlister\" <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL 8.2.3 VACUUM Timings/Performance" }, { "msg_contents": "Bruce McAlister wrote:\n> Over time we have noticed increased response times from the database which \n> has an adverse affect on our registration times. After doing some research \n> it appears that this may have been related to our maintenance regime, and \n> has thus been amended as follows:\n> \n> \n> [1] AutoVacuum runs during the day over the entire PostgreSQL cluster,\n> \n> [2] A Vacuum Full Verbose is run during our least busy period (generally \n> 03:30) against the Database,\n> \n> [3] A Re-Index on the table is performed,\n> \n> [4] A Cluster on the table is performed against the most used index,\n> \n> [5] A Vacuum Analyze Verbose is run against the database.\n> \n> \n> These maintenance steps have been setup to run every 24 hours.\n> \n> \n> The database in essence, once loaded up and re-index is generally around \n> 17MB for data and 4.7MB for indexes in size.\n> \n> \n> Over a period of 24 hours the database can grow up to around 250MB and the \n> indexes around 33MB (Worst case thus far). When the maintenance kicks in, \n> the vacuum full verbose step can take up to 15 minutes to complete (worst \n> case). The re-index, cluster and vacuum analyze verbose steps complete in \n> under 1 second each. The problem here is the vacuum full verbose, which \n> renders the environment unusable during the vacuum phase. The idea here is \n> to try and get the vacuum full verbose step to complete in less than a \n> minute. Ideally, if we could get it to complete quicker then that would be \n> GREAT, but our minimal requirement is for it to complete at the very most 1 \n> minute. Looking at the specifications of our environment below, do you think \n> that this is at all possible?\n\n250MB+33MB isn't very much. It should easily fit in memory, I don't see \nwhy you need the 12 disk RAID array. Are you sure you got the numbers right?\n\nVacuum full is most likely a waste of time. Especially on the tables \nthat you cluster later, cluster will rewrite the whole table and indexes \nanyway. A regular normal vacuum should be enough to keep the table in \nshape. 
A reindex is also not usually necessary, and for the tables that \nyou cluster, it's a waste of time like vacuum full.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 05 Mar 2007 13:20:49 +0000", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.2.3 VACUUM Timings/Performance" }, { "msg_contents": "Hi Heikki,\n\nThanks for the reply.\n\nThe RAID array was implemented due to a projected growth pattern which \nincorporate all 18 of our databases. The sizings I mentioned only refer to 1 \nof those databases, which, is also the most heavily used database :)\n\nIf I understand you correctly, we could in essence change our maintenance \nroutine to the follwing:\n\n[1] Cluster on most used index\n[2] Perform a vacuum analyze on the table\n\nIf I read your post correctly, this will regenerate the index that the \ncluster is performed on (1 of 3) and also re-generate the table in the \nsequence of that index?\n\nIf that is the case, why would anyone use the vacuum full approach if they \ncould use the cluster command on a table/database that will regen these \nfiles for you. It almost seems like the vacuum full approach would, or \ncould, be obsoleted by the cluster command, especially if the timings in \ntheir respective runs are that different (in our case the vacuum full took \n15 minutes in our worst case, and the cluster command took under 1 second \nfor the same table and scenario).\n\nThe output of our script for that specific run is as follows (just in-case \ni'm missing something):\n\nChecking disk usage before maintenance on service (sipaccounts) at \n02-Mar-2007 03:30:00\n\n258M /database/pgsql/bf_service/data\n33M /database/pgsql/bf_service/index\n\nCompleted checking disk usage before maintenance on service (sipaccounts) at \n02-Mar-2007 03:30:00\n\nStarting VACUUM FULL VERBOSE on service (sipaccounts) at 02-Mar-2007 \n03:30:00\n\nINFO: vacuuming \"public.sipaccounts\"\nINFO: \"sipaccounts\": found 71759 removable, 9314 nonremovable row versions \nin 30324 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 318 to 540 bytes long.\nThere were 439069 unused item pointers.\nTotal free space (including removable row versions) is 241845076 bytes.\n28731 pages are or will become empty, including 41 at the end of the table.\n30274 pages containing 241510688 free bytes are potential move destinations.\nCPU 0.00s/0.05u sec elapsed 31.70 sec.\nINFO: index \"sippeers_name_key\" now contains 9314 row versions in 69 pages\nDETAIL: 7265 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.01u sec elapsed 1.52 sec.\nINFO: index \"sippeers_pkey\" now contains 9314 row versions in 135 pages\nDETAIL: 7161 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.01u sec elapsed 3.07 sec.\nINFO: index \"accountcode_index\" now contains 9314 row versions in 3347 pages\nDETAIL: 71759 index row versions were removed.\n1151 index pages have been deleted, 1151 are currently reusable.\nCPU 0.02s/0.08u sec elapsed 56.31 sec.\nINFO: \"sipaccounts\": moved 3395 row versions, truncated 30324 to 492 pages\nDETAIL: CPU 0.03s/0.56u sec elapsed 751.99 sec.\nINFO: index \"sippeers_name_key\" now contains 9314 row versions in 69 pages\nDETAIL: 3395 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.21 
sec.\nINFO: index \"sippeers_pkey\" now contains 9314 row versions in 135 pages\nDETAIL: 3395 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"accountcode_index\" now contains 9314 row versions in 3347 pages\nDETAIL: 3395 index row versions were removed.\n1159 index pages have been deleted, 1159 are currently reusable.\nCPU 0.01s/0.01u sec elapsed 30.03 sec.\nINFO: vacuuming \"pg_toast.pg_toast_2384131\"\nINFO: \"pg_toast_2384131\": found 0 removable, 0 nonremovable row versions in \n0 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 0 to 0 bytes long.\nThere were 0 unused item pointers.\nTotal free space (including removable row versions) is 0 bytes.\n0 pages are or will become empty, including 0 at the end of the table.\n0 pages containing 0 free bytes are potential move destinations.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"pg_toast_2384131_index\" now contains 0 row versions in 1 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nVACUUM\n\nCompleted VACUUM FULL VERBOSE on service (sipaccounts) at 02-Mar-2007 \n03:44:35\nStarting REINDEX on service (sipaccounts) at 02-Mar-2007 03:44:35\n\nREINDEX\n\nCompleted REINDEX on service (sipaccounts) at 02-Mar-2007 03:44:35\nStarting CLUSTER on service (sipaccounts) at 02-Mar-2007 03:44:35\n\nCLUSTER sipaccounts;\nCLUSTER\n\nCompleted CLUSTER on service (sipaccounts) at 02-Mar-2007 03:44:36\nStarting VACUUM ANALYZE VERBOSE on service (sipaccounts) at 02-Mar-2007 \n03:44:36\n\nINFO: vacuuming \"public.sipaccounts\"\nINFO: scanned index \"sippeers_name_key\" to remove 9 row versions\nDETAIL: CPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: scanned index \"sippeers_pkey\" to remove 9 row versions\nDETAIL: CPU 0.00s/0.00u sec elapsed 0.20 sec.\nINFO: scanned index \"accountcode_index\" to remove 9 row versions\nDETAIL: CPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"sipaccounts\": removed 9 row versions in 9 pages\nDETAIL: CPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"sippeers_name_key\" now contains 9361 row versions in 36 pages\nDETAIL: 9 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"sippeers_pkey\" now contains 9361 row versions in 69 pages\nDETAIL: 9 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"accountcode_index\" now contains 9361 row versions in 49 pages\nDETAIL: 9 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"sipaccounts\": found 9 removable, 9361 nonremovable row versions in \n495 pages\nDETAIL: 131 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n28 pages contain useful free space.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.84 sec.\nINFO: vacuuming \"pg_toast.pg_toast_2386447\"\nINFO: index \"pg_toast_2386447_index\" now contains 0 row versions in 1 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"pg_toast_2386447\": found 0 removable, 0 nonremovable row versions in \n0 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages 
contain useful free space.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: analyzing \"public.sipaccounts\"\nINFO: \"sipaccounts\": scanned 517 of 517 pages, containing 8966 live rows and \n800 dead rows; 3000 rows in sample, 8966 estimated total rows VACUUM\n\nCompleted VACUUM ANALYZE VERBOSE on service (sipaccounts) at 02-Mar-2007 \n03:44:39\nChecking disk usage after maintenance on service (sipaccounts) at \n02-Mar-2007 03:44:39\n\n22M /database/pgsql/bf_service/data\n\n6.7M /database/pgsql/bf_service/index\n\nCompleted checking disk usage after maintenance on service (sipaccounts) at \n02-Mar-2007 03:44:39\n\nThanks\nBruce\n\"Heikki Linnakangas\" <[email protected]> wrote in message \nnews:[email protected]...\n> Bruce McAlister wrote:\n>> Over time we have noticed increased response times from the database \n>> which has an adverse affect on our registration times. After doing some \n>> research it appears that this may have been related to our maintenance \n>> regime, and has thus been amended as follows:\n>>\n>>\n>> [1] AutoVacuum runs during the day over the entire PostgreSQL cluster,\n>>\n>> [2] A Vacuum Full Verbose is run during our least busy period (generally \n>> 03:30) against the Database,\n>>\n>> [3] A Re-Index on the table is performed,\n>>\n>> [4] A Cluster on the table is performed against the most used index,\n>>\n>> [5] A Vacuum Analyze Verbose is run against the database.\n>>\n>>\n>> These maintenance steps have been setup to run every 24 hours.\n>>\n>>\n>> The database in essence, once loaded up and re-index is generally around \n>> 17MB for data and 4.7MB for indexes in size.\n>>\n>>\n>> Over a period of 24 hours the database can grow up to around 250MB and \n>> the indexes around 33MB (Worst case thus far). When the maintenance kicks \n>> in, the vacuum full verbose step can take up to 15 minutes to complete \n>> (worst case). The re-index, cluster and vacuum analyze verbose steps \n>> complete in under 1 second each. The problem here is the vacuum full \n>> verbose, which renders the environment unusable during the vacuum phase. \n>> The idea here is to try and get the vacuum full verbose step to complete \n>> in less than a minute. Ideally, if we could get it to complete quicker \n>> then that would be GREAT, but our minimal requirement is for it to \n>> complete at the very most 1 minute. Looking at the specifications of our \n>> environment below, do you think that this is at all possible?\n>\n> 250MB+33MB isn't very much. It should easily fit in memory, I don't see \n> why you need the 12 disk RAID array. Are you sure you got the numbers \n> right?\n>\n> Vacuum full is most likely a waste of time. Especially on the tables that \n> you cluster later, cluster will rewrite the whole table and indexes \n> anyway. A regular normal vacuum should be enough to keep the table in \n> shape. 
A reindex is also not usually necessary, and for the tables that \n> you cluster, it's a waste of time like vacuum full.\n>\n> -- \n> Heikki Linnakangas\n> EnterpriseDB http://www.enterprisedb.com\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n> \n\n\n", "msg_date": "Mon, 5 Mar 2007 14:06:21 -0000", "msg_from": "\"Bruce McAlister\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 8.2.3 VACUUM Timings/Performance" }, { "msg_contents": "Bruce McAlister wrote:\n> Hi Heikki,\n> \n> Thanks for the reply.\n> \n> The RAID array was implemented due to a projected growth pattern which \n> incorporate all 18 of our databases. The sizings I mentioned only refer to 1 \n> of those databases, which, is also the most heavily used database :)\n> \n> If I understand you correctly, we could in essence change our maintenance \n> routine to the follwing:\n> \n> [1] Cluster on most used index\n> [2] Perform a vacuum analyze on the table\n> \n> If I read your post correctly, this will regenerate the index that the \n> cluster is performed on (1 of 3) and also re-generate the table in the \n> sequence of that index?\n\nThat's right. In fact, even cluster probably doesn't make much \ndifference in your case. Since the table fits in memory anyway, the \nphysical order of it doesn't matter much.\n\nI believe you would be fine just turning autovacuum on, and not doing \nany manual maintenance.\n\n> If that is the case, why would anyone use the vacuum full approach if they \n> could use the cluster command on a table/database that will regen these \n> files for you. It almost seems like the vacuum full approach would, or \n> could, be obsoleted by the cluster command, especially if the timings in \n> their respective runs are that different (in our case the vacuum full took \n> 15 minutes in our worst case, and the cluster command took under 1 second \n> for the same table and scenario).\n\nIn fact, getting rid of vacuum full, or changing it to work like \ncluster, has been proposed in the past. The use case really is pretty \nnarrow; cluster is a lot faster if there's a lot of unused space in the \ntable, and if there's not, vacuum full isn't going to do much so there's \nnot much point running it in the first place. The reason it exists is \nlargely historical, there hasn't been a pressing reason to remove it either.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 05 Mar 2007 14:44:31 +0000", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.2.3 VACUUM Timings/Performance" }, { "msg_contents": "* Heikki Linnakangas <[email protected]> [070305 09:46]:\n\n> >If that is the case, why would anyone use the vacuum full approach if they \n> >could use the cluster command on a table/database that will regen these \n> >files for you. It almost seems like the vacuum full approach would, or \n> >could, be obsoleted by the cluster command, especially if the timings in \n> >their respective runs are that different (in our case the vacuum full took \n> >15 minutes in our worst case, and the cluster command took under 1 second \n> >for the same table and scenario).\n> \n> In fact, getting rid of vacuum full, or changing it to work like \n> cluster, has been proposed in the past. 
The use case really is pretty \n> narrow; cluster is a lot faster if there's a lot of unused space in the \n> table, and if there's not, vacuum full isn't going to do much so there's \n> not much point running it in the first place. The reason it exists is \n> largely historical, there hasn't been a pressing reason to remove it either.\n\nI've never used CLUSTER, because I've always heard murmerings of it not\nbeing completely MVCC safe. From the TODO:\n * CLUSTER\n\to Make CLUSTER preserve recently-dead tuples per MVCC\n\t requirements\nBut the documents don't mention anything about cluster being unsafe.\n\nAFAIK, Vacuum full doesn't suffer the same MVCC issues that cluster\ndoes. Is this correct?\n\na.\n\n-- \nAidan Van Dyk Create like a god,\[email protected] command like a king,\nhttp://www.highrise.ca/ work like a slave.", "msg_date": "Mon, 5 Mar 2007 10:18:20 -0500", "msg_from": "Aidan Van Dyk <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.2.3 VACUUM Timings/Performance" }, { "msg_contents": "\"Bruce McAlister\" <[email protected]> writes:\n> [1] AutoVacuum runs during the day over the entire PostgreSQL cluster,\n\nGood, but evidently you need to make it more aggressive.\n\n> [2] A Vacuum Full Verbose is run during our least busy period (generally \n> 03:30) against the Database,\n\n> [3] A Re-Index on the table is performed,\n\n> [4] A Cluster on the table is performed against the most used index,\n\n> [5] A Vacuum Analyze Verbose is run against the database.\n\nThat is enormous overkill. Steps 2 and 3 are a 100% waste of time if\nyou are going to cluster in step 4. Just do the CLUSTER and then\nANALYZE (or VACUUM ANALYZE if you really must, but the value is marginal).\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Mar 2007 11:11:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.2.3 VACUUM Timings/Performance " }, { "msg_contents": "Aidan Van Dyk wrote:\n> * Heikki Linnakangas <[email protected]> [070305 09:46]:\n>> In fact, getting rid of vacuum full, or changing it to work like \n>> cluster, has been proposed in the past. The use case really is pretty \n>> narrow; cluster is a lot faster if there's a lot of unused space in the \n>> table, and if there's not, vacuum full isn't going to do much so there's \n>> not much point running it in the first place. The reason it exists is \n>> largely historical, there hasn't been a pressing reason to remove it either.\n> \n> I've never used CLUSTER, because I've always heard murmerings of it not\n> being completely MVCC safe. From the TODO:\n> * CLUSTER\n> \to Make CLUSTER preserve recently-dead tuples per MVCC\n> \t requirements\n\nGood point, I didn't remember that. Using cluster in an environment like \nthe OP has, cluster might actually break the consistency of concurrent \ntransactions.\n\n> But the documents don't mention anything about cluster being unsafe.\n\nReally? <checks docs>. Looks like you're right. Should definitely be \nmentioned in the docs.\n\n> AFAIK, Vacuum full doesn't suffer the same MVCC issues that cluster\n> does. Is this correct?\n\nThat's right. Vacuum full goes to great lengths to be MVCC-safe.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 05 Mar 2007 16:25:27 +0000", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.2.3 VACUUM Timings/Performance" }, { "msg_contents": "Hi Tom,\n\nThanks for the suggestion. 
It's been a while since I replied to this as I \nhad to go and do some further investigation of the docs with regards the \nautovacuum daemons configuration. According to the documentation, the \nformula's for the vacuum and analyze are as follows:\n\nVacuum\n vacuum threshold = vacuum base threshold + vacuum scale factor * number \nof tuples\nAnalyze\n analyze threshold = analyze base threshold + analyze scale factor * \nnumber of tuples\n\nMy current settings for autovacuum are as follows:\n\n# - Cost-Based Vacuum Delay -\n\nvacuum_cost_delay = 200 # 0-1000 milliseconds\nvacuum_cost_page_hit = 1 # 0-10000 credits\nvacuum_cost_page_miss = 10 # 0-10000 credits\nvacuum_cost_page_dirty = 20 # 0-10000 credits\nvacuum_cost_limit = 200 # 0-10000 credits\n\n#---------------------------------------------------------------------------\n# AUTOVACUUM PARAMETERS\n#---------------------------------------------------------------------------\n\nautovacuum = on # \nenable autovacuum subprocess?\n \n # 'on' requires stats_start_collector\n \n # and stats_row_level to also be on\nautovacuum_naptime = 1min # time \nbetween autovacuum runs\nautovacuum_vacuum_threshold = 500 # min # of tuple \nupdates before\n \n # vacuum\nautovacuum_analyze_threshold = 250 # min # of tuple \nupdates before\n \n # analyze\nautovacuum_vacuum_scale_factor = 0.2 # fraction of rel \nsize before\n \n # vacuum\nautovacuum_analyze_scale_factor = 0.1 # fraction of rel \nsize before\n \n # analyze\nautovacuum_freeze_max_age = 200000000 # maximum XID age before \nforced vacuum\n \n # (change requires restart)\nautovacuum_vacuum_cost_delay = -1 # default vacuum \ncost delay for\n \n # autovacuum, -1 means use\n \n # vacuum_cost_delay\nautovacuum_vacuum_cost_limit = -1 # default vacuum \ncost limit for\n \n # autovacuum, -1 means use\n \n # vacuum_cost_limit\n\nThus to make the autovacuum more aggressive I am thinking along the lines of \nchanging the following parameters:\n\nautovacuum_vacuum_threshold = 250\nautovacuum_analyze_threshold = 125\n\nThe documentation also mentions that when the autovacuum runs it selects a \nsingle database to process on that run. This means that the particular table \nthat we are interrested in will only be vacuumed once every 17 minutes, \nassuming we have 18 databases and the selection process is sequential \nthrough the database list.\n\n From my understanding of the documentation, the only way to work around this \nissue is to manually update the system catalog table pg_autovacuum and set \nthe pg_autovacuum.enabled field to false to skip the autovacuum on tables \nthat dont require such frequent vacuums. If I do enable this feature, and I \nmanually run a vacuumdb from the command line against that particular \ndisabled table, will the vacuum still process the table? I'm assuming too, \nthat the best tables to disable autovacuum on will be ones with a minimal \namount of update/delete queries run against it. 
For example, if we have a \ntable that only has inserts applied to it, it is safe to assume that that \ntable can safely be ignored by autovacuum.\n\nDo you have any other suggestions as to which tables generally can be \nexcluded from the autovacuum based on the usage patterns?\nCan you see anything with respect to my new autovacuum parameters that may \ncause issue's and are there any other parameters that you suggest I need to \nchange to make the autovacuum daemon more aggressive?\n\nPS: Currently we have the Cluster command running on the sipaccounts table \nas the vacuum full is taking too long. It would be nice though to have some \npiece of mind that the cluster command is mvcc safe, as Heikki and Aidan \nhave mentioned that it is not and may break things in our environment, I'm a \nlittle afraid of running with the cluster command, and should possibly go \nback to the vacuum full :/\n\nThanks all for any and all suggestions/comments.\n\nThanks\nBruce\n\n\n\"Tom Lane\" <[email protected]> wrote in message \nnews:[email protected]...\n> \"Bruce McAlister\" <[email protected]> writes:\n>> [1] AutoVacuum runs during the day over the entire PostgreSQL cluster,\n>\n> Good, but evidently you need to make it more aggressive.\n>\n>> [2] A Vacuum Full Verbose is run during our least busy period (generally\n>> 03:30) against the Database,\n>\n>> [3] A Re-Index on the table is performed,\n>\n>> [4] A Cluster on the table is performed against the most used index,\n>\n>> [5] A Vacuum Analyze Verbose is run against the database.\n>\n> That is enormous overkill. Steps 2 and 3 are a 100% waste of time if\n> you are going to cluster in step 4. Just do the CLUSTER and then\n> ANALYZE (or VACUUM ANALYZE if you really must, but the value is marginal).\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n> \n\n\n", "msg_date": "Fri, 9 Mar 2007 10:45:16 -0000", "msg_from": "\"Bruce McAlister\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 8.2.3 VACUUM Timings/Performance" }, { "msg_contents": "> In fact, getting rid of vacuum full, or changing it to work like\n> cluster, has been proposed in the past. The use case really is pretty\n> narrow; cluster is a lot faster if there's a lot of unused space in the\n> table, and if there's not, vacuum full isn't going to do much so there's\n> not much point running it in the first place. The reason it exists is\n> largely historical, there hasn't been a pressing reason to remove it either.\n\nI can assure you it is a great way to get back gigabytes when someone\nhas put no vacuum strategy in place and your 200K row table (with\nabout 200 bytes per row) is taking up 1.7gig!!!\nVive le truncate table, and vive le vacuum full!\n:-)\nAnton\n", "msg_date": "Tue, 13 Mar 2007 23:20:38 +0100", "msg_from": "\"Anton Melser\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.2.3 VACUUM Timings/Performance" }, { "msg_contents": "Hi All,\n\nOkay, I'm getting a little further now. I'm about to create entries in the \npg_autovacuum system tables. However, I'm a little confused as to how I go \nabout finding out the OID value of the tables. The pg_autovacuum table \nrequires the OID of the table you want to create settings for (vacrelid). \nCan anyone shed some light on how I can extract the OID of the table? 
Also, \nwhat happens if you create a table without OID's, are you still able to add \nit's details in the pg_autovacuum table if there is no OID associated with a \ntable?\n\n Name Type References Description\n vacrelid oid pg_class.oid The table this entry is for\n enabled bool If false, this table is never autovacuumed\n vac_base_thresh integer Minimum number of modified tuples before \nvacuum\n vac_scale_factor float4 Multiplier for reltuples to add to \nvac_base_thresh\n anl_base_thresh integer Minimum number of modified tuples before \nanalyze\n anl_scale_factor float4 Multiplier for reltuples to add to \nanl_base_thresh\n vac_cost_delay integer Custom vacuum_cost_delay parameter\n vac_cost_limit integer Custom vacuum_cost_limit parameter\n freeze_min_age integer Custom vacuum_freeze_min_age parameter\n freeze_max_age integer Custom autovacuum_freeze_max_age parameter\n\n\nThanks\nBruce\n\n\n\"Bruce McAlister\" <[email protected]> wrote in message \nnews:[email protected]...\n> Hi Tom,\n>\n> Thanks for the suggestion. It's been a while since I replied to this as I \n> had to go and do some further investigation of the docs with regards the \n> autovacuum daemons configuration. According to the documentation, the \n> formula's for the vacuum and analyze are as follows:\n>\n> Vacuum\n> vacuum threshold = vacuum base threshold + vacuum scale factor * number \n> of tuples\n> Analyze\n> analyze threshold = analyze base threshold + analyze scale factor * \n> number of tuples\n>\n> My current settings for autovacuum are as follows:\n>\n> # - Cost-Based Vacuum Delay -\n>\n> vacuum_cost_delay = 200 # 0-1000 milliseconds\n> vacuum_cost_page_hit = 1 # 0-10000 credits\n> vacuum_cost_page_miss = 10 # 0-10000 credits\n> vacuum_cost_page_dirty = 20 # 0-10000 credits\n> vacuum_cost_limit = 200 # 0-10000 credits\n>\n> #---------------------------------------------------------------------------\n> # AUTOVACUUM PARAMETERS\n> #---------------------------------------------------------------------------\n>\n> autovacuum = on # \n> enable autovacuum subprocess?\n> \n> # 'on' requires stats_start_collector\n> \n> # and stats_row_level to also be on\n> autovacuum_naptime = 1min # time \n> between autovacuum runs\n> autovacuum_vacuum_threshold = 500 # min # of tuple \n> updates before\n> \n> # vacuum\n> autovacuum_analyze_threshold = 250 # min # of \n> tuple updates before\n> \n> # analyze\n> autovacuum_vacuum_scale_factor = 0.2 # fraction of rel \n> size before\n> \n> # vacuum\n> autovacuum_analyze_scale_factor = 0.1 # fraction of \n> rel size before\n> \n> # analyze\n> autovacuum_freeze_max_age = 200000000 # maximum XID age \n> before forced vacuum\n> \n> # (change requires restart)\n> autovacuum_vacuum_cost_delay = -1 # default vacuum \n> cost delay for\n> \n> # autovacuum, -1 means use\n> \n> # vacuum_cost_delay\n> autovacuum_vacuum_cost_limit = -1 # default \n> vacuum cost limit for\n> \n> # autovacuum, -1 means use\n> \n> # vacuum_cost_limit\n>\n> Thus to make the autovacuum more aggressive I am thinking along the lines \n> of changing the following parameters:\n>\n> autovacuum_vacuum_threshold = 250\n> autovacuum_analyze_threshold = 125\n>\n> The documentation also mentions that when the autovacuum runs it selects a \n> single database to process on that run. 
This means that the particular \n> table that we are interrested in will only be vacuumed once every 17 \n> minutes, assuming we have 18 databases and the selection process is \n> sequential through the database list.\n>\n> From my understanding of the documentation, the only way to work around \n> this issue is to manually update the system catalog table pg_autovacuum \n> and set the pg_autovacuum.enabled field to false to skip the autovacuum on \n> tables that dont require such frequent vacuums. If I do enable this \n> feature, and I manually run a vacuumdb from the command line against that \n> particular disabled table, will the vacuum still process the table? I'm \n> assuming too, that the best tables to disable autovacuum on will be ones \n> with a minimal amount of update/delete queries run against it. For \n> example, if we have a table that only has inserts applied to it, it is \n> safe to assume that that table can safely be ignored by autovacuum.\n>\n> Do you have any other suggestions as to which tables generally can be \n> excluded from the autovacuum based on the usage patterns?\n> Can you see anything with respect to my new autovacuum parameters that may \n> cause issue's and are there any other parameters that you suggest I need \n> to change to make the autovacuum daemon more aggressive?\n>\n> PS: Currently we have the Cluster command running on the sipaccounts table \n> as the vacuum full is taking too long. It would be nice though to have \n> some piece of mind that the cluster command is mvcc safe, as Heikki and \n> Aidan have mentioned that it is not and may break things in our \n> environment, I'm a little afraid of running with the cluster command, and \n> should possibly go back to the vacuum full :/\n>\n> Thanks all for any and all suggestions/comments.\n>\n> Thanks\n> Bruce\n>\n>\n> \"Tom Lane\" <[email protected]> wrote in message \n> news:[email protected]...\n>> \"Bruce McAlister\" <[email protected]> writes:\n>>> [1] AutoVacuum runs during the day over the entire PostgreSQL cluster,\n>>\n>> Good, but evidently you need to make it more aggressive.\n>>\n>>> [2] A Vacuum Full Verbose is run during our least busy period (generally\n>>> 03:30) against the Database,\n>>\n>>> [3] A Re-Index on the table is performed,\n>>\n>>> [4] A Cluster on the table is performed against the most used index,\n>>\n>>> [5] A Vacuum Analyze Verbose is run against the database.\n>>\n>> That is enormous overkill. Steps 2 and 3 are a 100% waste of time if\n>> you are going to cluster in step 4. Just do the CLUSTER and then\n>> ANALYZE (or VACUUM ANALYZE if you really must, but the value is \n>> marginal).\n>>\n>> regards, tom lane\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 3: Have you checked our extensive FAQ?\n>>\n>> http://www.postgresql.org/docs/faq\n>>\n>\n> \n\n\n", "msg_date": "Fri, 16 Mar 2007 19:06:57 -0000", "msg_from": "\"Bruce McAlister\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 8.2.3 VACUUM Timings/Performance" }, { "msg_contents": "On Mar 16, 2007, at 2:06 PM, Bruce McAlister wrote:\n\n> Hi All,\n>\n> Okay, I'm getting a little further now. I'm about to create entries \n> in the\n> pg_autovacuum system tables. However, I'm a little confused as to \n> how I go\n> about finding out the OID value of the tables. The pg_autovacuum table\n> requires the OID of the table you want to create settings for \n> (vacrelid).\n> Can anyone shed some light on how I can extract the OID of the \n> table? 
Also, \n> what happens if you create a table without OID's, are you still \n> able to add \n> it's details in the pg_autovacuum table if there is no OID \n> associated with a \n> table?\n\nSELECT oid FROM pg_class where relname='table_name';\n\nThe WITH/WITHOUT OIDS clause of CREATE TABLE refers to whether or not \nto create oids for the rows of the table, not the table itself. \nTables always get an oid.\n\nerik jones <[email protected]>\nsoftware developer\n615-296-0838\nemma(r)", "msg_date": "Fri, 16 Mar 2007 14:22:04 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.2.3 VACUUM Timings/Performance" }, { "msg_contents": "On Fri, Mar 16, 2007 at 07:06:57PM -0000, Bruce McAlister wrote:\n> Okay, I'm getting a little further now. I'm about to create entries in the \n> pg_autovacuum system tables. However, I'm a little confused as to how I go \n> about finding out the OID value of the tables. The pg_autovacuum table \n> requires the OID of the table you want to create settings for (vacrelid). \n> Can anyone shed some light on how I can extract the OID of the table? Also, \n> what happens if you create a table without OID's, are you still able to add \n> it's details in the pg_autovacuum table if there is no OID associated with a \n> table?\n\nThe easiest would seem to be:\n\nSELECT 'mytable'::regclass;\n\nThat will get you the OID without you having to look it up yourself.\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> From each according to his ability. To each according to his ability to litigate.", "msg_date": "Sun, 18 Mar 2007 17:54:29 +0100", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.2.3 VACUUM Timings/Performance" } ]
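Pulling the suggestions in this thread together: the nightly maintenance Tom describes shrinks to two statements (the CLUSTER/MVCC caveat Heikki and Aidan raise above still applies).

    CLUSTER sipaccounts;
    ANALYZE sipaccounts;

The autovacuum arithmetic also shows why halving only the base thresholds changes little for a table of this size. Using the numbers quoted in the thread (autovacuum_vacuum_threshold = 500, scale factor 0.2, and roughly 9,300 live rows in sipaccounts per the VACUUM VERBOSE output):

    vacuum threshold = 500 + 0.2 * 9300 ~= 2,360 updated/deleted rows
    halved base:       250 + 0.2 * 9300 ~= 2,110 updated/deleted rows

so autovacuum_vacuum_scale_factor, not the base threshold, is the bigger lever here.

For the per-table pg_autovacuum entries discussed at the end of the thread, a minimal sketch follows. 'public.sipaccounts' is just the example table from this thread, and the -1 values are assumed to mean "fall back to the global settings", mirroring the convention the autovacuum_* cost parameters use in postgresql.conf. A row with enabled = false only stops the autovacuum daemon; a manual VACUUM or vacuumdb run still processes the table as normal.

    -- run as a superuser; the regclass cast supplies the OID for vacrelid
    INSERT INTO pg_autovacuum
           (vacrelid, enabled,
            vac_base_thresh, vac_scale_factor,
            anl_base_thresh, anl_scale_factor,
            vac_cost_delay, vac_cost_limit,
            freeze_min_age, freeze_max_age)
    VALUES ('public.sipaccounts'::regclass, false,
            -1, -1, -1, -1, -1, -1, -1, -1);

    -- and to hand the table back to autovacuum later:
    UPDATE pg_autovacuum SET enabled = true
     WHERE vacrelid = 'public.sipaccounts'::regclass;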
[ { "msg_contents": "Dear All,\nI have to take backup of a database as a SQL file using the pg_dump\nutility and I have to check the disk space \nbefore taking the backup. Hence I need to estimate the size of the\npg_dump output.\nThe size given by pg_database_size procedure and the size of file\ngenerated by pg_dump\nvary to a large extent. Kindly let me know how to estimate the size of\nthe file that pg_dump generates.\n\nNote: Please bear with us for the disclaimer because it is automated in\nthe exchange server.\nRegards, \nRavi\n\n\nDISCLAIMER:\n-----------------------------------------------------------------------------------------------------------------------\n\nThe contents of this e-mail and any attachment(s) are confidential and intended for the named recipient(s) only.\nIt shall not attach any liability on the originator or HCL or its affiliates. Any views or opinions presented in \nthis email are solely those of the author and may not necessarily reflect the opinions of HCL or its affiliates.\nAny form of reproduction, dissemination, copying, disclosure, modification, distribution and / or publication of \nthis message without the prior written consent of the author of this e-mail is strictly prohibited. If you have\nreceived this email in error please delete it and notify the sender immediately. Before opening any mail and \nattachments please check them for viruses and defect.\n\n-----------------------------------------------------------------------------------------------------------------------\n\n\n\n\nEstimate the size of the SQL file generated by pg_dump utility\n\n\n\nDear All,\nI have to take backup of a database as a SQL file using the pg_dump utility and I have to check the disk space \nbefore taking the backup. Hence I need to estimate the size of the pg_dump output.\nThe size given by pg_database_size procedure and the size of file generated by pg_dump\nvary to a large extent. Kindly let me know how to estimate the size of the file that pg_dump generates.\nNote: Please bear with us for the disclaimer because it is automated in the exchange server.\nRegards, \nRavi", "msg_date": "Mon, 5 Mar 2007 20:25:21 +0530", "msg_from": "\"Ravindran G-TLS,Chennai.\" <[email protected]>", "msg_from_op": true, "msg_subject": "Estimate the size of the SQL file generated by pg_dump utility" }, { "msg_contents": "am Mon, dem 05.03.2007, um 20:25:21 +0530 mailte Ravindran G-TLS,Chennai. folgendes:\n> Dear All,\n> \n> I have to take backup of a database as a SQL file using the pg_dump utility and\n> I have to check the disk space\n> \n> before taking the backup. Hence I need to estimate the size of the pg_dump\n> output.\n\nYou can take a empty backup by pipe the output to wc -c. Is this a\nsolution for you? You can see the result size in bytes.\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n", "msg_date": "Mon, 5 Mar 2007 16:34:51 +0100", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Estimate the size of the SQL file generated by pg_dump utility" }, { "msg_contents": "Ravindran G-TLS,Chennai. 
wrote:\n> Note: Please bear with us for the disclaimer because it is automated in\n> the exchange server.\n> Regards, \n> Ravi\n\nFYI, we are getting closer to rejecting any email with such a\ndisclaimer, or emailing you back every time saying we are ignoring the\ndisclaimer.\n\n---------------------------------------------------------------------------\n\n\n> \n> \n> DISCLAIMER:\n> -----------------------------------------------------------------------------------------------------------------------\n> \n> The contents of this e-mail and any attachment(s) are confidential and intended for the named recipient(s) only.\n> It shall not attach any liability on the originator or HCL or its affiliates. Any views or opinions presented in \n> this email are solely those of the author and may not necessarily reflect the opinions of HCL or its affiliates.\n> Any form of reproduction, dissemination, copying, disclosure, modification, distribution and / or publication of \n> this message without the prior written consent of the author of this e-mail is strictly prohibited. If you have\n> received this email in error please delete it and notify the sender immediately. Before opening any mail and \n> attachments please check them for viruses and defect.\n> \n> -----------------------------------------------------------------------------------------------------------------------\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Mon, 5 Mar 2007 19:19:12 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Estimate the size of the SQL file generated by\n pg_dump utility" }, { "msg_contents": "Bruce Momjian wrote:\n> Ravindran G-TLS,Chennai. wrote:\n>> Note: Please bear with us for the disclaimer because it is automated in\n>> the exchange server.\n>> Regards, \n>> Ravi\n> \n> FYI, we are getting closer to rejecting any email with such a\n> disclaimer, or emailing you back every time saying we are ignoring the\n> disclaimer.\n\nI think this issue cropped up a year or two ago, and one of the \nsuggestions was for the offender to simply put a link back to their \ndisclaimer at the foot of their email, rather than that uber-verbose \nmessage.\n", "msg_date": "Mon, 05 Mar 2007 18:19:54 -0800", "msg_from": "Bricklen Anderson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Estimate the size of the SQL file generated by pg_dump\n utility" }, { "msg_contents": "Bricklen Anderson wrote:\n> Bruce Momjian wrote:\n> > Ravindran G-TLS,Chennai. wrote:\n> >> Note: Please bear with us for the disclaimer because it is automated in\n> >> the exchange server.\n> >> Regards, \n> >> Ravi\n> > \n> > FYI, we are getting closer to rejecting any email with such a\n> > disclaimer, or emailing you back every time saying we are ignoring the\n> > disclaimer.\n> \n> I think this issue cropped up a year or two ago, and one of the \n> suggestions was for the offender to simply put a link back to their \n> disclaimer at the foot of their email, rather than that uber-verbose \n> message.\n\nRight. The problem is that most of the posters have no control over\ntheir footers --- it is added by their email software.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. 
+\n", "msg_date": "Mon, 5 Mar 2007 21:22:21 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Estimate the size of the SQL file generated by\n pg_dump utility" }, { "msg_contents": "In response to Bruce Momjian <[email protected]>:\n\n> Bricklen Anderson wrote:\n> > Bruce Momjian wrote:\n> > > Ravindran G-TLS,Chennai. wrote:\n> > >> Note: Please bear with us for the disclaimer because it is automated in\n> > >> the exchange server.\n> > >> Regards, \n> > >> Ravi\n> > > \n> > > FYI, we are getting closer to rejecting any email with such a\n> > > disclaimer, or emailing you back every time saying we are ignoring the\n> > > disclaimer.\n> > \n> > I think this issue cropped up a year or two ago, and one of the \n> > suggestions was for the offender to simply put a link back to their \n> > disclaimer at the foot of their email, rather than that uber-verbose \n> > message.\n> \n> Right. The problem is that most of the posters have no control over\n> their footers --- it is added by their email software.\n\nI'm curious, what problem does the disclaimer cause?\n\nI wrote the following TOS for my personal system:\nhttps://www.potentialtech.com/cms/node/9\nExcerpt of the relevant part:\n\"If you send me email, you are granting me the unrestricted right to use\nthe contents of that email however I see fit, unless otherwise agreed in\nwriting beforehand. You have no rights to the privacy of any email that you\nsend me. If I feel the need, I will forward emails to authorities or make\ntheir contents publicly available. By sending me email you consent to this\npolicy and agree that it overrides any disclaimers or policies that may\nexist elsewhere.\"\n\nI have no idea if that's legally binding or not, but I've talked to a few\nassociates who have some experience in law, and they all argue that email\ndisclaimers probably aren't legally binding anyway -- so the result is\nundefined.\n\nDon't know if this addresses the issue or confuses it ... ?\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n", "msg_date": "Tue, 6 Mar 2007 08:44:37 -0500", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Estimate the size of the SQL file generated by\n pg_dump utility" }, { "msg_contents": "Bill Moran wrote:\n> I'm curious, what problem does the disclaimer cause?\n> \n> I wrote the following TOS for my personal system:\n> https://www.potentialtech.com/cms/node/9\n> Excerpt of the relevant part:\n> \"If you send me email, you are granting me the unrestricted right to use\n> the contents of that email however I see fit, unless otherwise agreed in\n> writing beforehand. You have no rights to the privacy of any email that you\n> send me. If I feel the need, I will forward emails to authorities or make\n> their contents publicly available. By sending me email you consent to this\n> policy and agree that it overrides any disclaimers or policies that may\n> exist elsewhere.\"\n> \n> I have no idea if that's legally binding or not, but I've talked to a few\n> associates who have some experience in law, and they all argue that email\n> disclaimers probably aren't legally binding anyway -- so the result is\n> undefined.\n\nNo, it's not legally binding. Agreements are only binding if both parties agree, and someone sending you email has not consented to your statement. If I send you something with a copyright mark, you'd better respect it unless you have a signed agreement granting you rights. 
Federal law always wins.\n\nDisclaimers are bad for two reasons. First, they're powerless. Just because Acme Corp. attaches a disclaimer doesn't mean they've absolved themselves of responsibility for the actions of their employees. Second, they're insulting to the employees. It's a big red flag saying, \"We, Acme Corp., hire clowns we don't trust, and THIS person may be one of them!\"\n\nCraig\n", "msg_date": "Tue, 06 Mar 2007 07:07:45 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Estimate the size of the SQL file generated by pg_dump\n utility" }, { "msg_contents": "> > I'm curious, what problem does the disclaimer cause?\n> >\n> > I wrote the following TOS for my personal system:\n> > https://www.potentialtech.com/cms/node/9\n> > Excerpt of the relevant part:\n> > I have no idea if that's legally binding or not, but I've talked to a few\n> > associates who have some experience in law, and they all argue that email\n> > disclaimers probably aren't legally binding anyway -- so the result is\n> > undefined.\n>\n> No, it's not legally binding. Agreements are only binding if both parties agree, and someone sending you email has not consented to your statement. If I send you something with a copyright mark, you'd better respect it unless you have a signed agreement granting you rights. Federal law always wins.\n>\n> Disclaimers are bad for two reasons. First, they're powerless. Just because Acme Corp. attaches a disclaimer doesn't mean they've absolved themselves of responsibility for the actions of their employees. Second, they're insulting to the employees. It's a big red flag saying, \"We, Acme Corp., hire clowns we don't trust, and THIS person may be one of them!\"\n\nDear sirs, this is off-topic at best. Pls. discontinue this thread.\n\nregards\nClaus\n", "msg_date": "Tue, 6 Mar 2007 16:09:15 +0100", "msg_from": "\"Claus Guttesen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Estimate the size of the SQL file generated by pg_dump utility" }, { "msg_contents": "\"Craig A. James\" <[email protected]> writes:\n> Bill Moran wrote:\n>> I have no idea if that's legally binding or not, but I've talked to a few\n>> associates who have some experience in law, and they all argue that email\n>> disclaimers probably aren't legally binding anyway -- so the result is\n>> undefined.\n\n> No, it's not legally binding. Agreements are only binding if both\n> parties agree, and someone sending you email has not consented to your\n> statement.\n\nTo take this back to the PG problem: it's probably true that we can\nignore disclaimers as far as receiving, redistributing, and archiving\nmail list submissions goes. On the other hand, accepting a patch is\nanother matter.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Mar 2007 10:31:37 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Estimate the size of the SQL file generated by pg_dump utility " }, { "msg_contents": "\nOn Tue, 6 Mar 2007, Tom Lane wrote:\n> \"Craig A. James\" <[email protected]> writes:\n> > Bill Moran wrote:\n> >> I have no idea if that's legally binding or not, but I've talked to a few\n> >> associates who have some experience in law, and they all argue that email\n> >> disclaimers probably aren't legally binding anyway -- so the result is\n> >> undefined.\n>\n> > No, it's not legally binding. 
Agreements are only binding if both\n> > parties agree, and someone sending you email has not consented to your\n> > statement.\n>\n> To take this back to the PG problem: it's probably true that we can\n> ignore disclaimers as far as receiving, redistributing, and archiving\n> mail list submissions goes. On the other hand, accepting a patch is\n> another matter.\n\nA published policy on patch submission making them fit whatever legal\nmodel is desired would avoid any and all legal issues related to legalease\nincluded with a submission. The would-be patcher's action of submission\ncan also count as acknowledgement of the actual agreement - your\nagreement - if you've got the policy unambiguously and prominently\ndisplayed...\n\nHTH,\nRT\n\n\n-- \nRichard Troy, Chief Scientist\nScience Tools Corporation\n510-924-1363 or 202-747-1263\[email protected], http://ScienceTools.com/\n\n", "msg_date": "Tue, 6 Mar 2007 09:05:22 -0800 (PST)", "msg_from": "Richard Troy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Estimate the size of the SQL file generated by pg_dump\n utility" } ]
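Back on the question that opened this thread: pg_database_size() reports the on-disk size of the database, which includes indexes, dead rows and per-row storage overhead, none of which occupy space in a plain SQL dump, so the two figures will routinely differ a great deal. The wc -c suggestion above gives the real dump size without writing anything to disk; a rough sketch, with mydb as a placeholder database name:

    $ pg_dump mydb | wc -c          # bytes a plain-text dump would occupy
    $ pg_dump mydb | gzip | wc -c   # bytes if the dump will be stored compressed

The obvious cost is that this is a full dump run in itself, so it only helps if running pg_dump twice is acceptable.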
[ { "msg_contents": "Not quite a performance question, but I can't seem to find a simple answer\nto this. We're using 8.1.4 and the autovacuum daemon is running every 40\nseconds cycling between 3 databases. What is the easiest way to disable the\nautovacuumer for a minute or two, do some other work, then re-enable it? Do\nI have to modify postgresql.conf and send a HUP signal to pick up the\nchanges?\n\nI figured this would work but I can't find a reason why not:\n\n# show autovacuum;\n autovacuum\n------------\n on\n(1 row)\n\n# set autovacuum to off;\nERROR: parameter \"autovacuum\" cannot be changed now\n\nIn postgresql.conf:\n\nautovacuum = on\n\nThanks,\n\nSteve\n\nNot quite a performance question, but I can't seem to find a simple answer to this.  We're using 8.1.4 and the autovacuum daemon is running every 40 seconds cycling between 3 databases.  What is the easiest way to disable the autovacuumer for a minute or two, do some other work, then re-enable it?  Do I have to modify \npostgresql.conf and send a HUP signal to pick up the changes? \n \nI figured this would work but I can't find a reason why not:\n \n# show autovacuum; autovacuum------------ on(1 row)\n\n# set autovacuum to off;ERROR:  parameter \"autovacuum\" cannot be changed now\nIn postgresql.conf:\nautovacuum = on\nThanks,\nSteve", "msg_date": "Mon, 5 Mar 2007 11:00:48 -0500", "msg_from": "\"Steven Flatt\" <[email protected]>", "msg_from_op": true, "msg_subject": "Turning off Autovacuum" }, { "msg_contents": "If you want to disable it only for some tables, you can put special \nvalues into pg_autovacuum. This won't disable the autovacuum daemon, but \nsome of the tables won't be vacuumed.\n\nTomas\n> Not quite a performance question, but I can't seem to find a simple \n> answer to this. We're using 8.1.4 and the autovacuum daemon is \n> running every 40 seconds cycling between 3 databases. What is the \n> easiest way to disable the autovacuumer for a minute or two, do some \n> other work, then re-enable it? Do I have to modify postgresql.conf \n> and send a HUP signal to pick up the changes? \n> \n> I figured this would work but I can't find a reason why not:\n> \n> # show autovacuum;\n> autovacuum\n> ------------\n> on\n> (1 row)\n>\n> # set autovacuum to off;\n> ERROR: parameter \"autovacuum\" cannot be changed now\n>\n> In postgresql.conf:\n>\n> autovacuum = on\n>\n> Thanks,\n>\n> Steve\n>\n\n", "msg_date": "Mon, 05 Mar 2007 21:26:45 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Turning off Autovacuum" }, { "msg_contents": "Yeah, I'm hoping there's an easier way. I'd have to put several thousand\nentries in the pg_autovacuum table only to remove them a few minutes later.\nWhat I really want is to disable the daemon.\n\nAny idea why I can't just simply set autovacuum to off?\n\nSteve\n\n\nOn 3/5/07, Tomas Vondra <[email protected]> wrote:\n>\n> If you want to disable it only for some tables, you can put special\n> values into pg_autovacuum. This won't disable the autovacuum daemon, but\n> some of the tables won't be vacuumed.\n>\n> Tomas\n> > Not quite a performance question, but I can't seem to find a simple\n> > answer to this. We're using 8.1.4 and the autovacuum daemon is\n> > running every 40 seconds cycling between 3 databases. What is the\n> > easiest way to disable the autovacuumer for a minute or two, do some\n> > other work, then re-enable it? 
Do I have to modify postgresql.conf\n> > and send a HUP signal to pick up the changes?\n> >\n> > I figured this would work but I can't find a reason why not:\n> >\n> > # show autovacuum;\n> > autovacuum\n> > ------------\n> > on\n> > (1 row)\n> >\n> > # set autovacuum to off;\n> > ERROR: parameter \"autovacuum\" cannot be changed now\n> >\n> > In postgresql.conf:\n> >\n> > autovacuum = on\n> >\n> > Thanks,\n> >\n> > Steve\n> >\n>\n>\n\nYeah, I'm hoping there's an easier way.  I'd have to put several thousand entries in the pg_autovacuum table only to remove them a few minutes later.  What I really want is to disable the daemon.\n \nAny idea why I can't just simply set autovacuum to off?\n \nSteve \nOn 3/5/07, Tomas Vondra <[email protected]> wrote:\nIf you want to disable it only for some tables, you can put specialvalues into pg_autovacuum. This won't disable the autovacuum daemon, but\nsome of the tables won't be vacuumed.Tomas> Not quite a performance question, but I can't seem to find a simple> answer to this.  We're using 8.1.4 and the autovacuum daemon is> running every 40 seconds cycling between 3 databases.  What is the\n> easiest way to disable the autovacuumer for a minute or two, do some> other work, then re-enable it?  Do I have to modify postgresql.conf> and send a HUP signal to pick up the changes?>> I figured this would work but I can't find a reason why not:\n>> # show autovacuum;>  autovacuum> ------------>  on> (1 row)>> # set autovacuum to off;> ERROR:  parameter \"autovacuum\" cannot be changed now>\n> In postgresql.conf:>> autovacuum = on>> Thanks,>> Steve>", "msg_date": "Mon, 5 Mar 2007 16:47:33 -0500", "msg_from": "\"Steven Flatt\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Turning off Autovacuum" } ]
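The thread above is essentially a configuration question. A minimal sketch of the usual answer for an 8.1-era server (hedged: the reload behaviour and the pg_autovacuum column list are from memory of the 8.1 documentation, and "some_table" is only a placeholder): autovacuum cannot be changed with SET, but editing postgresql.conf and asking the postmaster to re-read it should be enough, with no restart required.

    -- after editing postgresql.conf so that it reads "autovacuum = off":
    SELECT pg_reload_conf();   -- same effect as "pg_ctl reload" or kill -HUP
    SHOW autovacuum;           -- should now report "off"
    -- ... do the maintenance work ...
    -- then set "autovacuum = on" in postgresql.conf again and reload once more.

    -- Per-table alternative (no daemon-wide change); -1 means "use the default":
    INSERT INTO pg_autovacuum
           (vacrelid, enabled,
            vac_base_thresh, vac_scale_factor,
            anl_base_thresh, anl_scale_factor,
            vac_cost_delay, vac_cost_limit)
    VALUES ('some_table'::regclass, false, -1, -1, -1, -1, -1, -1);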
[ { "msg_contents": "Hi, I'm new to tuning PostgreSQL and I have a query that gets slower \nafter I run a vacuum analyze. I believe it uses a Hash Join before \nthe analyze and a Nested Loop IN Join after. It seems the Nested \nLoop IN Join estimates the correct number of rows, but underestimates \nthe amount of time required. I am curious why the vacuum analyze \nmakes it slower and if that gives any clues as too which parameter I \nshould be tuning.\n\nBTW, I know the query could be re-structured more cleanly to remove \nthe sub-selects, but that doesn't impact the performance.\n\nthanks,\nJeff\n\n\n\nWelcome to psql 8.1.5, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help with psql commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\nplm_demo=# explain analyze SELECT count(*) AS count_all FROM symptoms \nWHERE ( 1=1 and symptoms.id in (select symptom_id from \nsymptom_reports sr where 1=1 and sr.user_id in (select id from users \nwhere disease_id=1))) ;\n \n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n-----------------------\nAggregate (cost=366.47..366.48 rows=1 width=0) (actual \ntime=125.093..125.095 rows=1 loops=1)\n -> Hash Join (cost=362.41..366.38 rows=36 width=0) (actual \ntime=124.162..124.859 rows=106 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".symptom_id)\n -> Seq Scan on symptoms (cost=0.00..3.07 rows=107 \nwidth=4) (actual time=0.032..0.295 rows=108 loops=1)\n -> Hash (cost=362.25..362.25 rows=67 width=4) (actual \ntime=124.101..124.101 rows=106 loops=1)\n -> HashAggregate (cost=361.58..362.25 rows=67 \nwidth=4) (actual time=123.628..123.854 rows=106 loops=1)\n -> Hash IN Join (cost=35.26..361.41 rows=67 \nwidth=4) (actual time=9.767..96.372 rows=13074 loops=1)\n Hash Cond: (\"outer\".user_id = \"inner\".id)\n -> Seq Scan on symptom_reports sr \n(cost=0.00..259.65 rows=13165 width=8) (actual time=0.029..33.359 \nrows=13074 loops=1)\n -> Hash (cost=35.24..35.24 rows=11 \nwidth=4) (actual time=9.696..9.696 rows=1470 loops=1)\n -> Bitmap Heap Scan on users \n(cost=2.04..35.24 rows=11 width=4) (actual time=0.711..6.347 \nrows=1470 loops=1)\n Recheck Cond: (disease_id = 1)\n -> Bitmap Index Scan on \nusers_disease_id_index (cost=0.00..2.04 rows=11 width=0) (actual \ntime=0.644..0.644 rows=2378 loops=1)\n Index Cond: (disease_id \n= 1)\nTotal runtime: 134.045 ms\n(15 rows)\n\n\nplm_demo=# vacuum analyze;\nVACUUM\nplm_demo=# analyze;\nANALYZE\n\nplm_demo=# explain analyze SELECT count(*) AS count_all FROM symptoms \nWHERE ( 1=1 and symptoms.id in (select symptom_id from \nsymptom_reports sr where 1=1 and sr.user_id in (select id from users \nwhere disease_id=1))) ;\n QUERY \nPLAN\n------------------------------------------------------------------------ \n-----------------------------------------------------------------\nAggregate (cost=586.47..586.48 rows=1 width=0) (actual \ntime=3441.385..3441.386 rows=1 loops=1)\n -> Nested Loop IN Join (cost=149.05..586.26 rows=85 width=0) \n(actual time=54.517..3441.115 rows=106 loops=1)\n Join Filter: (\"outer\".id = \"inner\".symptom_id)\n -> Seq Scan on symptoms (cost=0.00..3.08 rows=108 \nwidth=4) (actual time=0.007..0.273 rows=108 loops=1)\n -> Hash IN Join (cost=149.05..603.90 rows=13074 width=4) \n(actual time=0.078..24.503 rows=3773 loops=108)\n Hash Cond: (\"outer\".user_id = \"inner\".id)\n -> Seq Scan on 
symptom_reports sr \n(cost=0.00..258.74 rows=13074 width=8) (actual time=0.003..9.044 \nrows=3773 loops=108)\n -> Hash (cost=145.38..145.38 rows=1470 width=4) \n(actual time=7.608..7.608 rows=1470 loops=1)\n -> Seq Scan on users (cost=0.00..145.38 \nrows=1470 width=4) (actual time=0.006..4.353 rows=1470 loops=1)\n Filter: (disease_id = 1)\nTotal runtime: 3441.452 ms\n(11 rows)\n\n\n", "msg_date": "Mon, 5 Mar 2007 17:05:41 -0500", "msg_from": "Jeff Cole <[email protected]>", "msg_from_op": true, "msg_subject": "" }, { "msg_contents": "Jeff Cole <[email protected]> writes:\n> Hi, I'm new to tuning PostgreSQL and I have a query that gets slower \n> after I run a vacuum analyze. I believe it uses a Hash Join before \n> the analyze and a Nested Loop IN Join after. It seems the Nested \n> Loop IN Join estimates the correct number of rows, but underestimates \n> the amount of time required. I am curious why the vacuum analyze \n> makes it slower and if that gives any clues as too which parameter I \n> should be tuning.\n\nHm, the cost for the upper nestloop is way less than you would expect\ngiven that the HASH IN join is going to have to be repeated 100+ times.\nI think this must be due to a very low \"join_in_selectivity\" estimate\nbut I'm not sure why you are getting that, especially seeing that the\nrowcount estimates aren't far off. Can you show us the pg_stats\nrows for symptoms.id and symptom_reports.symptom_id?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Mar 2007 20:54:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " }, { "msg_contents": "\nOn Mar 5, 2007, at 8:54 PM, Tom Lane wrote:\n\n> Hm, the cost for the upper nestloop is way less than you would expect\n> given that the HASH IN join is going to have to be repeated 100+ \n> times.\n> I think this must be due to a very low \"join_in_selectivity\" estimate\n> but I'm not sure why you are getting that, especially seeing that the\n> rowcount estimates aren't far off. Can you show us the pg_stats\n> rows for symptoms.id and symptom_reports.symptom_id?\n>\n\nHi Tom, thanks for the response. Here are the pg_stats. 
I think I \nunderstand what the stats say, but I don't know what to conclude from \nthem.\n\n\nplm_stage=# select * from pg_stats where tablename = 'symptoms' and \nattname = 'id';\nschemaname | tablename | attname | null_frac | avg_width | n_distinct \n| most_common_vals | most_common_freqs | \nhistogram_bounds | correlation\n------------+-----------+---------+-----------+----------- \n+------------+------------------+------------------- \n+-------------------------------------+-------------\npublic | symptoms | id | 0 | 4 | -1 \n| | | \n{1,11,24,34,46,57,71,85,95,106,117} | 0.451606\n\n\nplm_stage=# select * from pg_stats where tablename = \n'symptom_reports' and attname = 'symptom_id';\nschemaname | tablename | attname | null_frac | avg_width | \nn_distinct | most_common_vals \n| \nmost_common_freqs | \nhistogram_bounds | correlation\n------------+-----------------+------------+-----------+----------- \n+------------+------------------------ \n+----------------------------------------------------------------------- \n---------------+-------------------------------------+-------------\npublic | symptom_reports | symptom_id | 0 | 4 \n| 80 | {3,2,4,1,5,8,9,7,10,6} | \n{0.094,0.0933333,0.0933333,0.092,0.0913333,0.0903333,0.0866667,0.0843333 \n,0.084,0.08} | {12,18,24,30,38,44,51,57,91,91,114} | 0.0955925\n\n\n\nAnd Ismo, I followed your suggestion to re-write the SQL more \ncleanly, and you are right it was faster, so that is certainly a \nsolution. Although I am still curious why my original query slowed \ndown after the vacuum analyze. In any case, here is the explain \nanalyze from the new query. Compare that to the 3441.452 ms of the \nold query after the analyze (and 134.045 ms before the analyze):\n\nplm_stage=# explain analyze SELECT count(distinct s.id) AS count_all \nFROM symptoms s ,symptom_reports sr,users u WHERE s.id=sr.symptom_id \nand sr.user_id=u.id and u.disease_id in (1);\n QUERY \nPLAN\n------------------------------------------------------------------------ \n-----------------------------------------------------------------\nAggregate (cost=770.05..770.06 rows=1 width=4) (actual \ntime=176.749..176.751 rows=1 loops=1)\n -> Hash Join (cost=89.43..737.50 rows=13020 width=4) (actual \ntime=7.762..142.063 rows=13038 loops=1)\n Hash Cond: (\"outer\".symptom_id = \"inner\".id)\n -> Hash Join (cost=86.09..538.86 rows=13020 width=4) \n(actual time=7.277..89.293 rows=13038 loops=1)\n Hash Cond: (\"outer\".user_id = \"inner\".id)\n -> Seq Scan on symptom_reports sr \n(cost=0.00..257.38 rows=13038 width=8) (actual time=0.003..30.499 \nrows=13038 loops=1)\n -> Hash (cost=82.41..82.41 rows=1471 width=4) \n(actual time=7.261..7.261 rows=1471 loops=1)\n -> Seq Scan on users u (cost=0.00..82.41 \nrows=1471 width=4) (actual time=0.006..4.133 rows=1471 loops=1)\n Filter: (disease_id = 1)\n -> Hash (cost=3.07..3.07 rows=107 width=4) (actual \ntime=0.469..0.469 rows=107 loops=1)\n -> Seq Scan on symptoms s (cost=0.00..3.07 rows=107 \nwidth=4) (actual time=0.007..0.247 rows=107 loops=1)\nTotal runtime: 176.842 ms\n(12 rows)\n\n\n", "msg_date": "Tue, 6 Mar 2007 10:40:11 -0500", "msg_from": "Jeff Cole <[email protected]>", "msg_from_op": true, "msg_subject": "Re: " }, { "msg_contents": "Jeff Cole <[email protected]> writes:\n> Hi Tom, thanks for the response. Here are the pg_stats. I think I \n> understand what the stats say, but I don't know what to conclude from \n> them.\n\nOK, the symptom_id row claims there are only 80 distinct values of\nsymptom_id in symptom_reports. 
This is a bit low (looks like the true\nstate of affairs is that all but 2 of the 108 entries of symptoms are\nrepresented in symptom_reports), but it's not horridly off considering\nthat you're using the rather low default statistics_target. What\nhappens is that the planner expects that on average only 80 rows of the\ninner join will need to be scanned to find a match for a given symptoms.id,\nand this makes the nestloop look cheap. However, per your previous\nEXPLAIN ANALYZE:\n\n> -> Nested Loop IN Join (cost=149.05..586.26 rows=85 width=0) (actual time=54.517..3441.115 rows=106 loops=1)\n> Join Filter: (\"outer\".id = \"inner\".symptom_id)\n> -> Seq Scan on symptoms (cost=0.00..3.08 rows=108 width=4) (actual time=0.007..0.273 rows=108 loops=1)\n> -> Hash IN Join (cost=149.05..603.90 rows=13074 width=4) (actual time=0.078..24.503 rows=3773 loops=108)\n\n\nthe *actual* average number of rows scanned is 3773. I'm not sure why\nthis should be --- is it possible that the distribution of keys in\nsymptom_reports is wildly uneven? This could happen if all of the\nphysically earlier rows in symptom_reports contain the same small set\nof symptom_ids, but the stats don't seem to indicate such a skew.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Mar 2007 11:40:53 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " }, { "msg_contents": "\nOn Mar 6, 2007, at 11:40 AM, Tom Lane wrote:\n\n> the *actual* average number of rows scanned is 3773. I'm not sure why\n> this should be --- is it possible that the distribution of keys in\n> symptom_reports is wildly uneven? This could happen if all of the\n> physically earlier rows in symptom_reports contain the same small set\n> of symptom_ids, but the stats don't seem to indicate such a skew.\n\nHi Tom, you are correct, the distribution is uneven... In the 13k \nsymptom_reports rows, there are 105 distinct symptom_ids. But the \nfirst 8k symptom_reports rows only have 10 distinct symptom_ids. \nCould this cause the problem and would there be anything I could do \nto address it?\n\nThanks for all your help, I appreciate it.\n\n-Jeff\n", "msg_date": "Tue, 6 Mar 2007 16:47:20 -0500", "msg_from": "Jeff Cole <[email protected]>", "msg_from_op": true, "msg_subject": "Re: " }, { "msg_contents": "Jeff Cole <[email protected]> writes:\n> Hi Tom, you are correct, the distribution is uneven... In the 13k \n> symptom_reports rows, there are 105 distinct symptom_ids. But the \n> first 8k symptom_reports rows only have 10 distinct symptom_ids. \n> Could this cause the problem and would there be anything I could do \n> to address it?\n\nAh-hah, yeah, that explains it. Is it worth your time to deliberately\nrandomize the order of the rows in symptom_reports? It wasn't clear\nwhether this query is actually something you need to optimize. You\nmight have other queries that benefit from the rows being in nonrandom\norder, so I'm not entirely sure that this is a good thing to do ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Mar 2007 11:37:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " }, { "msg_contents": "Hi Tom, thanks for the great job getting to the core of this \nproblem... I would say I'm not sure I want randomize the rows (not \nreally even sure how to do it without truncating the table and re- \nadding the records in a random order). 
I think for the moment I \nwill either a) re-write the query per Ismo's suggestion, or b) wait \nuntil more data comes into that table, potentially kicking the query \nplanner into not using the Nested Loop anymore.\n\nAnyway, thanks again, I appreciate it...\n\n-Jeff\n\n\nOn Mar 7, 2007, at 11:37 AM, Tom Lane wrote:\n\n> Jeff Cole <[email protected]> writes:\n>> Hi Tom, you are correct, the distribution is uneven... In the 13k\n>> symptom_reports rows, there are 105 distinct symptom_ids. But the\n>> first 8k symptom_reports rows only have 10 distinct symptom_ids.\n>> Could this cause the problem and would there be anything I could do\n>> to address it?\n>\n> Ah-hah, yeah, that explains it. Is it worth your time to deliberately\n> randomize the order of the rows in symptom_reports? It wasn't clear\n> whether this query is actually something you need to optimize. You\n> might have other queries that benefit from the rows being in nonrandom\n> order, so I'm not entirely sure that this is a good thing to do ...\n>\n> \t\t\tregards, tom lane\n\n", "msg_date": "Thu, 8 Mar 2007 10:30:03 -0500", "msg_from": "Jeff Cole <[email protected]>", "msg_from_op": true, "msg_subject": "Re: " } ]
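For reference, a hedged sketch of the two mitigations that come out of the thread above, using the table and column names mentioned there. Neither is guaranteed to flip the plan, so the effect should be confirmed with EXPLAIN ANALYZE on the real data (the default statistics target in 8.1 is 10, which is what Tom calls "rather low").

    -- 1. Give the planner a richer picture of symptom_id's distribution:
    ALTER TABLE symptom_reports ALTER COLUMN symptom_id SET STATISTICS 100;
    ANALYZE symptom_reports;

    -- 2. Optionally break the physical clustering of the early rows by
    --    rewriting the table in random order (only sensible if no other
    --    query depends on the current ordering):
    CREATE TABLE symptom_reports_shuffled AS
        SELECT * FROM symptom_reports ORDER BY random();
    -- ...then swap the tables and recreate indexes, constraints and triggers.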
[ { "msg_contents": "Hi, I'm new to tuning PostgreSQL and I have a query that gets slower \nafter I run a vacuum analyze. I believe it uses a Hash Join before \nthe analyze and a Nested Loop IN Join after. It seems the Nested \nLoop IN Join estimates the correct number of rows, but underestimates \nthe amount of time required. I am curious why the vacuum analyze \nmakes it slower and if that gives any clues as too which parameter I \nshould be tuning.\n\nBTW, I know the query could be re-structured more cleanly to remove \nthe sub-selects, but that doesn't seem to impact the performance.\n\nthanks,\nJeff\n\nWelcome to psql 8.1.5, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help with psql commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\nplm_demo=# explain analyze SELECT count(*) AS count_all FROM symptoms \nWHERE ( 1=1 and symptoms.id in (select symptom_id from \nsymptom_reports sr where 1=1 and sr.user_id in (select id from users \nwhere disease_id=1))) ;\n \n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n-----------------------\nAggregate (cost=366.47..366.48 rows=1 width=0) (actual \ntime=125.093..125.095 rows=1 loops=1)\n -> Hash Join (cost=362.41..366.38 rows=36 width=0) (actual \ntime=124.162..124.859 rows=106 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".symptom_id)\n -> Seq Scan on symptoms (cost=0.00..3.07 rows=107 \nwidth=4) (actual time=0.032..0.295 rows=108 loops=1)\n -> Hash (cost=362.25..362.25 rows=67 width=4) (actual \ntime=124.101..124.101 rows=106 loops=1)\n -> HashAggregate (cost=361.58..362.25 rows=67 \nwidth=4) (actual time=123.628..123.854 rows=106 loops=1)\n -> Hash IN Join (cost=35.26..361.41 rows=67 \nwidth=4) (actual time=9.767..96.372 rows=13074 loops=1)\n Hash Cond: (\"outer\".user_id = \"inner\".id)\n -> Seq Scan on symptom_reports sr \n(cost=0.00..259.65 rows=13165 width=8) (actual time=0.029..33.359 \nrows=13074 loops=1)\n -> Hash (cost=35.24..35.24 rows=11 \nwidth=4) (actual time=9.696..9.696 rows=1470 loops=1)\n -> Bitmap Heap Scan on users \n(cost=2.04..35.24 rows=11 width=4) (actual time=0.711..6.347 \nrows=1470 loops=1)\n Recheck Cond: (disease_id = 1)\n -> Bitmap Index Scan on \nusers_disease_id_index (cost=0.00..2.04 rows=11 width=0) (actual \ntime=0.644..0.644 rows=2378 loops=1)\n Index Cond: (disease_id \n= 1)\nTotal runtime: 134.045 ms\n(15 rows)\n\n\nplm_demo=# vacuum analyze;\nVACUUM\nplm_demo=# analyze;\nANALYZE\n\nplm_demo=# explain analyze SELECT count(*) AS count_all FROM symptoms \nWHERE ( 1=1 and symptoms.id in (select symptom_id from \nsymptom_reports sr where 1=1 and sr.user_id in (select id from users \nwhere disease_id=1))) ;\n QUERY \nPLAN\n------------------------------------------------------------------------ \n-----------------------------------------------------------------\nAggregate (cost=586.47..586.48 rows=1 width=0) (actual \ntime=3441.385..3441.386 rows=1 loops=1)\n -> Nested Loop IN Join (cost=149.05..586.26 rows=85 width=0) \n(actual time=54.517..3441.115 rows=106 loops=1)\n Join Filter: (\"outer\".id = \"inner\".symptom_id)\n -> Seq Scan on symptoms (cost=0.00..3.08 rows=108 \nwidth=4) (actual time=0.007..0.273 rows=108 loops=1)\n -> Hash IN Join (cost=149.05..603.90 rows=13074 width=4) \n(actual time=0.078..24.503 rows=3773 loops=108)\n Hash Cond: (\"outer\".user_id = \"inner\".id)\n -> Seq Scan on 
symptom_reports sr \n(cost=0.00..258.74 rows=13074 width=8) (actual time=0.003..9.044 \nrows=3773 loops=108)\n -> Hash (cost=145.38..145.38 rows=1470 width=4) \n(actual time=7.608..7.608 rows=1470 loops=1)\n -> Seq Scan on users (cost=0.00..145.38 \nrows=1470 width=4) (actual time=0.006..4.353 rows=1470 loops=1)\n Filter: (disease_id = 1)\nTotal runtime: 3441.452 ms\n(11 rows)\n\n", "msg_date": "Mon, 5 Mar 2007 20:27:10 -0500", "msg_from": "Jeff Cole <[email protected]>", "msg_from_op": true, "msg_subject": "query slows down after vacuum analyze" }, { "msg_contents": "\nAre you sure that:\n\nSELECT count(distinct s.id) AS count_all \nFROM symptoms s ,symptom_reports sr,users u\nWHERE s.id=sr.symptom_id and sr.user_id=u.id and u.disease_id=1;\n\nis as slow as\n\nSELECT count(*) AS count_all \nFROM symptoms \nWHERE (1=1 and symptoms.id in (\n select symptom_id from symptom_reports sr \n where 1=1 and sr.user_id in (\n select id from users where disease_id=1\n )\n )\n);\n\nI think that it's best to have database to deside how to find rows, so I \nlike to write all as \"clean\" as possible.\n\nonly when queries are slow I analyze them and try to write those different \nway.\n\nthat have worked great in oracle, where it seems that \"cleanest\" query is \nalways fastest. in postgres it's not always true, sometimes you must write \nsubqueries to make it faster.\n\nIsmo\n\nOn Mon, 5 Mar 2007, Jeff Cole wrote:\n\n> Hi, I'm new to tuning PostgreSQL and I have a query that gets slower after I\n> run a vacuum analyze. I believe it uses a Hash Join before the analyze and a\n> Nested Loop IN Join after. It seems the Nested Loop IN Join estimates the\n> correct number of rows, but underestimates the amount of time required. I am\n> curious why the vacuum analyze makes it slower and if that gives any clues as\n> too which parameter I should be tuning.\n> \n> BTW, I know the query could be re-structured more cleanly to remove the\n> sub-selects, but that doesn't seem to impact the performance.\n> \n> thanks,\n> Jeff\n> \n> Welcome to psql 8.1.5, the PostgreSQL interactive terminal.\n> \n> Type: \\copyright for distribution terms\n> \\h for help with SQL commands\n> \\? 
for help with psql commands\n> \\g or terminate with semicolon to execute query\n> \\q to quit\n> \n> plm_demo=# explain analyze SELECT count(*) AS count_all FROM symptoms WHERE (\n> 1=1 and symptoms.id in (select symptom_id from symptom_reports sr where 1=1\n> and sr.user_id in (select id from users where disease_id=1))) ;\n> QUERY\n> PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=366.47..366.48 rows=1 width=0) (actual time=125.093..125.095\n> rows=1 loops=1)\n> -> Hash Join (cost=362.41..366.38 rows=36 width=0) (actual\n> -> time=124.162..124.859 rows=106 loops=1)\n> Hash Cond: (\"outer\".id = \"inner\".symptom_id)\n> -> Seq Scan on symptoms (cost=0.00..3.07 rows=107 width=4) (actual\n> -> time=0.032..0.295 rows=108 loops=1)\n> -> Hash (cost=362.25..362.25 rows=67 width=4) (actual\n> -> time=124.101..124.101 rows=106 loops=1)\n> -> HashAggregate (cost=361.58..362.25 rows=67 width=4) (actual\n> -> time=123.628..123.854 rows=106 loops=1)\n> -> Hash IN Join (cost=35.26..361.41 rows=67 width=4)\n> -> (actual time=9.767..96.372 rows=13074 loops=1)\n> Hash Cond: (\"outer\".user_id = \"inner\".id)\n> -> Seq Scan on symptom_reports sr\n> -> (cost=0.00..259.65 rows=13165 width=8) (actual\n> -> time=0.029..33.359 rows=13074 loops=1)\n> -> Hash (cost=35.24..35.24 rows=11 width=4)\n> -> (actual time=9.696..9.696 rows=1470 loops=1)\n> -> Bitmap Heap Scan on users\n> -> (cost=2.04..35.24 rows=11 width=4) (actual\n> -> time=0.711..6.347 rows=1470 loops=1)\n> Recheck Cond: (disease_id = 1)\n> -> Bitmap Index Scan on\n> users_disease_id_index (cost=0.00..2.04\n> rows=11 width=0) (actual\n> time=0.644..0.644 rows=2378 loops=1)\n> Index Cond: (disease_id = 1)\n> Total runtime: 134.045 ms\n> (15 rows)\n> \n> \n> plm_demo=# vacuum analyze;\n> VACUUM\n> plm_demo=# analyze;\n> ANALYZE\n> \n> plm_demo=# explain analyze SELECT count(*) AS count_all FROM symptoms WHERE (\n> 1=1 and symptoms.id in (select symptom_id from symptom_reports sr where 1=1\n> and sr.user_id in (select id from users where disease_id=1))) ;\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=586.47..586.48 rows=1 width=0) (actual\n> time=3441.385..3441.386 rows=1 loops=1)\n> -> Nested Loop IN Join (cost=149.05..586.26 rows=85 width=0) (actual\n> -> time=54.517..3441.115 rows=106 loops=1)\n> Join Filter: (\"outer\".id = \"inner\".symptom_id)\n> -> Seq Scan on symptoms (cost=0.00..3.08 rows=108 width=4) (actual\n> -> time=0.007..0.273 rows=108 loops=1)\n> -> Hash IN Join (cost=149.05..603.90 rows=13074 width=4) (actual\n> -> time=0.078..24.503 rows=3773 loops=108)\n> Hash Cond: (\"outer\".user_id = \"inner\".id)\n> -> Seq Scan on symptom_reports sr (cost=0.00..258.74\n> -> rows=13074 width=8) (actual time=0.003..9.044 rows=3773\n> -> loops=108)\n> -> Hash (cost=145.38..145.38 rows=1470 width=4) (actual\n> -> time=7.608..7.608 rows=1470 loops=1)\n> -> Seq Scan on users (cost=0.00..145.38 rows=1470\n> -> width=4) (actual time=0.006..4.353 rows=1470 loops=1)\n> Filter: (disease_id = 1)\n> Total runtime: 3441.452 ms\n> (11 rows)\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the 
mailing list cleanly\n> \n", "msg_date": "Tue, 6 Mar 2007 07:56:37 +0200 (EET)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: query slows down after vacuum analyze" } ]
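A quick diagnostic related to the plan discussed above, to confirm that it is the nested-loop IN join itself that hurts: disable nested loops for a single transaction and re-run the original query. This is a troubleshooting aid only, not a setting to leave in place; the query text is taken from the thread.

    BEGIN;
    SET LOCAL enable_nestloop = off;
    EXPLAIN ANALYZE
    SELECT count(*) AS count_all
    FROM symptoms
    WHERE symptoms.id IN (SELECT symptom_id
                          FROM symptom_reports sr
                          WHERE sr.user_id IN (SELECT id
                                               FROM users
                                               WHERE disease_id = 1));
    ROLLBACK;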
[ { "msg_contents": "Hello!!\nI have perfectly installed ebmail and Hermes 2 whith JDK\n1.4.2 ,postgreSQL\n8.1 and Tomacat 5.0.28 and all good works.\nI send messages whith ebmail and it`s writing in Postgresql but I\nhave\nsome mistakes:\n\n2007-02-02 16:41:03 [Thread-6 ] <INFO > <cecid.ebms.spa> <Found 1\nmessage(s) in mail box>\n2007-02-02 16:41:13 [Thread-39 ] <INFO > <cecid.ebms.spa> <Received\nan ebxml message from mail box>\n2007-02-02 16:41:13 [Thread-39 ] <ERROR> <cecid.ebms.spa>\n<Unauthorized message, no principal id>\n2007-02-02 16:41:14 [Thread-39 ] <INFO > <cecid.ebms.spa> <Store\noutgoing message: [email protected]>\n2007-02-02 16:41:25 [Thread-6 ] <ERROR> <cecid.ebms.spa> <Error in\ncollecting message from mail box>\nhk.hku.cecid.piazza.commons.net.ConnectionException: Unable to\nconnect\nto incoming mail server\n by javax.mail.AuthenticationFailedException: [LOGIN-DELAY]\nminimum\ntime between mail checks violation\n at\nhk.hku.cecid.piazza.commons.net.MailReceiver.connect(MailReceiver.java:\n66)\n at\nhk.hku.cecid.ebms.spa.task.MailCollector.getTaskList(MailCollector.java:\n49)\n at\nhk.hku.cecid.piazza.commons.module.ActiveTaskModule.execute(ActiveTaskModul­­\ne.java:\n137)\n at\nhk.hku.cecid.piazza.commons.module.ActiveModule.run(ActiveModule.java:\n205)\n at java.lang.Thread.run(Thread.java:534)\nCaused by: javax.mail.AuthenticationFailedException: [LOGIN-DELAY]\nminimum time between mail checks violation\n at\ncom.sun.mail.pop3.POP3Store.protocolConnect(POP3Store.java:\n118)\n at javax.mail.Service.connect(Service.java:255)\n at javax.mail.Service.connect(Service.java:134)\n at javax.mail.Service.connect(Service.java:86)\n at\nhk.hku.cecid.piazza.commons.net.MailReceiver.connect(MailReceiver.java:\n63)\n ... 4 more\n\n\nCan someome help me? is very important for me, Thank very much\n\n", "msg_date": "6 Mar 2007 03:36:13 -0800", "msg_from": "\"lissette\" <[email protected]>", "msg_from_op": true, "msg_subject": "help" }, { "msg_contents": "On Tue, 6 Mar 2007, lissette wrote:\n\n> 2007-02-02 16:41:03 [Thread-6 ] <INFO > <cecid.ebms.spa> <Found 1\n> message(s) in mail box>\n> 2007-02-02 16:41:13 [Thread-39 ] <INFO > <cecid.ebms.spa> <Received\n> an ebxml message from mail box>\n> 2007-02-02 16:41:13 [Thread-39 ] <ERROR> <cecid.ebms.spa>\n> <Unauthorized message, no principal id>\n> 2007-02-02 16:41:14 [Thread-39 ] <INFO > <cecid.ebms.spa> <Store\n> outgoing message: [email protected]>\n> 2007-02-02 16:41:25 [Thread-6 ] <ERROR> <cecid.ebms.spa> <Error in\n> collecting message from mail box>\n> hk.hku.cecid.piazza.commons.net.ConnectionException: Unable to\n> connect\n> to incoming mail server\n> by javax.mail.AuthenticationFailedException: [LOGIN-DELAY]\n> minimum\n> time between mail checks violation\n> at\n> hk.hku.cecid.piazza.commons.net.MailReceiver.connect(MailReceiver.java:\n> 66)\n\nDid you actually read the error message? This seems unrelated to\nPostgreSQL and definately not related to PostgreSQL performance.\n\nGavin\n", "msg_date": "Fri, 9 Mar 2007 11:46:28 +1100 (EST)", "msg_from": "Gavin Sherry <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help" } ]
[ { "msg_contents": "Hello,\n\nIs anyone aware of some test-suite for Postgresql?\n\nThanks,\nNeelam\n\nHello,Is anyone aware of some test-suite for Postgresql?Thanks,Neelam", "msg_date": "Tue, 6 Mar 2007 11:40:16 -0600", "msg_from": "\"Neelam Goyal\" <[email protected]>", "msg_from_op": true, "msg_subject": "Automated test-suite for Postgres" }, { "msg_contents": "Neelam Goyal wrote:\n> Is anyone aware of some test-suite for Postgresql?\n\nWhat do you want to test? PostgreSQL itself or some application using \nit? Do you want to do performance testing or functional regression \ntesting, perhaps?\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 06 Mar 2007 19:03:36 +0000", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Automated test-suite for Postgres" } ]
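As Heikki's reply points out, the answer depends on which kind of testing is meant. Two common starting points, sketched from memory (the exact options should be checked against the docs for the installed version): the functional regression suite that ships with the source tree, and contrib/pgbench for rough throughput testing.

    # functional regression tests, run against an already-running server,
    # from the top of a configured source tree:
    make installcheck

    # rough performance testing with pgbench against a scratch database:
    pgbench -i -s 10 testdb          # initialize, scale factor 10
    pgbench -c 8 -t 1000 testdb      # 8 clients, 1000 transactions each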
[ { "msg_contents": "Hi,\n\nI have database with a huge amount of data so i'm trying to make it as fast as possible and minimize space.\n\nOne thing i've done is join on a prepopulated date lookup table to prevent a bunch of rows with duplicate date columns. Without this I'd have about 2500 rows per hour with the exact same date w. timestamp in them.\n\nMy question is, with postgres do I really gain anything by this, or should I just use the date w. timestamp column on the primary table and ditch the join on the date_id table.\n\nPrimary table is all integers like:\n\ndate id | num1 | num2 | num3 | num4 | num5 | num6 | num7 | num 8\n-------------------------------------------------------------------------------------------------\nprimary key is on date to num->6 columns\n\ndate_id lookup table:\n\nThis table is prepopulated with the date values that will be used.\n\ndate_id | date w timestamp\n----------------------------------------\n1 | 2007-2-15 Midnight\n2 | 2007-2-15 1 am\n3 | 2007-2-15 2 am etc for 24 hours each day\n\n\nEach day 60k records are added to a monthly table structured as above, about 2500 per hour.\n\nthank you for your advice\n\n \n---------------------------------\nIt's here! Your new message!\nGet new email alerts with the free Yahoo! Toolbar.\nHi,I have database with a huge amount of data so i'm trying to make it as fast as possible and minimize space.One thing i've done is join on a prepopulated date lookup table to prevent a bunch of rows with duplicate date columns. Without this I'd have about 2500 rows per hour with the exact same date w. timestamp in them.My question is, with postgres do I really gain anything by this, or should I just use the date w. timestamp column on the primary table and ditch the join on the date_id table.Primary table is all integers like:date id | num1 | num2 | num3 | num4 | num5 | num6 | num7 | num 8-------------------------------------------------------------------------------------------------primary key is on date to num->6 columnsdate_id lookup table:This table is prepopulated with the date values that will be used.date_id | date w\n timestamp----------------------------------------1         | 2007-2-15 Midnight2         | 2007-2-15 1 am3         | 2007-2-15 2 am  etc for 24 hours each dayEach day 60k records are added to a monthly table structured as above, about 2500 per hour.thank you for your advice\nIt's here! Your new message!Get\n new email alerts with the free Yahoo! Toolbar.", "msg_date": "Tue, 6 Mar 2007 13:49:27 -0800 (PST)", "msg_from": "Zoolin Lin <[email protected]>", "msg_from_op": true, "msg_subject": "Any advantage to integer vs stored date w. timestamp" }, { "msg_contents": "Zoolin Lin wrote:\n> Hi,\n> \n> I have database with a huge amount of data so i'm trying to make it\n> as fast as possible and minimize space.\n> \n> One thing i've done is join on a prepopulated date lookup table to\n> prevent a bunch of rows with duplicate date columns. Without this I'd\n> have about 2500 rows per hour with the exact same date w. timestamp\n> in them.\n> \n> My question is, with postgres do I really gain anything by this, or\n> should I just use the date w. 
timestamp column on the primary table\n> and ditch the join on the date_id table.\n> \n> Primary table is all integers like:\n> \n> date id | num1 | num2 | num3 | num4 | num5 | num6 | num7 | num 8 \n> -------------------------------------------------------------------------------------------------\n> primary key is on date to num->6 columns\n\nWhat types are num1->8?\n\n> date_id lookup table:\n> \n> This table is prepopulated with the date values that will be used.\n> \n> date_id | date w timestamp ---------------------------------------- 1\n> | 2007-2-15 Midnight 2 | 2007-2-15 1 am 3 | 2007-2-15\n> 2 am etc for 24 hours each day\n\nIf you only want things accurate to an hour, you could lost the join and\njust store it as an int: 2007021500, 2007021501 etc.\n\nThat should see you good to year 2100 or so.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 07 Mar 2007 09:05:35 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any advantage to integer vs stored date w. timestamp" }, { "msg_contents": "thanks for your reply\n\n> Primary table is all integers like:\n> \n> date id | num1 | num2 | num3 | num4 | num5 | num6 | num7 | num 8 \n> -------------------------------------------------------------------------------------------------\n> primary key is on date to num->6 columns\n\n>>What types are num1->8?\n\nThey are all integer\n\n\n> date_id | date w timestamp ---------------------------------------- 1\n> | 2007-2-15 Midnight 2 | 2007-2-15 1 am 3 | 2007-2-15\n> 2 am etc for 24 hours each day\n\n>>If you only want things accurate to an hour, you could lost the join and\n>>just store it as an int: 2007021500, 2007021501 etc.\n\nHmm yeh I could, I think with the amount of data in the db though it behooves me to use one of the date types, even if via lookup table.\n\nSo I guess I'm just not sure if I'm really gaining anything by using an integer date id column and doing a join on a date lookup table, vs just making it a date w. timestamp column and having duplicate dates in that column.\n\nI would imagine internally that the date w. timestamp is stored as perhaps a time_t type plus some timezone information. I don't know if it takes that much more space, or there's a significant performance penalty in using it\n\n2,500 rows per hour, with duplicate date columns, seems like it could add up though.\n\nthanks\n\nRichard Huxton <[email protected]> wrote: Zoolin Lin wrote:\n> Hi,\n> \n> I have database with a huge amount of data so i'm trying to make it\n> as fast as possible and minimize space.\n> \n> One thing i've done is join on a prepopulated date lookup table to\n> prevent a bunch of rows with duplicate date columns. Without this I'd\n> have about 2500 rows per hour with the exact same date w. timestamp\n> in them.\n> \n> My question is, with postgres do I really gain anything by this, or\n> should I just use the date w. 
timestamp column on the primary table\n> and ditch the join on the date_id table.\n> \n> Primary table is all integers like:\n> \n> date id | num1 | num2 | num3 | num4 | num5 | num6 | num7 | num 8 \n> -------------------------------------------------------------------------------------------------\n> primary key is on date to num->6 columns\n\nWhat types are num1->8?\n\n> date_id lookup table:\n> \n> This table is prepopulated with the date values that will be used.\n> \n> date_id | date w timestamp ---------------------------------------- 1\n> | 2007-2-15 Midnight 2 | 2007-2-15 1 am 3 | 2007-2-15\n> 2 am etc for 24 hours each day\n\nIf you only want things accurate to an hour, you could lost the join and\njust store it as an int: 2007021500, 2007021501 etc.\n\nThat should see you good to year 2100 or so.\n\n-- \n Richard Huxton\n Archonet Ltd\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n\n\n \n---------------------------------\nFood fight? Enjoy some healthy debate\nin the Yahoo! Answers Food & Drink Q&A.\nthanks for your reply> Primary table is all integers like:> > date id | num1 | num2 | num3 | num4 | num5 | num6 | num7 | num 8 > -------------------------------------------------------------------------------------------------> primary key is on date to num->6 columns>>What types are num1->8?They are all integer> date_id | date w timestamp ---------------------------------------- 1> | 2007-2-15 Midnight 2 | 2007-2-15 1 am 3 | 2007-2-15> 2 am etc for 24 hours each day>>If you only want things accurate to an hour, you could lost the join and>>just store it as an int: 2007021500, 2007021501 etc.Hmm yeh I could, I think with the amount of data in the db though it behooves me to use one of the date types, even if via lookup table.So I guess I'm just not sure if I'm really gaining anything by using an integer  date id\n column and doing a join on a date lookup table, vs just making it a date w. timestamp column and having duplicate dates in that column.I would imagine internally that the date w. timestamp is stored as perhaps a time_t type  plus some timezone information. I don't know if it takes that much more space, or there's a significant performance penalty in using it2,500 rows per hour, with duplicate date columns, seems like it could add up though.thanksRichard Huxton <[email protected]> wrote: Zoolin Lin wrote:> Hi,> > I have database with a huge amount of data so i'm trying to make it> as fast as possible and minimize space.> > One thing i've done is join on a prepopulated date lookup table to> prevent a bunch of rows with duplicate date columns. Without this\n I'd> have about 2500 rows per hour with the exact same date w. timestamp> in them.> > My question is, with postgres do I really gain anything by this, or> should I just use the date w. 
timestamp column on the primary table> and ditch the join on the date_id table.> > Primary table is all integers like:> > date id | num1 | num2 | num3 | num4 | num5 | num6 | num7 | num 8 > -------------------------------------------------------------------------------------------------> primary key is on date to num->6 columnsWhat types are num1->8?> date_id lookup table:> > This table is prepopulated with the date values that will be used.> > date_id | date w timestamp ---------------------------------------- 1> | 2007-2-15 Midnight 2 | 2007-2-15 1 am 3 | 2007-2-15> 2 am etc for 24 hours each dayIf you only want\n things accurate to an hour, you could lost the join andjust store it as an int: 2007021500, 2007021501 etc.That should see you good to year 2100 or so.-- Richard Huxton Archonet Ltd---------------------------(end of broadcast)---------------------------TIP 2: Don't 'kill -9' the postmaster\nFood fight? Enjoy some healthy debatein the Yahoo! Answers Food & Drink Q&A.", "msg_date": "Wed, 7 Mar 2007 06:50:40 -0800 (PST)", "msg_from": "Zoolin Lin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any advantage to integer vs stored date w. timestamp" }, { "msg_contents": "Zoolin Lin wrote:\n> thanks for your reply\n> \n>> Primary table is all integers like:\n>> \n>> date id | num1 | num2 | num3 | num4 | num5 | num6 | num7 | num 8 \n>> -------------------------------------------------------------------------------------------------\n>> primary key is on date to num->6 columns\n> \n>>> What types are num1->8?\n> \n> They are all integer\n\nHmm - not sure if you'd get any better packing if you could make some\nint2 and put them next to each other. Need to test.\n\n>> date_id | date w timestamp ----------------------------------------\n>> 1 | 2007-2-15 Midnight 2 | 2007-2-15 1 am 3 |\n>> 2007-2-15 2 am etc for 24 hours each day\n> \n>>> If you only want things accurate to an hour, you could lost the\n>>> join and just store it as an int: 2007021500, 2007021501 etc.\n> \n> Hmm yeh I could, I think with the amount of data in the db though it\n> behooves me to use one of the date types, even if via lookup table.\n\nYou can always create it as a custom ZLDate type. All it really needs to \nbe is an int with a few casts.\n\n> So I guess I'm just not sure if I'm really gaining anything by using\n> an integer date id column and doing a join on a date lookup table,\n> vs just making it a date w. timestamp column and having duplicate\n> dates in that column.\n> \n> I would imagine internally that the date w. timestamp is stored as\n> perhaps a time_t type plus some timezone information. I don't know\n> if it takes that much more space, or there's a significant\n> performance penalty in using it\n\nIt's a double or int64 I believe, so allow 8 bytes instead of 4 for your \nint.\n\n> 2,500 rows per hour, with duplicate date columns, seems like it could\n> add up though.\n\nWell, let's see 2500*24*365 = 21,900,000 * 4 bytes extra = 83MB \nadditional storage over a year.\tNot sure it's worth worrying about.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 07 Mar 2007 15:04:18 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any advantage to integer vs stored date w. 
timestamp" }, { "msg_contents": "Thank you for the reply\n \n>> Primary table is all integers like:\n>> \n>> date id | num1 | num2 | num3 | num4 | num5 | num6 | num7 | num 8 \n>> -------------------------------------------------------------------------------------------------\n>> primary key is on date to num->6 columns\n> \n>>> What types are num1->8?\n> \n> They are all integer\n\n>>Hmm - not sure if you'd get any better packing if you could make some\n>>int2 and put them next to each other. Need to test.\n\nThanks, I find virtually nothing on the int2 column type? beyond brief mention here\nhttp://www.postgresql.org/docs/8.2/interactive/datatype-numeric.html#DATATYPE-INT\n\nCould i prevail on you to expand on packing wtih int2 a bit more, or point me in the right direction for documentation?\n\nIf there's some way I can pack multipe columns into one to save space, yet still effectively query on them, even if it's a lot slower, that would be great.\n\nMy current scheme, though as normalized and summarized as I can make it, really chews up a ton of space. It might even be chewing up more than the data files i'm summarizing, I assume due to the indexing.\n\nRegading saving disk space, I saw someone mention doing a custom build and changing\n\nTOAST_TUPLE_THRESHOLD/TOAST_TUPLE_TARGET\n\nSo data is compressed sooner, it seems like that might be a viable option as well.\n\nhttp://www.thescripts.com/forum/thread422854.html\n\n> 2,500 rows per hour, with duplicate date columns, seems like it could\n> add up though.\n\n>>>Well, let's see 2500*24*365 = 21,900,000 * 4 bytes extra = 83MB \n>>>additional storage over a year. Not sure it's worth worrying about.\n\nAhh yes probably better to make it a date w. timestamp column then.\n\nZ\n\n\n\n\n\n\nRichard Huxton <[email protected]> wrote: Zoolin Lin wrote:\n> thanks for your reply\n> \n>> Primary table is all integers like:\n>> \n>> date id | num1 | num2 | num3 | num4 | num5 | num6 | num7 | num 8 \n>> -------------------------------------------------------------------------------------------------\n>> primary key is on date to num->6 columns\n> \n>>> What types are num1->8?\n> \n> They are all integer\n\nHmm - not sure if you'd get any better packing if you could make some\nint2 and put them next to each other. Need to test.\n\n>> date_id | date w timestamp ----------------------------------------\n>> 1 | 2007-2-15 Midnight 2 | 2007-2-15 1 am 3 |\n>> 2007-2-15 2 am etc for 24 hours each day\n> \n>>> If you only want things accurate to an hour, you could lost the\n>>> join and just store it as an int: 2007021500, 2007021501 etc.\n> \n> Hmm yeh I could, I think with the amount of data in the db though it\n> behooves me to use one of the date types, even if via lookup table.\n\nYou can always create it as a custom ZLDate type. All it really needs to \nbe is an int with a few casts.\n\n> So I guess I'm just not sure if I'm really gaining anything by using\n> an integer date id column and doing a join on a date lookup table,\n> vs just making it a date w. timestamp column and having duplicate\n> dates in that column.\n> \n> I would imagine internally that the date w. timestamp is stored as\n> perhaps a time_t type plus some timezone information. 
I don't know\n> if it takes that much more space, or there's a significant\n> performance penalty in using it\n\nIt's a double or int64 I believe, so allow 8 bytes instead of 4 for your \nint.\n\n> 2,500 rows per hour, with duplicate date columns, seems like it could\n> add up though.\n\nWell, let's see 2500*24*365 = 21,900,000 * 4 bytes extra = 83MB \nadditional storage over a year. Not sure it's worth worrying about.\n\n-- \n Richard Huxton\n Archonet Ltd\n\n\n \n---------------------------------\nNo need to miss a message. Get email on-the-go \nwith Yahoo! Mail for Mobile. Get started.\nThank you for the reply >> Primary table is all integers like:>> >> date id | num1 | num2 | num3 | num4 | num5 | num6 | num7 | num 8 >> ------------------------------------------------------------------------------------------------->> primary key is on date to num->6 columns> >>> What types are num1->8?> > They are all integer>>Hmm - not sure if you'd get any better packing if you could make some>>int2 and put them next to each other. Need to test.Thanks, I find virtually nothing on the int2 column type? beyond brief mention herehttp://www.postgresql.org/docs/8.2/interactive/datatype-numeric.html#DATATYPE-INTCould i prevail on you to expand on packing wtih int2 a bit more, or point me in the right direction for documentation?If there's some way I can pack multipe columns into one to save space, yet still effectively query\n on them, even if it's a lot slower, that would be great.My current scheme, though as normalized and summarized as I can make it, really chews up a ton of space. It might even be chewing up more than the data files i'm summarizing, I assume due to the indexing.Regading saving disk space, I saw someone mention doing a custom build and changingTOAST_TUPLE_THRESHOLD/TOAST_TUPLE_TARGETSo data is compressed sooner, it seems like that might be a viable option as well.http://www.thescripts.com/forum/thread422854.html> 2,500 rows per hour, with duplicate date columns, seems like it could> add up though.>>>Well, let's see 2500*24*365 = 21,900,000 * 4 bytes extra = 83MB >>>additional storage over a year. Not sure it's worth worrying about.Ahh yes probably better to make it a date w. timestamp column then.ZRichard Huxton\n <[email protected]> wrote: Zoolin Lin wrote:> thanks for your reply> >> Primary table is all integers like:>> >> date id | num1 | num2 | num3 | num4 | num5 | num6 | num7 | num 8 >> ------------------------------------------------------------------------------------------------->> primary key is on date to num->6 columns> >>> What types are num1->8?> > They are all integerHmm - not sure if you'd get any better packing if you could make someint2 and put them next to each other. Need to test.>> date_id | date w timestamp ---------------------------------------->> 1 | 2007-2-15 Midnight 2 | 2007-2-15 1 am 3 |>> 2007-2-15 2 am etc for 24 hours each day> >>> If you only\n want things accurate to an hour, you could lost the>>> join and just store it as an int: 2007021500, 2007021501 etc.> > Hmm yeh I could, I think with the amount of data in the db though it> behooves me to use one of the date types, even if via lookup table.You can always create it as a custom ZLDate type. All it really needs to be is an int with a few casts.> So I guess I'm just not sure if I'm really gaining anything by using> an integer date id column and doing a join on a date lookup table,> vs just making it a date w. 
timestamp column and having duplicate> dates in that column.> > I would imagine internally that the date w. timestamp is stored as> perhaps a time_t type plus some timezone information. I don't know> if it takes that much more space, or there's a significant> performance penalty in using itIt's a double or int64 I believe, so allow 8\n bytes instead of 4 for your int.> 2,500 rows per hour, with duplicate date columns, seems like it could> add up though.Well, let's see 2500*24*365 = 21,900,000 * 4 bytes extra = 83MB additional storage over a year. Not sure it's worth worrying about.-- Richard Huxton Archonet Ltd\nNo need to miss a message. Get email on-the-go with Yahoo! Mail for Mobile. Get started.", "msg_date": "Wed, 7 Mar 2007 13:45:42 -0800 (PST)", "msg_from": "Zoolin Lin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any advantage to integer vs stored date w. timestamp" }, { "msg_contents": "Zoolin Lin wrote:\n> Thank you for the reply\n> \n>>> Primary table is all integers like:\n>>>\n>>> date id | num1 | num2 | num3 | num4 | num5 | num6 | num7 | num 8 \n>>> -------------------------------------------------------------------------------------------------\n>>> primary key is on date to num->6 columns\n>>>> What types are num1->8?\n>> They are all integer\n> \n>>> Hmm - not sure if you'd get any better packing if you could make some\n>>> int2 and put them next to each other. Need to test.\n> \n> Thanks, I find virtually nothing on the int2 column type? beyond brief mention here\n> http://www.postgresql.org/docs/8.2/interactive/datatype-numeric.html#DATATYPE-INT\n> \n> Could i prevail on you to expand on packing wtih int2 a bit more, or point me in the right direction for documentation?\n\nint4 is the internal name for integer (4 bytes)\nint2 is the internal name for smallint (2 bytes)\n\nTry\nSELECT format_type(oid, NULL) AS friendly, typname AS internal,\ntyplen AS length FROM pg_type WHERE typlen>0;\n\nto see them all (negative typlen is a variable size (usually an array or \nbytea etc))\n\nI think with packing he was referring to simply having more values in \nthe same disk space by using int2 instead of int4. (half the storage space)\n\n> \n> If there's some way I can pack multipe columns into one to save space, yet still effectively query on them, even if it's a lot slower, that would be great.\n\nDepending on the size of data you need to store you may be able to get \nsome benefit from \"Packing\" multiple values into one column. But I'm not \nsure if you need to go that far. What range of numbers do you need to \nstore? If you don't need the full int4 range of values then try a \nsmaller data type. If int2 is sufficient then just change the columns \nfrom integer to int2 and cut your storage in half. Easy gain.\n\nThe \"packing\" theory would fall under general programming algorithms not \npostgres specific.\n\nBasically let's say you have 4 values that are in the range of 1-254 (1 \nbyte) you can do something like\ncol1=((val1<<0)&(val2<<8)&(val3<<16)&(val4<<24))\n\nThis will put the four values into one 4 byte int.\n\nSo searching would be something like\nWHERE col1 & ((val1<<8)&(val3<<0))=((val1<<8)&(val3<<0))\nif you needed to search on more than one value at a time.\n\nGuess you can see what your queries will be looking like.\n\n(Actually I'm not certain I got that 100% correct)\n\nThat's a simple example that should give you the general idea. 
In \npractice you would only get gains if you have unusual length values, so \nif you had value ranges from 0 to 1023 (10 bits each) then you could \npack 3 values into an int4 instead of using 3 int2 cols. (that's 32 bits \nfor the int4 against 64 bits for the 3 int2 cols) and you would use <<10 \nand <<20 in the above example.\n\nYou may find it easier to define a function or two to automate this \ninstead of repeating it for each query. But with disks and ram as cheap \nas they are these days this sort of packing is getting rarer (except \nmaybe embedded systems with limited resources)\n\n> My current scheme, though as normalized and summarized as I can make it, really chews up a ton of space. It might even be chewing up more than the data files i'm summarizing, I assume due to the indexing.\n> \n\n\n-- \n\nShane Ambler\[email protected]\n\nGet Sheeky @ http://Sheeky.Biz\n", "msg_date": "Thu, 08 Mar 2007 13:19:33 +1030", "msg_from": "Shane Ambler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any advantage to integer vs stored date w. timestamp" }, { "msg_contents": "thanks for your replying\n\n>>I think with packing he was referring to simply having more values in \n>>the same disk space by using int2 instead of int4. (half the storage space)\n\nI see yes, the values I'm dealing with are a bit too large to do that but yes good technique. Were they smaller I would use that.\n\nIt looks like if I do CREATE TYPE with variable length, I can make a type that's potentially toastable. I could decrease the toast threshold and recompile.\n\nI'm not sure if that's very practical I know next to nothing about using create type. But if I can essentially make a toastable integer column type, that's indexable and doesn't have an insane performance penalty for using, that would be great.\n\nLooks like my daily data is about 25 mbs before insert (ex via) COPY table to 'somefile';). After insert, and doing vacuum full and reindex, it's at about 75 megs.\n\nIf i gzip compress that 25 meg file it's only 6.3 megs so I'd think if I could make a toastable type it'd benefit.\n\nNeed to look into it now, I may be completely off my rocker.\n\nThank you\n\nShane Ambler <[email protected]> wrote: Zoolin Lin wrote:\n> Thank you for the reply\n> \n>>> Primary table is all integers like:\n>>>\n>>> date id | num1 | num2 | num3 | num4 | num5 | num6 | num7 | num 8 \n>>> -------------------------------------------------------------------------------------------------\n>>> primary key is on date to num->6 columns\n>>>> What types are num1->8?\n>> They are all integer\n> \n>>> Hmm - not sure if you'd get any better packing if you could make some\n>>> int2 and put them next to each other. Need to test.\n> \n> Thanks, I find virtually nothing on the int2 column type? beyond brief mention here\n> http://www.postgresql.org/docs/8.2/interactive/datatype-numeric.html#DATATYPE-INT\n> \n> Could i prevail on you to expand on packing wtih int2 a bit more, or point me in the right direction for documentation?\n\nint4 is the internal name for integer (4 bytes)\nint2 is the internal name for smallint (2 bytes)\n\nTry\nSELECT format_type(oid, NULL) AS friendly, typname AS internal,\ntyplen AS length FROM pg_type WHERE typlen>0;\n\nto see them all (negative typlen is a variable size (usually an array or \nbytea etc))\n\nI think with packing he was referring to simply having more values in \nthe same disk space by using int2 instead of int4. 
 (half the storage space)\n\n> \n> If there's some way I can pack multipe columns into one to save space, yet still effectively query on them, even if it's a lot slower, that would be great.\n\nDepending on the size of data you need to store you may be able to get \nsome benefit from \"Packing\" multiple values into one column. But I'm not \nsure if you need to go that far. What range of numbers do you need to \nstore? If you don't need the full int4 range of values then try a \nsmaller data type. If int2 is sufficient then just change the columns \nfrom integer to int2 and cut your storage in half. Easy gain.\n\nThe \"packing\" theory would fall under general programming algorithms not \npostgres specific.\n\nBasically let's say you have 4 values that are in the range of 1-254 (1 \nbyte) you can do something like\ncol1=((val1<<0)&(val2<<8)&(val3<<16)&(val4<<24))\n\nThis will put the four values into one 4 byte int.\n\nSo searching would be something like\nWHERE col1 & ((val1<<8)&(val3<<0))=((val1<<8)&(val3<<0))\nif you needed to search on more than one value at a time.\n\nGuess you can see what your queries will be looking like.\n\n(Actually I'm not certain I got that 100% correct)\n\nThat's a simple example that should give you the general idea. In \npractice you would only get gains if you have unusual length values, so \nif you had value ranges from 0 to 1023 (10 bits each) then you could \npack 3 values into an int4 instead of using 3 int2 cols. (that's 32 bits \nfor the int4 against 64 bits for the 3 int2 cols) and you would use <<10 \nand <<20 in the above example.\n\nYou may find it easier to define a function or two to automate this \ninstead of repeating it for each query. But with disks and ram as cheap \nas they are these days this sort of packing is getting rarer (except \nmaybe embedded systems with limited resources)\n\n> My current scheme, though as normalized and summarized as I can make it, really chews up a ton of space. It might even be chewing up more than the data files i'm summarizing, I assume due to the indexing.\n> \n\n\n-- \n\nShane Ambler\[email protected]\n\nGet Sheeky @ http://Sheeky.Biz\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: Have you checked our extensive FAQ?\n\n   http://www.postgresql.org/docs/faq\n\n\n \n---------------------------------\nBored stiff? Loosen up...\nDownload and play hundreds of games for free on Yahoo! Games.\n", "msg_date": "Fri, 9 Mar 2007 06:26:43 -0800 (PST)", "msg_from": "Zoolin Lin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any advantage to integer vs stored date w. timestamp" } ]
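A side note on the packing arithmetic quoted above: when combining the shifted values into one integer, bitwise OR is the operator that actually packs them, since ANDing values that sit in disjoint bit ranges just gives zero. A minimal sketch of the "function or two to automate this" idea for the 3 x 10-bit case, using 8.x-style positional parameters; the names pack3/unpack3 and the table/column in the usage comments are invented for illustration:

CREATE FUNCTION pack3(integer, integer, integer) RETURNS integer AS $$
    -- each argument is assumed to fit in 10 bits (0..1023)
    SELECT ($1 << 0) | ($2 << 10) | ($3 << 20);
$$ LANGUAGE sql IMMUTABLE;

CREATE FUNCTION unpack3(integer, integer) RETURNS integer AS $$
    -- second argument is the slot number: 0, 1 or 2
    SELECT ($1 >> ($2 * 10)) & 1023;
$$ LANGUAGE sql IMMUTABLE;

-- usage sketch:
--   INSERT INTO summary (packed) VALUES (pack3(5, 900, 17));
--   SELECT * FROM summary WHERE unpack3(packed, 1) = 900;

Filtering on unpack3() cannot use a plain index on the packed column; each slot you search on would need its own expression index, e.g. CREATE INDEX summary_slot1 ON summary ((unpack3(packed, 1))), which gives back some of the space the packing saved.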
[ { "msg_contents": "I see that one can now get compact flash to SATA connectors.\n\nIf I were to use a filesystem with noatime etc and little non-sql traffic,\ndoes the physical update pattern tend to have hot sectors that will tend to\nwear out CF?\n\nI'm wondering about a RAID5 with data on CF drives and RAID1 for teh WAL on\na fast SATA or SAS drive pair. I'm thhinking that this would tend to have\ngood performance because the seek time for the data is very low, even if the\nactual write speed can be slower than state of the art. 2GB CF isn't so\npricey any more.\n\nJust wondering.\n\nJames\n\n--\nNo virus found in this outgoing message.\nChecked by AVG Free Edition.\nVersion: 7.5.446 / Virus Database: 268.18.7/711 - Release Date: 05/03/2007\n09:41\n\n", "msg_date": "Tue, 6 Mar 2007 22:18:57 -0000", "msg_from": "\"James Mansion\" <[email protected]>", "msg_from_op": true, "msg_subject": "compact flash disks?" }, { "msg_contents": "On 3/7/07, James Mansion <[email protected]> wrote:\n> I see that one can now get compact flash to SATA connectors.\n>\n> If I were to use a filesystem with noatime etc and little non-sql traffic,\n> does the physical update pattern tend to have hot sectors that will tend to\n> wear out CF?\n>\n> I'm wondering about a RAID5 with data on CF drives and RAID1 for teh WAL on\n> a fast SATA or SAS drive pair. I'm thhinking that this would tend to have\n> good performance because the seek time for the data is very low, even if the\n> actual write speed can be slower than state of the art. 2GB CF isn't so\n> pricey any more.\n>\n> Just wondering.\n\nme too. I think if you were going to do this I would configure as\nraid 0. Sequential performance might be a problem, and traditional\nhard drive failure is not. I think some of the better flash drives\nspread out the writes so that life is maximized.\n\nIt's still probably cheaper buying a better motherboard and stuffing\nmore memory in it, and a good raid controller.\n\nmerlin\n", "msg_date": "Wed, 7 Mar 2007 07:42:03 +0530", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: compact flash disks?" }, { "msg_contents": "* James Mansion:\n\n> If I were to use a filesystem with noatime etc and little non-sql traffic,\n> does the physical update pattern tend to have hot sectors that will tend to\n> wear out CF?\n\nThanks to the FAT file system and its update pattern, most (if not\nall) CF disks implement wear leveling nowadays. I wouldn't worry\nabout the issue, unless your write rates are pretty high.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n", "msg_date": "Wed, 07 Mar 2007 10:34:23 +0100", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: compact flash disks?" }, { "msg_contents": "At 05:18 PM 3/6/2007, James Mansion wrote:\n>I see that one can now get compact flash to SATA connectors.\n>\n>If I were to use a filesystem with noatime etc and little non-sql traffic,\n>does the physical update pattern tend to have hot sectors that will tend to\n>wear out CF?\nMost flash RAMs have drivers that make sure the pattern of writes \nover time is uniform across the entire device.\n\n\n>I'm wondering about a RAID5 with data on CF drives and RAID1 for the WAL on\n>a fast SATA or SAS drive pair. 
I'm thinking that this would tend to have\n>good performance because the seek time for the data is very low, even if the\n>actual write speed can be slower than state of the art.\n\nWARNING: modern TOtL flash RAMs are only good for ~1.2M writes per \nmemory cell. and that's the =good= ones.\nUsing flash RAM for write heavy applications like OLTP, or for WAL, \netc can be very dangerous\nFlash write speeds also stink; being ~1/2 flash's already low read speed.\n\nMuch better to use flash RAM for read heavy applications.\nEven there you have to be careful that seek performance, not \nthroughput, is what is gating your day to day performance with those tables.\n\nGot tables or indexes that are\na= too big to fit in RAM and\nb= are write few, read many times and\nc= whose pattern of access is large enough that it does not cache well?\n=Those= are what should be put into flash RAMs\n\n\nSide Note:\nIn the long run, we are going to have to seriously rethink pg's use \nof WAL as the way we implement MVCC as it becomes more and more of a \nperformance bottleneck.\nWe have WAL because Stonebreaker made an assumption about the future \ndominance of optical media that has turned out to be false.\n...and it's been one of pg's big issues every since.\n\n\n> 2GB CF isn't so\n>pricey any more.\nHeck =16= GB Flash only costs ~$300 US and 128GB SSDs based on flash \nRAM are due out this year.\n\n\nCheers,\nRon\n\n\n", "msg_date": "Wed, 07 Mar 2007 10:14:31 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: compact flash disks?" }, { "msg_contents": "\n>\n> Much better to use flash RAM for read heavy applications.\n> Even there you have to be careful that seek performance, not \n> throughput, is what is gating your day to day performance with those \n> tables.\n\n\nIsn't precisely there where Flash disks would have *the* big advantage??\n\nI mean, access time is severely held down by the *mechanical* movement of\nthe heads to the right cylinder on the disk --- that's a brutally large \namount of\ntime compared to anything else happening on the computer (well, floppy\ndisks aside, and things like trying-to-break-128-bit-encryption aside \n:-)).\n\nOr are these Flash disks so slow that they compare to the HD's latency \nfigures?\n\nCarlos\n--\n\n", "msg_date": "Wed, 07 Mar 2007 10:48:40 -0500", "msg_from": "Carlos Moreno <[email protected]>", "msg_from_op": false, "msg_subject": "Re: compact flash disks?" }, { "msg_contents": "> Or are these Flash disks so slow that they compare to the HD's latency \n> figures?\n\nOn sequential read speed HDs outperform flash disks... only on random\naccess the flash disks are better. So if your application is a DW one,\nyou're very likely better off using HDs.\n\nCheers,\nCsaba.\n\n\n", "msg_date": "Wed, 07 Mar 2007 16:49:07 +0100", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: compact flash disks?" }, { "msg_contents": "Le mardi 6 mars 2007 23:18, James Mansion a écrit :\n> I see that one can now get compact flash to SATA connectors.\nI can suggest you to have a look at Gigabyte i-ram .\nWe use it on a website with higth traffic with lot of succes. Unfortunely, I \ncan not provide any benchmark...\n>\n> If I were to use a filesystem with noatime etc and little non-sql traffic,\n> does the physical update pattern tend to have hot sectors that will tend to\n> wear out CF?\n>\n> I'm wondering about a RAID5 with data on CF drives and RAID1 for teh WAL on\n> a fast SATA or SAS drive pair. 
I'm thhinking that this would tend to have\n> good performance because the seek time for the data is very low, even if\n> the actual write speed can be slower than state of the art. 2GB CF isn't\n> so pricey any more.\n>\n> Just wondering.\n>\n> James\n>\n> --\n> No virus found in this outgoing message.\n> Checked by AVG Free Edition.\n> Version: 7.5.446 / Virus Database: 268.18.7/711 - Release Date: 05/03/2007\n> 09:41\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n", "msg_date": "Wed, 7 Mar 2007 17:12:37 +0100", "msg_from": "cedric <[email protected]>", "msg_from_op": false, "msg_subject": "Re: compact flash disks?" }, { "msg_contents": ">WARNING: modern TOtL flash RAMs are only good for ~1.2M writes per\n>memory cell. and that's the =good= ones.\n>Using flash RAM for write heavy applications like OLTP, or for WAL,\n>etc can be very dangerous\n\nWell, that's why I suggested that the WAL would stream to a hard disk\narray, where the large IO sequential write speed will be helpful.\n\nWhether OLTP is a problem will presumably depend on the freqency of updates\nand vacuum to each physical cluster of rows in a disk block.\n\nMost rows in a trading application will have quite a long lifetime, and be\nupdated relatively few times (even where we writing fixings info into\ntrades).\n\n>Flash write speeds also stink; being ~1/2 flash's already low read speed.\n\nSure - but it may still be an effective tradoff where the limiting factor\nwould otherwise be seek time.\n\n>Much better to use flash RAM for read heavy applications.\n\nWhy? I can get a 'PC' server with 128GB of RAM quite easily now,\nand that will mean I can cache most of not all hot data for any trading\napp I've worked on. Settled trades that matured in prior periods can\nbe moved to tables on real disks - they are hardly ever accessed\nanyway.\n\n\nIn the long run, we are going to have to seriously rethink pg's use\nof WAL as the way we implement MVCC as it becomes more and more of a\nperformance bottleneck.\nWe have WAL because Stonebreaker made an assumption about the future\ndominance of optical media that has turned out to be false.\n...and it's been one of pg's big issues every since.\n\n\n>> 2GB CF isn't so\n>>pricey any more.\n>Heck =16= GB Flash only costs ~$300 US and 128GB SSDs based on flash\n>RAM are due out this year.\n\nQuite. Suppose I have a RAID with double redundancy, then I get enough\ncapacity\nfor quite a lot of raw data, and can swap a card out every weekend and let\nthe\nRAID rebuild it in rotation to keep them within conservative wear limits.\n\nSo long as the wear levelling works moderately well (and without needing FAT\non the disk or whatever) then I should be fine.\n\nI think. Maybe.\n\nJames\n\n--\nNo virus found in this outgoing message.\nChecked by AVG Free Edition.\nVersion: 7.5.446 / Virus Database: 268.18.7/713 - Release Date: 07/03/2007\n09:24\n\n", "msg_date": "Thu, 8 Mar 2007 06:24:35 -0000", "msg_from": "\"James Mansion\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: compact flash disks?" 
}, { "msg_contents": "On Thu, Mar 08, 2007 at 06:24:35AM -0000, James Mansion wrote:\n> \n> In the long run, we are going to have to seriously rethink pg's use\n> of WAL as the way we implement MVCC as it becomes more and more of a\n> performance bottleneck.\n> We have WAL because Stonebreaker made an assumption about the future\n> dominance of optical media that has turned out to be false.\n> ...and it's been one of pg's big issues every since.\n\nUh. pg didn't *have* WAL back when Stonebreaker was working on it. It\nwas added in PostgreSQL 7.1, by Vadim. And it significantly increased \nperformance at the time, since we no longer had to sync the datafiles \nafter every transaction commit.\n(We also didn't have MVCC back in the Stonebreaker days - it was added\nin 6.5)\n\nThat said, it's certainly possible that someone can find an even better\nway to do it :-)\n\n//Magnus\n", "msg_date": "Thu, 8 Mar 2007 09:48:23 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: compact flash disks?" }, { "msg_contents": "On 3/8/07, Magnus Hagander <[email protected]> wrote:\n> On Thu, Mar 08, 2007 at 06:24:35AM -0000, James Mansion wrote:\n> >\n> > In the long run, we are going to have to seriously rethink pg's use\n> > of WAL as the way we implement MVCC as it becomes more and more of a\n> > performance bottleneck.\n> > We have WAL because Stonebreaker made an assumption about the future\n> > dominance of optical media that has turned out to be false.\n> > ...and it's been one of pg's big issues every since.\n>\n> Uh. pg didn't *have* WAL back when Stonebreaker was working on it. It\n> was added in PostgreSQL 7.1, by Vadim. And it significantly increased\n> performance at the time, since we no longer had to sync the datafiles\n> after every transaction commit.\n> (We also didn't have MVCC back in the Stonebreaker days - it was added\n> in 6.5)\n\nExactly, and WAL services other purposes than minimizing the penalty\nfrom writing to high latency media. WAL underlies PITR for example.\n\nNear-zero latency media is coming, eventually...and I don't think the\nissue is reliability (catastrophic failure is extremely unlikely) but\ncost. I think the poor write performance is not an issue because you\ncan assemble drives in a giant raid 0 (or even 00 or 000) which will\nblow away disk based raid 10 systems at virtually everything.\n\nSolid State Drives consume less power (a big deal in server farms) and\nthe storage density and life-span will continue to improve. I give it\nfive years (maybe less) before you start to see SSD penetration in a\nbig way. It will simply become cheaper to build a box with SSD than\nwithout since you won't need to buy as much RAM, draws less power, and\nis much more reliable.\n\nDisk drives will displace tape as low speed archival storage but will\nprobably live on in super high storage enterprise environments.\n\nmy 0.02$, as usual,\nmerlin\n", "msg_date": "Thu, 8 Mar 2007 09:11:41 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: compact flash disks?" 
}, { "msg_contents": "At 09:11 AM 3/8/2007, Merlin Moncure wrote:\n>On 3/8/07, Magnus Hagander <[email protected]> wrote:\n>>On Thu, Mar 08, 2007 at 06:24:35AM -0000, James Mansion wrote:\n>> >\n>> > In the long run, we are going to have to seriously rethink pg's use\n>> > of WAL as the way we implement MVCC as it becomes more and more of a\n>> > performance bottleneck.\n>> > We have WAL because Stonebreaker made an assumption about the future\n>> > dominance of optical media that has turned out to be false.\n>> > ...and it's been one of pg's big issues every since.\n>>\n>>Uh. pg didn't *have* WAL back when Stonebreaker was working on it. It\n>>was added in PostgreSQL 7.1, by Vadim. And it significantly increased\n>>performance at the time, since we no longer had to sync the datafiles\n>>after every transaction commit.\n>>(We also didn't have MVCC back in the Stonebreaker days - it was added\n>>in 6.5)\n\nHuh. I have to go re-read my references. Seems I've misremembered history.\nThanks for correcting my mistake.\n\n\n>Exactly, and WAL services other purposes than minimizing the penalty\n>from writing to high latency media. WAL underlies PITR for example.\n\nNever argued with any of this.\n\n\n>Near-zero latency media is coming, eventually...and I don't think the\n>issue is reliability (catastrophic failure is extremely unlikely) but\n>cost. I think the poor write performance is not an issue because you\n>can assemble drives in a giant raid 0 (or even 00 or 000) which will\n>blow away disk based raid 10 systems at virtually everything.\nHave you considered what the $cost$ of that much flash RAM would be?\n\n\n>Solid State Drives consume less power (a big deal in server farms) and\n>the storage density and life-span will continue to improve. I give it\n>five years (maybe less) before you start to see SSD penetration in a\n>big way. It will simply become cheaper to build a box with SSD than\n>without since you won't need to buy as much RAM, draws less power, and\n>is much more reliable.\nDon't bet on it. HDs 2x in density faster than RAM does, even flash \nRAM, and have a -much- lower cost per bit.\n...and it's going to stay that way for the foreseeable future.\n\nATM, I can buy a 500GB 7200 rpm SATA II HD w/ a 5 yr warranty for \n~$170 US per HD.\n1TB HDs of that class cost ~$350-$400 US per.\n(...and bear in mind the hybrid HDs coming out later this year that \noffer the best of both HD and flash technologies at very close to \ncurrent HD costs per bit.)\n\nThe 128GB flash RAM SSDs coming out later this year are going to cost \n4x - 10x those HD prices...\n\n4+ decades of history shows that initial acquisition cost is =by far= \nthe primary deciding factor in IT spending.\nQED: SSDs are going to remain a niche product unless or until \nSomething Drastic (tm) happens to the economics and storage requirements of IT.\n\n\n\n>Disk drives will displace tape as low speed archival storage but will\n>probably live on in super high storage enterprise environments.\n\nAgain, don't bet on it. Tape 2x in density even faster than HD does \nand has an even lower cost per bit.\n\n>my 0.02$, as usual,\n>merlin\n\n", "msg_date": "Thu, 08 Mar 2007 13:40:32 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: compact flash disks?" }, { "msg_contents": "On 3/8/07, Ron <[email protected]> wrote:\n> >Exactly, and WAL services other purposes than minimizing the penalty\n> >from writing to high latency media. 
WAL underlies PITR for example.\n>\n> Never argued with any of this.\n>\n>\n> >Near-zero latency media is coming, eventually...and I don't think the\n> >issue is reliability (catastrophic failure is extremely unlikely) but\n> >cost. I think the poor write performance is not an issue because you\n> >can assemble drives in a giant raid 0 (or even 00 or 000) which will\n> >blow away disk based raid 10 systems at virtually everything.\n> Have you considered what the $cost$ of that much flash RAM would be?\n>\n>\n> >Solid State Drives consume less power (a big deal in server farms) and\n> >the storage density and life-span will continue to improve. I give it\n> >five years (maybe less) before you start to see SSD penetration in a\n> >big way. It will simply become cheaper to build a box with SSD than\n> >without since you won't need to buy as much RAM, draws less power, and\n> >is much more reliable.\n> Don't bet on it. HDs 2x in density faster than RAM does, even flash\n> RAM, and have a -much- lower cost per bit.\n> ...and it's going to stay that way for the foreseeable future.\n\nyes, but SSD drives do not have to overtake hard disks to be cost\ncompetitive. one reason for this is that a SSD based system does not\nneed nearly as much ram as a hard drive based system due to 99%\nreduction of cache miss penalty. Also, flash drives are falling\nfaster in price than hard drives. Unless there is some new\nbreakthrough in hard drive technology the price differential is going\nto narrow a bit. Ultimately (and this may be quite some way down the\nroad, but maybe not), SSD will be cheaper than spinning disk due to\nlower cost of materials.\n\n> ATM, I can buy a 500GB 7200 rpm SATA II HD w/ a 5 yr warranty for\n> ~$170 US per HD.\n> 1TB HDs of that class cost ~$350-$400 US per.\n> (...and bear in mind the hybrid HDs coming out later this year that\n> offer the best of both HD and flash technologies at very close to\n> current HD costs per bit.)\n\nit is not fair to compare cost of 10ms latency hard drive to 10us\nflash drive unless you add in the cost of ram to get the systems to\nperformance parity.\n\n> The 128GB flash RAM SSDs coming out later this year are going to cost\n> 4x - 10x those HD prices...\n\n> 4+ decades of history shows that initial acquisition cost is =by far=\n> the primary deciding factor in IT spending.\n> QED: SSDs are going to remain a niche product unless or until\n> Something Drastic (tm) happens to the economics and storage requirements of IT.\n\nHistorically what you are saying is true but something drastic is\nindeed happening.\n\nThere is currently a power crisis in many colocation facilities in the\nu.s. For example, for at least 1/3 of the cages in qwest at sterling,\nva are sitting empty because there are not enough circuits to give out\n(qwest is hitting environmental regs so that adding any additional\ngenerator power they would have to re-classify as a power generating\nfacility). This basically puts their customers in a bidding war for\navailable circuits and is making power draw a very significant factor\nof cost of rack rental. It is not unreasonable to factor in, say,\none year of savings of monthly power bill at a facility.\n\nsome facilities are so strapped they will only give out dc power circuits...\n\n> >Disk drives will displace tape as low speed archival storage but will\n> >probably live on in super high storage enterprise environments.\n>\n> Again, don't bet on it. 
Tape 2x in density even faster than HD does\n> and has an even lower cost per bit.\n\nthis is a completely different argument but disk based backup systems\nare exploding as are the companies that design and implement them.\ntape has problems that go way beyond performance and price. whizzing\nrobot arms that stick tapes in and out of things may have looked cool\nand futuristic in 1960 but its time to put this technology to bed for\ngood.\n\nmerlin\n", "msg_date": "Thu, 8 Mar 2007 15:24:04 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: compact flash disks?" }, { "msg_contents": "Hey there :)\n\nI'm re-writing a summarization process that used to be very 'back and \nforth' intensive (i.e. very chatty between my summarization software and \nthe DB). I'm trying to reduce that by practically reducing all this back \nand forth to a single SQL query, but in order to support certain \ncomplexities it looks to me like I'm going to have to write some postgres \nC language functions.\n\nThis is something I'm actually familiar with and have done before, but let \nme describe what I'm trying to do here so I can be sure that this is the \nright thing to do, and to be sure I do it correctly and don't cause memory \nleaks :)\n\n\n---\n\nI have two columns, one called \"procedure_code\" and the other called \n\"wrong_procedure_code\" in my summary table. These are both of type \nvarchar(32) I believe or possibly text -- if it matters I can double check \nbut because all the involved columns are the same type and size it \nshouldn't matter. :)\n\nThese are actually populated by the \"procedure_code\" and \n\"corrected_procedure_code\" in the source table. The logic is, basically:\n\nIF strlen(source.corrected_procedure_code)\nTHEN:\n summary.procedure_code=source.corrected_procedure_code\n summary.wrong_procedure_code=source.procedure_code\nELSE:\n summary.procedure_code=source.procedure_code\n summary.wrong_procedure_code=NULL\n\n\nSimple, right? Making a C function to handle this should be no sweat -- \nI would basically split this logic into two separate functions, one to \npopulate summary.procedure_code and one to populate \nsummary.wrong_procedure_code, and it removes the need of having any sort \nof back and forth between the program and DB... I can just do like:\n\nupdate summary_table\n set procedure_code=pickCorrect(source.procedure_code,\n source.corrected_procedure_code),\n wrong_procedure_code=pickWrong(source.procedure_code,\n source.corrected_procedure_code),....\n from source where summary_table.source_id=source.source_id;\n\n\nMake sense? 
So question 1, is this the good way to do all this?\n\n\nQuestion 2: Assuming it is the good way to do all this, would this \nfunction be correct assuming I did all the other stuff right (like \nPG_FUNCTION_INFO_V1, etc.):\n\nDatum pickCorrect(PG_FUNCTION_ARGS){\n \ttext*\tprocedure_code=PG_GETARG_TEXT_P(0);\n \ttext*\tcorrected_code=PG_GETARG_TEXT_P(1);\n\n \tif(VARSIZE(corrected_code)-VARHDRSZ){\n \t\tPG_RETURN_TEXT_P(corrected_code);\n \t}else{\n \t\tPG_RETURN_TEXT_P(procedure_code);\n \t}\n}\n\nWould that simply work because I'm not actually modifying the data, or \nwould I have to pmalloc a separate chunk of memory, copy the data, and \nreturn the newly allocated memory because the memory allocated for the \nargs \"goes away\" or gets corrupted or something otherwise?\n\n\n\nThanks a lot for the info!\n\n\nSteve\n", "msg_date": "Thu, 8 Mar 2007 17:36:12 -0500 (EST)", "msg_from": "Steve <[email protected]>", "msg_from_op": false, "msg_subject": "Question about PGSQL functions" }, { "msg_contents": "Steve wrote:\n> IF strlen(source.corrected_procedure_code)\n> THEN:\n> summary.procedure_code=source.corrected_procedure_code\n> summary.wrong_procedure_code=source.procedure_code\n> ELSE:\n> summary.procedure_code=source.procedure_code\n> summary.wrong_procedure_code=NULL\n\nUm, so you test if source.corrected_procedure_code is an empty string? \nAnd if it is, summary.procedure_code is set to an empty string? But in \nwrong_procedure_code, you use NULLs?\n\n> Simple, right? Making a C function to handle this should be no sweat -- \n> I would basically split this logic into two separate functions, one to \n> populate summary.procedure_code and one to populate \n> summary.wrong_procedure_code, and it removes the need of having any sort \n> of back and forth between the program and DB... I can just do like:\n> \n> update summary_table\n> set procedure_code=pickCorrect(source.procedure_code,\n> source.corrected_procedure_code),\n> wrong_procedure_code=pickWrong(source.procedure_code,\n> source.corrected_procedure_code),....\n> from source where summary_table.source_id=source.source_id;\n\nISTM you could write this easily with a little bit of SQL, with no need \nfor C-functions (I haven't run this, probably full of typos..) :\n\nupdate summary_table\n set procedure_code = (CASE WHEN source.corrected_procedure_code = '' \nTHEN '' ELSE source.procedure_code END;),\n wrong_procedure_code = (CASE WHEN source.corrected_procedure_code \n= '' THEN source.procedure_code ELSE NULL END;)\n from source where summary_table.source_id=source.source_id;\n\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 08 Mar 2007 23:17:52 +0000", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about PGSQL functions" }, { "msg_contents": "> Steve wrote:\n>> IF strlen(source.corrected_procedure_code)\n>> THEN:\n>> summary.procedure_code=source.corrected_procedure_code\n>> summary.wrong_procedure_code=source.procedure_code\n>> ELSE:\n>> summary.procedure_code=source.procedure_code\n>> summary.wrong_procedure_code=NULL\n>\n> Um, so you test if source.corrected_procedure_code is an empty string? And if \n> it is, summary.procedure_code is set to an empty string? But in \n> wrong_procedure_code, you use NULLs?\n\n \tYeah; we could use empty strings if that make it easier for \nwhatever reason, but to our front end software NULL vs. 
empty string \ndoesn't actually matter and we never query based on these columns, they're \nfor display purposes only.\n\n>> Simple, right? Making a C function to handle this should be no sweat -- I \n>> would basically split this logic into two separate functions, one to \n>> populate summary.procedure_code and one to populate \n>> summary.wrong_procedure_code, and it removes the need of having any sort of \n>> back and forth between the program and DB... I can just do like:\n>> \n>> update summary_table\n>> set procedure_code=pickCorrect(source.procedure_code,\n>> source.corrected_procedure_code),\n>> wrong_procedure_code=pickWrong(source.procedure_code,\n>> source.corrected_procedure_code),....\n>> from source where summary_table.source_id=source.source_id;\n>\n> ISTM you could write this easily with a little bit of SQL, with no need for \n> C-functions (I haven't run this, probably full of typos..) :\n>\n> update summary_table\n> set procedure_code = (CASE WHEN source.corrected_procedure_code = '' THEN \n> '' ELSE source.procedure_code END;),\n> wrong_procedure_code = (CASE WHEN source.corrected_procedure_code = '' \n> THEN source.procedure_code ELSE NULL END;)\n> from source where summary_table.source_id=source.source_id;\n\n \tThis looks interesting and I'm going to give this a shot tomorrow \nand see how it goes. Speed is somewhat of an issue which is why I \ninitially thought of the C function -- plus I wasn't aware you could do \nCASE statements like that :) Thanks for the idea!\n\n\nSteve\n", "msg_date": "Thu, 8 Mar 2007 21:09:52 -0500 (EST)", "msg_from": "Steve <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about PGSQL functions" }, { "msg_contents": "Isn't it likely that a single stream (or perhaps one that can be partitioned\nacross spindles) will tend to be fastest, since it has a nice localised\nstream that a) allows for compression of reasonable blocks and b) fits with\ncommit aggregation?\n\nRAM capacity on servers is going up and up, but the size of a customer\naddress or row on an invoice isn't. I'd like to see an emphasis on speed of\nupdate with an assumption that most hot data is cached, most of the time.\n\nMy understanding also is that storing data columnwise is handy when its\npersisted because linear scans are much faster. Saw it once with a system\nmodelled after APL, blew me away even on a sparc10 once the data was\norganised and could be mapped.\n\nStill, for the moment anything that helps with the existing system would be\ngood. Would it help to define triggers to be deferrable to commit as well\nas end of statement (and per row)? Seems to me it should be, at least for\nones that raise 'some thing changed' events. And/or allow specification\nthat events can fold and should be very cheap (don't know if this is the\ncase now? Its not as well documented how this works as I'd like)\n\nJames\n--\nNo virus found in this outgoing message.\nChecked by AVG Free Edition.\nVersion: 7.5.446 / Virus Database: 268.18.7/713 - Release Date: 07/03/2007\n09:24\n\n", "msg_date": "Fri, 9 Mar 2007 06:24:11 -0000", "msg_from": "\"James Mansion\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: compact flash disks?" }, { "msg_contents": ">On sequential read speed HDs outperform flash disks... only on random\n>access the flash disks are better. 
So if your application is a DW one,\n>you're very likely better off using HDs.\n\nThis looks likely to be a non-issue shortly, see here:\n\nhttp://www.reghardware.co.uk/2007/03/27/sams_doubles_ssd_capacity/\n\nI still think this sort of devices will become the OLTP device\nof choice before too long - even if we do have to watch the wear rate.\n\n>WARNING: modern TOtL flash RAMs are only good for ~1.2M writes per\n>memory cell. and that's the =good= ones.\n\nWell, my original question was whether the physical update pattern\nof the server does have hotspots that will tend to cause a problem\nin normal usage if the wear levelling (such as it is) doesn't entirely\nspread the load. The sorts of application I'm interested in will not\nupdate individual data elements very often. There's a danger that\nindex nodes might be rewritted frequently, but one might want to allow\nthat indexes persist lazily and should be recovered from a scan after\na crash that leaves them dirty, so that they can be cached and avoid\nsuch an access pattern.\n\n\nOut of interest with respect to WAL - has anyone tested to see whether\none could tune the group commit to pick up slightly bigger blocks and\nwrite the WAL using compression, to up the *effective* write speed of\nmedia? Once again, most data I'm interested in is far from a random\npattern and tends to compress quite well.\n\nIf the WAL write is committed to the disk platter, is it OK for\narbitrary data blocks to have failed to commit to disk so we can\nrecover the updates for committed transactions?\n\nIs theer any documentation on the write barrier usage?\n\nJames\n\n--\nNo virus found in this outgoing message.\nChecked by AVG Free Edition.\nVersion: 7.5.446 / Virus Database: 268.18.25/744 - Release Date: 03/04/2007\n05:32\n\n", "msg_date": "Wed, 4 Apr 2007 06:18:52 +0100", "msg_from": "\"James Mansion\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: compact flash disks?" }, { "msg_contents": "Please help me to set up optimal values in the postgresql.conf file for \nPostgreSQL 8.2.3\n\nCan you please give us an advice, which of your DBs and which \nconfiguration should we take for a project that has the following \nparameters:\n\n 1. DB size: 25-30Gb\n 2. number of tables: 100 - 150\n 3. maximum number of records for one table: 50 - 100 millions\n 4. avarage number of records for one table: 3 - 5 millions\n 5. DB's querying frequency: 150-500 per minute\n 6. SELECT/INSERT/UPDATE correlation is equal\n 7. 
DELETE operations are performed rarely\n\n\nServer Hardware:\n\n * 2x Xeon 2.8Ghz\n * 6GB RAM (2 GB for OS and DB, 4GB for Application)\n * RAID-10 w/ 6x72GB 15.000rpm HDDs\n\n\n\n-- \nThanks,\n\nEugene Ogurtsov\nInternal Development Chief Architect\nSWsoft, Inc.\n", "msg_date": "Wed, 04 Apr 2007 13:07:20 +0700", "msg_from": "Eugene Ogurtsov <[email protected]>", "msg_from_op": false, "msg_subject": "postgresql.conf file for PostgreSQL 8.2.3" } ]
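On the sizing question that closes this thread (8.2.3, a 25-30GB database, roughly 2GB of RAM left over for the OS plus PostgreSQL, evenly mixed SELECT/INSERT/UPDATE), a starting-point sketch for postgresql.conf could look like the lines below. Every number here is an assumption to be benchmarked against the real workload, not a recommendation taken from the thread:

shared_buffers = 512MB          # roughly a quarter of the memory the database can claim; shrink if the OS starves
work_mem = 8MB                  # per sort/hash step, so keep it modest when many queries run at once
maintenance_work_mem = 128MB    # speeds up VACUUM and index builds
effective_cache_size = 1024MB   # rough guess at what the OS page cache will hold for the database
checkpoint_segments = 16        # spread checkpoints out under steady INSERT/UPDATE traffic
autovacuum = on                 # worth enabling explicitly on 8.2 with this much UPDATE activity

With a 25-30GB data set and only about 2GB of memory shared with the OS, the working set will not fit in cache, so the RAID-10 layout and regular vacuuming are likely to matter at least as much as any of these settings.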
[ { "msg_contents": "I think I have an issue with the planning of this query that sometimes\nruns really slow.\n\nthis is the output of the EXPLAIN ANALYZE in the SLOW case\n\nSort (cost=4105.54..4105.54 rows=2 width=28) (actual\ntime=11404.225..11404.401 rows=265 loops=1)\n Sort Key: table1.fdeventfromdate, table2.fdsurname, table2.fdtitle\n -> Nested Loop Left Join (cost=192.34..4105.53 rows=2 width=28)\n(actual time=0.770..11402.185 rows=265 loops=1)\n Join Filter: (\"inner\".table2_id = \"outer\".id)\n -> Nested Loop Left Join (cost=192.34..878.40 rows=1\nwidth=28) (actual time=0.750..6.878 rows=96 loops=1)\n Join Filter: (\"inner\".id = \"outer\".table1_id)\n -> Nested Loop Left Join (cost=192.34..872.82 rows=1\nwidth=24) (actual time=0.551..5.453 rows=96 loops=1)\n -> Nested Loop Left Join (cost=192.34..866.86\nrows=1 width=28) (actual time=0.534..4.370 rows=96 loops=1)\n -> Nested Loop (cost=192.34..862.46\nrows=1 width=28) (actual time=0.515..3.100 rows=96 loops=1)\n -> Bitmap Heap Scan on table2\n(cost=192.34..509.00 rows=96 width=24) (actual time=0.488..1.140\nrows=96 loops=1)\n Recheck Cond: ((id = ...\n[CUT]\n\nthis query takes 11000 milliseconds\n\nthis is the output of the EXPLAIN ANALYZE in the FAST case\n\nSort (cost=8946.80..8946.82 rows=10 width=28) (actual\ntime=286.969..287.208 rows=363 loops=1)\n Sort Key: table1.fdeventfromdate, table2.fdsurname, table2.fdtitle\n -> Merge Left Join (cost=8617.46..8946.63 rows=10 width=28)\n(actual time=232.330..284.750 rows=363 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".table2_id)\n -> Sort (cost=946.68..946.69 rows=4 width=28) (actual\ntime=4.505..4.568 rows=101 loops=1)\n Sort Key: table2.id\n -> Hash Left Join (cost=208.33..946.64 rows=4\nwidth=28) (actual time=0.786..4.279 rows=101 loops=1)\n Hash Cond: (\"outer\".table1_id = \"inner\".id)\n -> Nested Loop Left Join (cost=202.35..940.64\nrows=4 width=24) (actual time=0.719..4.011 rows=101 loops=1)\n -> Nested Loop Left Join\n(cost=202.35..916.76 rows=4 width=28) (actual time=0.701..3.165\nrows=101 loops=1)\n -> Nested Loop (cost=202.35..899.50\nrows=4 width=28) (actual time=0.676..2.284 rows=101 loops=1)\n -> Bitmap Heap Scan on table2\n(cost=202.35..534.18 rows=101 width=24) (actual time=0.644..1.028\nrows=101 loops=1)\n Recheck Cond: ((id = ...\n[CUT]\n\nthis time the query takes 290 milliseconds\n\nAs you can see the forecast about the returned rows are completely off\nin both case but the forecast of 10 rows in the second case is enough\nto plan the query in a more clever way.\nI tried to increase the default_statistics_target from 10 to 100 and\nafter I relaunched analyze on the DB on the test machine but this\nhasn't improved in any way the situation.\nThe problem is, the distribution of the data across the tables joined\nin this query is quite uneven and I can see the avg_width of the\nrelations keys is really not a good representative value.\nIs there something I can do to improve this situation?\n\nThanks\n\nPaolo\n", "msg_date": "Tue, 6 Mar 2007 23:25:58 +0000", "msg_from": "\"Paolo Negri\" <[email protected]>", "msg_from_op": true, "msg_subject": "problem with wrong query planning and ineffective statistics" } ]
[ { "msg_contents": "Hi List,\n\nCan i find out the timestamp when last a record from a table got updated.\nDo any of the pg system tables store this info.\n\n-- \nRegards\nGauri\n\nHi List,\n \nCan i find out the timestamp when last a record from a table got updated.\nDo any of the pg system tables store this info.-- RegardsGauri", "msg_date": "Wed, 7 Mar 2007 12:13:55 +0530", "msg_from": "\"Gauri Kanekar\" <[email protected]>", "msg_from_op": true, "msg_subject": "When the Record Got Updated." }, { "msg_contents": "am Wed, dem 07.03.2007, um 12:13:55 +0530 mailte Gauri Kanekar folgendes:\n> Hi List,\n> \n> Can i find out the timestamp when last a record from a table got updated.\n> Do any of the pg system tables store this info.\n\nNo, impossible. But you can write a TRIGGER for such tasks.\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n", "msg_date": "Wed, 7 Mar 2007 08:30:53 +0100", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When the Record Got Updated." } ]
[ { "msg_contents": "Hi performers,\n\nafter following this list for a while, I try to configure a database \nserver with a limited budget.\nPlanned are 2 databases\n- archiveopteryx - http://www.archiveopteryx.org/sql-schema.html\n- ERDB - https://www.chaos1.de/svn-public/repos/network-tools/ERDB/ \ntrunk/database/ERD.pdf\n\nIn peak times I expect something like\n- 50 inserts\n- 20 updates\n- 200 selects\nper second.\n\nCurrent configuration is:\n- Tyan S3992G3NR\n- 2 x Opteron 2212 (2GHz)\n- 8 GB RAM (DDR2-667)\n- ARC-1261ML with 1GB and BBU\n- 16 Seagate ST3250820NS (250GB, 7200 rpm, 8GB, with perpendicular \nrecording)\n\n1 raid 1 for OS (FreeBSD) and WAL\n1 raid 0 with 7 raid 1 for tablespace\n\nCan I expect similar performance as 5 drives at 10k rpm (same costs)?\nShould I revert to a single-CPU to prevent from oscillating cache \nupdates between CPUS?\nAnybody experience about NUMA stuff with FreeBSD?\n\nDo you have any suggestions to enhance the configuration, staying at \ncost level?\nPlease advice.\n\nAxel\n---------------------------------------------------------------------\nAxel Rau, ☀Frankfurt , Germany +49 69 9514 18 0\n\n\n", "msg_date": "Thu, 8 Mar 2007 12:30:45 +0100", "msg_from": "Axel Rau <[email protected]>", "msg_from_op": true, "msg_subject": "configuring new server / many slow disks?" }, { "msg_contents": "\nAm 08.03.2007 um 12:30 schrieb Axel Rau:\n\n> Can I expect similar performance as 5 drives at 10k rpm (same costs)?\n> Should I revert to a single-CPU to prevent from oscillating cache \n> updates between CPUS?\n> Anybody experience about NUMA stuff with FreeBSD?\n>\n> Do you have any suggestions to enhance the configuration, staying \n> at cost level?\n> Please advice.\nNo response at all?\nDoes this mean, my config sounds reasonable?\n\nAxel\n---------------------------------------------------------------------\nAxel Rau, ☀Frankfurt , Germany +49 69 9514 18 0\n\n\n", "msg_date": "Fri, 9 Mar 2007 12:22:13 +0100", "msg_from": "Axel Rau <[email protected]>", "msg_from_op": true, "msg_subject": "Re: configuring new server / many slow disks?" }, { "msg_contents": "Axel Rau wrote:\n> Hi performers,\n> \n> after following this list for a while, I try to configure a database \n> server with a limited budget.\n> Planned are 2 databases\n> - archiveopteryx - http://www.archiveopteryx.org/sql-schema.html\n> - ERDB - \n> https://www.chaos1.de/svn-public/repos/network-tools/ERDB/trunk/database/ERD.pdf \n> \n> In peak times I expect something like\n> - 50 inserts\n> - 20 updates\n> - 200 selects\n> per second.\n\nPresumably with multiple clients, not just one extremely busy one?\nHow big do you expect the databases to get? That'll affect the next point.\n\n> Current configuration is:\n> - Tyan S3992G3NR\n> - 2 x Opteron 2212 (2GHz)\n> - 8 GB RAM (DDR2-667)\n\nDepending on the amount of data you've got to deal with, it might be \nworth trading disks/cpu for more RAM.\n\n> - ARC-1261ML with 1GB and BBU\n\nOK, so you can turn write-caching on. That should let you handle more \nupdates than you need. You probably don't need so much RAM on board \neither, unless each update has a lot of data in it.\n\n> - 16 Seagate ST3250820NS (250GB, 7200 rpm, 8GB, with perpendicular \n> recording)\n> \n> 1 raid 1 for OS (FreeBSD) and WAL\n> 1 raid 0 with 7 raid 1 for tablespace\n> \n> Can I expect similar performance as 5 drives at 10k rpm (same costs)?\n\nThe main question is whether you're going to need to hit the disks \noften. 
If you can get to the stage where the working-set of your DBs are \n all in RAM you could sacrifice some disks. If not, disk I/O dominates.\n\n> Should I revert to a single-CPU to prevent from oscillating cache \n> updates between CPUS?\n> Anybody experience about NUMA stuff with FreeBSD?\n\nSorry - I know nothing about FreeBSD.\n\nThat any use - I didn't bother to reply before because I couldn't help \nwith the BSD stuff, and it's always guesswork with these sorts of questions.\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 09 Mar 2007 11:42:39 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: configuring new server / many slow disks?" }, { "msg_contents": "\nAm 09.03.2007 um 12:42 schrieb Richard Huxton:\n\n> Axel Rau wrote:\n>> Hi performers,\n>> after following this list for a while, I try to configure a \n>> database server with a limited budget.\n>> Planned are 2 databases\n>> - archiveopteryx - http://www.archiveopteryx.org/sql-schema.html\n>> - ERDB - https://www.chaos1.de/svn-public/repos/network-tools/ERDB/ \n>> trunk/database/ERD.pdf In peak times I expect something like\n>> - 50 inserts\n>> - 20 updates\n>> - 200 selects\n>> per second.\n>\n> Presumably with multiple clients, not just one extremely busy one?\nMultiple clients do mainly selects (the IMAP users), few mailservers \nare busy and do mainly inserts and updates.\n> How big do you expect the databases to get?\nUp to 1 TB. The 1st DB is an IMAP message store, which keeps Mime \nmessage parts as byteas in one table (bodyparts).\n\n> That'll affect the next point.\n>\n>> Current configuration is:\n>> - Tyan S3992G3NR\n>> - 2 x Opteron 2212 (2GHz)\n>> - 8 GB RAM (DDR2-667)\n>\n> Depending on the amount of data you've got to deal with, it might \n> be worth trading disks/cpu for more RAM.\n>\n>> - ARC-1261ML with 1GB and BBU\n>\n> OK, so you can turn write-caching on. That should let you handle \n> more updates than you need. You probably don't need so much RAM on \n> board either, unless each update has a lot of data in it.\nupdates not, but inserts may have 10-20 MBs.\n>\n>> - 16 Seagate ST3250820NS (250GB, 7200 rpm, 8GB, with perpendicular \n>> recording)\n>> 1 raid 1 for OS (FreeBSD) and WAL\n>> 1 raid 0 with 7 raid 1 for tablespace\n>> Can I expect similar performance as 5 drives at 10k rpm (same costs)?\n>\n> The main question is whether you're going to need to hit the disks \n> often. If you can get to the stage where the working-set of your \n> DBs are all in RAM you could sacrifice some disks. If not, disk I/ \n> O dominates.\nBecause of the table with the blobs, I need the many disks.\nPerhaps this table would be worth of on an own table space / raid 10 \nset.\n>\n>> Should I revert to a single-CPU to prevent from oscillating cache \n>> updates between CPUS?\n>> Anybody experience about NUMA stuff with FreeBSD?\n>\n> Sorry - I know nothing about FreeBSD.\n>\n> That any use - I didn't bother to reply before because I couldn't \n> help with the BSD stuff, and it's always guesswork with these sorts \n> of questions.\n> -- \nAxel\n---------------------------------------------------------------------\nAxel Rau, ☀Frankfurt , Germany +49 69 9514 18 0\n\n\n", "msg_date": "Fri, 9 Mar 2007 18:39:33 +0100", "msg_from": "Axel Rau <[email protected]>", "msg_from_op": true, "msg_subject": "Re: configuring new server / many slow disks?" 
}, { "msg_contents": "On Fri, 2007-03-09 at 11:39, Axel Rau wrote:\n> Am 09.03.2007 um 12:42 schrieb Richard Huxton:\n> \n> > Axel Rau wrote:\n> >> Hi performers,\n> >> after following this list for a while, I try to configure a \n> >> database server with a limited budget.\n> >> Planned are 2 databases\n> >> - archiveopteryx - http://www.archiveopteryx.org/sql-schema.html\n> >> - ERDB - https://www.chaos1.de/svn-public/repos/network-tools/ERDB/\n> >> trunk/database/ERD.pdf In peak times I expect something like\n> >> - 50 inserts\n> >> - 20 updates\n> >> - 200 selects\n> >> per second.\n> >\n> > Presumably with multiple clients, not just one extremely busy one?\n> Multiple clients do mainly selects (the IMAP users), few mailservers \n> are busy and do mainly inserts and updates.\n> > How big do you expect the databases to get?\n> Up to 1 TB. The 1st DB is an IMAP message store, which keeps Mime \n> message parts as byteas in one table (bodyparts).\n\nI'd benchmark it to make sure. 20 updates and 50 inserts per second on\nsimple tables with small amounts of data? No problem. Lots of big\nbytea / blob data getting inserted and read back out? Not so sure.\n\nCertainly your BBU RAID controller will help, and lots of slower hard\ndrives is generally far better than a few faster drives.\n\nThe only way to know is to make up a test and see how it performs.\n", "msg_date": "Fri, 09 Mar 2007 13:28:34 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: configuring new server / many slow disks?" } ]
[ { "msg_contents": "I am having a performance problem with a query implemented within a\nserver side function. If I use an SQL client(EMS Postgres) and manually\ngenerate the sql query I get about 100 times performance improvement\nover using the function.\n\nI've also tried using a prepared statement from my application and\nobserved a similar performance improvement over the the function.\n\nThe table I am quering against has several hundred thousand records. I\nhave indexes defined and I've run vacuum several times. \n\nIs there something basic I am missing here with the use of a function. I\nam no database expert, but my assumption was that a function would give\nme better results than in-line sql.\n\nI've seen a mailing list entry in another list that implied that the\nquery planner for a function behaves differently than in-line sql. \n\n\nThanks \nKarl\n", "msg_date": "Thu, 8 Mar 2007 11:24:16 -0500", "msg_from": "\"Schwarz, Karl\" <[email protected]>", "msg_from_op": true, "msg_subject": "function performance vs in-line sql" }, { "msg_contents": "Schwarz, Karl wrote:\n> Is there something basic I am missing here with the use of a function. I\n> am no database expert, but my assumption was that a function would give\n> me better results than in-line sql.\n\nNot necessarily. Usually it means the planner has less information to go on.\n\nWe'll need more information though - table definitions, queries, how is \nthe function called etc.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 08 Mar 2007 17:49:41 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: function performance vs in-line sql" }, { "msg_contents": "Schwarz, Karl wrote:\n> I am having a performance problem with a query implemented within a\n> server side function. If I use an SQL client(EMS Postgres) and manually\n> generate the sql query I get about 100 times performance improvement\n> over using the function.\n> \n> I've also tried using a prepared statement from my application and\n> observed a similar performance improvement over the the function.\n> \n> The table I am quering against has several hundred thousand records. I\n> have indexes defined and I've run vacuum several times. \n> \n> Is there something basic I am missing here with the use of a function. I\n> am no database expert, but my assumption was that a function would give\n> me better results than in-line sql.\n> \n> I've seen a mailing list entry in another list that implied that the\n> query planner for a function behaves differently than in-line sql. \n\nFor starters, can you show us the function, the manual sql query and the \nschema, please?\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 08 Mar 2007 17:52:34 +0000", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: function performance vs in-line sql" }, { "msg_contents": "Thanks. 
I was not looking for help with the query just wanted to know\nthat I didn't overlook some basic\nconfiguration setting.\n\nKarl\n\n-----Original Message-----\nFrom: Richard Huxton [mailto:[email protected]] \nSent: Thursday, March 08, 2007 12:50 PM\nTo: Schwarz, Karl\nCc: [email protected]\nSubject: Re: [PERFORM] function performance vs in-line sql\n\nSchwarz, Karl wrote:\n> Is there something basic I am missing here with the use of a function.\n\n> I am no database expert, but my assumption was that a function would \n> give me better results than in-line sql.\n\nNot necessarily. Usually it means the planner has less information to go\non.\n\nWe'll need more information though - table definitions, queries, how is\nthe function called etc.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 8 Mar 2007 13:45:09 -0500", "msg_from": "\"Schwarz, Karl\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: function performance vs in-line sql" } ]
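A quick way to see the effect discussed in this thread is to compare the plan for the query with its constant inlined against the plan built for a parameterised statement, which is roughly what a SQL or PL/pgSQL function body gets. A sketch with invented table and column names:

EXPLAIN ANALYZE
SELECT * FROM orders WHERE customer_id = 42;

PREPARE p(integer) AS
SELECT * FROM orders WHERE customer_id = $1;

EXPLAIN ANALYZE EXECUTE p(42);
DEALLOCATE p;

If the second plan is the slow one, the planner is choosing a generic plan that suits the parameter badly. Inside a PL/pgSQL function the usual workaround in this era is to build the statement as a string and run it with EXECUTE (quoting values with quote_literal), so it is planned with the real constants on every call.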
[ { "msg_contents": "\n\n\n\nHi!\n\nI have two tables with some indices on them:\n\nCREATE TABLE subscriber\n(\n  id serial NOT NULL,\n  anumber character varying(32) NOT NULL,\n  CONSTRAINT subscriber_pk PRIMARY KEY (id)\n) \n\nCREATE INDEX anumber_idx_numeric\n  ON subscriber\n  USING btree\n  (anumber::numeric);\n\nCREATE TABLE output_message_log\n(\n  id serial NOT NULL,\n  subscriber_id integer NOT NULL,\n  crd timestamp without time zone NOT NULL DEFAULT now(),\n  CONSTRAINT output_message_log_pk PRIMARY KEY (id),\n  CONSTRAINT subscriber_fk FOREIGN KEY (subscriber_id)\n      REFERENCES subscriber (id) MATCH SIMPLE\n      ON UPDATE NO ACTION ON DELETE NO ACTION,\n) \n\nCREATE INDEX crd_idx\n  ON output_message_log\n  USING btree\n  (crd);\n\nCREATE INDEX subscriber_id_idx\n  ON output_message_log\n  USING btree\n  (subscriber_id);\n\nI would like to run a query like this one:\n\nselect l.id\nfrom output_message_log l join subscriber s on l.subscriber_id = s.id \nwhere s.anumber::numeric = 5555555555\norder by l.crd desc\nlimit 41\noffset 20\n\nThe thing I do not understand is why postgresql wants to use crd_idx:\n\n\"Limit  (cost=4848.58..14788.18 rows=41 width=12) (actual\ntime=7277.115..8583.814 rows=41 loops=1)\"\n\"  ->  Nested Loop  (cost=0.00..1195418.42 rows=4931 width=12)\n(actual time=92.083..8583.713 rows=61 loops=1)\"\n\"        ->  Index Scan Backward using crd_idx on output_message_log\nl  (cost=0.00..17463.80 rows=388646 width=16) (actual\ntime=0.029..975.095 rows=271447 loops=1)\"\n\"        ->  Index Scan using subscriber_pk on subscriber s \n(cost=0.00..3.02 rows=1 width=4) (actual time=0.026..0.026 rows=0\nloops=271447)\"\n\"              Index Cond: (\"outer\".subscriber_id = s.id)\"\n\"              Filter: ((anumber)::numeric = 36308504669::numeric)\"\n\"Total runtime: 8584.016 ms\"\n\nI would like postgresql to use subscriber_id_idx which resulst in a far less execution\ntime on this database.\n\nI tried to lower random_page_cost, but that didn't help as an index is\nalready used, just not the \"good\" one.\n\nCould you please comment on this issue and suggest some possible\nsoulutions?\n\nThanks,\n\nZizi\n\n\n\n\n", "msg_date": "Fri, 09 Mar 2007 14:20:15 +0100", "msg_from": "=?UTF-8?B?TWV6ZWkgWm9sdMOhbg==?= <[email protected]>", "msg_from_op": true, "msg_subject": "Deceiding which index to use" }, { "msg_contents": "Mezei Zoltán wrote:\n> Hi!\n> \n> I have two tables with some indices on them:\n> \n> CREATE TABLE subscriber\n> (\n> id serial NOT NULL,\n> anumber character varying(32) NOT NULL,\n> CONSTRAINT subscriber_pk PRIMARY KEY (id)\n> )\n> \n> CREATE INDEX anumber_idx_numeric\n> ON subscriber\n> USING btree\n> (anumber::numeric);\n\n> I would like to run a query like this one:\n> \n> select l.id\n> from output_message_log l join subscriber s on l.subscriber_id = s.id\n> where s.anumber::numeric = 5555555555\n> order by l.crd desc\n> limit 41\n> offset 20\n\nQ1. Why are you storing a numeric in a varchar?\nQ2. How many unique values does anumber have? And how many rows in \nsubscriber?\nQ3. What happens if you create the index on plain (anumber) and then \ntest against '555555555'?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 09 Mar 2007 14:38:22 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Deceiding which index to use" }, { "msg_contents": "\n\n\n\n\nRichard Huxton wrote:\n> Mezei Zoltán wrote:\n> Q1. Why are you storing a numeric in a varchar?\n\nBecause it's not always numeric info. 
:/\n\n> Q2. How many unique values does anumber have? And how many rows in\n> subscriber?\n\nAbout 10k distinct anumbers and 20k rows. Nothing special...\n\n> Q3. What happens if you create the index on plain (anumber) and\nthen\n> test against '555555555'?\n\nNothing, everything is the same - the problem lies on the other table's\nindex usage, using this index is fine.\n\nZizi\n\n\n\n", "msg_date": "Fri, 09 Mar 2007 15:53:03 +0100", "msg_from": "=?UTF-8?B?TWV6ZWkgWm9sdMOhbg==?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Deceiding which index to use" }, { "msg_contents": "Mezei Zoltán wrote:\n> Richard Huxton wrote:\n> > Mezei Zoltán wrote:\n> > Q1. Why are you storing a numeric in a varchar?\n> \n> Because it's not always numeric info. :/\n> \n> > Q2. How many unique values does anumber have? And how many rows in\n> > subscriber?\n> \n> About 10k distinct anumbers and 20k rows. Nothing special...\n\nAnd does the planner know that?\nSELECT * FROM pg_stats WHERE tablename='subscriber' AND attname='anumber';\nIt's the n_distinct you're interested in, and perhaps most_common_freqs.\n\n> > Q3. What happens if you create the index on plain (anumber) and then\n> > test against '555555555'?\n> \n> Nothing, everything is the same - the problem lies on the other table's index \n> usage, using this index is fine.\n\nThe planner has to guess how many matches it will have for \nsubscriber=5555555. Based on that choice, it will either:\n a. Do the join, then find the highest crd values (sort)\n b. Scan the crd values backwards and then join\nIt's chosen (b) because it's estimating the numbers of matches \nincorrectly. I'm wondering whether the system can't see through your \nfunction-call (the cast to numeric) to determine how many matches it's \ngoing to get for any given value.\n\nIf the system can't be persuaded into getting its estimates more \naccurate, it might be worth trying an index on (subscriber_id,crd) and \ndropping the index on (crd) - if that's reasonable for your query patterns.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 09 Mar 2007 15:03:54 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Deceiding which index to use" }, { "msg_contents": "\n\n\n\n\nRichard Huxton wrote:\n\n\n\nRe: [PERFORM] Deceiding which index to use\n\nAnd does the planner know that?\nSELECT * FROM pg_stats WHERE tablename='subscriber' AND\nattname='anumber';\nIt's the n_distinct you're interested in, and perhaps most_common_freqs.\n\n\nn_distinct is -0.359322 and most_common_vals contains about 10\ndifferent anumbers (which are corretct), most_common_freqs are between\n0.01 and 0.001. What does n_distinct exactly mean? Why is it negative?\n\n> Nothing, everything is the same - the problem\nlies on the other table's index\n> usage, using this index is fine.\n\nThe planner has to guess how many matches it will have for\nsubscriber=5555555. Based on that choice, it will either:\n   a. Do the join, then find the highest crd values (sort)\n   b. Scan the crd values backwards and then join\nIt's chosen (b) because it's estimating the numbers of matches\nincorrectly. 
I'm wondering whether the system can't see through your\nfunction-call (the cast to numeric) to determine how many matches it's\ngoing to get for any given value.\n\n\nIt can see through the cast - I have just tried to create the\nsame database omitting the non-numeric anumbers and the results are the\nsame.\n\nIf the system can't be persuaded into getting its\nestimates more\naccurate, it might be worth trying an index on (subscriber_id,crd) and\ndropping the index on (crd) - if that's reasonable for your query\npatterns.\n\n\nI'll try that one if the negative n_distinct value can be a\ncorrect one :-)\n\nZizi\n\n\n", "msg_date": "Fri, 09 Mar 2007 16:13:47 +0100", "msg_from": "=?UTF-8?B?TWV6ZWkgWm9sdMOhbg==?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Deceiding which index to use" }, { "msg_contents": "Mezei Zoltán wrote:\n> Richard Huxton wrote:\n>>\n>> And does the planner know that?\n>> SELECT * FROM pg_stats WHERE tablename='subscriber' AND attname='anumber';\n>> It's the n_distinct you're interested in, and perhaps most_common_freqs.\n>>\n> n_distinct is -0.359322 and most_common_vals contains about 10 different \n> anumbers (which are corretct), most_common_freqs are between 0.01 and 0.001. \n> What does n_distinct exactly mean? Why is it negative?\n\nIt's saying that it's a ratio, so if you doubled the number of \nsubscribers it would expect that the number of unique anumber's would \ndouble too. So you've got about 36% of the rows with unique values - \npretty much what you said earlier. That's not bad, since the planner \nonly uses an estimate.\n\nOK - so the next place to look is the distribution of values for \nsubscriber_id on the output_message_log. Does that have some subscribers \nwith many rows and lots with hardly any? If so, you might need to \nincrease the stats on that column:\n\nALTER TABLE output_message_log ALTER COLUMN subscriber_id SET STATISTICS \n<num>;\nANALYSE output_message_log (subscriber_id);\n\nThe <num> defaults to 10, but can be set as high as 1000. You want to \ntry and capture the \"big\" subscribers.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 09 Mar 2007 15:21:20 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Deceiding which index to use" }, { "msg_contents": "\n\n\n\n\nRichard Huxton wrote:\n\n\n\nRe: [PERFORM] Deceiding which index to use\n\nOK - so the next place to look is the distribution\nof values for\nsubscriber_id on the output_message_log. Does that have some subscribers\nwith many rows and lots with hardly any?\n\nHmm... There are about 1.5k subscribers with 100-200 messages\neach - all the other 19k has an average of 8.9 messages, most of them\nhaving only 1 message. I think that's exactly the situation you\nmention...\n\nIf so, you might need to\nincrease the stats on that column:\n\nALTER TABLE output_message_log ALTER COLUMN subscriber_id SET STATISTICS\n<num>;\nANALYSE output_message_log (subscriber_id);\n\nThe <num> defaults to 10, but can be set as high as 1000. You\nwant to\ntry and capture the \"big\" subscribers.\n\n\nSo if I'm correct: this statistics gathering can be fine tuned,\nand if i set the <num> to 1000 then not only the first 10\nsubsribers (with most messages) will be stored in pg_stats, but the\nfirst 1000? Is 1000 a hard-coded highest-possible-value? 
I think it\nwould be best to set that to simething like 1800-1900 as I have about\nthat many subscibers with high message count.\n\nZizi\n\n\n\n", "msg_date": "Fri, 09 Mar 2007 16:44:18 +0100", "msg_from": "=?UTF-8?B?TWV6ZWkgWm9sdMOhbg==?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Deceiding which index to use" }, { "msg_contents": "Mezei Zoltán wrote:\n> Richard Huxton wrote:\n>>\n>> OK - so the next place to look is the distribution of values for\n>> subscriber_id on the output_message_log. Does that have some subscribers\n>> with many rows and lots with hardly any?\n>>\n> Hmm... There are about 1.5k subscribers with 100-200 messages each - all the \n> other 19k has an average of 8.9 messages, most of them having only 1 message. I \n> think that's exactly the situation you mention...\n\n[snip alter table ... set statistics]\n\n> So if I'm correct: this statistics gathering can be fine tuned, and if i set the \n> <num> to 1000 then not only the first 10 subsribers (with most messages) will be \n> stored in pg_stats, but the first 1000? Is 1000 a hard-coded \n> highest-possible-value? I think it would be best to set that to simething like \n> 1800-1900 as I have about that many subscibers with high message count.\n\nThere is a cost to increasing the stats values, otherwise it'd already \nbe set at 1000. In your case I'm not sure if 100-200 vs 8-9 messages is \nenough to skew things. Only one way to find out...\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 09 Mar 2007 15:52:23 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Deceiding which index to use" }, { "msg_contents": "Mezei Zolt�n wrote:\n> <!DOCTYPE html PUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\">\n> <html>\n> <head>\n> <meta content=\"text/html;charset=UTF-8\" http-equiv=\"Content-Type\">\n> </head>\n> <body bgcolor=\"#ffffff\" text=\"#000000\">\n> Richard Huxton wrote:\n> <blockquote cite=\"mid:[email protected]\" type=\"cite\">\n> <meta http-equiv=\"Content-Type\" content=\"text/html; \">\n> <meta name=\"Generator\" content=\"MS Exchange Server version 6.5.7226.0\">\n> <title>Re: [PERFORM] Deceiding which index to use</title>\n> <!-- Converted from text/plain format -->\n\nWould you mind instructing your mail system to skip converting your\ntext/plain messages to HTML? It's kind of annoying for some of us.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Fri, 9 Mar 2007 13:00:09 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Deceiding which index to use" }, { "msg_contents": "\n\n\n\n\nRichard Huxton wrote:\n\n\n\nRe: [PERFORM] Deceiding which index to use\n\nThere is a cost to increasing the stats values,\notherwise it'd already\nbe set at 1000. In your case I'm not sure if 100-200 vs 8-9 messages is\nenough to skew things. Only one way to find out...\n\n\nWell, I tried. 
The situation is:\n\n- when I look for a subscriber which the planner thinks to have more\nmessages --> it uses the index on crd\n- when I look for a subscriber which the planner thinks to have less\nmessages --> it uses the index on subscriber_id\n\nI think that's OK, even if it still don't do what I really would like\nto see it do :-)\n\nThanks for all your help, maybe I know a little bit more about\npoostgresql than before.\n\nZizi\n\n\n", "msg_date": "Fri, 09 Mar 2007 17:07:05 +0100", "msg_from": "=?UTF-8?B?TWV6ZWkgWm9sdMOhbg==?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Deceiding which index to use" }, { "msg_contents": "Alvaro Herrera wrote:\n>\n> Would you mind instructing your mail system to skip converting your\n> text/plain messages to HTML? It's kind of annoying for some of us.\n> PostgreSQL Replication, Consulting, Custom Development, 24x7 support\n>\nSorry about that - is this message OK now?\n\nZizi\n", "msg_date": "Fri, 09 Mar 2007 17:14:15 +0100", "msg_from": "=?ISO-8859-1?Q?Mezei_Zolt=E1n?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Deceiding which index to use" }, { "msg_contents": "Mezei Zolt�n wrote:\n> Alvaro Herrera wrote:\n>>\n>> Would you mind instructing your mail system to skip converting your\n>> text/plain messages to HTML? It's kind of annoying for some of us.\n>> PostgreSQL Replication, Consulting, Custom Development, 24x7 support\n>>\n> Sorry about that - is this message OK now?\n\nThat's done the trick.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 09 Mar 2007 16:17:43 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Deceiding which index to use" }, { "msg_contents": "Mezei Zoltán wrote:\n> Richard Huxton wrote:\n>> \n>> There is a cost to increasing the stats values, otherwise it'd\n>> already be set at 1000. In your case I'm not sure if 100-200 vs 8-9\n>> messages is enough to skew things. Only one way to find out...\n>> \n> Well, I tried. The situation is:\n> \n> - when I look for a subscriber which the planner thinks to have more\n> messages --> it uses the index on crd - when I look for a subscriber\n> which the planner thinks to have less messages --> it uses the index\n> on subscriber_id\n> \n> I think that's OK, even if it still don't do what I really would like\n> to see it do :-)\n\nWell, it's decision about where the costs cross over will be in the \nother \"cost\" settings in postgresql.conf. The thing is, it's not worth \ntuning for just this one query, because you can easily end up running \neverything else more slowly.\n\n> Thanks for all your help, maybe I know a little bit more about\n> poostgresql than before.\n\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 09 Mar 2007 16:18:57 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Deceiding which index to use" } ]
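A minimal SQL sketch of the composite-index suggestion from the thread above, written against the table and column names Zizi posted; the statistics target of 100 and the choice of when to drop crd_idx are illustrative assumptions, not something the thread settles.

-- Composite index so each subscriber's rows come back already ordered by crd,
-- letting "WHERE subscriber_id = ... ORDER BY crd DESC LIMIT ..." be served by
-- one backward index scan instead of walking crd_idx across the whole table.
CREATE INDEX subscriber_id_crd_idx
  ON output_message_log
  USING btree
  (subscriber_id, crd);

-- Optionally raise the statistics target on the join column and re-analyse,
-- so the planner sees the skew between "big" and "small" subscribers.
ALTER TABLE output_message_log ALTER COLUMN subscriber_id SET STATISTICS 100;
ANALYZE output_message_log;

-- Re-check the plan; if nothing else needs it, crd_idx could then be dropped.
EXPLAIN ANALYZE
SELECT l.id
FROM output_message_log l
JOIN subscriber s ON l.subscriber_id = s.id
WHERE s.anumber::numeric = 5555555555
ORDER BY l.crd DESC
LIMIT 41 OFFSET 20;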
[ { "msg_contents": "Hi,\n\nI'm running a Zope web application that obtains most of its data\nfrom a PostgreSQL 8.1 database in a virtual machine. I'm able to\nadjust the memory of this machine according to reasonable values\nand can choose between one or two (emulated) processors. The\nquestion is: How can I find an optimal relation between the\nvirtual hardware parameters and PostgreSQL performance. I guess\nit makes no sense to blindly increase virtual memory without\nadjusting PostgreSQL configuration. Are there any experiences\nabout reasonable performance increasing strategies? Are there any\nspecial things to regard in a VM?\n\nKind regards\n\n Andreas.\n\n-- \nhttp://fam-tille.de\n", "msg_date": "Mon, 12 Mar 2007 09:24:45 +0100 (CET)", "msg_from": "Andreas Tille <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL in virtual machine" }, { "msg_contents": "Andreas,\n\nI am responsible for some active PostgreSQL databases within virtual\nmachines and on \"plain metal\"; all using Windows.\n\nThere are spurious strange performance issues on those PostreSQL\ndatabases within the virtual machines. I can not pin them down (yet),\nand am not really able to blame them on vmware; but just want to\nrecommend to be very very carefull about PostgreSQL in vmware and\nplease communcate your findings.\n\n> from a PostgreSQL 8.1 database in a virtual machine. I'm able to\n> adjust the memory of this machine according to reasonable values\n> and can choose between one or two (emulated) processors. The\n> question is: How can I find an optimal relation between the\n> virtual hardware parameters and PostgreSQL performance. I guess\n\nYou did not specify the OS in your VM. I can report my experiences\nwith W2k3 inside the VM:\n\ncontrary to usual recommendations, smaller shared_buffers yielded\nbetter results.\nGrowing effective_cache_size yielded the best results for me.\n\n\nHarald\n\n\n\n-- \nGHUM Harald Massa\npersuadere et programmare\nHarald Armin Massa\nReinsburgstraße 202b\n70197 Stuttgart\n0173/9409607\nfx 01212-5-13695179\n-\nPython: the only language with more web frameworks than keywords.\n", "msg_date": "Mon, 12 Mar 2007 16:00:26 +0100", "msg_from": "\"Harald Armin Massa\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL in virtual machine" }, { "msg_contents": "Andreas Tille wrote:\n\n> Are there any experiences\n> about reasonable performance increasing strategies? Are there any\n> special things to regard in a VM?\n\nNot directly about Postgresql, but I'm seeing evidence that upgrading\nfrom vmware 2.5.3 to 3.0.1 seems to have solved disk access\nperformance issues (measured with simple dd runs).\n\nWith vmware 2.5.3 + RedHat Enterprise 4.0 I measured a sequential\nread performance on 1-2 Gb files of less than 10 Mbytes/sec on a\nIBM FastT600 SAN volume partition.\n\nAfter the upgrade to 3.0 I had feedback from sysadmins that issue\nwas solved, but I didn't have the opportunity to repeat the read\ntests yet.\n\n-- \nCosimo\n\n", "msg_date": "Tue, 13 Mar 2007 21:22:45 +0100", "msg_from": "Cosimo Streppone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL in virtual machine" } ]
[ { "msg_contents": "Hello,\n\n\nI have upgraded from 7.3.9 to 8.2.3 and now the application that is \nusing Postgres is really slow.\nUsing pgfouine, I was able to identify a SQL select statement that was \nrunning in 500 ms before and now that is running in more than 20 seconds !\n\nThe reason is that the execution plan is different from the 2 versions.\nThe difference is the order the tables are joined :\n\nFor 8.2.3 :\nSeq Scan on lm05_t_tarif_panneau g (cost=0.00..2977.08 rows=18947 \nwidth=43) (actual time=0.006..65.388 rows=4062 loops=280)\n\nFor 7.3.9 :\nSeq Scan on lm05_t_tarif_panneau g (cost=0.00..90.00 rows=692 \nwidth=190) (actual time=0.03..206.23 rows=4062 loops=1)\n\nIs there an option in the 8.2.3 to change in order to have the same \nexecution plan than before ?\nI have compared the 2 postgresql.conf files and there are no differences \nas far as I know.\n\nThanks for your help.\n\nBest Regards,\nVincent Moreau\n\n\nFor 7.3.9 :\n\nUnique (cost=232.48..232.51 rows=1 width=497) (actual \ntime=524.49..543.00 rows=140 loops=1)\n....\n\n -> Seq Scan on \nlm05_t_tarif_panneau g (cost=0.00..90.00 rows=692 width=190) (actual \ntime=0.03..206.23 rows=4062 loops=1)\n Filter: \n(((lrg_min < 1000) AND (lrg_max >= 1000)) OR ((lrg_min < 500) AND \n(lrg_max >= 500)) OR (((\nlrg_min)::numeric < 333.333333333333) AND ((lrg_max)::numeric >= \n333.333333333333)) OR ((lrg_min < 250) AND (lrg_max >= 250)) OR \n((lrg_min < 200) AND (lrg_\nmax >= 200)) OR (((lrg_min)::numeric < 166.666666666667) AND \n((lrg_max)::numeric >= 166.666666666667)) OR (((lrg_min)::numeric < \n142.857142857143) AND ((lr\ng_max)::numeric >= 142.857142857143)) OR ((lrg_min < 125) AND (lrg_max \n >= 125)) OR (((lrg_min)::numeric < 111.111111111111) AND \n((lrg_max)::numeric >= 111.\n111111111111)) OR ((lrg_min < 100) AND (lrg_max >= 100)))\n -> Hash \n(cost=32.35..32.35 rows=1 width=8) (actual time=19.07..19.07 rows=0 \nloops=1)\n -> Nested Loop \n(cost=0.00..32.35 rows=1 width=8) (actual time=17.99..19.07 rows=1 loops=1)\n -> Seq \nScan on cm_gestion_modele_ca h (cost=0.00..27.50 rows=1 width=4) \n(actual time=0.09..17.35 rows=165 loops=1)\n \nFilter: ((idmagasin = '011'::character varying) AND (idoav = \n'PC_PLACARD'::character varying) AND (autorise = 1))\n -> Index \nScan using lm05_t_modele_cod_modele_key on lm05_t_modele a \n(cost=0.00..4.83 rows=1 width=4) (actual time=0.01..0.01 rows=0 loops=165)\n \nIndex Cond: (\"outer\".cod_modele = a.cod_modele)\n \nFilter: ((cod_type_ouverture = 'OUV_COU'::character varying) AND \n(cod_type_panneau = 'PAN_MEL'::character varying) AND (cod_fournisseur = \n5132) AND (cod_gamme_prof = 'Design Xtra'::character varying))\n\n\n\n\nFor 8.2.3 :\n\nUnique (cost=5278.93..5278.95 rows=1 width=32) (actual \ntime=27769.435..27771.863 rows=140 loops=1)\n\n...\n\n-> Hash Join (cost=6.31..3055.59 rows=115 width=47) (actual \ntime=58.096..67.787 rows=48 loops=280)\n\nHash Cond: (g.cod_modele = a.cod_modele)\n\n-> Seq Scan on lm05_t_tarif_panneau g (cost=0.00..2977.08 rows=18947 \nwidth=43) (actual time=0.006..65.388 rows=4062 loops=280)\n\nFilter: (((lrg_min < 1000) AND (lrg_max >= 1000)) OR ((lrg_min < 500) \nAND (lrg_max >= 500)) OR (((lrg_min)::numeric < 333.333333333333) AND \n((lrg_max)::numeric >= 333.333333333333)) OR ((lrg_min < 250) AND \n(lrg_max >= 250)) OR ((lrg_min < 200) AND (lrg_max >= 200)) OR \n(((lrg_min)::numeric < 166.666666666667) AND ((lrg_max)::numeric >= \n166.666666666667)) OR (((lrg_min)::numeric < 142.857142857143) AND \n((lrg_max)::numeric >= 142.857142857143)) OR 
((lrg_min < 125) AND \n(lrg_max >= 125)) OR (((lrg_min)::numeric < 111.111111111111) AND \n((lrg_max)::numeric >= 111.111111111111)) OR ((lrg_min < 100) AND \n(lrg_max >= 100)))\n\n-> Hash (cost=6.30..6.30 rows=1 width=4) (actual time=0.135..0.135 \nrows=1 loops=1)\n\n-> Seq Scan on lm05_t_modele a (cost=0.00..6.30 rows=1 width=4) (actual \ntime=0.053..0.124 rows=1 loops=1)\n\nFilter: (((cod_type_ouverture)::text = 'OUV_COU'::text) AND \n((cod_type_panneau)::text = 'PAN_MEL'::text) AND (cod_fournisseur = \n5132) AND ((cod_gamme_prof)::text = 'Design Xtra'::text))\n\n-> Seq Scan on mag_gestion_modele_mag i (cost=0.00..8.78 rows=165 \nwidth=4) (actual time=0.053..0.214 rows=165 loops=1120)\n\nFilter: (((idmagasin)::text = '011'::text) AND ((idoav)::text = \n'PC_PLACARD'::text) AND (selection = 1))\n\n\n\nCe message et toutes les pi�ces jointes sont �tablis � l'attention exclusive de leurs destinataires et sont confidentiels. Si vous recevez ce message par erreur, merci de le d�truire et d'en avertir imm�diatement l'exp�diteur. L'internet ne permettant pas d'assurer l'int�grit� de ce message, le contenu de ce message ne repr�sente en aucun cas un engagement de la part de Leroy Merlin.\n\n", "msg_date": "Tue, 13 Mar 2007 09:19:47 +0100", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Execution plan changed after upgrade from 7.3.9 to 8.2.3" }, { "msg_contents": "On Tue, Mar 13, 2007 at 09:19:47AM +0100, [email protected] wrote:\n> Is there an option in the 8.2.3 to change in order to have the same \n> execution plan than before ?\n\nLet's see if we can figure out why 8.2.3 is choosing a bad plan.\nHave you run ANALYZE on the tables in 8.2.3? Could you post the\nquery and the complete output of EXPLAIN ANALYZE (preferably without\nwrapping) for both versions?\n\n-- \nMichael Fuhr\n", "msg_date": "Tue, 13 Mar 2007 04:11:17 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Execution plan changed after upgrade from 7.3.9 to 8.2.3" }, { "msg_contents": "I have attached the requested information.\n\nYou will see that the query is quite messy and could be easily improved.\nUnfortunately, it came from a third party application and we do not have \naccess to the source code.\n\nThanks for your help,\n\nBest Regards,\nVincent\n\n\n\n\nMichael Fuhr wrote:\n> On Tue, Mar 13, 2007 at 09:19:47AM +0100, [email protected] wrote:\n> \n>> Is there an option in the 8.2.3 to change in order to have the same \n>> execution plan than before ?\n>> \n>\n> Let's see if we can figure out why 8.2.3 is choosing a bad plan.\n> Have you run ANALYZE on the tables in 8.2.3? Could you post the\n> query and the complete output of EXPLAIN ANALYZE (preferably without\n> wrapping) for both versions?\n>\n> \n\n\nCe message et toutes les pièces jointes sont établis à l'attention exclusive de leurs destinataires et sont confidentiels. Si vous recevez ce message par erreur, merci de le détruire et d'en avertir immédiatement l'expéditeur. 
L'internet ne permettant pas d'assurer l'intégrité de ce message, le contenu de ce message ne représente en aucun cas un engagement de la part de Leroy Merlin.", "msg_date": "Tue, 13 Mar 2007 12:11:54 +0100", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Execution plan changed after upgrade from 7.3.9 to\n 8.2.3" }, { "msg_contents": "[email protected] wrote:\n> I have attached the requested information.\n> \n> You will see that the query is quite messy and could be easily improved.\n> Unfortunately, it came from a third party application and we do not have \n> access to the source code.\n\n-> Hash Join (cost=6.31..3056.17 rows=116 width=47) (actual \ntime=60.055..70.078 rows=48 loops=280)\n Hash Cond: (g.cod_modele = a.cod_modele)\n -> Seq Scan on lm05_t_tarif_panneau g (cost=0.00..2977.08 \nrows=19097 width=43) (actual time=0.008..67.670 rows=4062 loops=280)\n\nIt does seem to be running that sequential scan 280 times, which is a \nstrange choice to say the least.\n\nObvious thing #1 is to look at I'd say is the stats on lrg_min,lrg_max - \ntry something like:\nALTER TABLE lm05_t_tarif_panneau ALTER COLUMN lrg_min SET STATISTICS <n>\nYou can set <n> up to 1000 (and then the same for lrg_max of course).\nAnalyse the table again and see if that gives it a clue.\n\nSecond thing might be to try indexes on lrg_min and lrg_max and see if \nthe bitmap code in 8.2 helps things.\n\nVery strange plan.\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 13 Mar 2007 11:26:43 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Execution plan changed after upgrade from 7.3.9 to\n 8.2.3" }, { "msg_contents": "Thanks for the update.\n\nThe following did not change anything in the execution plan\n\nALTER TABLE lm05_t_tarif_panneau ALTER COLUMN lrg_min SET STATISTICS 1000\nALTER TABLE lm05_t_tarif_panneau ALTER COLUMN lrg_max SET STATISTICS 1000\nANALYZE lm05_t_tarif_panneau\n\nI was able to improve response time by creating indexes, but I would \nlike to avoid changing the database structure because it is not \nmaintained by ourseleves, but by the third party vendor.\n\n\n\nRichard Huxton wrote:\n> [email protected] wrote:\n>> I have attached the requested information.\n>>\n>> You will see that the query is quite messy and could be easily improved.\n>> Unfortunately, it came from a third party application and we do not \n>> have access to the source code.\n>\n> -> Hash Join (cost=6.31..3056.17 rows=116 width=47) (actual \n> time=60.055..70.078 rows=48 loops=280)\n> Hash Cond: (g.cod_modele = a.cod_modele)\n> -> Seq Scan on lm05_t_tarif_panneau g (cost=0.00..2977.08 \n> rows=19097 width=43) (actual time=0.008..67.670 rows=4062 loops=280)\n>\n> It does seem to be running that sequential scan 280 times, which is a \n> strange choice to say the least.\n>\n> Obvious thing #1 is to look at I'd say is the stats on lrg_min,lrg_max \n> - try something like:\n> ALTER TABLE lm05_t_tarif_panneau ALTER COLUMN lrg_min SET STATISTICS <n>\n> You can set <n> up to 1000 (and then the same for lrg_max of course).\n> Analyse the table again and see if that gives it a clue.\n>\n> Second thing might be to try indexes on lrg_min and lrg_max and see if \n> the bitmap code in 8.2 helps things.\n>\n> Very strange plan.\n\n\nCe message et toutes les pi�ces jointes sont �tablis � l'attention exclusive de leurs destinataires et sont confidentiels. Si vous recevez ce message par erreur, merci de le d�truire et d'en avertir imm�diatement l'exp�diteur. 
L'internet ne permettant pas d'assurer l'int�grit� de ce message, le contenu de ce message ne repr�sente en aucun cas un engagement de la part de Leroy Merlin.\n\n", "msg_date": "Tue, 13 Mar 2007 13:28:36 +0100", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Execution plan changed after upgrade from 7.3.9 to\n 8.2.3" }, { "msg_contents": "[email protected] wrote:\n> I have attached the requested information.\n> \n> You will see that the query is quite messy and could be easily improved.\n> Unfortunately, it came from a third party application and we do not have \n> access to the source code.\n\nThere are only nested loops and hash joins, while the other plan seems\nto be more elaborate -- I wonder if you have disabled bitmap scan, merge\njoins, in 8.2? Try a SHOW enable_mergejoin in psql.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Tue, 13 Mar 2007 09:09:45 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Execution plan changed after upgrade from 7.3.9 to 8.2.3" }, { "msg_contents": "> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> [email protected]\n> Subject: Re: [PERFORM] Execution plan changed after upgrade \n> from 7.3.9 to 8.2.3\n> \n> The following did not change anything in the execution plan\n> \n> ALTER TABLE lm05_t_tarif_panneau ALTER COLUMN lrg_min SET \n> STATISTICS 1000\n> ALTER TABLE lm05_t_tarif_panneau ALTER COLUMN lrg_max SET \n> STATISTICS 1000\n> ANALYZE lm05_t_tarif_panneau\n> \n> I was able to improve response time by creating indexes, but I would \n> like to avoid changing the database structure because it is not \n> maintained by ourseleves, but by the third party vendor.\n\n\nI would actually try increasing the statistics on table\nlm05_t_couleur_panneau columns ht_min, ht_max, cod_aspect, and\ncod_gamme_panneau. Because I think the planner is thrown off because the\nsequential scan on lm05_t_couleur_panneau returns 280 rows when it expects\n1. Maybe to start you could just SET default_statistics_target=1000,\nanalyze everything, and see if that makes any difference.\n\nDave\n\n", "msg_date": "Tue, 13 Mar 2007 08:24:37 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Execution plan changed after upgrade from 7.3.9 to 8.2.3" }, { "msg_contents": "Here it is :\n\nCCM=# SHOW enable_mergejoin;\nenable_mergejoin\n------------------\non\n(1 row)\n\nCCM=#\n\n\n\n\nAlvaro Herrera wrote:\n> [email protected] wrote:\n> \n>> I have attached the requested information.\n>>\n>> You will see that the query is quite messy and could be easily improved.\n>> Unfortunately, it came from a third party application and we do not have \n>> access to the source code.\n>> \n>\n> There are only nested loops and hash joins, while the other plan seems\n> to be more elaborate -- I wonder if you have disabled bitmap scan, merge\n> joins, in 8.2? Try a SHOW enable_mergejoin in psql.\n>\n> \n\n\nCe message et toutes les pièces jointes sont établis à l'attention exclusive de leurs destinataires et sont confidentiels. Si vous recevez ce message par erreur, merci de le détruire et d'en avertir immédiatement l'expéditeur. 
L'internet ne permettant pas d'assurer l'intégrité de ce message, le contenu de ce message ne représente en aucun cas un engagement de la part de Leroy Merlin.\n\n", "msg_date": "Tue, 13 Mar 2007 14:28:24 +0100", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Execution plan changed after upgrade from 7.3.9 to\n 8.2.3" }, { "msg_contents": "[email protected] wrote:\n> Thanks for the update.\n> \n> The following did not change anything in the execution plan\n> \n> ALTER TABLE lm05_t_tarif_panneau ALTER COLUMN lrg_min SET STATISTICS 1000\n> ALTER TABLE lm05_t_tarif_panneau ALTER COLUMN lrg_max SET STATISTICS 1000\n> ANALYZE lm05_t_tarif_panneau\n\nHmm - so it's not the distribution of those values.\n\n> I was able to improve response time by creating indexes, but I would \n> like to avoid changing the database structure because it is not \n> maintained by ourseleves, but by the third party vendor.\n\nWell, the indexes can't do any harm, but it would be nice not to need them.\n\nCould you post the explain analyse with the indexes? To see how the \ncosts compare.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 13 Mar 2007 13:28:47 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Execution plan changed after upgrade from 7.3.9 to\n 8.2.3" }, { "msg_contents": "[email protected] wrote:\n> Here it is :\n> \n> CCM=# SHOW enable_mergejoin;\n> enable_mergejoin\n> ------------------\n> on\n> (1 row)\n\nSorry, my question was more general. Do you have _any_ of the planner\ntypes disabled? Try also enable_indexscan, etc; maybe\n\nselect * from pg_settings where name like 'enable_%';\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Tue, 13 Mar 2007 09:43:06 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Execution plan changed after upgrade from 7.3.9 to 8.2.3" }, { "msg_contents": "Increasing the default_statistics_target to 1000 did not help.\nIt just make the vacuum full analyze to take longer to complete.\n\nHere is the output :\n\nCCM=# VACUUM FULL ANALYZE ;\nVACUUM\nCCM=# explain ANALYZE SELECT distinct C.cod_couleur_panneau, \nC.cod_couleur_panneau, cast ('LM05' as varchar), cast ('OMM_TEINTE' as \nvarchar), cast ('IM' as varchar) FROM lm05_t_modele AS A, \nlm05_t_couleur_panneau AS C, lm05_t_infos_modele AS D, \nlm05_t_tarif_panneau AS G , lm05_t_composition AS E , \nlm05_t_couleur_profile AS F , cm_gestion_modele_ca as H, \nmag_gestion_modele_mag as I WHERE A.cod_type_ouverture = 'OUV_COU' AND \nA.cod_type_panneau = 'PAN_MEL' AND A.cod_modele = C.cod_modele AND \nA.cod_modele = D.cod_modele AND A.cod_modele = G.cod_modele AND \nG.cod_tarif_panneau = C.cod_tarif_panneau AND A.cod_modele = \nE.cod_modele AND nb_vantaux >= 2 AND A.cod_modele = F.cod_modele AND \nF.couleur_profile = 'acajou mat' AND F.cod_tarif_profile = \nG.cod_tarif_profile AND A.cod_fournisseur = '5132' AND A.cod_gamme_prof \n= 'Design Xtra' AND C.ht_min < 2000 AND C.ht_max >= 2000 AND \nD.largeur_maxi_rail >= 1000 AND C.cod_aspect = 'tons bois et cuirs' AND \nC.cod_gamme_panneau = 'BOIS et CUIR XTRA 3' AND ((G.lrg_min < 1000 AND \nG.lrg_max >= 1000) OR (G.lrg_min < 500 AND G.lrg_max >= 500) OR \n(G.lrg_min < 333.333333333333 AND G.lrg_max >= 333.333333333333) OR \n(G.lrg_min < 250 AND G.lrg_max >= 250) OR (G.lrg_min < 200 AND G.lrg_max \n >= 200) OR (G.lrg_min < 166.666666666667 AND G.lrg_max >= \n166.666666666667) OR 
(G.lrg_min < 142.857142857143 AND G.lrg_max >= \n142.857142857143) OR (G.lrg_min < 125 AND G.lrg_max >= 125) OR \n(G.lrg_min < 111.111111111111 AND G.lrg_max >= 111.111111111111) OR \n(G.lrg_min < 100 AND G.lrg_max >= 100)) AND H.idmagasin = '011' AND \nH.idoav='PC_PLACARD' AND H.cod_modele = A.cod_modele AND H.autorise = 1 \nAND I.idmagasin = '011' AND I.idoav='PC_PLACARD' AND I.cod_modele = \nA.cod_modele AND I.selection = 1;\n\n\nQUERY PLAN\n\n\n\n--------------------------------------------------------------------------------------------------------------------------\n--------------------------------------------------------------------------------------------------------------------------\n--------------------------------------------------------------------------------------------------------------------------\n--------------------------------------------------------------------------------------------------------------------------\n--------------------------------------------------------------------------------------------------------------------------\n--------------------------------------------------------------\nUnique (cost=5275.40..5275.42 rows=1 width=32) (actual \ntime=21566.453..21568.917 rows=140 loops=1)\n-> Sort (cost=5275.40..5275.41 rows=1 width=32) (actual \ntime=21566.450..21567.212 rows=1400 loops=1)\nSort Key: c.cod_couleur_panneau, c.cod_couleur_panneau, \n'LM05'::character varying, 'OMM_TEINTE'::character varyin\ng, 'IM'::character varying\n-> Nested Loop (cost=105.58..5275.39 rows=1 width=32) (actual \ntime=94.901..21534.435 rows=1400 loops=1)\nJoin Filter: (a.cod_modele = d.cod_modele)\n-> Nested Loop (cost=105.58..5267.27 rows=1 width=60) (actual \ntime=94.700..21213.793 rows=1400 loops=1)\nJoin Filter: (a.cod_modele = e.cod_modele)\n-> Nested Loop (cost=105.58..5245.28 rows=1 width=56) (actual \ntime=93.912..20996.857 rows=280 loops\n=1)\nJoin Filter: (h.cod_modele = a.cod_modele)\n-> Nested Loop (cost=105.58..4731.94 rows=1 width=52) (actual \ntime=86.994..19181.638 rows=280\nloops=1)\nJoin Filter: (i.cod_modele = a.cod_modele)\n-> Nested Loop (cost=105.58..4721.10 rows=1 width=48) (actual \ntime=86.651..19091.147 ro\nws=280 loops=1)\nJoin Filter: ((a.cod_modele = c.cod_modele) AND \n((g.cod_tarif_panneau)::text = (c.c\nod_tarif_panneau)::text) AND ((f.cod_tarif_profile)::text = \n(g.cod_tarif_profile)::text))\n-> Hash Join (cost=99.26..1665.04 rows=1 width=84) (actual \ntime=25.598..31.845 ro\nws=280 loops=1)\nHash Cond: (c.cod_modele = f.cod_modele)\n-> Seq Scan on lm05_t_couleur_panneau c (cost=0.00..1565.60 rows=4 width=62\n) (actual time=23.817..29.048 rows=280 loops=1)\nFilter: ((ht_min < 2000) AND (ht_max >= 2000) AND ((cod_aspect)::text =\n'tons bois et cuirs'::text) AND ((cod_gamme_panneau)::text = 'BOIS et \nCUIR XTRA 3'::text))\n-> Hash (cost=98.86..98.86 rows=32 width=22) (actual time=1.653..1.653 rows\n=32 loops=1)\n-> Seq Scan on lm05_t_couleur_profile f (cost=0.00..98.86 rows=32 wid\nth=22) (actual time=1.159..1.614 rows=32 loops=1)\nFilter: ((couleur_profile)::text = 'acajou mat'::text)\n-> Hash Join (cost=6.31..3054.10 rows=112 width=48) (actual \ntime=58.304..68.027 r\nows=48 loops=280)\nHash Cond: (g.cod_modele = a.cod_modele)\n-> Seq Scan on lm05_t_tarif_panneau g (cost=0.00..2977.08 rows=18557 width=\n44) (actual time=0.009..65.642 rows=4062 loops=280)\nFilter: (((lrg_min < 1000) AND (lrg_max >= 1000)) OR ((lrg_min < 500) A\nND (lrg_max >= 500)) OR (((lrg_min)::numeric < 333.333333333333) AND \n((lrg_max)::numeric >= 333.333333333333)) OR 
((lrg_mi\nn < 250) AND (lrg_max >= 250)) OR ((lrg_min < 200) AND (lrg_max >= 200)) \nOR (((lrg_min)::numeric < 166.666666666667) AND (\n(lrg_max)::numeric >= 166.666666666667)) OR (((lrg_min)::numeric < \n142.857142857143) AND ((lrg_max)::numeric >= 142.857142\n857143)) OR ((lrg_min < 125) AND (lrg_max >= 125)) OR \n(((lrg_min)::numeric < 111.111111111111) AND ((lrg_max)::numeric >=\n111.111111111111)) OR ((lrg_min < 100) AND (lrg_max >= 100)))\n-> Hash (cost=6.30..6.30 rows=1 width=4) (actual time=0.118..0.118 rows=1 l\noops=1)\n-> Seq Scan on lm05_t_modele a (cost=0.00..6.30 rows=1 width=4) (actu\nal time=0.039..0.110 rows=1 loops=1)\nFilter: (((cod_type_ouverture)::text = 'OUV_COU'::text) AND ((cod\n_type_panneau)::text = 'PAN_MEL'::text) AND (cod_fournisseur = 5132) AND \n((cod_gamme_prof)::text = 'Design Xtra'::text))\n-> Seq Scan on mag_gestion_modele_mag i (cost=0.00..8.78 rows=165 \nwidth=4) (actual time\n=0.059..0.224 rows=165 loops=280)\nFilter: (((idmagasin)::text = '011'::text) AND ((idoav)::text = \n'PC_PLACARD'::text)\nAND (selection = 1))\n-> Seq Scan on cm_gestion_modele_ca h (cost=0.00..511.27 rows=165 \nwidth=4) (actual time=0.032\n..6.379 rows=165 loops=280)\nFilter: (((idmagasin)::text = '011'::text) AND ((idoav)::text = \n'PC_PLACARD'::text) AND (\nautorise = 1))\n-> Seq Scan on lm05_t_composition e (cost=0.00..14.82 rows=573 width=4) \n(actual time=0.010..0.452 r\nows=573 loops=280)\nFilter: (nb_vantaux >= 2)\n-> Seq Scan on lm05_t_infos_modele d (cost=0.00..6.06 rows=165 width=4) \n(actual time=0.004..0.136 rows=16\n5 loops=1400)\nFilter: (largeur_maxi_rail >= 1000)\nTotal runtime: 21569.332 ms\n(36 rows)\n\nCCM=#\n\n\n\nDave Dutcher wrote:\n>> From: [email protected] \n>> [mailto:[email protected]] On Behalf Of \n>> [email protected]\n>> Subject: Re: [PERFORM] Execution plan changed after upgrade \n>> from 7.3.9 to 8.2.3\n>>\n>> The following did not change anything in the execution plan\n>>\n>> ALTER TABLE lm05_t_tarif_panneau ALTER COLUMN lrg_min SET \n>> STATISTICS 1000\n>> ALTER TABLE lm05_t_tarif_panneau ALTER COLUMN lrg_max SET \n>> STATISTICS 1000\n>> ANALYZE lm05_t_tarif_panneau\n>>\n>> I was able to improve response time by creating indexes, but I would \n>> like to avoid changing the database structure because it is not \n>> maintained by ourseleves, but by the third party vendor.\n>> \n>\n>\n> I would actually try increasing the statistics on table\n> lm05_t_couleur_panneau columns ht_min, ht_max, cod_aspect, and\n> cod_gamme_panneau. Because I think the planner is thrown off because the\n> sequential scan on lm05_t_couleur_panneau returns 280 rows when it expects\n> 1. Maybe to start you could just SET default_statistics_target=1000,\n> analyze everything, and see if that makes any difference.\n>\n> Dave\n>\n>\n> \n\n\nCe message et toutes les pièces jointes sont établis à l'attention exclusive de leurs destinataires et sont confidentiels. Si vous recevez ce message par erreur, merci de le détruire et d'en avertir immédiatement l'expéditeur. 
L'internet ne permettant pas d'assurer l'intégrité de ce message, le contenu de ce message ne représente en aucun cas un engagement de la part de Leroy Merlin.\n\n", "msg_date": "Tue, 13 Mar 2007 15:04:49 +0100", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Execution plan changed after upgrade from 7.3.9 to\n 8.2.3" }, { "msg_contents": "All planner types were enabled.\n\nCCM=# select * from pg_settings where name like 'enable_%';\n name | setting | unit | category | short_desc | extra_desc | context | vartype | source | min_val | max_val \n-------------------+---------+------+---------------------------------------------+--------------------------------------------------------+------------+---------+---------+---------+---------+---------\n enable_bitmapscan | on | | Query Tuning / Planner Method Configuration | Enables the planner's use of bitmap-scan plans. | | user | bool | default | | \n enable_hashagg | on | | Query Tuning / Planner Method Configuration | Enables the planner's use of hashed aggregation plans. | | user | bool | default | | \n enable_hashjoin | on | | Query Tuning / Planner Method Configuration | Enables the planner's use of hash join plans. | | user | bool | default | | \n enable_indexscan | on | | Query Tuning / Planner Method Configuration | Enables the planner's use of index-scan plans. | | user | bool | default | | \n enable_mergejoin | on | | Query Tuning / Planner Method Configuration | Enables the planner's use of merge join plans. | | user | bool | default | | \n enable_nestloop | on | | Query Tuning / Planner Method Configuration | Enables the planner's use of nested-loop join plans. | | user | bool | default | | \n enable_seqscan | on | | Query Tuning / Planner Method Configuration | Enables the planner's use of sequential-scan plans. | | user | bool | default | | \n enable_sort | on | | Query Tuning / Planner Method Configuration | Enables the planner's use of explicit sort steps. | | user | bool | default | | \n enable_tidscan | on | | Query Tuning / Planner Method Configuration | Enables the planner's use of TID scan plans. 
| | user | bool | default | | \n(9 rows)\n\n\n\nI was able to improve response time by seting enable_seqscan to off\n\nHere is the new analyze result :\n\nCCM=# explain ANALYZE SELECT distinct C.cod_couleur_panneau, \nC.cod_couleur_panneau, cast ('LM05' as varchar), cast ('OMM_TEINTE' as \nvarchar), cast ('IM' as varchar) FROM lm05_t_modele AS A, \nlm05_t_couleur_panneau AS C, lm05_t_infos_modele AS D, \nlm05_t_tarif_panneau AS G , lm05_t_composition AS E , \nlm05_t_couleur_profile AS F , cm_gestion_modele_ca as H, \nmag_gestion_modele_mag as I WHERE A.cod_type_ouverture = 'OUV_COU' AND \nA.cod_type_panneau = 'PAN_MEL' AND A.cod_modele = C.cod_modele AND \nA.cod_modele = D.cod_modele AND A.cod_modele = G.cod_modele AND \nG.cod_tarif_panneau = C.cod_tarif_panneau AND A.cod_modele = \nE.cod_modele AND nb_vantaux >= 2 AND A.cod_modele = F.cod_modele AND \nF.couleur_profile = 'acajou mat' AND F.cod_tarif_profile = \nG.cod_tarif_profile AND A.cod_fournisseur = '5132' AND A.cod_gamme_prof \n= 'Design Xtra' AND C.ht_min < 2000 AND C.ht_max >= 2000 AND \nD.largeur_maxi_rail >= 1000 AND C.cod_aspect = 'tons bois et cuirs' AND \nC.cod_gamme_panneau = 'BOIS et CUIR XTRA 3' AND ((G.lrg_min < 1000 AND \nG.lrg_max >= 1000) OR (G.lrg_min < 500 AND G.lrg_max >= 500) OR \n(G.lrg_min < 333.333333333333 AND G.lrg_max >= 333.333333333333) OR \n(G.lrg_min < 250 AND G.lrg_max >= 250) OR (G.lrg_min < 200 AND G.lrg_max \n >= 200) OR (G.lrg_min < 166.666666666667 AND G.lrg_max >= \n166.666666666667) OR (G.lrg_min < 142.857142857143 AND G.lrg_max >= \n142.857142857143) OR (G.lrg_min < 125 AND G.lrg_max >= 125) OR \n(G.lrg_min < 111.111111111111 AND G.lrg_max >= 111.111111111111) OR \n(G.lrg_min < 100 AND G.lrg_max >= 100)) AND H.idmagasin = '011' AND \nH.idoav='PC_PLACARD' AND H.cod_modele = A.cod_modele AND H.autorise = 1 \nAND I.idmagasin = '011' AND I.idoav='PC_PLACARD' AND I.cod_modele = \nA.cod_modele AND I.selection = 1;\n\n\nQUERY PLAN\n\n\n-----------------------------------------------------------------------------------------------------------------------------------------------\n-----------------------------------------------------------------------------------------------------------------------------------------------\n-----------------------------------------------------------------------------------------------------------------------------------------------\n-----------------------------------------------------------------------------------------------------------------------------------------------\n----------------------------------------------------------------------\nUnique (cost=700005413.95..700005413.97 rows=1 width=32) (actual \ntime=1232.497..1234.961 rows=140 loops=1)\n-> Sort (cost=700005413.95..700005413.96 rows=1 width=32) (actual \ntime=1232.494..1233.231 rows=1400 loops=1)\nSort Key: c.cod_couleur_panneau, c.cod_couleur_panneau, \n'LM05'::character varying, 'OMM_TEINTE'::character varying, \n'IM'::character va\nrying\n-> Hash Join (cost=700002228.09..700005413.94 rows=1 width=32) (actual \ntime=1192.211..1204.675 rows=1400 loops=1)\nHash Cond: ((g.cod_modele = a.cod_modele) AND \n((g.cod_tarif_profile)::text = (f.cod_tarif_profile)::text) AND \n((g.cod_tarif_pann\neau)::text = (c.cod_tarif_panneau)::text))\n-> Seq Scan on lm05_t_tarif_panneau g (cost=100000000.00..100002977.08 \nrows=18557 width=44) (actual time=0.038..69.017 rows=40\n62 loops=1)\nFilter: (((lrg_min < 1000) AND (lrg_max >= 1000)) OR ((lrg_min < 500) \nAND (lrg_max >= 500)) OR (((lrg_min)::numeric < 333.\n333333333333) 
AND ((lrg_max)::numeric >= 333.333333333333)) OR ((lrg_min \n< 250) AND (lrg_max >= 250)) OR ((lrg_min < 200) AND (lrg_max >= 200))\nOR (((lrg_min)::numeric < 166.666666666667) AND ((lrg_max)::numeric >= \n166.666666666667)) OR (((lrg_min)::numeric < 142.857142857143) AND ((lr\ng_max)::numeric >= 142.857142857143)) OR ((lrg_min < 125) AND (lrg_max \n >= 125)) OR (((lrg_min)::numeric < 111.111111111111) AND ((lrg_max)::num\neric >= 111.111111111111)) OR ((lrg_min < 100) AND (lrg_max >= 100)))\n-> Hash (cost=600002228.07..600002228.07 rows=1 width=104) (actual \ntime=1129.717..1129.717 rows=700 loops=1)\n-> Nested Loop (cost=600001665.30..600002228.07 rows=1 width=104) \n(actual time=43.012..1127.646 rows=700 loops=1)\nJoin Filter: (a.cod_modele = e.cod_modele)\n-> Nested Loop (cost=500001665.30..500002206.08 rows=1 width=100) \n(actual time=42.246..1020.245 rows=140 loops=1)\nJoin Filter: (a.cod_modele = d.cod_modele)\n-> Nested Loop (cost=400001665.30..400002197.96 rows=1 width=96) (actual \ntime=42.032..986.021 rows=140 loops\n=1)\nJoin Filter: (h.cod_modele = a.cod_modele)\n-> Nested Loop (cost=300001665.30..300001684.62 rows=1 width=92) (actual \ntime=35.244..83.822 rows=140\nloops=1)\nJoin Filter: (i.cod_modele = a.cod_modele)\n-> Nested Loop (cost=200001665.30..200001673.78 rows=1 width=88) (actual \ntime=34.916..39.601 row\ns=140 loops=1)\n-> Merge Join (cost=200001665.30..200001665.49 rows=1 width=84) (actual \ntime=34.806..36.35\n2 rows=280 loops=1)\nMerge Cond: (c.cod_modele = f.cod_modele)\n-> Sort (cost=100001565.64..100001565.65 rows=4 width=62) (actual \ntime=32.859..33.07\n7 rows=280 loops=1)\nSort Key: c.cod_modele\n-> Seq Scan on lm05_t_couleur_panneau c (cost=100000000.00..100001565.60 \nrows=\n4 width=62) (actual time=27.553..32.501 rows=280 loops=1)\nFilter: ((ht_min < 2000) AND (ht_max >= 2000) AND ((cod_aspect)::text = 't\nons bois et cuirs'::text) AND ((cod_gamme_panneau)::text = 'BOIS et CUIR \nXTRA 3'::text))\n-> Sort (cost=100000099.66..100000099.74 rows=32 width=22) (actual \ntime=1.909..2.188\nrows=308 loops=1)\nSort Key: f.cod_modele\n-> Seq Scan on lm05_t_couleur_profile f (cost=100000000.00..100000098.86 \nrows=\n32 width=22) (actual time=1.268..1.828 rows=32 loops=1)\nFilter: ((couleur_profile)::text = 'acajou mat'::text)\n-> Index Scan using lm05_t_modele_cod_modele_key on lm05_t_modele a \n(cost=0.00..8.28 rows=\n1 width=4) (actual time=0.007..0.009 rows=0 loops=280)\nIndex Cond: (a.cod_modele = c.cod_modele)\nFilter: (((cod_type_ouverture)::text = 'OUV_COU'::text) AND \n((cod_type_panneau)::text\n= 'PAN_MEL'::text) AND (cod_fournisseur = 5132) AND \n((cod_gamme_prof)::text = 'Design Xtra'::text))\n-> Seq Scan on mag_gestion_modele_mag i (cost=100000000.00..100000008.78 \nrows=165 width=4) (actu\nal time=0.056..0.220 rows=165 loops=140)\nFilter: (((idmagasin)::text = '011'::text) AND ((idoav)::text = \n'PC_PLACARD'::text) AND (sel\nection = 1))\n-> Seq Scan on cm_gestion_modele_ca h (cost=100000000.00..100000511.28 \nrows=165 width=4) (actual time=\n0.031..6.343 rows=165 loops=140)\nFilter: (((idmagasin)::text = '011'::text) AND ((idoav)::text = \n'PC_PLACARD'::text) AND (autorise\n= 1))\n-> Seq Scan on lm05_t_infos_modele d (cost=100000000.00..100000006.06 \nrows=165 width=4) (actual time=0.009..\n0.149 rows=165 loops=140)\nFilter: (largeur_maxi_rail >= 1000)\n-> Seq Scan on lm05_t_composition e (cost=100000000.00..100000014.83 \nrows=573 width=4) (actual time=0.007..0.445 r\nows=573 loops=140)\nFilter: (nb_vantaux >= 2)\nTotal runtime: 1235.660 ms\n(39 
rows)\n\n\n\n\n\nAlvaro Herrera wrote:\n> [email protected] wrote:\n> \n>> Here it is :\n>>\n>> CCM=# SHOW enable_mergejoin;\n>> enable_mergejoin\n>> ------------------\n>> on\n>> (1 row)\n>> \n>\n> Sorry, my question was more general. Do you have _any_ of the planner\n> types disabled? Try also enable_indexscan, etc; maybe\n>\n> select * from pg_settings where name like 'enable_%';\n>\n> \n\n\nCe message et toutes les pièces jointes sont établis à l'attention exclusive de leurs destinataires et sont confidentiels. Si vous recevez ce message par erreur, merci de le détruire et d'en avertir immédiatement l'expéditeur. L'internet ne permettant pas d'assurer l'intégrité de ce message, le contenu de ce message ne représente en aucun cas un engagement de la part de Leroy Merlin.\n\n", "msg_date": "Tue, 13 Mar 2007 15:05:29 +0100", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Execution plan changed after upgrade from 7.3.9 to\n 8.2.3" }, { "msg_contents": "> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> [email protected]\n> Subject: Re: [PERFORM] Execution plan changed after upgrade \n> from 7.3.9 to 8.2.3\n> \n> \n> Increasing the default_statistics_target to 1000 did not help.\n> It just make the vacuum full analyze to take longer to complete.\n\nJust FYI when you change statistics you only need to run ANALYZE, not VACUUM\nANALYZE, and definetly not VACUUM FULL ANALYZE.\n\nI don't know what else to suggest for this query since you can't change the\nSQL. I would talk to the vendor and ask them to add indexes if you know\nthat helps.\n\nDave\n\n", "msg_date": "Tue, 13 Mar 2007 09:48:41 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Execution plan changed after upgrade from 7.3.9 to 8.2.3" }, { "msg_contents": "<[email protected]> writes:\n> I was able to improve response time by seting enable_seqscan to off\n\nenable_nestloop = off would probably be a saner choice, at least for\nthis particular query.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Mar 2007 11:06:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Execution plan changed after upgrade from 7.3.9 to 8.2.3 " }, { "msg_contents": "Thanks for the advice Tom !\n\nSetting enable_nestloop = off did improve the query a much better way \nthan setting enable_seqscan to off.\n\nIt does not screw the costs either (I had very odd costs with \nenable_seqscan to off like this : Nested Loop \n(cost=400001665.30..400002197.96 rows=1 width=96)\n\nIs there a \"performance risk\" to have enable_nestloop = off for other \nqueries ?\n\nIf I had the choice, should I go for index creation for the specific \ntables or should I tweak the optimizer with enable_nestloop = off ?\n\n\nThanks again to all of you for your help.\n\nBest Regards,\nVincent\n\nTom Lane wrote:\n> <[email protected]> writes:\n> \n>> I was able to improve response time by seting enable_seqscan to off\n>> \n>\n> enable_nestloop = off would probably be a saner choice, at least for\n> this particular query.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n> \n\n\nCe message et toutes les pi�ces jointes sont �tablis � l'attention exclusive de leurs destinataires et sont confidentiels. 
Si vous recevez ce message par erreur, merci de le d�truire et d'en avertir imm�diatement l'exp�diteur. L'internet ne permettant pas d'assurer l'int�grit� de ce message, le contenu de ce message ne repr�sente en aucun cas un engagement de la part de Leroy Merlin.\n\n", "msg_date": "Tue, 13 Mar 2007 16:46:34 +0100", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Execution plan changed after upgrade from 7.3.9 to\n 8.2.3" } ]
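Two hedged ways to apply Tom's enable_nestloop suggestion from the thread above without turning it off cluster-wide. The database name below is taken from the psql prompt shown in the thread and may need adjusting, and option 1 only applies if the statement can be wrapped in a transaction, which the third-party application may not allow.

-- Option 1: scope the override to a single transaction; every other query
-- keeps the default planner behaviour.
BEGIN;
SET LOCAL enable_nestloop = off;
-- ... run the slow generated SELECT here ...
COMMIT;

-- Option 2: attach the setting to the application's database (or to its login
-- role via ALTER ROLE ... SET) instead of editing postgresql.conf; it takes
-- effect for new connections only and leaves other databases untouched.
ALTER DATABASE "CCM" SET enable_nestloop = off;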
[ { "msg_contents": "\nFolks !\n\nI have a batch application that writes approx. 4 million rows into a narrow\ntable. I am using JDBC addBatch/ExecuteBatch with auto commit turned off.\nBatch size is 100. So far I am seeing Postgres take roughly five times the\ntime it takes to do this in the Oracle. \n\nI have played with many parameters. Only one that seems to have any affect\nis fsync - but thats only 10% or so. \nInitially I got the warning that checkpoints were happening too often so I\nincreased the segments to 24. Warnings stopped, but no real improvement in\nperformance.\n\nIs postgres really that slow ? What am I missing ? \n\nHere are the changes to my postgressql.cong file. \n\nshared_buffers = 768MB\nwork_mem = 256MB \nmaintenance_work_mem = 128MB\nfsync = off \n\ncheckpoint_segments = 24\nautovacuum = on \n\nThank you,\n\n-Sanjay\n-- \nView this message in context: http://www.nabble.com/Postgres-batch-write-very-slow---what-to-do-tf3395195.html#a9452092\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Tue, 13 Mar 2007 03:55:12 -0700 (PDT)", "msg_from": "femski <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres batch write very slow - what to do" }, { "msg_contents": "femski wrote:\n> I have a batch application that writes approx. 4 million rows into a narrow\n> table. I am using JDBC addBatch/ExecuteBatch with auto commit turned off.\n> Batch size is 100. So far I am seeing Postgres take roughly five times the\n> time it takes to do this in the Oracle. \n\nThe usual tricks are:\n- Drop indexes before loading, and rebuild them afterwards.\n- Disable foreign key references while loading\n- Use COPY instead of JDBC\n\nIs the loading I/O or CPU bound? How long does the load actually take?\n\nAre you running the latest version of PostgreSQL and the JDBC driver?\n\nHave you read the \"Populating a Database\" chapter in the manual: \nhttp://www.postgresql.org/docs/8.2/interactive/populate.html\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 13 Mar 2007 11:06:31 +0000", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres batch write very slow - what to do" }, { "msg_contents": "On 3/13/07, femski <[email protected]> wrote:\n>\n> Folks !\n>\n> I have a batch application that writes approx. 4 million rows into a narrow\n> table. I am using JDBC addBatch/ExecuteBatch with auto commit turned off.\n> Batch size is 100. So far I am seeing Postgres take roughly five times the\n> time it takes to do this in the Oracle.\n>\n> I have played with many parameters. Only one that seems to have any affect\n> is fsync - but thats only 10% or so.\n> Initially I got the warning that checkpoints were happening too often so I\n> increased the segments to 24. Warnings stopped, but no real improvement in\n> performance.\n>\n> Is postgres really that slow ? What am I missing ?\n\nhow many inserts/sec are you getting approximately. Maybe you have\nsome type of network issue.\n\nmerlin\n", "msg_date": "Tue, 13 Mar 2007 07:40:49 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres batch write very slow - what to do" }, { "msg_contents": "\nI am runing Postgres 8.2 on OpenSuse 10.2 with latest jdbc driver. I moved\nthe app to be collocated with the server. Oracle takes 60 sec. Postgres 275\nsec. 
For 4.7 million rows.\n\nThere are 4 CPUs on the server and one is runing close to 100% during\ninserts.\nNetwork history shows spikes of upto 60% of the bandwidth (Gnome System\nmonitor graph). I have a gigabit card - but should not enter into picture\nsince its on local host.\n\nthanks,\n\n-Sanjay\n-- \nView this message in context: http://www.nabble.com/Postgres-batch-write-very-slow---what-to-do-tf3395195.html#a9453712\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Tue, 13 Mar 2007 05:56:23 -0700 (PDT)", "msg_from": "femski <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres batch write very slow - what to do" }, { "msg_contents": "femski <[email protected]> writes:\n> I am runing Postgres 8.2 on OpenSuse 10.2 with latest jdbc driver. I moved\n> the app to be collocated with the server. Oracle takes 60 sec. Postgres 275\n> sec. For 4.7 million rows.\n\n> There are 4 CPUs on the server and one is runing close to 100% during\n> inserts.\n> Network history shows spikes of upto 60% of the bandwidth (Gnome System\n> monitor graph).\n\nIt sounds like you're incurring a network round trip for each row, which\nwill be expensive even for a co-located application. Perhaps Oracle's\nJDBC driver is smart enough to avoid that. I'm not sure what tricks are\navailable for bulk loading with our JDBC driver --- the page Heikki\nmentioned explains things from a server perspective but I dunno how that\ntranslates into JDBC. The folks who hang out on pgsql-jdbc could\nprobably give you some tips.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Mar 2007 11:29:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres batch write very slow - what to do " }, { "msg_contents": "Tom Lane wrote:\n> It sounds like you're incurring a network round trip for each row, which\n> will be expensive even for a co-located application. Perhaps Oracle's\n> JDBC driver is smart enough to avoid that. I'm not sure what tricks are\n> available for bulk loading with our JDBC driver --- the page Heikki\n> mentioned explains things from a server perspective but I dunno how that\n> translates into JDBC. The folks who hang out on pgsql-jdbc could\n> probably give you some tips.\n\nOP said he's using addBatch/executeBatch with a batch size of 100. The \nJDBC driver sends the whole batch before waiting for responses.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 13 Mar 2007 15:56:11 +0000", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres batch write very slow - what to do" }, { "msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> OP said he's using addBatch/executeBatch with a batch size of 100. 
The \n> JDBC driver sends the whole batch before waiting for responses.\n\nPerhaps a bit of checking with a packet sniffer would be warranted.\nIf it's really working like that he shouldn't see the network utilization\nload he reported ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Mar 2007 12:02:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres batch write very slow - what to do " }, { "msg_contents": "\nOk, I turned off XDMCP and network bandwidth utilization dropped to less than\n5%.\nTimings remained the same.\n\nCuriously five times faster time for Oracle came from a client running on a\ndifferent host than the server.\nTo make things worse for Postgres, when I replace \"hostname\" in jdbc string\nto \"localhost\" or 127.0.0.1\nit runs another 60% slower (446 sec vs 275 sec). Strange.\n\nBefore I take this discussion to jdbc list, why is CPU utilization 100%\nduring insert ? could that be a bottleneck. How to eliminate it ? These are\nIntel WordCrest 5110 Xeon cores.\n\nthank you\n\n-Sanjay\n\n\nfemski wrote:\n> \n> I am runing Postgres 8.2 on OpenSuse 10.2 with latest jdbc driver. I moved\n> the app to be collocated with the server. Oracle takes 60 sec. Postgres\n> 275 sec. For 4.7 million rows.\n> \n> There are 4 CPUs on the server and one is runing close to 100% during\n> inserts.\n> Network history shows spikes of upto 60% of the bandwidth (Gnome System\n> monitor graph). I have a gigabit card - but should not enter into picture\n> since its on local host.\n> \n> thanks,\n> \n> -Sanjay\n> \n\n-- \nView this message in context: http://www.nabble.com/Postgres-batch-write-very-slow---what-to-do-tf3395195.html#a9473692\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Wed, 14 Mar 2007 05:46:55 -0700 (PDT)", "msg_from": "femski <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres batch write very slow - what to do" }, { "msg_contents": "On 3/14/07, femski <[email protected]> wrote:\n>\n> Ok, I turned off XDMCP and network bandwidth utilization dropped to less than\n> 5%.\n> Timings remained the same.\n>\n> Curiously five times faster time for Oracle came from a client running on a\n> different host than the server.\n> To make things worse for Postgres, when I replace \"hostname\" in jdbc string\n> to \"localhost\" or 127.0.0.1\n> it runs another 60% slower (446 sec vs 275 sec). Strange.\n>\n> Before I take this discussion to jdbc list, why is CPU utilization 100%\n> during insert ? could that be a bottleneck. How to eliminate it ? These are\n> Intel WordCrest 5110 Xeon cores.\n\nwhen loading to oracle, does it utilize more than one core? istm your\nbest bet would be to split load process to 4+ backends...\n\nmerlin\n", "msg_date": "Thu, 15 Mar 2007 06:22:19 +0530", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres batch write very slow - what to do" }, { "msg_contents": "\nI am using Oracle XE so its using only one core and doing just fine.\nHow do I split backend to 4+ processes ?\nI don't want to write a multithreaded loader app.\nAnd I didn't think Postgres can utilize multiple cores for the \nsame insert statement.\n\nthanks,\n\n-Sanjay\n\n\nOn 3/14/07, femski <[email protected]> wrote:\n\nwhen loading to oracle, does it utilize more than one core? 
istm your\nbest bet would be to split load process to 4+ backends...\n\nmerlin\n\n\n-- \nView this message in context: http://www.nabble.com/Postgres-batch-write-very-slow---what-to-do-tf3395195.html#a9486986\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Wed, 14 Mar 2007 18:24:36 -0700 (PDT)", "msg_from": "femski <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres batch write very slow - what to do" }, { "msg_contents": "On 3/15/07, femski <[email protected]> wrote:\n>\n> I am using Oracle XE so its using only one core and doing just fine.\n> How do I split backend to 4+ processes ?\n> I don't want to write a multithreaded loader app.\n> And I didn't think Postgres can utilize multiple cores for the\n> same insert statement.\n\nwell, what sql is the jdbc driver creating exactly? It is probably\nrunning inserts in a transaction. your load is about 17k inserts/sec\nwhich about right for postgres on your hardware. you have the\nfollowing options to play increase insert performance:\n\n* tweak postgresql.conf\n fsync: off it is not already\n wal_segments: bump to at least 24 or so\n maintenance_work_mem: if you create key after insert, bump this high\n(it speeds create index)\n bgwriter settings: you can play with these, try disabling bgwriter\nfirst (maxpages=0)\n full_page_writes=off might help, not 100% sure about this\n\n* distribute load\n make load app multi thrreaded.\n\n* use copy for bulk load\n [is there a way to make jdbc driver do this?]\n\n* use multi-line inserts (at least 10 rows/insert)...nearly as fast as copy\n\n* if jdbc driver is not already doing so, prepare your statements and execute.\n\nmerlin\n", "msg_date": "Thu, 15 Mar 2007 07:56:20 +0530", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres batch write very slow - what to do" }, { "msg_contents": "\nI tried maxpages = 0 and full_page_writes=off and it seemed to be taking\nforever.\nAll other tricks I have already tried.\n\nAt this point I wondering if its a jdbc client side issue - I am using the\nlatest 8.1.\n(as I said in an earlier post - I am using addBatch with batch size of 100).\nBut just in case - I am missing something.\n\nIf 17k record/sec is right around expected then I must say I am little\ndisappointed from the \"most advanced open source database\".\n\nthanks for all your help.\n\n-Sanjay\n\n\nMerlin Moncure-2 wrote:\n> \n> On 3/15/07, femski <[email protected]> wrote:\n> well, what sql is the jdbc driver creating exactly? It is probably\n> running inserts in a transaction. your load is about 17k inserts/sec\n> which about right for postgres on your hardware. 
you have the\n> following options to play increase insert performance:\n> \n> * tweak postgresql.conf\n> fsync: off it is not already\n> wal_segments: bump to at least 24 or so\n> maintenance_work_mem: if you create key after insert, bump this high\n> (it speeds create index)\n> bgwriter settings: you can play with these, try disabling bgwriter\n> first (maxpages=0)\n> full_page_writes=off might help, not 100% sure about this\n> \n> * distribute load\n> make load app multi thrreaded.\n> \n> * use copy for bulk load\n> [is there a way to make jdbc driver do this?]\n> \n> * use multi-line inserts (at least 10 rows/insert)...nearly as fast as\n> copy\n> \n> * if jdbc driver is not already doing so, prepare your statements and\n> execute.\n> \n> merlin\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n> \n\n-- \nView this message in context: http://www.nabble.com/Postgres-batch-write-very-slow---what-to-do-tf3395195.html#a9492938\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Thu, 15 Mar 2007 04:52:53 -0700 (PDT)", "msg_from": "femski <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres batch write very slow - what to do" }, { "msg_contents": "femski <[email protected]> writes:\n> If 17k record/sec is right around expected then I must say I am little\n> disappointed from the \"most advanced open source database\".\n\nWell, the software is certainly capable of much more than that;\nfor instance, on a not-too-new Dell x86_64 machine:\n\nregression=# \\timing\nTiming is on.\nregression=# create table t1(f1 int);\nCREATE TABLE\nTime: 3.614 ms\nregression=# insert into t1 select * from generate_series(1,1000000);\nINSERT 0 1000000\nTime: 3433.483 ms\n\nwhich works out to something a little shy of 300K rows/sec. Of course\nthe main difference from what I think you're trying to do is the lack of\nany per-row round trips to the client code. But you need to look into\nwhere the bottleneck is, not just assume it's insoluble.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Mar 2007 22:15:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres batch write very slow - what to do " }, { "msg_contents": "On 3/15/07, femski <[email protected]> wrote:\n>\n> I tried maxpages = 0 and full_page_writes=off and it seemed to be taking\n> forever.\n> All other tricks I have already tried.\n>\n> At this point I wondering if its a jdbc client side issue - I am using the\n> latest 8.1.\n> (as I said in an earlier post - I am using addBatch with batch size of 100).\n> But just in case - I am missing something.\n>\n> If 17k record/sec is right around expected then I must say I am little\n> disappointed from the \"most advanced open source database\".\n\nBe careful...you are just testing one very specific thing and it its\nextremely possible that the Oracle JDBC batch insert is more optimized\nthan PostgreSQL's. On my little pentium 4 workstation, by inserting\n10 rows per insert:\ninsert values ([...]), ([...]), [8 more rows];\n\nI got a 5x speedup in insert performance using this feature (which is\nunfortunately new for 8.2). Oracle is most likely pulling similar\ntricks inside the driver. 
PostgreSQL is much faster than you think...\n\nmerlin\n", "msg_date": "Fri, 16 Mar 2007 07:48:22 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres batch write very slow - what to do" }, { "msg_contents": "On 3/16/07, Merlin Moncure <[email protected]> wrote:\n> Be careful...you are just testing one very specific thing and it its\n> extremely possible that the Oracle JDBC batch insert is more optimized\n> than PostgreSQL's. On my little pentium 4 workstation, by inserting\n> 10 rows per insert:\n> insert values ([...]), ([...]), [8 more rows];\n\nsmall correction here, I actually went and looked at the JDBC api and\nrealized 'addBatch' means to run multiple stmts at once, not batch\ninserting. femski, your best bet is to lobby the JDBC folks to build\nsupport for 'copy' into the driver for faster bulk loads (or help out\nin that regard).\n\nmerlin\n", "msg_date": "Fri, 16 Mar 2007 08:52:32 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres batch write very slow - what to do" }, { "msg_contents": "femski wrote:\n> Folks !\n> \n> I have a batch application that writes approx. 4 million rows into a narrow\n> table. I am using JDBC addBatch/ExecuteBatch with auto commit turned off.\n> Batch size is 100. So far I am seeing Postgres take roughly five times the\n> time it takes to do this in the Oracle. \n\nIf you are using 8.2 could you try with the multi value inserts?\n\ninsert into foo(bar) values (bang) (bong) (bing) ...?\n\n> \n> I have played with many parameters. Only one that seems to have any affect\n> is fsync - but thats only 10% or so. \n> Initially I got the warning that checkpoints were happening too often so I\n> increased the segments to 24. Warnings stopped, but no real improvement in\n> performance.\n> \n> Is postgres really that slow ? What am I missing ? \n> \n> Here are the changes to my postgressql.cong file. \n> \n> shared_buffers = 768MB\n> work_mem = 256MB \n> maintenance_work_mem = 128MB\n> fsync = off \n> \n> checkpoint_segments = 24\n> autovacuum = on \n> \n> Thank you,\n> \n> -Sanjay\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Fri, 16 Mar 2007 07:00:47 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres batch write very slow - what to do" }, { "msg_contents": "Carlos Moreno wrote:\n> Joshua D. Drake wrote:\n> \n>> insert into foo(bar) values (bang) (bong) (bing) ...?\n>>\n>> \n>>\n> \n> Nit pick (with a \"correct me if I'm wrong\" disclaimer :-)) :\n> \n> Wouldn't that be (bang), (bong), (bing) .... ??\n\nYes.\n\nJ\n\n\n> \n> Carlos\n> -- \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. 
===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Fri, 16 Mar 2007 10:19:36 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres batch write very slow - what to do" }, { "msg_contents": "Joshua D. Drake wrote:\n\n>insert into foo(bar) values (bang) (bong) (bing) ...?\n>\n> \n>\n\nNit pick (with a \"correct me if I'm wrong\" disclaimer :-)) :\n\nWouldn't that be (bang), (bong), (bing) .... ??\n\nCarlos\n--\n\n", "msg_date": "Fri, 16 Mar 2007 13:13:31 -0500", "msg_from": "Carlos Moreno <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres batch write very slow - what to do" }, { "msg_contents": "On 3/13/07, femski <[email protected]> wrote:\n> I have a batch application that writes approx. 4 million rows into a narrow\n> table. I am using JDBC addBatch/ExecuteBatch with auto commit turned off.\n> Batch size is 100. So far I am seeing Postgres take roughly five times the\n> time it takes to do this in the Oracle.\n\nyou can try to use pg_bulkload.\nsince it is called as standard function you shouldn't have problems\nwith jdbc. and it's apparently fast.\n\ndepesz\n\nhttp://pgfoundry.org/projects/pgbulkload/\n", "msg_date": "Sat, 17 Mar 2007 05:52:39 +0100", "msg_from": "\"hubert depesz lubaczewski\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres batch write very slow - what to do" } ]
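A concrete illustration of the multi-row VALUES form Merlin and Joshua suggest in the thread above (new in PostgreSQL 8.2, which the original poster is running); the table name and values here are placeholders, not the poster's actual schema:

    -- hypothetical narrow table, for illustration only
    CREATE TABLE foo (bar integer);

    -- one statement, one round trip, ten rows (8.2+ multi-row VALUES)
    INSERT INTO foo (bar)
    VALUES (1), (2), (3), (4), (5), (6), (7), (8), (9), (10);

Because each statement carries many rows, the per-statement round trip and parse overhead is amortized, which is where the roughly 5x speedup Merlin measured on his workstation comes from.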
[ { "msg_contents": "Howdy-\n\nI am currently using PostgreSQL to store and process a high-bandwidth\nevent stream. I do not need old events but the delete and vacuum does\nnot terminate due to the large number of events being inserted (it\njust pushes me over the tipping point of where the machine can keep up\nwith the events).\n\nI ended up implementing a scheme where a trigger is used to redirect\nthe events (round robin based on time) to a series of identically\nstructured tables. I can then use TRUNCATE older tables rather than\nDELETE and VACUUM (which is a significant speed up).\n\nIt worked out pretty well so thought post the idea to find out if\n\n - it is stupid way of doing things and there is a correct database\n abstraction for doing this\n\nor\n\n - it is a reasonable way of solving this problem and might be of use\n to other folks using rdbs as event processing\n\nI then use a view to merge the tables. Obviously update would be a\nproblem for my purposes, and I suppose a lot of event processing, it\nisn't an issue.\n\nEither way, details are at:\n http://unsyntax.net/james/blog/tools+and+programming/2007/03/08/Dispatch-Merge-Database-Pattern\n\nCheers,\nJames\n", "msg_date": "Tue, 13 Mar 2007 14:36:47 +0100", "msg_from": "\"James Riordan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Dispatch-Merge pattern" }, { "msg_contents": "\nOn Mar 13, 2007, at 6:36 AM, James Riordan wrote:\n\n> Howdy-\n>\n> I am currently using PostgreSQL to store and process a high-bandwidth\n> event stream. I do not need old events but the delete and vacuum does\n> not terminate due to the large number of events being inserted (it\n> just pushes me over the tipping point of where the machine can keep up\n> with the events).\n>\n> I ended up implementing a scheme where a trigger is used to redirect\n> the events (round robin based on time) to a series of identically\n> structured tables. I can then use TRUNCATE older tables rather than\n> DELETE and VACUUM (which is a significant speed up).\n>\n> It worked out pretty well so thought post the idea to find out if\n>\n> - it is stupid way of doing things and there is a correct database\n> abstraction for doing this\n>\n> or\n>\n> - it is a reasonable way of solving this problem and might be of use\n> to other folks using rdbs as event processing\n>\n> I then use a view to merge the tables. Obviously update would be a\n> problem for my purposes, and I suppose a lot of event processing, it\n> isn't an issue.\n>\n> Either way, details are at:\n> http://unsyntax.net/james/blog/tools+and+programming/2007/03/08/ \n> Dispatch-Merge-Database-Pattern\n\nI can't reach that URL, but from what you say it sounds like you've\nre-invented table partitioning.\n\nhttp://www.postgresql.org/docs/8.2/static/ddl-partitioning.html\n\nIf you do it the postgresql way you can also take advantage of\nconstraint exclusion, to speed up some selects on the set of\npartitioned tables, and inheritance means you don't have to\nmaintain the union view yourself.\n\nCheers,\n Steve\n\n", "msg_date": "Thu, 15 Mar 2007 08:58:25 -0700", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dispatch-Merge pattern" } ]
[ { "msg_contents": "Hi,\n\n \n\nPlease forgive me if I missed something (I have been searching for a definitive answer for this for 2 days).\n\n \n\nIs there any way to disable autocommit in libpq? (PG 7.4.1)\n\n \n\nThanks\n\n \n\nMike\n\n \n\n\n\n\n\n\n\n\n\n\nHi,\n \nPlease forgive me if I missed something (I have been\nsearching for a definitive answer for this for 2 days).\n \nIs there any way to disable autocommit in libpq? (PG 7.4.1)\n \nThanks\n \nMike", "msg_date": "Tue, 13 Mar 2007 10:55:40 -0400", "msg_from": "\"Dengler, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "Autocommit in libpq" }, { "msg_contents": "Dengler, Michael wrote:\n> Please forgive me if I missed something (I have been searching for a definitive answer for this for 2 days).\n> \n> Is there any way to disable autocommit in libpq? (PG 7.4.1)\n\nJust call BEGIN to start a transaction, and COMMIT to commit it. Other \nthan that, no.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 13 Mar 2007 15:15:28 +0000", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autocommit in libpq" }, { "msg_contents": "Heikki Linnakangas wrote:\n> Dengler, Michael wrote:\n>> Please forgive me if I missed something (I have been searching for a\n>> definitive answer for this for 2 days).\n>>\n>> Is there any way to disable autocommit in libpq? (PG 7.4.1)\n> \n> Just call BEGIN to start a transaction, and COMMIT to commit it. Other\n> than that, no.\n> \n\nAnd very on topic, you need to upgrade ASAP to the latest 7.4.x.\n\nJoshua D. Drake\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Tue, 13 Mar 2007 08:29:00 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autocommit in libpq" }, { "msg_contents": "Thanks for the reply. Your advice to upgrade sounds urgent. Are there critical reasons I need to go to 7.4.16?\n\nThanks\n\nMike\n\n\n-----Original Message-----\nFrom: Joshua D. Drake [mailto:[email protected]] \nSent: March 13, 2007 11:29 AM\nTo: Heikki Linnakangas\nCc: Dengler, Michael; [email protected]\nSubject: Re: [PERFORM] Autocommit in libpq\n\nHeikki Linnakangas wrote:\n> Dengler, Michael wrote:\n>> Please forgive me if I missed something (I have been searching for a\n>> definitive answer for this for 2 days).\n>>\n>> Is there any way to disable autocommit in libpq? (PG 7.4.1)\n> \n> Just call BEGIN to start a transaction, and COMMIT to commit it. Other\n> than that, no.\n> \n\nAnd very on topic, you need to upgrade ASAP to the latest 7.4.x.\n\nJoshua D. Drake\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. 
===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Tue, 13 Mar 2007 12:03:47 -0400", "msg_from": "\"Dengler, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Autocommit in libpq" }, { "msg_contents": "\"Dengler, Michael\" <[email protected]> writes:\n> Thanks for the reply. Your advice to upgrade sounds urgent. Are there critical reasons I need to go to 7.4.16?\n\nRead the release notes between 7.4.1 and 7.4.16 and judge for yourself:\nhttp://developer.postgresql.org/pgdocs/postgres/release.html\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Mar 2007 12:13:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autocommit in libpq " } ]
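In other words, the client brackets its statements itself; a minimal sketch of what gets sent over the connection (the INSERTs are placeholders), whether issued through psql, libpq's PQexec, or any other interface:

    BEGIN;
    INSERT INTO t VALUES (1);
    INSERT INTO t VALUES (2);
    COMMIT;   -- or ROLLBACK; to discard everything since BEGIN

Statements outside an explicit BEGIN/COMMIT pair keep the usual one-transaction-per-statement behaviour.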
[ { "msg_contents": "Test...please ignore \n\n \n\nThanks\n\n \n\nMike\n\n\n\n\n\n\n\n\n\n\n\nTest…please ignore \n\n\n \n\n\nThanks\n\n\n \n\n\nMike", "msg_date": "Tue, 13 Mar 2007 14:22:20 -0400", "msg_from": "\"Dengler, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "test ...please ignore" } ]
[ { "msg_contents": "Hello guys,\n\nis it possible to determine dead tuples size for table?\n\n-- \nAlexey Romanchuk\n\n\n", "msg_date": "Thu, 15 Mar 2007 13:58:47 +0600", "msg_from": "Alexey Romanchuk <[email protected]>", "msg_from_op": true, "msg_subject": "Determine dead tuples size" }, { "msg_contents": "Try the contrib module pgstattuple.\n\n2007/3/15, Alexey Romanchuk <[email protected]>:\n> Hello guys,\n>\n> is it possible to determine dead tuples size for table?\n>\n> --\n> Alexey Romanchuk\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\n\n-- \nDaniel Cristian Cruz\nAnalista de Sistemas\n", "msg_date": "Thu, 15 Mar 2007 08:43:55 -0300", "msg_from": "\"Daniel Cristian Cruz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Determine dead tuples size" }, { "msg_contents": "On Thu, Mar 15, 2007 at 01:58:47PM +0600, Alexey Romanchuk wrote:\n> is it possible to determine dead tuples size for table?\n\nSee contrib/pgstattuple.\n\n-- \nMichael Fuhr\n", "msg_date": "Thu, 15 Mar 2007 05:45:08 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Determine dead tuples size" }, { "msg_contents": "Hello, Michael.\n\n> On Thu, Mar 15, 2007 at 01:58:47PM +0600, Alexey Romanchuk wrote:\n>> is it possible to determine dead tuples size for table?\n\n> See contrib/pgstattuple.\n\nthanks, i install contribs and try to analyze result of pgstattuple\nfunction and found it strange.\n\nHere it is output:\n pgstattuple\n----------------------------------------------------------------------\n (233242624,1186804,206555428,88.56,20707,3380295,1.45,13896816,5.96)\n\nWhen i try to sum all size (live, dead and free) the sum is not equal\ntotal size. For this table 206555428 + 3380295 + 13896816 = 223832539.\nThe difference between total and sum is 9410085. It is near 5%.\n\nIs it ok?\n\n-- \nAlexey Romanchuk\n\n\n", "msg_date": "Fri, 16 Mar 2007 11:41:50 +0600", "msg_from": "Alexey Romanchuk <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Determine dead tuples size" }, { "msg_contents": "Alexey Romanchuk wrote:\n> thanks, i install contribs and try to analyze result of pgstattuple\n> function and found it strange.\n\nTry \"SELECT * FROM pgstattuple('foo')\", that'll tell you what the \ncolumns are. Take a look at README.pgstattuple as well for more details.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Fri, 16 Mar 2007 09:35:20 +0000", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Determine dead tuples size" }, { "msg_contents": "Alexey Romanchuk <[email protected]> writes:\n> When i try to sum all size (live, dead and free) the sum is not equal\n> total size. For this table 206555428 + 3380295 + 13896816 = 223832539.\n> The difference between total and sum is 9410085. It is near 5%.\n\npgstattuple is a bit simplistic: it doesn't count the page headers or\nitem pointers at all. 
It looks to me like it also fails to consider\nthe effects of alignment padding --- if a tuple's length is shown as\n63, that's what it counts, even though the effective length is 64.\n(This might not be a problem in practice --- I'm not sure if the stored\nt_len has always been maxaligned or not.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Mar 2007 10:29:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Determine dead tuples size " } ]
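For reference, the unlabeled tuple in the post above maps onto pgstattuple's named columns; a quick way to see the accounting gap Tom describes (the table name is a placeholder):

    SELECT table_len,
           tuple_len,
           dead_tuple_len,
           free_space,
           table_len - (tuple_len + dead_tuple_len + free_space) AS unaccounted
    FROM pgstattuple('mytable');

With the figures quoted above, unaccounted comes out to 9410085 bytes, i.e. the page headers, item pointers and alignment padding that pgstattuple does not count.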
[ { "msg_contents": "On 3/16/07, Bob Dusek <[email protected]> wrote:\n> This may or may not be related to what you're seeing... but, when we\n> changed from Postgres 7.4.2 to 7.4.8, our batch processing slowed down\n> fairly significantly.\n>\n> Here's what we were doing:\n>\n> Step 1) Build a larg file full of SQL insert statements.\n> Step 2) Feed the file directly to \"psql\" using \"psql dbname <\n> insertfile\".\n>\n> The time of execution for step 2 seemed like it nearly doubled from\n> 7.4.2 to 7.4.8, for whatever reason (could have been the way Suse\n> compiled the binaries). Perhaps the slowdown was something we could\n> have/should have tweaked with config options.\n\n> At any rate, what we did to speed it up was to wrap the entire file in a\n> transaction, as such: \"BEGIN; ..filecontents.. COMMIT;\"\n>\n> Apparently the autocommit stuff in the version of 7.4.8 we were using\n> was just *doggedly* slow.\n>\n> Perhaps you're already using a transaction for your batch, though. Or,\n> maybe the problem isn't with Postgres. Just thought I'd share.\n\nIf you are inserting records one by one without transaction (and no\nfsync), i/o is going to determine your insertion speed. not really\nsure what was happening in your case...it looks like quite a different\ntype of issue from the OP.\n\nanyways, to the OP some quick googling regarding postgresql jdbc\ndriver showed that the batch insert case is just not as optimized (in\nthe driver) as it could be. The driver could do multi statement\ninserts or use the libpq copy api, either of which would result in\nhuge performance gain.\n\nmerlin\n", "msg_date": "Fri, 16 Mar 2007 08:30:40 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres batch write very slow - what to do" } ]
[ { "msg_contents": "Hi all.\n\nI'm experiencing a strange behaviour with 8.1.8 (cannot do upgrades to 8.2 at\nthe moment).\n\nOn a 13+ million rows table I can do a query with results back in less than\n100 ms. Result is a set of bigint.\nBut when I encapsulate that query into an \"SQL\" function with three parameters\nthe results come back in about one minute. The function contains just the same\nquery as above.\nOf course there's been no change in indices or even into the table itself\nbetween the two tests.\n\nI'm almost sure I'm missing something, but have no clue about what!\nAny hint?\n", "msg_date": "Sun, 18 Mar 2007 07:50:54 +0100", "msg_from": "Vincenzo Romano <[email protected]>", "msg_from_op": true, "msg_subject": "function call vs staright query" }, { "msg_contents": "Hi all again.\n\nI've seen that there has been another post similar to mine:\nhttp://archives.postgresql.org/pgsql-performance/2007-03/msg00166.php\n\nI understand that the query planner has less infos about the query\nat the time the function is defined / loaded.\nIn my case in the query there is a \"field like string\" expression that seems\nto be the performance killer.\nIf the string is 'SOMETING%' the straight query is fast. While '%SOMETING%'\nmakes the straight query be as slow as the function.\nThis thing clears part of the problem. The EXPLAIN actually explains a lot.\n\nBut the details should be complete at call time when the pattern string is\nknown. So at least the first case should have comparable performances for both\nthe straight query and the function call.\n\nSo my previous question becomes:\n\nHow can I delay the query planner decisions until the actual query is to be\ndone inside the function body?\n\nMany thanks again for any hint.\n\nOn Sunday 18 March 2007 07:50 Vincenzo Romano wrote:\n> Hi all.\n>\n> I'm experiencing a strange behaviour with 8.1.8 (cannot do upgrades to 8.2\n> at the moment).\n>\n> On a 13+ million rows table I can do a query with results back in less than\n> 100 ms. Result is a set of bigint.\n> But when I encapsulate that query into an \"SQL\" function with three\n> parameters the results come back in about one minute. The function contains\n> just the same query as above.\n> Of course there's been no change in indices or even into the table itself\n> between the two tests.\n>\n> I'm almost sure I'm missing something, but have no clue about what!\n> Any hint?\n\n-- \nVincenzo Romano\n----\nMaybe Computers will never become as intelligent as Humans.\nFor sure they won't ever become so stupid. [VR-1987]\n", "msg_date": "Sun, 18 Mar 2007 08:12:48 +0100", "msg_from": "Vincenzo Romano <[email protected]>", "msg_from_op": true, "msg_subject": "Re: function call vs staright query" }, { "msg_contents": "Vincenzo Romano <[email protected]> writes:\n> How can I delay the query planner decisions until the actual query is to be\n> done inside the function body?\n\nUse plpgsql's EXECUTE. AFAIR there is no way in a SQL-language function.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Mar 2007 00:07:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: function call vs staright query " }, { "msg_contents": "On Monday 19 March 2007 05:07 Tom Lane wrote:\n> Vincenzo Romano <[email protected]> writes:\n> > How can I delay the query planner decisions until the actual query is to\n> > be done inside the function body?\n>\n> Use plpgsql's EXECUTE. 
AFAIR there is no way in a SQL-language function.\n>\n> \t\t\tregards, tom lane\n\nThe body of a function is *always* treated by the planner as if it were a\ndynamically created query. The fact we all use the \"$$\"s (or also the 's)\naround the function body tells it all.\n\nThe PREPARE requires every session to do it upon connections, because prepared\nstatements are managed on a per-session basis.\n\nWhat I don't understand is why the planner gets passed by during those\nqueries while it is at full steam during the \"normal\" queries.\nBut this could be due to my ignorance! :-)\n\n-- \nVincenzo Romano\n----\nMaybe Computers will never become as intelligent as Humans.\nFor sure they won't ever become so stupid. [VR-1987]\n", "msg_date": "Mon, 19 Mar 2007 11:18:55 +0100", "msg_from": "Vincenzo Romano <[email protected]>", "msg_from_op": true, "msg_subject": "Re: function call vs staright query" } ]
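A minimal sketch of the EXECUTE approach Tom suggests, so the query is planned at call time with the actual pattern; the table, column and function names here are invented, not the original poster's schema:

    CREATE OR REPLACE FUNCTION find_ids(pat text) RETURNS SETOF bigint AS $$
    DECLARE
        r record;
    BEGIN
        -- the query string is built (and planned) on every call,
        -- so the planner sees the actual pattern, not a parameter
        FOR r IN EXECUTE 'SELECT id FROM big_table WHERE txt LIKE '
                         || quote_literal(pat)
        LOOP
            RETURN NEXT r.id;
        END LOOP;
        RETURN;
    END;
    $$ LANGUAGE plpgsql;

    -- SELECT * FROM find_ids('SOMETHING%');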
[ { "msg_contents": "I'm running in some weird (IMHO) bahviour.\nWhen I search a table for certain text (equality est on the relelvant field)\nit takes much more time than doing the same test by adding a trailing '%' and\nusing the LIKE operator.\nWith much more I mean 1000+ times slower.\n\nThis is the table (sorry for the Italian strings):\n\n----| PSQL |----\nnoa=# \\d ts_t_records\n Tabella \"public.ts_t_records\"\n Colonna | Tipo | \nModificatori\n---------------+--------------------------+----------------------------------------------------------------------\n fiel_uniqueid | bigint | not null\n item_uniqueid | bigint | not null\n reco_alphanum | text | not null default ''::text\n reco_floating | double precision | default 0.0\n reco_integral | bigint | default 0\n reco_timedate | timestamp with time zone | default now()\n reco_isactive | boolean | default true\n reco_effectiv | timestamp with time zone | default '-infinity'::timestamp \nwith time zone\n reco_uniqueid | bigint | not null default \nnextval('ts_t_records_reco_uniqueid_seq'::regclass)\nIndici:\n \"ts_i_records_0\" btree (item_uniqueid)\n \"ts_i_records_1\" btree (reco_uniqueid)\n \"ts_i_records_2\" btree (reco_isactive, reco_effectiv)\n \"ts_i_records_3\" btree (reco_alphanum)\n \"ts_i_records_4\" btree (fiel_uniqueid)\n----| /PSQL |----\n\nAnd these are the EXPLAINs for the queries:\n----| PSQL |----\nnoa=# EXPLAIN SELECT * FROM ts_t_records WHERE fiel_uniqueid=2 AND \nreco_alphanum='TEST' AND reco_isactive AND reco_effectiv<=NOW();\n QUERY PLAN\n------------------------------------------------------------------------------------------\n Bitmap Heap Scan on ts_t_records (cost=5110.50..6191.86 rows=277 width=65)\n Recheck Cond: ((reco_alphanum = 'TEST'::text) AND (fiel_uniqueid = 2))\n Filter: (reco_isactive AND (reco_effectiv <= now()))\n -> BitmapAnd (cost=5110.50..5110.50 rows=277 width=0)\n -> Bitmap Index Scan on ts_i_records_3 (cost=0.00..36.32 rows=5234 \nwidth=0)\n Index Cond: (reco_alphanum = 'TEST'::text)\n -> Bitmap Index Scan on ts_irecords_4 (cost=0.00..5073.93 \nrows=812550 width=0)\n Index Cond: (fiel_uniqueid = 2)\n(8 righe)\n\nnoa=# EXPLAIN SELECT * FROM ts_t_records WHERE fiel_uniqueid=2 AND \nreco_alphanum LIKE 'TEST%' AND reco_isactive AND reco_effectiv<=NOW();\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------\n Index Scan using ts_i_records_3 on ts_t_records (cost=0.00..6.01 rows=1 \nwidth=65)\n Index Cond: ((reco_alphanum >= 'TEST'::text) AND (reco_alphanum \n< 'TESU'::text))\n Filter: ((fiel_uniqueid = 2) AND (reco_alphanum ~~ 'TEST%'::text) AND \nreco_isactive AND (reco_effectiv <= now()))\n(3 righe)\n\n----| /PSQL |----\n\nNot only are query plans very different, but the equality query is much worse \nthan the pattern matching one.\n\nIn my (maybe wrong) mind I expected the reverse.\n\nWhat's wrong with the my expectations? Am I missing something?\n\nMTIA.\n\n-- \nVincenzo Romano\n----\nMaybe Computers will never become as intelligent as Humans.\nFor sure they won't ever become so stupid. [VR-1987]\n", "msg_date": "Sun, 18 Mar 2007 10:59:19 +0100", "msg_from": "Vincenzo Romano <[email protected]>", "msg_from_op": true, "msg_subject": "text equality worse than pattern matching (v8.1.8)" }, { "msg_contents": "On 3/18/07, Vincenzo Romano <[email protected]> wrote:\n> And these are the EXPLAINs for the queries:\n\nplease provide output of \"explain analyze\" of the queries. 
otherwise -\nit is not really useful.\n\ndepesz\n", "msg_date": "Sun, 18 Mar 2007 12:45:46 +0100", "msg_from": "\"hubert depesz lubaczewski\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: text equality worse than pattern matching (v8.1.8)" }, { "msg_contents": "\nThe problem seems due to a weird LOCALES setup: the default DB locale was UTF8\nwhile the tables and the client encodig were LATIN9. The index in the\nreco_alphanum field had no special operator class defined.\nA complete initdb and a reload of everything with the same locale (LATIN9)\nfixed the issue, though I'm not sure whether this is an \"expected feature\" or\nnot.\n\nThanks a lot.\n\nP.S.\nI've seen hubert depesz lubaczewski's remark only on the web interface on\nnabble.com. The email from the list manager never reached my mailbox!\n\nOn Sunday 18 March 2007 10:59 Vincenzo Romano wrote:\n> I'm running in some weird (IMHO) bahviour.\n> When I search a table for certain text (equality est on the relelvant\n> field) it takes much more time than doing the same test by adding a\n> trailing '%' and using the LIKE operator.\n> With much more I mean 1000+ times slower.\n>\n> This is the table (sorry for the Italian strings):\n>\n> ----| PSQL |----\n> noa=# \\d ts_t_records\n> Tabella \"public.ts_t_records\"\n> Colonna | Tipo |\n> Modificatori\n> ---------------+--------------------------+--------------------------------\n>-------------------------------------- fiel_uniqueid | bigint \n> | not null\n> item_uniqueid | bigint | not null\n> reco_alphanum | text | not null default ''::text\n> reco_floating | double precision | default 0.0\n> reco_integral | bigint | default 0\n> reco_timedate | timestamp with time zone | default now()\n> reco_isactive | boolean | default true\n> reco_effectiv | timestamp with time zone | default '-infinity'::timestamp\n> with time zone\n> reco_uniqueid | bigint | not null default\n> nextval('ts_t_records_reco_uniqueid_seq'::regclass)\n> Indici:\n> \"ts_i_records_0\" btree (item_uniqueid)\n> \"ts_i_records_1\" btree (reco_uniqueid)\n> \"ts_i_records_2\" btree (reco_isactive, reco_effectiv)\n> \"ts_i_records_3\" btree (reco_alphanum)\n> \"ts_i_records_4\" btree (fiel_uniqueid)\n> ----| /PSQL |----\n>\n> And these are the EXPLAINs for the queries:\n> ----| PSQL |----\n> noa=# EXPLAIN SELECT * FROM ts_t_records WHERE fiel_uniqueid=2 AND\n> reco_alphanum='TEST' AND reco_isactive AND reco_effectiv<=NOW();\n> QUERY PLAN\n> ---------------------------------------------------------------------------\n>--------------- Bitmap Heap Scan on ts_t_records (cost=5110.50..6191.86\n> rows=277 width=65) Recheck Cond: ((reco_alphanum = 'TEST'::text) AND\n> (fiel_uniqueid = 2)) Filter: (reco_isactive AND (reco_effectiv <= now()))\n> -> BitmapAnd (cost=5110.50..5110.50 rows=277 width=0)\n> -> Bitmap Index Scan on ts_i_records_3 (cost=0.00..36.32\n> rows=5234 width=0)\n> Index Cond: (reco_alphanum = 'TEST'::text)\n> -> Bitmap Index Scan on ts_irecords_4 (cost=0.00..5073.93\n> rows=812550 width=0)\n> Index Cond: (fiel_uniqueid = 2)\n> (8 righe)\n>\n> noa=# EXPLAIN SELECT * FROM ts_t_records WHERE fiel_uniqueid=2 AND\n> reco_alphanum LIKE 'TEST%' AND reco_isactive AND reco_effectiv<=NOW();\n> QUERY PLAN\n> ---------------------------------------------------------------------------\n>------------------------------------------ Index Scan using ts_i_records_3\n> on ts_t_records (cost=0.00..6.01 rows=1 width=65)\n> Index Cond: ((reco_alphanum >= 'TEST'::text) AND (reco_alphanum\n> < 'TESU'::text))\n> Filter: 
((fiel_uniqueid = 2) AND (reco_alphanum ~~ 'TEST%'::text) AND\n> reco_isactive AND (reco_effectiv <= now()))\n> (3 righe)\n>\n> ----| /PSQL |----\n>\n> Not only are query plans very different, but the equality query is much\n> worse than the pattern matching one.\n>\n> In my (maybe wrong) mind I expected the reverse.\n>\n> What's wrong with the my expectations? Am I missing something?\n>\n> MTIA.\n\n-- \nVincenzo Romano\n----\nMaybe Computers will never become as intelligent as Humans.\nFor sure they won't ever become so stupid. [VR-1987]\n", "msg_date": "Mon, 19 Mar 2007 13:02:57 +0100", "msg_from": "Vincenzo Romano <[email protected]>", "msg_from_op": true, "msg_subject": "Re: text equality worse than pattern matching (v8.1.8)" } ]
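For anyone hitting the same symptom, the mismatch described above (UTF8 cluster default versus LATIN9 data and client) can be checked directly before resorting to a re-initdb; a few diagnostic queries, nothing schema-specific:

    -- per-database encodings in the cluster
    SELECT datname, pg_encoding_to_char(encoding) AS encoding
    FROM pg_database;

    -- the collation locale the cluster was initdb'd with,
    -- and what the client session is currently sending
    SHOW lc_collate;
    SHOW client_encoding;

If those settings disagree with each other, text comparisons and index usage can behave very differently from what the raw byte values suggest, which matches the behaviour the re-initdb with a single LATIN9 locale fixed here.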
[ { "msg_contents": "I have a very slow query that I'm trying to tune. I think my \nperformance tuning is being complicated by the system's page cache.\n\nIf a run the query after the system has been busy with other tasks \nfor quite a long time then the query can take up to 8-10 minutes to \ncomplete. If I then rerun the same query it will complete in a \ncouple of seconds.\n\nDoes anyone know how I can repeatedly run the same query in the \n\"worst case scenario\" of no postgres data in the disk cache (e.g., \nclear the page cache or force it to be ignored)?\n\nThanks for any help.\n\nBarry\n\n", "msg_date": "Sun, 18 Mar 2007 06:45:34 -0600", "msg_from": "Barry Moore <[email protected]>", "msg_from_op": true, "msg_subject": "Performance Tuning and Disk Cache" }, { "msg_contents": "On 3/18/07, Barry Moore <[email protected]> wrote:\n> Does anyone know how I can repeatedly run the same query in the\n> \"worst case scenario\" of no postgres data in the disk cache (e.g.,\n> clear the page cache or force it to be ignored)?\n\ntry to disconnect from postgresql, reconnect, rerun the query.\nif it doesn't help - you can try unmounting filesystem which contains\npostgresql data, and remounting it again. of course with postgresql\nshutdown.\n\ndepesz\n\n-- \nhttp://www.depesz.com/ - nowy, lepszy depesz\n", "msg_date": "Sun, 18 Mar 2007 14:18:19 +0100", "msg_from": "\"hubert depesz lubaczewski\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Tuning and Disk Cache" }, { "msg_contents": "Barry Moore wrote:\n\n> I have a very slow query that I'm trying to tune. I think my \n> performance tuning is being complicated by the system's page cache.\n>\n> If a run the query after the system has been busy with other tasks \n> for quite a long time then the query can take up to 8-10 minutes to \n> complete. If I then rerun the same query it will complete in a \n> couple of seconds.\n>\n> Does anyone know how I can repeatedly run the same query in the \n> \"worst case scenario\" of no postgres data in the disk cache (e.g., \n> clear the page cache or force it to be ignored)?\n\nIn my experience the only 100% reliable way to do this is to reboot the \nmachine.\n\n\n", "msg_date": "Sun, 18 Mar 2007 09:26:37 -0600", "msg_from": "David Boreham <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Tuning and Disk Cache" }, { "msg_contents": "If you are running on a Linux kernel, try /proc/sys/vm/drop_caches. I\nbelieve the appropriate command is \"echo 3 > /proc/sys/vm/drop_caches\".\nSince Postgres has its own cache of data, the above followed by a PG\nrestart should do what you are looking for.\n\nRanga\n\n\n> Barry Moore wrote:\n>\n>> I have a very slow query that I'm trying to tune. I think my\n>> performance tuning is being complicated by the system's page cache.\n>>\n>> If a run the query after the system has been busy with other tasks\n>> for quite a long time then the query can take up to 8-10 minutes to\n>> complete. 
If I then rerun the same query it will complete in a\n>> couple of seconds.\n>>\n>> Does anyone know how I can repeatedly run the same query in the\n>> \"worst case scenario\" of no postgres data in the disk cache (e.g.,\n>> clear the page cache or force it to be ignored)?\n>\n> In my experience the only 100% reliable way to do this is to reboot the\n> machine.\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n\n\n", "msg_date": "Sun, 18 Mar 2007 11:22:30 -0700 (PDT)", "msg_from": "\"Rangarajan Vasudevan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Tuning and Disk Cache" }, { "msg_contents": "On Sun, Mar 18, 2007 at 06:45:34AM -0600, Barry Moore wrote:\n>Does anyone know how I can repeatedly run the same query in the \n>\"worst case scenario\" of no postgres data in the disk cache (e.g., \n>clear the page cache or force it to be ignored)?\n\nDepends on your OS. On linux you can run:\necho 1 > /proc/sys/vm/drop_caches\n\nMike Stone\n", "msg_date": "Sun, 18 Mar 2007 14:49:06 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Tuning and Disk Cache" } ]
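Restarting the server and dropping the OS cache (as above) is still the only complete reset, but for what lives in PostgreSQL's own shared buffers the contrib/pg_buffercache module can at least show the state; a sketch, assuming that module is installed in the database:

    -- which relations currently occupy shared_buffers, most-cached first
    SELECT c.relname, count(*) AS buffers
    FROM pg_buffercache b
    JOIN pg_class c ON b.relfilenode = c.relfilenode
    WHERE b.reldatabase = (SELECT oid FROM pg_database
                           WHERE datname = current_database())
    GROUP BY c.relname
    ORDER BY buffers DESC
    LIMIT 10;

This only covers shared_buffers, not the kernel page cache, so it confirms what a server restart will clear rather than replacing the drop_caches or reboot approaches above.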
[ { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nHi,\n\nVacuum full is very slow for me . I dont know how to speed it up. It\ntakes between 60 and 90 minutes.\n\nI have set up autovacuum but I also run vacuum full once per week.\n\nThe slowest parts in the vacuum full output are :\n\nINFO: \"a\": moved 14076 row versions, truncated 6013 to 1005 pages\nDETAIL: CPU 3.51s/2.16u sec elapsed 1156.00 sec.\n\nINFO: \"b\": moved 22174 row versions, truncated 1285 to 933 pages\nDETAIL: CPU 3.77s/1.52u sec elapsed 443.79 sec.\n\nINFO: \"c\": moved 36897 row versions, truncated 2824 to 1988 pages\nDETAIL: CPU 3.26s/1.45u sec elapsed 676.18 sec.\n\nHow can I speed it up?\n\nPostgres version 8.1.3\n\nThanks in advance\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.2.2 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFF/m3UIo1XmbAXRboRAnfHAKCVobTZGF9MlTjuAOkzIQESv1SDoQCfah67\nhdCkn/4KtnlYk1mqcS1u8bY=\n=/3Y4\n-----END PGP SIGNATURE-----\n", "msg_date": "Mon, 19 Mar 2007 12:02:44 +0100", "msg_from": "Ruben Rubio <[email protected]>", "msg_from_op": true, "msg_subject": "Vacuum full is slow" }, { "msg_contents": "Ruben Rubio wrote:\n> Vacuum full is very slow for me . I dont know how to speed it up. It\n> takes between 60 and 90 minutes.\n> \n> I have set up autovacuum but I also run vacuum full once per week.\n\nDo you really need to run vacuum full? I don't know you're workload, but \nusually you're better off just not running it.\n\nOne alternative is to run CLUSTER instead of VACUUM FULL. It's usually \nfaster, but beware that it's not safe if you're concurrently running \nserializable transactions that access the table. pg_dump in particular \nis a problem. In a maintenance window with no other activity, however, \nit's ok.\n\n> The slowest parts in the vacuum full output are :\n> \n> INFO: \"a\": moved 14076 row versions, truncated 6013 to 1005 pages\n> DETAIL: CPU 3.51s/2.16u sec elapsed 1156.00 sec.\n> \n> INFO: \"b\": moved 22174 row versions, truncated 1285 to 933 pages\n> DETAIL: CPU 3.77s/1.52u sec elapsed 443.79 sec.\n> \n> INFO: \"c\": moved 36897 row versions, truncated 2824 to 1988 pages\n> DETAIL: CPU 3.26s/1.45u sec elapsed 676.18 sec.\n> \n> How can I speed it up?\n\nYou don't have vacuum_cost_delay set, do you? How long does normal \nvacuum run?\n\nThe manual suggests dropping all indexes before running vacuum full, and \nrecreating them afterwards. That's worth trying.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 19 Mar 2007 11:23:15 +0000", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum full is slow" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n> \n> You don't have vacuum_cost_delay set, do you? How long does normal\n> vacuum run?\n\nvacuum_cost_delay = 100\nNo idea how long will take normal vacuum. I ll try tonight when there is\nnot too much load.\n\n> \n> The manual suggests dropping all indexes before running vacuum full, and\n> recreating them afterwards. That's worth trying.\n> \n\nI ll try that also. Is there any way to do it? 
Do i have to delete /\ncreate each one manually?\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.2.2 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFF/pxLIo1XmbAXRboRAjR1AJ9V4kBDCd++HSmUm8+ZCLs2RY0xnACfZ7Mp\nuBC031TFhO2NGOihfWPAQQ8=\n=QCYi\n-----END PGP SIGNATURE-----\n", "msg_date": "Mon, 19 Mar 2007 15:20:59 +0100", "msg_from": "Ruben Rubio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum full is slow" }, { "msg_contents": ">>vacuum_cost_delay = 100\n>>No idea how long will take normal vacuum. I ll try tonight when there is\n>>not too much load.\n\nThat can really take the VACUUM a long time to complete, but you might want\nto have it there as it will be good for performance by setting it a little\nhigh in a high OLTP environment.\n\nI will recommend setting it to 0 first and then you can start moving it high\nas per your needs...\n\n--\nShoaib Mir\nEnterpriseDB (www.enterprisedb.com)\n\nOn 3/19/07, Ruben Rubio <[email protected]> wrote:\n>\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n> >\n> > You don't have vacuum_cost_delay set, do you? How long does normal\n> > vacuum run?\n>\n> vacuum_cost_delay = 100\n> No idea how long will take normal vacuum. I ll try tonight when there is\n> not too much load.\n>\n> >\n> > The manual suggests dropping all indexes before running vacuum full, and\n> > recreating them afterwards. That's worth trying.\n> >\n>\n> I ll try that also. Is there any way to do it? Do i have to delete /\n> create each one manually?\n>\n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1.4.2.2 (GNU/Linux)\n> Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n>\n> iD8DBQFF/pxLIo1XmbAXRboRAjR1AJ9V4kBDCd++HSmUm8+ZCLs2RY0xnACfZ7Mp\n> uBC031TFhO2NGOihfWPAQQ8=\n> =QCYi\n> -----END PGP SIGNATURE-----\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n\n>>vacuum_cost_delay = 100>>No idea how long will take normal vacuum. I ll try tonight when there is>>not too much load.That can really take the VACUUM a long time to complete, but you might want to have it there as it will be good for performance by setting it a little high in a high OLTP environment. \nI will recommend setting it to 0 first and then you can start moving it high as per your needs...--Shoaib MirEnterpriseDB (www.enterprisedb.com)\nOn 3/19/07, Ruben Rubio <[email protected]> wrote:\n-----BEGIN PGP SIGNED MESSAGE-----Hash: SHA1>> You don't have vacuum_cost_delay set, do you? How long does normal> vacuum run?vacuum_cost_delay = 100No idea how long will take normal vacuum. I ll try tonight when there is\nnot too much load.>> The manual suggests dropping all indexes before running vacuum full, and> recreating them afterwards. That's worth trying.>I ll try that also. Is there any way to do it? Do i have to delete /\ncreate each one manually?-----BEGIN PGP SIGNATURE-----Version: GnuPG v1.4.2.2 (GNU/Linux)Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.orgiD8DBQFF/pxLIo1XmbAXRboRAjR1AJ9V4kBDCd++HSmUm8+ZCLs2RY0xnACfZ7Mp\nuBC031TFhO2NGOihfWPAQQ8==QCYi-----END PGP SIGNATURE--------------------------------(end of broadcast)---------------------------TIP 3: Have you checked our extensive FAQ?               
\nhttp://www.postgresql.org/docs/faq", "msg_date": "Mon, 19 Mar 2007 19:37:41 +0500", "msg_from": "\"Shoaib Mir\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum full is slow" }, { "msg_contents": "Hello all,\n\nI sent a similar post to a FreeBSD group, but thought I'd might try here too.\n\nI am completing a box for PostgreSQL server on FreeBSD. Selecting a RAID controller I decided to go \nwith 3ware SE9650-16, following good opinions about 3ware controllers found on FreeBSD and \nPostgreSQL groups.\n\nHowever my dealer suggest me not to go with 3ware, and take Promise SuperTrak EX16350, instead. This \nsuggestion does not have any technical background and it comes generally from the fact of limited \navailability of 16x 3ware controllers on the local market and immediate availability of Promise.\n\nIs this technically a good idea to take Promise instead of 3ware or rather I definitely should \ninsist on 3ware and wait for it?\n\nThank you\n\nIreneusz Pluta\n\n", "msg_date": "Tue, 20 Mar 2007 14:23:11 +0100", "msg_from": "Ireneusz Pluta <[email protected]>", "msg_from_op": false, "msg_subject": "SATA RAID: Promise vs. 3ware" }, { "msg_contents": "On 3/20/07, Ireneusz Pluta <[email protected]> wrote:\n> Hello all,\n>\n> I sent a similar post to a FreeBSD group, but thought I'd might try here too.\n>\n> I am completing a box for PostgreSQL server on FreeBSD. Selecting a RAID controller I decided to go\n> with 3ware SE9650-16, following good opinions about 3ware controllers found on FreeBSD and\n> PostgreSQL groups.\n>\n> However my dealer suggest me not to go with 3ware, and take Promise SuperTrak EX16350, instead. This\n> suggestion does not have any technical background and it comes generally from the fact of limited\n> availability of 16x 3ware controllers on the local market and immediate availability of Promise.\n>\n> Is this technically a good idea to take Promise instead of 3ware or rather I definitely should\n> insist on 3ware and wait for it?\n\n\nPromise raid controllers are famous for being software based with all\nthe real work being done in the driver. Without doing the research\nthis may or may not be the case with this particular controller.\nAnother issue with cheap RAID controllers is the performance may not\nbe as good as software raid...in fact it may be worse. Look for\nbenchmarks on the web and be skeptical.\n\nmerlin\n", "msg_date": "Tue, 20 Mar 2007 10:18:45 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SATA RAID: Promise vs. 3ware" }, { "msg_contents": "On Tue, Mar 20, 2007 at 10:18:45AM -0400, Merlin Moncure wrote:\n> On 3/20/07, Ireneusz Pluta <[email protected]> wrote:\n> >Hello all,\n> >\n> >I sent a similar post to a FreeBSD group, but thought I'd might try here \n> >too.\n> >\n> >I am completing a box for PostgreSQL server on FreeBSD. Selecting a RAID \n> >controller I decided to go\n> >with 3ware SE9650-16, following good opinions about 3ware controllers \n> >found on FreeBSD and\n> >PostgreSQL groups.\n> >\n> >However my dealer suggest me not to go with 3ware, and take Promise \n> >SuperTrak EX16350, instead. 
This\n> >suggestion does not have any technical background and it comes generally \n> >from the fact of limited\n> >availability of 16x 3ware controllers on the local market and immediate \n> >availability of Promise.\n> >\n> >Is this technically a good idea to take Promise instead of 3ware or rather \n> >I definitely should\n> >insist on 3ware and wait for it?\n> \n> \n> Promise raid controllers are famous for being software based with all\n> the real work being done in the driver. Without doing the research\n> this may or may not be the case with this particular controller.\n> Another issue with cheap RAID controllers is the performance may not\n> be as good as software raid...in fact it may be worse. Look for\n> benchmarks on the web and be skeptical.\n\nA Promise RAID is the only hardware RAID I've ever had eat an entire\narray for me... Granted this was one of those \"external array with SCSI\nto the host\", but it's certainly turned me away from Promise.. Probably\nnot related to the controller in question, just their general quality\nlevel.\n\n//Magnus\n\n", "msg_date": "Tue, 20 Mar 2007 15:34:48 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SATA RAID: Promise vs. 3ware" }, { "msg_contents": "\n> Is this technically a good idea to take Promise instead of 3ware or\n> rather I definitely should insist on 3ware and wait for it?\n\nUse 3Ware they are proven to provide a decent raid controller for\nSATA/PATA. Promise on the other hand... not so much.\n\nJoshua D. Drake\n\n> \n> Thank you\n> \n> Ireneusz Pluta\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n> \n> http://www.postgresql.org/about/donate\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Tue, 20 Mar 2007 08:41:50 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SATA RAID: Promise vs. 3ware" }, { "msg_contents": "\nOn 20-Mar-07, at 9:23 AM, Ireneusz Pluta wrote:\n\n> Hello all,\n>\n> I sent a similar post to a FreeBSD group, but thought I'd might try \n> here too.\n>\n> I am completing a box for PostgreSQL server on FreeBSD. Selecting a \n> RAID controller I decided to go with 3ware SE9650-16, following \n> good opinions about 3ware controllers found on FreeBSD and \n> PostgreSQL groups.\n>\n> However my dealer suggest me not to go with 3ware, and take Promise \n> SuperTrak EX16350, instead. This suggestion does not have any \n> technical background and it comes generally from the fact of \n> limited availability of 16x 3ware controllers on the local market \n> and immediate availability of Promise.\n>\n> Is this technically a good idea to take Promise instead of 3ware or \n> rather I definitely should insist on 3ware and wait for it?\n>\nThe reality is that most dealers have no idea what is \"good\" for a \ndatabase application. It is likely that this card is better for him \nsomehow ( more margin, easier to get, etc.)\n\nI'd stick with 3ware, areca, or lsi. 
And even then I'd check it when \nI got it to make sure it lived up to it's reputation.\nDave\n> Thank you\n>\n> Ireneusz Pluta\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n>\n\n", "msg_date": "Tue, 20 Mar 2007 13:01:40 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SATA RAID: Promise vs. 3ware" }, { "msg_contents": "\nOn 20-Mar-07, at 1:53 PM, Benjamin Arai wrote:\n\n> This is a little biased but I would stay away from areca only \n> because they have fans on the card. At some point down the line \n> that card is going to die. When it does there is really no telling \n> what it will do to your data. I personally use 3Ware cards, they \n> work well but I have had one die before (1/10).\n>\nWell, they are also the only one of the bunch that I am aware of that \nwill sell you 1G of cache. Plus if you use battery backup sooner or \nlater you have to replace the batteries. I use areca all the time \nand I've never had a fan die, but I admit it is a point of failure.\n\nDave\n\n", "msg_date": "Tue, 20 Mar 2007 14:08:23 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SATA RAID: Promise vs. 3ware" }, { "msg_contents": "At 02:08 PM 3/20/2007, Dave Cramer wrote:\n\n>On 20-Mar-07, at 1:53 PM, Benjamin Arai wrote:\n>\n>>This is a little biased but I would stay away from areca only \n>>because they have fans on the card. At some point down the line \n>>that card is going to die. When it does there is really no telling \n>>what it will do to your data.\n\nUmmm ?what? fan? The Intel IOP341 (AKA 81341) based ARC-12xx cards \nare what people are most likely going to want to buy at this point, \nand they are fanless:\nhttp://www.areca.us/support/photo_gallery.htm\n\nThe \"lore\" is that\n+3ware is best at random IO and Areca is best at streaming IO. OLTP \n=> 3ware. OLAP => Areca.\n- stay away from Adaptec or Promise for any mission critical role.\n= LSI is a mixed bag.\n\n\n>Well, they are also the only one of the bunch that I am aware of \n>that will sell you 1G of cache.\n\nActually, it's up to 2GB of BB cache... 2GB DDR2 SDRAMs are cheap \nand easy to get now. I've actually been agitating for Areca to \nsupport 4GB of RAM.\n\n\n>Plus if you use battery backup sooner or later you have to replace \n>the batteries. I use areca all the time and I've never had a fan \n>die, but I admit it is a point of failure.\n\nI've had the whole card die (massive cooling failure in NOC led to \n...), but never any component on the card. OTOH, I'm conservative \nabout how much heat per unit area I'm willing to allow to occur in or \nnear my DB servers.\n\nCheers,\nRon \n\n", "msg_date": "Tue, 20 Mar 2007 14:44:06 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SATA RAID: Promise vs. 3ware" }, { "msg_contents": "On Mon, 2007-03-19 at 06:02, Ruben Rubio wrote:\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n> \n> Hi,\n> \n> Vacuum full is very slow for me . I dont know how to speed it up. It\n> takes between 60 and 90 minutes.\n> \n> I have set up autovacuum but I also run vacuum full once per week.\n\nNote two things.\n\n1: you need to update your pgsql version. 8.1.3 is a bit old.\n\n2: You shouldn't normally need to run vacuum full. Vacuum full is\nthere to get you out of problems created when regular vacuum falls\nbehind. 
It contributes to index bloat as well. If routine vacuuming\nisn't working, regular vacuum full is not the answer (well, 99% of the\ntime it's not). Fixing routing vacuuming is the answer.\n\nIf you don't have an actual problem with routine vacuuming, you would be\nbetter off writing a monitoring script to keep track of bloat in tables\nand send you an email than running vacuum full all the time.\n", "msg_date": "Fri, 23 Mar 2007 11:32:35 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum full is slow" } ]
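A rough sketch of the bloat check Scott describes, using only the system catalogs (the threshold and the idea of comparing successive runs are placeholders, and relpages/reltuples are only refreshed by VACUUM or ANALYZE, so the numbers are approximate):

-- tables whose on-disk size is large relative to the row count the
-- planner last saw; a pages-per-row ratio that keeps climbing between
-- runs usually means routine vacuuming is falling behind
SELECT c.relname,
       c.relpages,
       c.reltuples,
       round(c.relpages / (c.reltuples + 1)::numeric, 4) AS pages_per_row
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
 WHERE c.relkind = 'r'
   AND n.nspname NOT IN ('pg_catalog', 'information_schema')
   AND c.relpages > 1000
 ORDER BY c.relpages DESC;

A cron job can run this through psql, compare the ratio with the previous run, and mail a warning when it keeps growing, which is usually all the "monitoring script" needs to be.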
[ { "msg_contents": "I've got one logging table that is over 330 million rows to store 6 \nmonths' worth of data. It consists of two integers and a 4-character \nlong string. I have one primary key which is the two integers, and \nan additional index on the second integer.\n\nI'm planning to use inheritance to split the table into a bunch of \nsmaller ones by using a modulo function on one of the integers on \nwhich we scan often.\n\nMy question is how small to make each inherited piece? If I do \nmodulo 10, then each sub-table will be between 32 and 34 million rows \ntoday based on current distribution.\n\nIf I expect to increase traffic 2 times over the next year (thus \ndoubling my logs) what would you recommend?", "msg_date": "Tue, 20 Mar 2007 11:02:32 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": true, "msg_subject": "how small to split a table?" }, { "msg_contents": "\n> My question is how small to make each inherited piece? If I do \n> modulo 10, then each sub-table will be between 32 and 34 million \n> rows today based on current distribution.\n\nYou might try this with various sizes.\nI did some testing lateley and found out that insert performance - \neven if only inserting into one partition through the master\ntable abould halfed the speed with 4 partitions and made a 50% \nincrease for 2 partitions.\nPlease note: this is not representative in any kind!\n\nSo while it might be cool in your case to have e.g. one partition per \nmonth, this might slow inserts down too much, so\nthat a different number of partitions could be better. The same \napplies for queries as well (here perhaps in the other\ndirection).\n\n-- \nHeiko W.Rupp\[email protected], http://www.dpunkt.de/buch/3-89864-429-4.html\n\n\n\n", "msg_date": "Tue, 20 Mar 2007 16:20:42 +0100", "msg_from": "\"Heiko W.Rupp\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how small to split a table?" }, { "msg_contents": "On Mar 20, 2007, at 11:20 AM, Heiko W.Rupp wrote:\n\n> partition through the master\n> table abould halfed the speed with 4 partitions and made a 50% \n> increase for 2 partitions.\n> Please note: this is not representative in any kind!\n\nI fully intend to build knowledge of the partitions into the insert \npart of the logging. Only the queries which do joins on the current \nbig table would use the master name. Everything else can be trained \nto go directly to the proper subtable.\n\nThanks for your note. It implies to me I'm making the right choice \nto build that knowledge into the system.", "msg_date": "Tue, 20 Mar 2007 12:17:10 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": true, "msg_subject": "Re: how small to split a table?" } ]
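A minimal sketch of the modulo partitioning discussed in this thread (table and column names are made up; note that indexes and primary keys are not inherited, so every child needs its own, and constraint exclusion can only skip children when the query's WHERE clause uses the same expression as the CHECK constraint):

CREATE TABLE log_master (
    key1 integer NOT NULL,
    key2 integer NOT NULL,
    code char(4),
    PRIMARY KEY (key1, key2)
);

-- one child per remainder of the integer that is scanned most often
CREATE TABLE log_p0 (CHECK (key2 % 10 = 0)) INHERITS (log_master);
CREATE TABLE log_p1 (CHECK (key2 % 10 = 1)) INHERITS (log_master);
-- ... log_p2 through log_p9 in the same pattern

ALTER TABLE log_p0 ADD PRIMARY KEY (key1, key2);
CREATE INDEX log_p0_key2 ON log_p0 (key2);

-- the loader knows the partitioning rule and writes to the child directly
INSERT INTO log_p0 (key1, key2, code) VALUES (1, 1230, 'ABCD');

Joins that really need everything can still go through log_master; as Vivek says, everything else can be pointed straight at the proper child.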
[ { "msg_contents": "After upgrading to 8.2.3 INSERTs and UPDATEs on one of my tables became \nincredibly slow. I traced the problem to one of my triggers that calls \none of my defined functions (that is IMMUTABLE). If I inline the \nfunction instead of calling it the runtime for my test update drops from \n 10261.234 ms to 564.094 ms. The time running the trigger itself \ndropped from 9749.910 to 99.504.\n\nBTW does make any sense to bother marking trigger functions as STABLE or \nIMMUTABLE?\n", "msg_date": "Tue, 20 Mar 2007 15:26:33 -0400", "msg_from": "Joseph S <[email protected]>", "msg_from_op": true, "msg_subject": "Horrible trigger performance after upgrade 8.0.12 -> 8.2.3" }, { "msg_contents": "Joseph S <[email protected]> writes:\n> After upgrading to 8.2.3 INSERTs and UPDATEs on one of my tables became \n> incredibly slow. I traced the problem to one of my triggers that calls \n> one of my defined functions (that is IMMUTABLE). If I inline the \n> function instead of calling it the runtime for my test update drops from \n> 10261.234 ms to 564.094 ms. The time running the trigger itself \n> dropped from 9749.910 to 99.504.\n\nWith no more details than that, I don't see how you expect any useful\ncomments. Let's see the code. Also, what PG version are you comparing to?\n\n> BTW does make any sense to bother marking trigger functions as STABLE or \n> IMMUTABLE?\n\nNo, the trigger mechanisms don't pay any attention to that. I can\nhardly conceive of a useful trigger that wouldn't be VOLATILE anyway,\nsince side effects are more or less the point.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Mar 2007 15:58:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Horrible trigger performance after upgrade 8.0.12 -> 8.2.3 " } ]
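The code in question never made it to the list, but the pattern being compared is roughly the following (names are hypothetical, purely to illustrate calling a helper function from the trigger versus inlining the same expression):

CREATE OR REPLACE FUNCTION normalize_code(text) RETURNS text AS
    'SELECT upper(btrim($1))'
    LANGUAGE sql IMMUTABLE;

CREATE OR REPLACE FUNCTION fix_code_trig() RETURNS trigger AS $$
BEGIN
    -- variant 1: call the helper for every affected row
    NEW.code := normalize_code(NEW.code);
    -- variant 2: inline the expression instead
    -- NEW.code := upper(btrim(NEW.code));
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER fix_code BEFORE INSERT OR UPDATE ON some_table
    FOR EACH ROW EXECUTE PROCEDURE fix_code_trig();

As Tom notes, the volatility marking on the trigger function itself changes nothing; what matters is the per-row cost of whatever the trigger body calls.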
[ { "msg_contents": "I've found that it would be helpful to be able to tell how busy my \ndedicated PG server is ( Linux 2.6 kernel, v8.0.3 currently ) before \npounding it with some OLAP-type queries. Specifically, I have a \nmulti-threaded client program that needs to run several thousand \nsequential queries. I broke it into threads to take advantage of the \nmulti-core architecture of the server hardware. It would be very nice \nif I could check the load of the server at certain intervals to throttle \nthe number of concurrent queries and mitigate load problems when other \nprocesses might be already inducing a significant load.\n\nI have seen some other nice back-end things exposed through PG functions \n( e.g. database size on disk ) and wondered if there was anything \napplicable to this. Even if it can't return the load average proper, is \nthere anything else in the pg_* tables that might give me a clue how \n\"busy\" the server is for a period of time?\n\nI've thought about allowing an ssh login without a keyphrase to log in \nand capture it. But, the client process is running as an apache user. \nGiving the apache user a shell login to the DB box does not seem like a \nsmart idea for obvious security reasons...\n\nSo far, that's all I can come up with, other than a dedicated socket \nserver daemon on the DB machine to do it.\n\nAny creative ideas are welcomed :)\n\nThanks\n\n-Dan\n", "msg_date": "Tue, 20 Mar 2007 18:47:30 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Determining server load from client" }, { "msg_contents": "Dan Harris wrote:\n> I've found that it would be helpful to be able to tell how busy my \n> dedicated PG server is ...\n> \n> I have seen some other nice back-end things exposed through PG functions \n> ( e.g. database size on disk ) and wondered if there was anything \n> applicable to this.\n\nI'd write a simple pg-perl function to do this. You can access operating-system calls to find out the system's load. But notice that you need \"Untrusted Perl\" to do this, so you can only do it on a system where you trust every application that connects to your database. Something like this:\n\ncreate or replace function get_stats()\n returns text as '\n open(STAT, \"</proc/stat\");\n my @stats = <STAT>;\n close STAT;\n return join(\"\", @stats);\n' language plperlu;\n\nSee http://www.postgresql.org/docs/8.1/interactive/plperl-trusted.html\n\nCraig\n", "msg_date": "Tue, 20 Mar 2007 17:15:51 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Determining server load from client" }, { "msg_contents": "Dan\n\nUse the following plperlu function\n\ncreate or replace function LoadAVG()\nreturns record\nas\n$$\nuse Sys::Statistics::Linux::LoadAVG;\nmy $lxs = new Sys::Statistics::Linux::LoadAVG;\nmy $stats = $lxs->get;\nreturn $stats;\n\n$$\nlanguage plperlu;\n\n\nselect * from LoadAVg() as (avg_1 float,avg_5 float,avg_15 float);\n\nThe Sys::Statistics::Linux has all kind of info (from the /proc) file\nsystem. \n\nJim\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Dan Harris\nSent: Tuesday, March 20, 2007 8:48 PM\nTo: PostgreSQL Performance\nSubject: [PERFORM] Determining server load from client\n\nI've found that it would be helpful to be able to tell how busy my \ndedicated PG server is ( Linux 2.6 kernel, v8.0.3 currently ) before \npounding it with some OLAP-type queries. 
Specifically, I have a \nmulti-threaded client program that needs to run several thousand \nsequential queries. I broke it into threads to take advantage of the \nmulti-core architecture of the server hardware. It would be very nice \nif I could check the load of the server at certain intervals to throttle \nthe number of concurrent queries and mitigate load problems when other \nprocesses might be already inducing a significant load.\n\nI have seen some other nice back-end things exposed through PG functions \n( e.g. database size on disk ) and wondered if there was anything \napplicable to this. Even if it can't return the load average proper, is \nthere anything else in the pg_* tables that might give me a clue how \n\"busy\" the server is for a period of time?\n\nI've thought about allowing an ssh login without a keyphrase to log in \nand capture it. But, the client process is running as an apache user. \nGiving the apache user a shell login to the DB box does not seem like a \nsmart idea for obvious security reasons...\n\nSo far, that's all I can come up with, other than a dedicated socket \nserver daemon on the DB machine to do it.\n\nAny creative ideas are welcomed :)\n\nThanks\n\n-Dan\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n\n", "msg_date": "Tue, 20 Mar 2007 21:22:52 -0400", "msg_from": "\"Jim Buttafuoco\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Determining server load from client" }, { "msg_contents": "(forgot to send to list)\nDan Harris wrote:\n> architecture of the server hardware. It would be very nice if I could \n> check the load of the server at certain intervals to throttle the \n> number of concurrent queries and mitigate load problems when other \n> processes might be already inducing a significant load.\n>\n> I have seen some other nice back-end things exposed through PG \n> functions ( e.g. database size on disk ) and wondered if there was \n> anything applicable to this. Even if it can't return the load average \n> proper, is there anything else in the pg_* tables that might give me a \n> clue how \"busy\" the server is for a period of time?\n\n\n\nI have installed munin (http://munin.projects.linpro.no/) on a few \nsystems. This lets you look at graphs of system resources/load etc. I \nhave also added python scripts which do sample queries to let me know if \nperformance/index size is changing dramatically. I have attached an \nexample script.\n\n\n\nHope that helps,\n\n\n\nJoe\n\n\n------------------------------------------------------------------------\n\n#! 
/usr/bin/python\nimport psycopg\nimport sys\n\ndef fixName(name):\n return name[:19]\n\nif len(sys.argv) > 1 and sys.argv[1] == \"config\":\n print \"\"\"graph_title Postgresql Index Sizes\ngraph_vlabel Mb\"\"\"\n\n con = psycopg.connect(\"host=xxx user=xxx dbname=xxx password=xxx\")\n cur = con.cursor()\n \n cur.execute(\"select relname, relpages from pg_class where relowner > 10 and relkind='i' and relpages > 256 order by reltuples desc;\")\n results = cur.fetchall()\n for name, pages in results:\n print \"%s.label %s\" % (fixName(name), name)\n\nelse:\n con = psycopg.connect(\"host=xxx user=xxx dbname=xxx password=xxx\")\n cur = con.cursor()\n \n cur.execute(\"select relname, relpages from pg_class where relowner > 10 and relkind='i' and relpages > 256 order by reltuples desc;\")\n results = cur.fetchall()\n \n for name, pages in results:\n print \"%s.value %.2f\" % (name[:19], pages*8.0/1024.0)\n\n", "msg_date": "Wed, 21 Mar 2007 12:24:01 +1100", "msg_from": "Joe Healy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Determining server load from client" }, { "msg_contents": "Dan Harris wrote:\n> I've found that it would be helpful to be able to tell how busy my \n> dedicated PG server is ( Linux 2.6 kernel, v8.0.3 currently ) before \n> pounding it with some OLAP-type queries. \n..snip\n\nThank you all for your great ideas! I'm going to try the perl function \nas that seems like a very elegant way of doing it.\n\n-Dan\n", "msg_date": "Tue, 20 Mar 2007 19:27:12 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Determining server load from client" }, { "msg_contents": "Joe Healy wrote:\n> (forgot to send to list)\n> Dan Harris wrote:\n>> architecture of the server hardware. It would be very nice if I could \n>> check the load of the server at certain intervals to throttle the \n>> number of concurrent queries and mitigate load problems when other \n>> processes might be already inducing a significant load.\n\n> I have installed munin (http://munin.projects.linpro.no/) on a few \n> systems. This lets you look at graphs of system resources/load etc. I \n> have also added python scripts which do sample queries to let me know if \n> performance/index size is changing dramatically. I have attached an \n> example script.\n\nFor general monitoring of a handful of servers, I've been impressed with \nmunin. 
It's very simple to get it running and write your own plugins.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 21 Mar 2007 07:39:43 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "OT: Munin (was Re: Determining server load from client)" }, { "msg_contents": "I have my postgres munin monitoring script at\nhttp://oppetid.no/~tobixen/pg_activity.munin.txt (had to suffix it with\n.txt to make the local apache happy).\n\nI would like to see what others have done as well.\n\n", "msg_date": "Wed, 21 Mar 2007 11:13:40 +0100", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OT: Munin (was Re: Determining server load from client)" }, { "msg_contents": "Tobias Brox wrote:\n> I have my postgres munin monitoring script at\n> http://oppetid.no/~tobixen/pg_activity.munin.txt (had to suffix it with\n> .txt to make the local apache happy).\n> \n> I would like to see what others have done as well.\n\nWell, I use Perl rather than shell, but that's just me.\n\nThe main difference is that although I downloaded a couple of simple \npg-monitoring scripts from the web, I've concentrated on monitoring the \napplication(s) instead. Things like:\n - number of news items posted\n - searches run\n - logins, logouts\n\nThe main limitation with it for me is the fixed 5-min time interval. It \nprovides a slight irritation that I've got hourly/daily cron jobs that \nare being monitored continually.\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 21 Mar 2007 10:36:05 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OT: Munin (was Re: Determining server load from client)" }, { "msg_contents": "On Mar 21, 2007, at 5:13 AM, Tobias Brox wrote:\n\n> I have my postgres munin monitoring script at\n> http://oppetid.no/~tobixen/pg_activity.munin.txt (had to suffix it \n> with\n> .txt to make the local apache happy).\n>\n> I would like to see what others have done as well.\n\nI use cacti (http://cacti.net) which does the same thing that munin \ndoes but in php instead. Here's what I use to db stats to it (again, \nphp):\n\nYou basically call the script with the database name and the stat you \nwant. 
I have the active_queries stat set up as a gauge in cacti and \nthe others as counters:\n\nif(!isset($argv[1])) { echo \"DB name argument required!\\n\"; exit \n();\n}\n\n$stats = array('xact_commit', 'xact_rollback', 'blks_read', \n'blks_hit', 'active_queries');\nif(!isset($argv[2]) || !in_array($argv[2], $stats)) { echo \n\"Invalid stat arg!: {$argv[2]}\";\n exit();\n}\nrequire_once('DB.php');\n\n$db_name = $argv[1];\nif(DB::isError($db = DB::connect(\"pgsql://user@host:5432/$db_name\"))) {\n exit();\n}\n\nif($argv[2] == 'active_queries') {\n $actives_sql = \"SELECT COUNT(*)\n FROM pg_stat_activity\n WHERE current_query NOT ILIKE '<idle>'\n AND now() - query_start > '1 second';\";\n if(DB::isError($db_stat = $db->getOne($actives_sql))) {\n exit();\n }\n echo \"$db_stat\\n\";\n exit();\n}\n\n$db_stat_sql = \"SELECT {$argv[2]}\n FROM pg_stat_database\n WHERE datname='$db_name';\";\nif(DB::isError($db_stat = $db->getOne($db_stat_sql))) {\n exit();\n}\n\necho \"$db_stat\\n\";\n\n\nerik jones <[email protected]>\nsoftware developer\n615-296-0838\nemma(r)\n\n\n\n\nOn Mar 21, 2007, at 5:13 AM, Tobias Brox wrote:I have my postgres munin monitoring script athttp://oppetid.no/~tobixen/pg_activity.munin.txt (had to suffix it with.txt to make the local apache happy).I would like to see what others have done as well.I use cacti (http://cacti.net) which does the same thing that munin does but in php instead.  Here's what I use to db stats to it (again, php):You basically call the script with the database name and the stat you want.  I have the active_queries stat set up as a gauge in cacti and the others as counters:if(!isset($argv[1])) {    echo \"DB name argument required!\\n\";    exit();}$stats = array('xact_commit', 'xact_rollback', 'blks_read', 'blks_hit', 'active_queries');if(!isset($argv[2]) || !in_array($argv[2], $stats)) {    echo \"Invalid stat arg!: {$argv[2]}\";    exit();}require_once('DB.php');$db_name = $argv[1];if(DB::isError($db = DB::connect(\"pgsql://user@host:5432/$db_name\"))) {    exit();}if($argv[2] == 'active_queries') {    $actives_sql = \"SELECT COUNT(*)                    FROM pg_stat_activity                    WHERE current_query NOT ILIKE '<idle>'                        AND now() - query_start > '1 second';\";    if(DB::isError($db_stat = $db->getOne($actives_sql))) {        exit();    }    echo \"$db_stat\\n\";    exit();}$db_stat_sql = \"SELECT {$argv[2]}                 FROM pg_stat_database                 WHERE datname='$db_name';\";if(DB::isError($db_stat = $db->getOne($db_stat_sql))) {    exit();}echo \"$db_stat\\n\"; erik jones <[email protected]>software developer615-296-0838emma(r)", "msg_date": "Wed, 21 Mar 2007 09:31:48 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OT: Munin (was Re: Determining server load from client)" }, { "msg_contents": "[Erik Jones - Wed at 09:31:48AM -0500]\n> I use cacti (http://cacti.net) which does the same thing that munin \n> does but in php instead. Here's what I use to db stats to it (again, \n> php):\n\nI haven't tried cacti, but our sysadm has done a little bit of research\nand concluded \"cacti is better\". Maybe some day we'll move over.\n\nMunin is generating all the graphs statically every fifth minute, while\ncacti generates them on demand as far as I've understood. The munin\napproach is pretty bloat, since one usually would watch the graphs much\nmore seldom than what they are generated (at least, we do). 
That's not\nreally an argument since CPU is cheap nowadays - but a real argument is\nthat the munin approach is less flexible. One would like to adjust the\ngraph (like, min/max values for both axis) while watching quite some\ntimes.\n\n> $actives_sql = \"SELECT COUNT(*)\n> FROM pg_stat_activity\n> WHERE current_query NOT ILIKE '<idle>'\n> AND now() - query_start > '1 second';\";\n\nSo this one is quite similar to mine ...\n\n> $db_stat_sql = \"SELECT {$argv[2]}\n> FROM pg_stat_database\n> WHERE datname='$db_name';\";\n\nI was not aware of this view - it can probably be useful for us. I will\nadd this one when I get the time ... (I'm at vacation now).\n\n", "msg_date": "Wed, 21 Mar 2007 22:13:05 +0100", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OT: Munin (was Re: Determining server load from client)" }, { "msg_contents": "On Mar 21, 2007, at 4:13 PM, Tobias Brox wrote:\n\n> [Erik Jones - Wed at 09:31:48AM -0500]\n>> I use cacti (http://cacti.net) which does the same thing that munin\n>> does but in php instead. Here's what I use to db stats to it (again,\n>> php):\n>\n> I haven't tried cacti, but our sysadm has done a little bit of \n> research\n> and concluded \"cacti is better\". Maybe some day we'll move over.\n>\n> Munin is generating all the graphs statically every fifth minute, \n> while\n> cacti generates them on demand as far as I've understood. The munin\n> approach is pretty bloat, since one usually would watch the graphs \n> much\n> more seldom than what they are generated (at least, we do). That's \n> not\n> really an argument since CPU is cheap nowadays - but a real \n> argument is\n> that the munin approach is less flexible. One would like to adjust \n> the\n> graph (like, min/max values for both axis) while watching quite some\n> times.\n\nWell, by \"default\", Cacti polls all of the data sources you've set up \nevery five minutes as well as that's how the docs instruct you to set \nup the cron job for the poller. However, with a little understanding \nof how the rrdtool rras work, you could definitely poll more often \nand simply edit the existing rras and datasources to expect that or \ncreate new ones. And, yes, the graph customization is pretty cool \nalthough for the most part the just map what's available from the \nrrdtool graph functionality. If you do decide to set up Cacti I \nsuggest you go straight to the faq section of the manual and read the \npart about going from a simple script to a graph. The main manual is \nalmost entirely centered on the built-in networking (e.g. snmp) data \nsources and, as such, doesn't do much for explaining how to set up \nother data sources.\n\nerik jones <[email protected]>\nsoftware developer\n615-296-0838\nemma(r)\n\n\n\n\nOn Mar 21, 2007, at 4:13 PM, Tobias Brox wrote:[Erik Jones - Wed at 09:31:48AM -0500] I use cacti (http://cacti.net) which does the same thing that munin  does but in php instead.  Here's what I use to db stats to it (again,  php): I haven't tried cacti, but our sysadm has done a little bit of researchand concluded \"cacti is better\".  Maybe some day we'll move over.Munin is generating all the graphs statically every fifth minute, whilecacti generates them on demand as far as I've understood.  The muninapproach is pretty bloat, since one usually would watch the graphs muchmore seldom than what they are generated (at least, we do).  That's notreally an argument since CPU is cheap nowadays - but a real argument isthat the munin approach is less flexible.  
One would like to adjust thegraph (like, min/max values for both axis) while watching quite sometimes. Well, by \"default\", Cacti polls all of the data sources you've set up every five minutes as well as that's how the docs instruct you to set up the cron job for the poller.  However, with a little understanding of how the rrdtool rras work, you could definitely poll more often and simply edit the existing rras and datasources to expect that or create new ones.  And, yes, the graph customization is pretty cool although for the most part the just map what's available from the rrdtool graph functionality.  If you do decide to set up Cacti I suggest you go straight to the faq section of the manual and read the part about going from a simple script to a graph.  The main manual is almost entirely centered on the built-in networking (e.g. snmp) data sources and, as such, doesn't do much for explaining how to set up other data sources. erik jones <[email protected]>software developer615-296-0838emma(r)", "msg_date": "Wed, 21 Mar 2007 17:07:03 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OT: Munin (was Re: Determining server load from client)" }, { "msg_contents": "On 3/21/07, Erik Jones <[email protected]> wrote:\n>\n>\n> On Mar 21, 2007, at 4:13 PM, Tobias Brox wrote:\n>\n> [Erik Jones - Wed at 09:31:48AM -0500]\n>\n> I use cacti (http://cacti.net) which does the same thing that munin\n> does but in php instead. Here's what I use to db stats to it (again,\n> php):\n>\n>\n> I haven't tried cacti, but our sysadm has done a little bit of research\n> and concluded \"cacti is better\". Maybe some day we'll move over.\n>\n> Munin is generating all the graphs statically every fifth minute, while\n> cacti generates them on demand as far as I've understood. The munin\n> approach is pretty bloat, since one usually would watch the graphs much\n> more seldom than what they are generated (at least, we do). That's not\n> really an argument since CPU is cheap nowadays - but a real argument is\n> that the munin approach is less flexible. One would like to adjust the\n> graph (like, min/max values for both axis) while watching quite some\n> times.\n>\n>\n> Well, by \"default\", Cacti polls all of the data sources you've set up\n> every five minutes as well as that's how the docs instruct you to set up the\n> cron job for the poller. However, with a little understanding of how the\n> rrdtool rras work, you could definitely poll more often and simply edit the\n> existing rras and datasources to expect that or create new ones. And, yes,\n> the graph customization is pretty cool although for the most part the just\n> map what's available from the rrdtool graph functionality. If you do decide\n> to set up Cacti I suggest you go straight to the faq section of the manual\n> and read the part about going from a simple script to a graph. The main\n> manual is almost entirely centered on the built-in networking (e.g. snmp)\n> data sources and, as such, doesn't do much for explaining how to set up\n> other data sources.\n>\n\n\nHas anyone had experience setting up something similar with Nagios? 
We\nmonitor servers using nagios and not having to install additional software\n(cacti/munin) for postgres resource usage monitoring would be great.\n\nThanks in advance!\n\nOn 3/21/07, Erik Jones <[email protected]> wrote:\nOn Mar 21, 2007, at 4:13 PM, Tobias Brox wrote:[Erik Jones - Wed at 09:31:48AM -0500] \nI use cacti (http://cacti.net) which does the same thing that munin  \ndoes but in php instead.  Here's what I use to db stats to it (again,  php): \nI haven't tried cacti, but our sysadm has done a little bit of researchand concluded \"cacti is better\".  Maybe some day we'll move over.\nMunin is generating all the graphs statically every fifth minute, while\ncacti generates them on demand as far as I've understood.  The muninapproach is pretty bloat, since one usually would watch the graphs much\nmore seldom than what they are generated (at least, we do).  That's notreally an argument since CPU is cheap nowadays - but a real argument is\nthat the munin approach is less flexible.  One would like to adjust thegraph (like, min/max values for both axis) while watching quite some\ntimes. Well, by \"default\", Cacti polls all of the data sources you've set up every five minutes as well as that's how the docs instruct you to set up the cron job for the poller.  However, with a little understanding of how the rrdtool rras work, you could definitely poll more often and simply edit the existing rras and datasources to expect that or create new ones.  And, yes, the graph customization is pretty cool although for the most part the just map what's available from the rrdtool graph functionality.  If you do decide to set up Cacti I suggest you go straight to the faq section of the manual and read the part about going from a simple script to a graph.  The main manual is almost entirely centered on the built-in networking (\ne.g. snmp) data sources and, as such, doesn't do much for explaining how to set up other data sources.Has anyone had experience setting up something similar with Nagios? We monitor servers using nagios and not having to install additional software (cacti/munin) for postgres resource usage monitoring would be great.\nThanks in advance!", "msg_date": "Sat, 24 Mar 2007 22:46:17 -0700", "msg_from": "\"CAJ CAJ\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OT: Munin (was Re: Determining server load from client)" }, { "msg_contents": "CAJ CAJ wrote:\n> \n> \n> On 3/21/07, *Erik Jones* <[email protected] <mailto:[email protected]>> wrote:\n> \n> \n> On Mar 21, 2007, at 4:13 PM, Tobias Brox wrote:\n> \n>> [Erik Jones - Wed at 09:31:48AM -0500]\n>>> I use cacti (http://cacti.net) which does the same thing that\n>>> munin \n>>> does but in php instead. Here's what I use to db stats to it\n>>> (again, \n>>> php):\n>>\n>> I haven't tried cacti, but our sysadm has done a little bit of\n>> research\n>> and concluded \"cacti is better\". Maybe some day we'll move over.\n>>\n>> Munin is generating all the graphs statically every fifth minute,\n>> while\n>> cacti generates them on demand as far as I've understood. The munin\n>> approach is pretty bloat, since one usually would watch the graphs\n>> much\n>> more seldom than what they are generated (at least, we do). \n>> That's not\n>> really an argument since CPU is cheap nowadays - but a real\n>> argument is\n>> that the munin approach is less flexible. 
One would like to\n>> adjust the\n>> graph (like, min/max values for both axis) while watching quite some\n>> times.\n> \n> Well, by \"default\", Cacti polls all of the data sources you've set\n> up every five minutes as well as that's how the docs instruct you to\n> set up the cron job for the poller. However, with a little\n> understanding of how the rrdtool rras work, you could definitely\n> poll more often and simply edit the existing rras and datasources to\n> expect that or create new ones. And, yes, the graph customization\n> is pretty cool although for the most part the just map what's\n> available from the rrdtool graph functionality. If you do decide to\n> set up Cacti I suggest you go straight to the faq section of the\n> manual and read the part about going from a simple script to a\n> graph. The main manual is almost entirely centered on the built-in\n> networking ( e.g. snmp) data sources and, as such, doesn't do much\n> for explaining how to set up other data sources.\n> \n> \n> \n> Has anyone had experience setting up something similar with Nagios? We\n> monitor servers using nagios and not having to install additional\n> software (cacti/munin) for postgres resource usage monitoring would be\n> great.\n\na lot of nagios plugins can supply performance data in addition to the\nOK/WARNING/CRITICAL state information - there are a number of solutions\nout there that can take that information and graph it on a per\nhosts/server base automatically - examples for such addons are\nnagiosgrapher and n2rrd(or look at www.nagiosexchange.org it has a large\nnumber of addons listed).\n\n\nStefan\n", "msg_date": "Sun, 25 Mar 2007 10:12:16 +0200", "msg_from": "Stefan Kaltenbrunner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OT: Munin (was Re: Determining server load from client)" } ]
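For a home-grown Nagios check of the kind being asked about, the SQL side can stay very small; a sketch of the query such a plugin might run (thresholds and the OK/WARNING/CRITICAL wrapper belong to the plugin script, and the '<IDLE>' literal and current_query column assume the 8.x pg_stat_activity layout with stats_command_string enabled):

SELECT count(*) AS backends,
       sum(CASE WHEN current_query <> '<IDLE>' THEN 1 ELSE 0 END) AS active,
       sum(CASE WHEN current_query <> '<IDLE>'
                 AND now() - query_start > interval '5 minutes'
            THEN 1 ELSE 0 END) AS long_running
  FROM pg_stat_activity;

The plugin compares the three numbers against its warning/critical limits and can also emit them as performance data, which is what add-ons like nagiosgrapher or n2rrd pick up for graphing.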
[ { "msg_contents": "Hello\n\nI plan to buy a new development server and I wonder what will be the best HD\ncombination.\n\nI'm aware that \"best combination\" also relay on DB structure and usage.\nso lets assume, heavy duty large DB with mostly reads and heavy write\nactions from time to time ( updates / huge transactions ).\n\nHere are the options:\n\nOne very fast 10K RPM SATA Western Digital Raptor 150GB HD.\n Pro: very low access time and generally 30% faster regarding mainstream\nHD.\n Con: Expensive.\n\n2 mainstream 7.2K RPM SATA HD in RAID 0.\n Pro: fast transfer rate.\n Con: Access time is lowered as both HD has to sync for read / write ( true\n? ).\n\n2 mainstream 7.2K RPM SATA HD in RAID 1.\n Pro: can access parallely different files in the same time ( true ? ).\n Con: Slower at writing.\n\nRandom access benchmark:\nhttp://www23.tomshardware.com/storage.html?modelx=33&model1=280&model2=675&chart=32\n\n\nWill be happy to hear recommendations and ideas.\n\nThanks,\nMiki\n\n-- \n--------------------------------------------------\nMichael Ben-Nes - Internet Consultant and Director.\nhttp://www.epoch.co.il - weaving the Net.\nCellular: 054-4848113\n--------------------------------------------------\n\nHelloI plan to buy a new development server and I wonder what will be the best HD combination.I'm aware that \"best combination\" also relay on DB structure and usage.so lets assume, heavy duty large DB with mostly reads and heavy write actions from time to time ( updates / huge transactions ).\nHere are the options:One very fast 10K RPM SATA Western Digital Raptor 150GB HD.  Pro: very low access time and generally 30% faster regarding mainstream HD.  Con: Expensive.2 mainstream 7.2K\n\n RPM SATA HD in RAID 0.  Pro: fast transfer rate.  Con: Access time is lowered as both HD has to sync for read / write ( true ? ).2 mainstream 7.2K RPM SATA HD in RAID 1.  Pro: can access parallely different files in the same time ( true ? ).\n\n  Con: Slower at writing.Random access benchmark:\nhttp://www23.tomshardware.com/storage.html?modelx=33&model1=280&model2=675&chart=32\nWill be happy to hear recommendations and ideas.Thanks,Miki-- --------------------------------------------------Michael Ben-Nes - Internet Consultant and  Director.\n\nhttp://www.epoch.co.il - weaving the Net.Cellular: 054-4848113--------------------------------------------------", "msg_date": "Thu, 22 Mar 2007 11:08:02 +0200", "msg_from": "\"Michael Ben-Nes\" <[email protected]>", "msg_from_op": true, "msg_subject": "Lower Random Access Time vs RAID 0 / 1" }, { "msg_contents": "1= a better HD comparison resource can be found at www.storagereview.com\nhttp://www.storagereview.com/comparison.html\n\nYou will find that storagereview has better information on any and \nall things HD than Tom's does.\n\n\n2= DB servers work best with as many spindles as possible. None of \nyour example configurations is adequate; and any configuration with \nonly 1 HD is a data loss / data corruption disaster waiting to happen.\nIn general, the more spindles the better with any DB. The =minimum= \nshould be at least 4 HD's =dedicated= to the DB. OS HD's are \nindependent and in addition to the 4+ DB HDs.\n\n\n3= \"heavy duty large DB with mostly reads and heavy write actions \nfrom time to time ( updates / huge transactions ).\" Does not have \nanywhere near the precision needed to adequately describe your needs \nin engineering terms.\nHow big a DB?\nWhat % of the IO will be reads? 
% writes?\nHow big is a \"huge transaction\"?\nExactly what is the primary use case of this server?\netc. We need =numbers= if we are going to think about \"speeds and \nfeeds\" and specify HW.\n\n\n4= =seriously= consider HW RAID controllers like 3ware (AKA AMCC) or \nAreca. with BB IO caches.\n\n\nYou've got a lot more work ahead of you.\nRon\n\n\nAt 05:08 AM 3/22/2007, Michael Ben-Nes wrote:\n>Hello\n>\n>I plan to buy a new development server and I wonder what will be the \n>best HD combination.\n>\n>I'm aware that \"best combination\" also relay on DB structure and usage.\n>so lets assume, heavy duty large DB with mostly reads and heavy \n>write actions from time to time ( updates / huge transactions ).\n>\n>Here are the options:\n>\n>One very fast 10K RPM SATA Western Digital Raptor 150GB HD.\n> Pro: very low access time and generally 30% faster regarding mainstream HD.\n> Con: Expensive.\n>\n>2 mainstream 7.2K RPM SATA HD in RAID 0.\n> Pro: fast transfer rate.\n> Con: Access time is lowered as both HD has to sync for read / \n> write ( true ? ).\n>\n>2 mainstream 7.2K RPM SATA HD in RAID 1.\n> Pro: can access parallely different files in the same time ( true ? ).\n> Con: Slower at writing.\n>\n>Random access benchmark:\n><http://www23.tomshardware.com/storage.html?modelx=33&model1=280&model2=675&chart=32>http://www23.tomshardware.com/storage.html?modelx=33&model1=280&model2=675&chart=32 \n>\n>\n>Will be happy to hear recommendations and ideas.\n>\n>Thanks,\n>Miki\n>\n>--\n>--------------------------------------------------\n>Michael Ben-Nes - Internet Consultant and Director.\n><http://www.epoch.co.il>http://www.epoch.co.il - weaving the Net.\n>Cellular: 054-4848113\n>--------------------------------------------------\n\n", "msg_date": "Thu, 22 Mar 2007 10:37:04 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Lower Random Access Time vs RAID 0 / 1" } ]
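Most of the numbers Ron asks for can be pulled straight from the server before talking to a hardware vendor (the size functions assume 8.1 or later, and the statistics views only cover activity since the stats were last reset):

-- overall size and the biggest tables
SELECT pg_size_pretty(pg_database_size(current_database()));

SELECT relname, pg_size_pretty(pg_relation_size(oid))
  FROM pg_class
 WHERE relkind = 'r'
 ORDER BY pg_relation_size(oid) DESC
 LIMIT 10;

-- rough read/write mix per table
SELECT relname,
       seq_tup_read + idx_tup_fetch AS rows_read,
       n_tup_ins + n_tup_upd + n_tup_del AS rows_written
  FROM pg_stat_user_tables
 ORDER BY rows_written DESC
 LIMIT 10;

Concrete figures like these make it much easier to size spindle counts and controller cache than "heavy duty large DB" does.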
[ { "msg_contents": "Hi,\n\nI just try to find out why a simple count(*) might last that long.\nAt first I tried explain, which rather quickly knows how many rows\nto check, but the final count is two orders of magnitude slower.\n\nMy MS_SQL server using colleague can't believe that.\n\n$ psql InfluenzaWeb -c 'explain SELECT count(*) from agiraw ;'\n QUERY PLAN \n-----------------------------------------------------------------------\n Aggregate (cost=196969.77..196969.77 rows=1 width=0)\n -> Seq Scan on agiraw (cost=0.00..185197.41 rows=4708941 width=0)\n(2 rows)\n\nreal 0m0.066s\nuser 0m0.024s\nsys 0m0.008s\n\n$ psql InfluenzaWeb -c 'SELECT count(*) from agiraw ;'\n count \n---------\n 4708941\n(1 row)\n\nreal 0m4.474s\nuser 0m0.036s\nsys 0m0.004s\n\n\nAny explanation?\n\nKind regards\n\n Andreas.\n\n-- \nhttp://fam-tille.de\n", "msg_date": "Thu, 22 Mar 2007 11:53:00 +0100 (CET)", "msg_from": "Andreas Tille <[email protected]>", "msg_from_op": true, "msg_subject": "Performance of count(*)" }, { "msg_contents": "As you can see, PostgreSQL needs to do a sequencial scan to count because its \nMVCC nature and indices don't have transaction information. It's a known \ndrawback inherent to the way PostgreSQL works and which gives very good \nresults in other areas. It's been talked about adding some kind of \napproximated count which wouldn't need a full table scan but I don't think \nthere's anything there right now.\n\nA Dijous 22 Març 2007 11:53, Andreas Tille va escriure:\n> Hi,\n>\n> I just try to find out why a simple count(*) might last that long.\n> At first I tried explain, which rather quickly knows how many rows\n> to check, but the final count is two orders of magnitude slower.\n>\n> My MS_SQL server using colleague can't believe that.\n>\n> $ psql InfluenzaWeb -c 'explain SELECT count(*) from agiraw ;'\n> QUERY PLAN\n> -----------------------------------------------------------------------\n> Aggregate (cost=196969.77..196969.77 rows=1 width=0)\n> -> Seq Scan on agiraw (cost=0.00..185197.41 rows=4708941 width=0)\n> (2 rows)\n>\n> real 0m0.066s\n> user 0m0.024s\n> sys 0m0.008s\n>\n> $ psql InfluenzaWeb -c 'SELECT count(*) from agiraw ;'\n> count\n> ---------\n> 4708941\n> (1 row)\n>\n> real 0m4.474s\n> user 0m0.036s\n> sys 0m0.004s\n>\n>\n> Any explanation?\n>\n> Kind regards\n>\n> Andreas.\n\n-- \nAlbert Cervera Areny\nDept. Informàtica Sedifa, S.L.\n\nAv. Can Bordoll, 149\n08202 - Sabadell (Barcelona)\nTel. 93 715 51 11\nFax. 93 715 51 12\n\n====================================================================\n........................ AVISO LEGAL ............................\nLa presente comunicación y sus anexos tiene como destinatario la\npersona a la que va dirigida, por lo que si usted lo recibe\npor error debe notificarlo al remitente y eliminarlo de su\nsistema, no pudiendo utilizarlo, total o parcialmente, para\nningún fin. Su contenido puede tener información confidencial o\nprotegida legalmente y únicamente expresa la opinión del\nremitente. El uso del correo electrónico vía Internet no\npermite asegurar ni la confidencialidad de los mensajes\nni su correcta recepción. En el caso de que el\ndestinatario no consintiera la utilización del correo electrónico,\ndeberá ponerlo en nuestro conocimiento inmediatamente.\n====================================================================\n........................... DISCLAIMER .............................\nThis message and its attachments are intended exclusively for the\nnamed addressee. 
If you receive this message in error, please\nimmediately delete it from your system and notify the sender. You\nmay not use this message or any part of it for any purpose.\nThe message may contain information that is confidential or\nprotected by law, and any opinions expressed are those of the\nindividual sender. Internet e-mail guarantees neither the\nconfidentiality nor the proper receipt of the message sent.\nIf the addressee of this message does not consent to the use\nof internet e-mail, please inform us inmmediately.\n====================================================================\n\n\n \n", "msg_date": "Thu, 22 Mar 2007 12:08:19 +0100", "msg_from": "Albert Cervera Areny <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "* Andreas Tille <[email protected]> [070322 12:07]:\n> Hi,\n> \n> I just try to find out why a simple count(*) might last that long.\n> At first I tried explain, which rather quickly knows how many rows\n> to check, but the final count is two orders of magnitude slower.\n\nWhich version of PG?\n\nThe basic problem is, that explain knows quickly, because it has it's\nstatistics.\n\nThe select proper, OTOH, has to go through the whole table to make\nsure which rows are valid for your transaction.\n\nThat's the reason why PG (check the newest releases, I seem to\nremember that there has been some aggregate optimizations there), does\na SeqScan for select count(*) from table. btw, depending upon your\ndata, doing a select count(*) from table where user=X will use an\nIndex, but will still need to fetch the rows proper to validate them.\n\nAndreas\n\n> \n> My MS_SQL server using colleague can't believe that.\n> \n> $ psql InfluenzaWeb -c 'explain SELECT count(*) from agiraw ;'\n> QUERY PLAN -----------------------------------------------------------------------\n> Aggregate (cost=196969.77..196969.77 rows=1 width=0)\n> -> Seq Scan on agiraw (cost=0.00..185197.41 rows=4708941 width=0)\n> (2 rows)\n> \n> real 0m0.066s\n> user 0m0.024s\n> sys 0m0.008s\n> \n> $ psql InfluenzaWeb -c 'SELECT count(*) from agiraw ;'\n> count ---------\n> 4708941\n> (1 row)\n> \n> real 0m4.474s\n> user 0m0.036s\n> sys 0m0.004s\n> \n> \n> Any explanation?\n> \n> Kind regards\n> \n> Andreas.\n> \n> -- \n> http://fam-tille.de\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n> \n> http://www.postgresql.org/about/donate\n", "msg_date": "Thu, 22 Mar 2007 12:10:47 +0100", "msg_from": "Andreas Kostyrka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "\nexplain is just \"quessing\" how many rows are in table. 
sometimes quess is \nright, sometimes just an estimate.\n\nsailabdb=# explain SELECT count(*) from sl_tuote;\n QUERY PLAN \n----------------------------------------------------------------------\n Aggregate (cost=10187.10..10187.11 rows=1 width=0)\n -> Seq Scan on sl_tuote (cost=0.00..9806.08 rows=152408 width=0)\n(2 rows)\n\nsailabdb=# SELECT count(*) from sl_tuote;\n count \n-------\n 62073\n(1 row)\n\n\nso in that case explain estimates that sl_tuote table have 152408 rows, but \nthere are only 62073 rows.\n\nafter analyze estimates are better:\n\nsailabdb=# vacuum analyze sl_tuote;\nVACUUM\nsailabdb=# explain SELECT count(*) from sl_tuote;\n QUERY PLAN \n---------------------------------------------------------------------\n Aggregate (cost=9057.91..9057.92 rows=1 width=0)\n -> Seq Scan on sl_tuote (cost=0.00..8902.73 rows=62073 width=0)\n(2 rows)\n\nyou can't never trust that estimate, you must always count it!\n\nIsmo\n\nOn Thu, 22 Mar 2007, Andreas Tille wrote:\n\n> Hi,\n> \n> I just try to find out why a simple count(*) might last that long.\n> At first I tried explain, which rather quickly knows how many rows\n> to check, but the final count is two orders of magnitude slower.\n> \n> My MS_SQL server using colleague can't believe that.\n> \n> $ psql InfluenzaWeb -c 'explain SELECT count(*) from agiraw ;'\n> QUERY PLAN\n> -----------------------------------------------------------------------\n> Aggregate (cost=196969.77..196969.77 rows=1 width=0)\n> -> Seq Scan on agiraw (cost=0.00..185197.41 rows=4708941 width=0)\n> (2 rows)\n> \n> real 0m0.066s\n> user 0m0.024s\n> sys 0m0.008s\n> \n> $ psql InfluenzaWeb -c 'SELECT count(*) from agiraw ;'\n> count ---------\n> 4708941\n> (1 row)\n> \n> real 0m4.474s\n> user 0m0.036s\n> sys 0m0.004s\n> \n> \n> Any explanation?\n> \n> Kind regards\n> \n> Andreas.\n> \n> -- \n> http://fam-tille.de\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n> \n> http://www.postgresql.org/about/donate\n> \n", "msg_date": "Thu, 22 Mar 2007 13:18:16 +0200 (EET)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "\napproximated count?????\n\nwhy? who would need it? where you can use it?\n\ncalculating costs and desiding how to execute query needs \napproximated count, but it's totally worthless information for any user \nIMO.\n\nIsmo\n\nOn Thu, 22 Mar 2007, Albert Cervera Areny wrote:\n\n> As you can see, PostgreSQL needs to do a sequencial scan to count because its \n> MVCC nature and indices don't have transaction information. It's a known \n> drawback inherent to the way PostgreSQL works and which gives very good \n> results in other areas. 
It's been talked about adding some kind of \n> approximated count which wouldn't need a full table scan but I don't think \n> there's anything there right now.\n> \n> A Dijous 22 Març 2007 11:53, Andreas Tille va escriure:\n> > Hi,\n> >\n> > I just try to find out why a simple count(*) might last that long.\n> > At first I tried explain, which rather quickly knows how many rows\n> > to check, but the final count is two orders of magnitude slower.\n> >\n> > My MS_SQL server using colleague can't believe that.\n> >\n> > $ psql InfluenzaWeb -c 'explain SELECT count(*) from agiraw ;'\n> > QUERY PLAN\n> > -----------------------------------------------------------------------\n> > Aggregate (cost=196969.77..196969.77 rows=1 width=0)\n> > -> Seq Scan on agiraw (cost=0.00..185197.41 rows=4708941 width=0)\n> > (2 rows)\n> >\n> > real 0m0.066s\n> > user 0m0.024s\n> > sys 0m0.008s\n> >\n> > $ psql InfluenzaWeb -c 'SELECT count(*) from agiraw ;'\n> > count\n> > ---------\n> > 4708941\n> > (1 row)\n> >\n> > real 0m4.474s\n> > user 0m0.036s\n> > sys 0m0.004s\n> >\n> >\n> > Any explanation?\n> >\n> > Kind regards\n> >\n> > Andreas.\n> \n> -- \n> Albert Cervera Areny\n> Dept. Informàtica Sedifa, S.L.\n> \n> Av. Can Bordoll, 149\n> 08202 - Sabadell (Barcelona)\n> Tel. 93 715 51 11\n> Fax. 93 715 51 12\n> \n> ====================================================================\n> ........................ AVISO LEGAL ............................\n> La presente comunicación y sus anexos tiene como destinatario la\n> persona a la que va dirigida, por lo que si usted lo recibe\n> por error debe notificarlo al remitente y eliminarlo de su\n> sistema, no pudiendo utilizarlo, total o parcialmente, para\n> ningún fin. Su contenido puede tener información confidencial o\n> protegida legalmente y únicamente expresa la opinión del\n> remitente. El uso del correo electrónico vía Internet no\n> permite asegurar ni la confidencialidad de los mensajes\n> ni su correcta recepción. En el caso de que el\n> destinatario no consintiera la utilización del correo electrónico,\n> deberá ponerlo en nuestro conocimiento inmediatamente.\n> ====================================================================\n> ........................... DISCLAIMER .............................\n> This message and its attachments are intended exclusively for the\n> named addressee. If you receive this message in error, please\n> immediately delete it from your system and notify the sender. You\n> may not use this message or any part of it for any purpose.\n> The message may contain information that is confidential or\n> protected by law, and any opinions expressed are those of the\n> individual sender. Internet e-mail guarantees neither the\n> confidentiality nor the proper receipt of the message sent.\n> If the addressee of this message does not consent to the use\n> of internet e-mail, please inform us inmmediately.\n> ====================================================================\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n> \n> http://www.postgresql.org/about/donate\n> ", "msg_date": "Thu, 22 Mar 2007 13:30:35 +0200 (EET)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "On Thu, 22 Mar 2007, Andreas Kostyrka wrote:\n\n> Which version of PG?\n\nAhh, sorry, forgot that. The issue occurs in Debian (Etch) packaged\nversion 7.4.16. 
I plan to switch soon to 8.1.8.\n\n> That's the reason why PG (check the newest releases, I seem to\n> remember that there has been some aggregate optimizations there),\n\nI'll verify this once I moved to the new version.\n\n> does\n> a SeqScan for select count(*) from table. btw, depending upon your\n> data, doing a select count(*) from table where user=X will use an\n> Index, but will still need to fetch the rows proper to validate them.\n\nI have an index on three (out of 7 columns) of this table. Is there\nany chance to optimize indexing regarding this.\n\nWell, to be honest I'm not really interested in the performance of\ncount(*). I was just discussing general performance issues on the\nphone line and when my colleague asked me about the size of the\ndatabase he just wonderd why this takes so long for a job his\nMS-SQL server is much faster. So in principle I was just asking\na first question that is easy to ask. Perhaps I come up with\nmore difficult optimisation questions.\n\nKind regards\n\n Andreas.\n\n-- \nhttp://fam-tille.de\n", "msg_date": "Thu, 22 Mar 2007 12:48:08 +0100 (CET)", "msg_from": "Andreas Tille <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "* Andreas Tille <[email protected]> [070322 13:24]:\n> On Thu, 22 Mar 2007, Andreas Kostyrka wrote:\n> \n> >Which version of PG?\n> \n> Ahh, sorry, forgot that. The issue occurs in Debian (Etch) packaged\n> version 7.4.16. I plan to switch soon to 8.1.8.\nI'd recommend 8.2 if at all possible :)\n> \n> >That's the reason why PG (check the newest releases, I seem to\n> >remember that there has been some aggregate optimizations there),\n> \n> I'll verify this once I moved to the new version.\n8.1 won't help you I'd guess. ;)\n\n> \n> >does\n> >a SeqScan for select count(*) from table. btw, depending upon your\n> >data, doing a select count(*) from table where user=X will use an\n> >Index, but will still need to fetch the rows proper to validate them.\n> \n> I have an index on three (out of 7 columns) of this table. Is there\n> any chance to optimize indexing regarding this.\nWell, that depends upon you query pattern. It's an art and a science\nat the same time ;)\n> \n> Well, to be honest I'm not really interested in the performance of\n> count(*). I was just discussing general performance issues on the\n> phone line and when my colleague asked me about the size of the\n> database he just wonderd why this takes so long for a job his\n> MS-SQL server is much faster. So in principle I was just asking\n> a first question that is easy to ask. Perhaps I come up with\n> more difficult optimisation questions.\n\nSimple. MSSQL is optimized for this case, and uses \"older\"\ndatastructures. PG uses a MVCC storage, which is not optimized for\nthis usecase. It's quite fast for different kinds of queries.\n\nThe basic trouble here is that mvcc makes it a little harder to decide\nwhat is valid for your transaction, plus the indexes seems to be\ndesigned for lookup, not for data fetching. (Basically, PG can use\nindexes only to locate potential data, but cannot return data directly\nout of an index)\n\nAndreas\n", "msg_date": "Thu, 22 Mar 2007 13:29:46 +0100", "msg_from": "Andreas Kostyrka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "In response to [email protected]:\n> \n> approximated count?????\n> \n> why? who would need it? 
where you can use it?\n> \n> calculating costs and desiding how to execute query needs \n> approximated count, but it's totally worthless information for any user \n> IMO.\n\nI don't think so.\n\nWe have some AJAX stuff where users enter search criteria on a web form,\nand the # of results updates in \"real time\" as they change their criteria.\n\nRight now, this works fine with small tables using count(*) -- it's fast\nenough not to be an issue, but we're aware that we can't use it on large\ntables.\n\nAn estimate_count(*) or similar that would allow us to put an estimate of\nhow many results will be returned (not guaranteed accurate) would be very\nnice to have in these cases.\n\nWe're dealing with complex sets of criteria. It's very useful for the users\nto know in \"real time\" how much their search criteria is effecting the\nresult pool. Once they feel they've limited as much as they can without\nreducing the pool too much, they can hit submit and get the actual result.\n\nAs I said, we do this with small data sets, but it's not terribly useful\nthere. Where it will be useful is searches of large data sets, where\nconstantly submitting and then retrying is overly time-consuming.\n\nOf course, this is count(*)ing the results of a complex query, possibly\nwith a bunch of joins and many limitations in the WHERE clause, so I'm\nnot sure what could be done overall to improve the response time.\n\n> On Thu, 22 Mar 2007, Albert Cervera Areny wrote:\n> \n> > As you can see, PostgreSQL needs to do a sequencial scan to count because its \n> > MVCC nature and indices don't have transaction information. It's a known \n> > drawback inherent to the way PostgreSQL works and which gives very good \n> > results in other areas. It's been talked about adding some kind of \n> > approximated count which wouldn't need a full table scan but I don't think \n> > there's anything there right now.\n> > \n> > A Dijous 22 Març 2007 11:53, Andreas Tille va escriure:\n> > > Hi,\n> > >\n> > > I just try to find out why a simple count(*) might last that long.\n> > > At first I tried explain, which rather quickly knows how many rows\n> > > to check, but the final count is two orders of magnitude slower.\n> > >\n> > > My MS_SQL server using colleague can't believe that.\n> > >\n> > > $ psql InfluenzaWeb -c 'explain SELECT count(*) from agiraw ;'\n> > > QUERY PLAN\n> > > -----------------------------------------------------------------------\n> > > Aggregate (cost=196969.77..196969.77 rows=1 width=0)\n> > > -> Seq Scan on agiraw (cost=0.00..185197.41 rows=4708941 width=0)\n> > > (2 rows)\n> > >\n> > > real 0m0.066s\n> > > user 0m0.024s\n> > > sys 0m0.008s\n> > >\n> > > $ psql InfluenzaWeb -c 'SELECT count(*) from agiraw ;'\n> > > count\n> > > ---------\n> > > 4708941\n> > > (1 row)\n> > >\n> > > real 0m4.474s\n> > > user 0m0.036s\n> > > sys 0m0.004s\n> > >\n> > >\n> > > Any explanation?\n> > >\n> > > Kind regards\n> > >\n> > > Andreas.\n> > \n> > -- \n> > Albert Cervera Areny\n> > Dept. Informàtica Sedifa, S.L.\n> > \n> > Av. Can Bordoll, 149\n> > 08202 - Sabadell (Barcelona)\n> > Tel. 93 715 51 11\n> > Fax. 93 715 51 12\n> > \n> > ====================================================================\n> > ........................ 
AVISO LEGAL / DISCLAIMER ............................\n> > ====================================================================\n> \n\n-- \nBill Moran\nCollaborative Fusion Inc.\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Thu, 22 Mar 2007 08:31:30 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "Am Donnerstag, 22. März 2007 12:30 schrieb [email protected]:\n> approximated count?????\n>\n> why? who would need it? 
where you can use it?\n>\n> calculating costs and desiding how to execute query needs\n> approximated count, but it's totally worthless information for any user\n> IMO.\n\nNo, it is not useless. Try: \nhttp://www.google.com/search?hl=de&q=test&btnG=Google-Suche&meta=\n\nDo you really think google counted each of those individual 895 million \nresults? It doesn't. In fact, the estimate of google can be off by an order \nof magnitude, and still nobody complains...\n\n", "msg_date": "Thu, 22 Mar 2007 14:26:51 +0100", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "On 3/22/07, Andreas Tille <[email protected]> wrote:\n> I just try to find out why a simple count(*) might last that long.\n> At first I tried explain, which rather quickly knows how many rows\n> to check, but the final count is two orders of magnitude slower.\n\nYou can get the approximate count by selecting reltuples from\npg_class. It is valid as of last analyze.\n\nAs others suggest select count(*) from table is very special case\nwhich non-mvcc databases can optimize for. There are many reasons why\nthis is the case and why it explains nothing about the relative\nperformance of the two databases. This is probably #1 most\nfrequenctly asked question to -performance...there is a wealth of\ninformation in the archives.\n\nmerlin\n", "msg_date": "Thu, 22 Mar 2007 09:39:18 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "On Thu, Mar 22, 2007 at 01:30:35PM +0200, [email protected] wrote:\n>approximated count?????\n>\n>why? who would need it? where you can use it?\n\nDo a google query. Look at the top of the page, where it says \n\"results N to M of about O\". For user interfaces (which is where a lot \nof this count(*) stuff comes from) you quite likely don't care about the \nexact count, because the user doesn't really care about the exact count. \n\nIIRC, that's basically what you get with the mysql count anyway, since \nthere are corner cases for results in a transaction. Avoiding those \ncases is why the postgres count takes so long; sometimes that's what's \ndesired and sometimes it is not.\n\nMike Stone\n", "msg_date": "Thu, 22 Mar 2007 10:18:10 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "On 3/22/07, Merlin Moncure <[email protected]> wrote:\n> As others suggest select count(*) from table is very special case\n> which non-mvcc databases can optimize for.\n\nWell, other MVCC database still do it faster than we do. However, I\nthink we'll be able to use the dead space map for speeding this up a\nbit wouldn't we?\n\n-- \nJonah H. Harris, Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n33 Wood Ave S, 3rd Floor | [email protected]\nIselin, New Jersey 08830 | http://www.enterprisedb.com/\n", "msg_date": "Thu, 22 Mar 2007 10:33:29 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "Am Donnerstag, 22. März 2007 15:33 schrieb Jonah H. Harris:\n> On 3/22/07, Merlin Moncure <[email protected]> wrote:\n> > As others suggest select count(*) from table is very special case\n> > which non-mvcc databases can optimize for.\n>\n> Well, other MVCC database still do it faster than we do. 
However, I\n> think we'll be able to use the dead space map for speeding this up a\n> bit wouldn't we?\n\nWhich MVCC DB do you mean? Just curious...\n", "msg_date": "Thu, 22 Mar 2007 15:36:32 +0100", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "On Thu, Mar 22, 2007 at 01:29:46PM +0100, Andreas Kostyrka wrote:\n> * Andreas Tille <[email protected]> [070322 13:24]:\n> > Well, to be honest I'm not really interested in the performance of\n> > count(*). I was just discussing general performance issues on the\n> > phone line and when my colleague asked me about the size of the\n> > database he just wonderd why this takes so long for a job his\n> > MS-SQL server is much faster. So in principle I was just asking\n> > a first question that is easy to ask. Perhaps I come up with\n> > more difficult optimisation questions.\n> \n> Simple. MSSQL is optimized for this case, and uses \"older\"\n> datastructures. PG uses a MVCC storage, which is not optimized for\n> this usecase. It's quite fast for different kinds of queries.\n\nAsk about performing concurrent selects, inserts, updates, and\ndeletes in SQL Server and about the implications on ACID of locking\nhints such as NOLOCK. Then consider how MVCC handles concurrency\nwithout blocking or the need for dirty reads.\n\n-- \nMichael Fuhr\n", "msg_date": "Thu, 22 Mar 2007 08:37:13 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "On Thu, Mar 22, 2007 at 09:39:18AM -0400, Merlin Moncure wrote:\n>You can get the approximate count by selecting reltuples from\n>pg_class. It is valid as of last analyze.\n\nOf course, that only works if you're not using any WHERE clause. \nHere's a (somewhat ugly) example of getting an approximate count based \noff the statistics info, which will work for more complicated queries:\nhttp://archives.postgresql.org/pgsql-sql/2005-08/msg00046.php\nThe ugliness is that you have to provide the whole query as a \nparameter to the function, instead of using it as a drop-in replacement \nfor count. I assume that the TODO item is to provide the latter, but for \nnow this method can be useful for UI type stuff where you just want to \nknow whether there's \"a little\" or \"a lot\".\n\nMike Stone\n", "msg_date": "Thu, 22 Mar 2007 10:41:51 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "Andreas,\n\nOn 3/22/07 4:48 AM, \"Andreas Tille\" <[email protected]> wrote:\n\n> Well, to be honest I'm not really interested in the performance of\n> count(*). I was just discussing general performance issues on the\n> phone line and when my colleague asked me about the size of the\n> database he just wonderd why this takes so long for a job his\n> MS-SQL server is much faster. So in principle I was just asking\n> a first question that is easy to ask. Perhaps I come up with\n> more difficult optimisation questions.\n\nThis may be the clue you needed - in Postgres SELECT COUNT(*) is an\napproximate way to measure the speed of your disk setup (up to about\n1,200MB/s). 
Given that you are having performance problems, it may be that\nyour disk layout is either:\n- slow by design\n- malfunctioning\n\nIf this is the case, then any of your queries that require a full table scan\nwill be affected.\n\nYou should check your sequential disk performance using the following:\n\ntime bash -c \"dd if=/dev/zero of=/your_file_system/bigfile bs=8k\ncount=(your_memory_size_in_KB*2/8) && sync\"\ntime dd if=/your_file_system/bigfile of=/dev/null bs=8k\n\nReport those times here and we can help you with it.\n\n- Luke\n\n\n", "msg_date": "Thu, 22 Mar 2007 07:46:26 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "On Thu, Mar 22, 2007 at 10:18:10AM -0400, Michael Stone wrote:\n> IIRC, that's basically what you get with the mysql count anyway, since \n> there are corner cases for results in a transaction. Avoiding those \n> cases is why the postgres count takes so long; sometimes that's what's \n> desired and sometimes it is not.\n\nAdding to this point:\n\nIn any production system, the count presented to the user is usually\nwrong very shortly after it is displayed anyways. Transactions in the\nbackground or from other users are adding or removing items, perhaps\neven before the count reaches the user's display.\n\nThe idea of transaction-safety for counts doesn't apply in this case.\nBoth the transaction and the number are complete before the value is\ndisplayed.\n\nIn my own systems, I rarely use count(*) for anything except user\nvisible results. For the PostgreSQL system I use, I keep a table of\ncounts, and lock the row for update when adding or removing items.\nThis turns out to be best in this system anyways, as I need my new\nrows to be ordered, and locking the 'count' row lets me assign a\nnew sequence number for the row. (Don't want to use SEQUENCE objects,\nas there could as the rows are [key, sequence, data], with thousands\nor more keys)\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n", "msg_date": "Thu, 22 Mar 2007 10:52:27 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "Michael Stone wrote:\n> On Thu, Mar 22, 2007 at 01:30:35PM +0200, [email protected] wrote:\n>> approximated count?????\n>>\n>> why? who would need it? where you can use it?\n> \n> Do a google query. Look at the top of the page, where it says \"results N \n> to M of about O\". For user interfaces (which is where a lot of this \n> count(*) stuff comes from) you quite likely don't care about the exact \n> count...\n\nRight on, Michael.\n\nOne of our biggest single problems is this very thing. 
It's not a Postgres problem specifically, but more embedded in the idea of a relational database: There are no \"job status\" or \"rough estimate of results\" or \"give me part of the answer\" features that are critical to many real applications.\n\nIn our case (for a variety of reasons, but this one is critical), we actually can't use Postgres indexing at all -- we wrote an entirely separate indexing system for our data, one that has the following properties:\n\n 1. It can give out \"pages\" of information (i.e. \"rows 50-60\") without\n rescanning the skipped pages the way \"limit/offset\" would.\n 2. It can give accurate estimates of the total rows that will be returned.\n 3. It can accurately estimate the time it will take.\n\nFor our primary business-critical data, Postgres is merely a storage system, not a search system, because we have to do the \"heavy lifting\" in our own code. (To be fair, there is no relational database that can handle our data.)\n\nMany or most web-based search engines face these exact problems.\n\nCraig\n", "msg_date": "Thu, 22 Mar 2007 07:16:51 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "* Mario Weilguni <[email protected]> [070322 15:59]:\n> Am Donnerstag, 22. M�rz 2007 15:33 schrieb Jonah H. Harris:\n> > On 3/22/07, Merlin Moncure <[email protected]> wrote:\n> > > As others suggest select count(*) from table is very special case\n> > > which non-mvcc databases can optimize for.\n> >\n> > Well, other MVCC database still do it faster than we do. However, I\n> > think we'll be able to use the dead space map for speeding this up a\n> > bit wouldn't we?\n> \n> Which MVCC DB do you mean? Just curious...\nWell, mysql claims InnoDB to be mvcc ;)\n\nAndreas\n", "msg_date": "Thu, 22 Mar 2007 16:17:17 +0100", "msg_from": "Andreas Kostyrka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "Craig A. James schrieb:\n...\n> In our case (for a variety of reasons, but this one is critical), we \n> actually can't use Postgres indexing at all -- we wrote an entirely \n> separate indexing system for our data, one that has the following \n> properties:\n> \n> 1. It can give out \"pages\" of information (i.e. \"rows 50-60\") without\n> rescanning the skipped pages the way \"limit/offset\" would.\n> 2. It can give accurate estimates of the total rows that will be returned.\n> 3. It can accurately estimate the time it will take.\n> \n\nThats certainly not entirely correct. There is no need to store or\nmaintain this information along with postgres when you can store\nand maintain it directly in postgres as well. When you have some\noutside application I think I can savely assume you are doing\nless updates compared to many reads to have it actually pay out.\n\nSo why not store this information in separate \"index\" and \"statistic\"\ntables? You would have just to join with your real data for retrival.\n\nOn top of that, postgres has a very flexible and extensible index\nsystem. This would mean you save on database roundtrips and\ndouble information storage (and the sync problems you certainly\nget from it)\n\nRegards\nTino\n\n", "msg_date": "Thu, 22 Mar 2007 16:31:39 +0100", "msg_from": "Tino Wildenhain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "Craig A. James wrote:\n\n>\n> One of our biggest single problems is this very thing. 
It's not a \n> Postgres problem specifically, but more embedded in the idea of a \n> relational database: There are no \"job status\" or \"rough estimate of \n> results\" or \"give me part of the answer\" features that are critical to \n> many real applications.\n>\nFor the \"give me part of the answer\", I'm wondering if cursors wouldn't \nwork (and if not, why not)?\n\nBrian\n\n", "msg_date": "Thu, 22 Mar 2007 12:10:51 -0400", "msg_from": "Brian Hurt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "Am Donnerstag, 22. März 2007 16:17 schrieb Andreas Kostyrka:\n> * Mario Weilguni <[email protected]> [070322 15:59]:\n> > Am Donnerstag, 22. März 2007 15:33 schrieb Jonah H. Harris:\n> > > On 3/22/07, Merlin Moncure <[email protected]> wrote:\n> > > > As others suggest select count(*) from table is very special case\n> > > > which non-mvcc databases can optimize for.\n> > >\n> > > Well, other MVCC database still do it faster than we do. However, I\n> > > think we'll be able to use the dead space map for speeding this up a\n> > > bit wouldn't we?\n> >\n> > Which MVCC DB do you mean? Just curious...\n>\n> Well, mysql claims InnoDB to be mvcc ;)\n\nOk, but last time I tried count(*) with InnoDB tables did take roughly(*) the \nsame time last time I tried - because InnoDB has the same problem as postgres \nand has to do a seqscan too (I think it's mentioned somewhere in their docs).\n\n(*) in fact, postgres was faster, but the values were comparable, 40 seconds \nvs. 48 seconds \n\nMaybe the InnoDB have made some progress here, I tested it with MySQL 5.0.18.\n\n", "msg_date": "Thu, 22 Mar 2007 17:20:02 +0100", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "\n>>count(*). I was just discussing general performance issues on the\n>>phone line and when my colleague asked me about the size of the\n>>database he just wonderd why this takes so long for a job his\n>>MS-SQL server is much faster. [...].\n>> \n>>\n>\n>Simple. MSSQL is optimized for this case, and uses \"older\"\n>datastructures. PG uses a MVCC storage, \n>\n\nWhich version of MSSQL? Wikipedia says that SQL Server 2005 uses the\nMVCC model.\n\nCarlos\n--\n\n", "msg_date": "Thu, 22 Mar 2007 11:58:45 -0500", "msg_from": "Carlos Moreno <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "Brian Hurt wrote:\n>> One of our biggest single problems is this very thing. It's not a \n>> Postgres problem specifically, but more embedded in the idea of a \n>> relational database: There are no \"job status\" or \"rough estimate of \n>> results\" or \"give me part of the answer\" features that are critical to \n>> many real applications.\n>>\n> For the \"give me part of the answer\", I'm wondering if cursors wouldn't \n> work (and if not, why not)?\n\nThere is no mechanism in Postgres (or any RDB that I know of) to say, \"Give me rows 1000 through 1010\", that doesn't also execute the query on rows 1-1000. In other words, the RDBMS does the work for 1010 rows, when only 10 are needed -- 100 times more work than is necessary.\n\nLimit/Offset will return the correct 10 rows, but at the cost of doing the previous 1000 rows and discarding them.\n\nWeb applications are stateless. 
To use a cursor, you'd have to keep it around for hours or days, and create complex \"server affinity\" code to direct a user back to the same server of your server farm (where that cursor is being held), on the chance that the user will come back and ask for rows 1000 through 1010, then a cursor isn't up to the task.\n\nCraig\n", "msg_date": "Thu, 22 Mar 2007 09:02:00 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "Tino Wildenhain wrote:\n> Craig A. James schrieb:\n> ...\n>> In our case (for a variety of reasons, but this one is critical), we \n>> actually can't use Postgres indexing at all -- we wrote an entirely \n>> separate indexing system for our data...\n>\n> ...There is no need to store or\n> maintain this information along with postgres when you can store\n> and maintain it directly in postgres as well.\n\nWhether we store our data inside or outside Postgres misses the point (in fact, most of our data is stored IN Postgres). It's the code that actually performs the index operation that has to be external to Postgres.\n\n> On top of that, postgres has a very flexible and extensible index\n> system.\n\nYou guys can correct me if I'm wrong, but the key feature that's missing from Postgres's flexible indexing is the ability to maintain state across queries. Something like this:\n\n select a, b, my_index_state() from foo where ...\n offset 100 limit 10 using my_index(prev_my_index_state);\n\nThe my_index_state() function would issue something like a \"cookie\", an opaque text or binary object that would record information about how it got from row 1 through row 99. When you issue the query above, it could start looking for row 100 WITHOUT reexamining rows 1-99.\n\nThis could be tricky in a OLTP environment, where the \"cookie\" could be invalidated by changes to the database. But in warehouse read-mostly or read-only environments, it could yield vastly improved performance for database web applications.\n\nIf I'm not mistaken, Postgres (nor Oracle, MySQL or other RDBMS) can't do this. I would love to be corrected.\n\nThe problem is that relational databases were invented before the web and its stateless applications. In the \"good old days\", you could connect to a database and work for hours, and in that environment cursors and such work well -- the RDBMS maintains the internal state of the indexing system. But in a web environment, state information is very difficult to maintain. There are all sorts of systems that try (Enterprise Java Beans, for example), but they're very complex.\n\nCraig\n\n", "msg_date": "Thu, 22 Mar 2007 09:21:29 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "Craig A. James schrieb:\n> Tino Wildenhain wrote:\n>> Craig A. James schrieb:\n>> ...\n>>> In our case (for a variety of reasons, but this one is critical), we \n>>> actually can't use Postgres indexing at all -- we wrote an entirely \n>>> separate indexing system for our data...\n>>\n>> ...There is no need to store or\n>> maintain this information along with postgres when you can store\n>> and maintain it directly in postgres as well.\n> \n> Whether we store our data inside or outside Postgres misses the point \n> (in fact, most of our data is stored IN Postgres). 
It's the code that \n> actually performs the index operation that has to be external to Postgres.\n> \n>> On top of that, postgres has a very flexible and extensible index\n>> system.\n> \n> You guys can correct me if I'm wrong, but the key feature that's missing \n> from Postgres's flexible indexing is the ability to maintain state \n> across queries. Something like this:\n> \n> select a, b, my_index_state() from foo where ...\n> offset 100 limit 10 using my_index(prev_my_index_state);\n> \n\nYes, you are wrong :-) The technique is called \"CURSOR\"\nif you maintain persistent connection per session\n(e.g. stand allone application or clever pooling webapplication)\n\nIf its a naive web application you just store your session\nin tables where you can easily maintain the scroll state\nas well.\n\nRegards\nTino\n", "msg_date": "Thu, 22 Mar 2007 18:27:32 +0100", "msg_from": "Tino Wildenhain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "On Thu, Mar 22, 2007 at 06:27:32PM +0100, Tino Wildenhain wrote:\n>Craig A. James schrieb:\n>>You guys can correct me if I'm wrong, but the key feature that's missing \n>>from Postgres's flexible indexing is the ability to maintain state \n>>across queries. Something like this:\n>>\n>> select a, b, my_index_state() from foo where ...\n>> offset 100 limit 10 using my_index(prev_my_index_state);\n>>\n>\n>Yes, you are wrong :-) The technique is called \"CURSOR\"\n>if you maintain persistent connection per session\n>(e.g. stand allone application or clever pooling webapplication)\n\nDid you read the email before correcting it? From the part you trimmed \nout:\n\n>The problem is that relational databases were invented before the web \n>and its stateless applications. In the \"good old days\", you could \n>connect to a database and work for hours, and in that environment \n>cursors and such work well -- the RDBMS maintains the internal state of \n>the indexing system. But in a web environment, state information is \n>very difficult to maintain. There are all sorts of systems that try \n>(Enterprise Java Beans, for example), but they're very complex.\n\nIt sounds like they wrote their own middleware to handle the problem, \nwhich is basically what you suggested (a \"clever pooling web \napplication\") after saying \"wrong\".\n\nMike Stone\n", "msg_date": "Thu, 22 Mar 2007 13:39:32 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "Tino Wildenhain wrote:\n>> You guys can correct me if I'm wrong, but the key feature that's \n>> missing from Postgres's flexible indexing is the ability to maintain \n>> state across queries. Something like this:\n>>\n>> select a, b, my_index_state() from foo where ...\n>> offset 100 limit 10 using my_index(prev_my_index_state);\n>>\n> \n> Yes, you are wrong :-) The technique is called \"CURSOR\"\n> if you maintain persistent connection per session\n> (e.g. 
stand allone application or clever pooling webapplication)\n\nThat's my whole point: If relational databases had a simple mechanism for storing their internal state in an external application, the need for cursors, connection pools, and all those other tricks would be eliminated.\n\nAs I said earlier, relational technology was invented in an earlier era, and hasn't caught up with the reality of modern web apps.\n\n> If its a naive web application you just store your session\n> in tables where you can easily maintain the scroll state\n> as well.\n\nOne thing I've learned in 25 years of software development is that people who use my software have problems I never imagined. I've been the one who was naive when I said similar things about my customers, and was later embarrassed to learn that their problems were more complex than I ever imagined.\n\nCraig\n", "msg_date": "Thu, 22 Mar 2007 09:43:08 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "\nOn Mar 22, 2007, at 10:21 AM, Craig A. James wrote:\n\n> Tino Wildenhain wrote:\n>> Craig A. James schrieb:\n>> ...\n>>> In our case (for a variety of reasons, but this one is critical), \n>>> we actually can't use Postgres indexing at all -- we wrote an \n>>> entirely separate indexing system for our data...\n>>\n>> ...There is no need to store or\n>> maintain this information along with postgres when you can store\n>> and maintain it directly in postgres as well.\n>\n> Whether we store our data inside or outside Postgres misses the \n> point (in fact, most of our data is stored IN Postgres). It's the \n> code that actually performs the index operation that has to be \n> external to Postgres.\n>\n>> On top of that, postgres has a very flexible and extensible index\n>> system.\n>\n> You guys can correct me if I'm wrong, but the key feature that's \n> missing from Postgres's flexible indexing is the ability to \n> maintain state across queries. Something like this:\n>\n> select a, b, my_index_state() from foo where ...\n> offset 100 limit 10 using my_index(prev_my_index_state);\n>\n> The my_index_state() function would issue something like a \n> \"cookie\", an opaque text or binary object that would record \n> information about how it got from row 1 through row 99. When you \n> issue the query above, it could start looking for row 100 WITHOUT \n> reexamining rows 1-99.\n>\n> This could be tricky in a OLTP environment, where the \"cookie\" \n> could be invalidated by changes to the database. But in warehouse \n> read-mostly or read-only environments, it could yield vastly \n> improved performance for database web applications.\n>\n> If I'm not mistaken, Postgres (nor Oracle, MySQL or other RDBMS) \n> can't do this. I would love to be corrected.\n\nAs long as you're ordering by some row in the table then you can do \nthat in\nstraight SQL.\n\nselect a, b, ts from foo where (stuff) and foo > X order by foo limit 10\n\nThen, record the last value of foo you read, and plug it in as X the \nnext\ntime around.\n\nThis has the advantage over a simple offset approach of actually\ndisplaying all the data as a user pages through it too. (Consider\nthe case where the user is viewing offsets 91-100, and you delete\nthe record at offset 15. The user goes to the next page and will\nmiss the record that used to be at offset 101 and is now at offset\n100).\n\n> The problem is that relational databases were invented before the \n> web and its stateless applications. 
In the \"good old days\", you \n> could connect to a database and work for hours, and in that \n> environment cursors and such work well -- the RDBMS maintains the \n> internal state of the indexing system. But in a web environment, \n> state information is very difficult to maintain. There are all \n> sorts of systems that try (Enterprise Java Beans, for example), but \n> they're very complex.\n\nI think the problem is more that most web developers aren't very good\nat using the database, and tend to fall back on simplistic, wrong, \napproaches\nto displaying the data. There's a lot of monkey-see, monkey-do in web\nUI design too, which doesn't help.\n\nCheers,\n Steve\n\n", "msg_date": "Thu, 22 Mar 2007 10:53:24 -0700", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "Steve Atkins wrote:\n> As long as you're ordering by some row in the table then you can do that in\n> straight SQL.\n> \n> select a, b, ts from foo where (stuff) and foo > X order by foo limit 10\n> \n> Then, record the last value of foo you read, and plug it in as X the next\n> time around.\n\nWe've been over this before in this forum: It doesn't work as advertised. Look for postings by me regarding the fact that there is no way to tell the optimizer the cost of executing a function. There's one, for example, on Oct 18, 2006.\n\n> I think the problem is more that most web developers aren't very good\n> at using the database, and tend to fall back on simplistic, wrong, \n> approaches\n> to displaying the data. There's a lot of monkey-see, monkey-do in web\n> UI design too, which doesn't help.\n\nThanks, I'm sure your thoughtful comments will help me solve my problem. Somehow. ;-)\n\nCraig\n", "msg_date": "Thu, 22 Mar 2007 10:21:21 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "On 3/22/07, Michael Stone <[email protected]> wrote:\n> On Thu, Mar 22, 2007 at 06:27:32PM +0100, Tino Wildenhain wrote:\n> >Craig A. James schrieb:\n> >>You guys can correct me if I'm wrong, but the key feature that's missing\n> >>from Postgres's flexible indexing is the ability to maintain state\n> >>across queries. Something like this:\n> >>\n> >> select a, b, my_index_state() from foo where ...\n> >> offset 100 limit 10 using my_index(prev_my_index_state);\n> >>\n> >\n> >Yes, you are wrong :-) The technique is called \"CURSOR\"\n> >if you maintain persistent connection per session\n> >(e.g. stand allone application or clever pooling webapplication)\n>\n> Did you read the email before correcting it? From the part you trimmed\n> out:\n>\n> >The problem is that relational databases were invented before the web\n> >and its stateless applications. In the \"good old days\", you could\n> >connect to a database and work for hours, and in that environment\n> >cursors and such work well -- the RDBMS maintains the internal state of\n> >the indexing system. But in a web environment, state information is\n> >very difficult to maintain. 
There are all sorts of systems that try\n> >(Enterprise Java Beans, for example), but they're very complex.\n>\n> It sounds like they wrote their own middleware to handle the problem,\n> which is basically what you suggested (a \"clever pooling web\n> application\") after saying \"wrong\".\n\nTino was saying that rather that build a complete indexing storage\nmanagement solution that lives outside the database, it is better to\ndo intelligent session management so that you get the simplicity if a\ntwo tier client server system but the scalability of a web app.\n\nWeb apps are not necessarily stateless, you just have to be a little\nclever about how database connections are opened and closed. Then you\nget all the database stuff that comes along with a persistent\nconnection (advisory locks, cursors, prepared statements, etc) without\nbuilding all kinds of data management into the middleware.\n\nmerlin\n", "msg_date": "Thu, 22 Mar 2007 14:24:39 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "On 22.03.2007, at 11:53, Steve Atkins wrote:\n\n> As long as you're ordering by some row in the table then you can do \n> that in\n> straight SQL.\n>\n> select a, b, ts from foo where (stuff) and foo > X order by foo \n> limit 10\n>\n> Then, record the last value of foo you read, and plug it in as X \n> the next\n> time around.\n\nThis does only work if you have unique values in foo. You might have \n\"batch breaks\" inside a list of rows with equal values for foo.\n\nBut: a web application that needs state and doesn't maintain it by \nitself (or inside the dev toolkit) is imho broken by design. How \nshould the database store a \"state\" for a web app? It's only possible \non the web app part, because the app is either stateless and so are \nthe queries to the database - they have to be re-evaluated for every \nrequest as the request might come from totally different sources \n(users, ...) or it is stateful and has to maintain the state because \nonly the app developer knows, what information is needed for the \n\"current state\".\n\nThis is why all web application toolkits have a \"session\" concept.\n\n> I think the problem is more that most web developers aren't very good\n> at using the database, and tend to fall back on simplistic, wrong, \n> approaches\n> to displaying the data. There's a lot of monkey-see, monkey-do in web\n> UI design too, which doesn't help.\n\nSure. That is the other problem ... ;-) But, and I think this is much \nmore important: most toolkits today free you from using the database \ndirectly and writing lots and lots of lines of sql code which \ninstantly breaks when you switch the storage backend. It's just the \nthing from where you look at something.\n\ncug\n", "msg_date": "Thu, 22 Mar 2007 12:26:12 -0600", "msg_from": "Guido Neitzer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "\nOn Mar 22, 2007, at 11:26 AM, Guido Neitzer wrote:\n\n> On 22.03.2007, at 11:53, Steve Atkins wrote:\n>\n>> As long as you're ordering by some row in the table then you can \n>> do that in\n>> straight SQL.\n>>\n>> select a, b, ts from foo where (stuff) and foo > X order by foo \n>> limit 10\n>>\n>> Then, record the last value of foo you read, and plug it in as X \n>> the next\n>> time around.\n>\n> This does only work if you have unique values in foo. 
You might \n> have \"batch breaks\" inside a list of rows with equal values for foo.\n\nIf I don't have unique values in foo, I certainly have unique values \nin (foo, pk).\n\n>\n> But: a web application that needs state and doesn't maintain it by \n> itself (or inside the dev toolkit) is imho broken by design. How \n> should the database store a \"state\" for a web app? It's only \n> possible on the web app part, because the app is either stateless \n> and so are the queries to the database - they have to be re- \n> evaluated for every request as the request might come from totally \n> different sources (users, ...) or it is stateful and has to \n> maintain the state because only the app developer knows, what \n> information is needed for the \"current state\".\n>\n> This is why all web application toolkits have a \"session\" concept.\n\nYes. HTTP is not very stateful. Web applications are stateful. There \nare some really obvious approaches to maintaining state cleanly that \nwork well with databases and let you do some quite complex stuff \n(tying a persistent database connection to a single user, for \ninstance). But they don't scale at all well.\n\nWhat Craig was suggesting is, basically, to assign a persistent \ndatabase connection to each user. But rather than maintain that \nconnection as a running process, to serialise all the state out of \nthe database connection and store that in the webapp, then when the \nnext action from that user comes in take a database connection and \nstuff all that state into it again.\n\nIt's a lovely idea, but strikes me as completely infeasible in the \ngeneral case. There's just too much state there. Doing it in the \nspecific case is certainly possible, but rapidly devolves to the \nstandard approach of \"On the first page of results, run the query and \nrecord the first 5000 results. Store those in a scratch table, \nindexed by session-id, or in external storage. On displaying later \npages of results to the same user, pull directly from the already \ncalculated results.\"\n\n>\n>> I think the problem is more that most web developers aren't very good\n>> at using the database, and tend to fall back on simplistic, wrong, \n>> approaches\n>> to displaying the data. There's a lot of monkey-see, monkey-do in web\n>> UI design too, which doesn't help.\n>\n> Sure. That is the other problem ... ;-) But, and I think this is \n> much more important: most toolkits today free you from using the \n> database directly and writing lots and lots of lines of sql code \n> which instantly breaks when you switch the storage backend. It's \n> just the thing from where you look at something.\n\nThe real problem is the user-interface is designed around what is \neasy to implement in elderly cgi scripts, rather than what's \nappropriate to the data being displayed or useful to the user. \nDisplaying tables of results, ten at a time, is just one of the more \negregious examples of that.\n\nCheers,\n Steve\n\n", "msg_date": "Thu, 22 Mar 2007 11:37:19 -0700", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "\"Craig A. 
James\" <[email protected]> writes:\n> Steve Atkins wrote:\n>> As long as you're ordering by some row in the table then you can do that in\n>> straight SQL.\n>> \n>> select a, b, ts from foo where (stuff) and foo > X order by foo limit 10\n>> \n>> Then, record the last value of foo you read, and plug it in as X the next\n>> time around.\n\n> We've been over this before in this forum: It doesn't work as advertised. Look for postings by me regarding the fact that there is no way to tell the optimizer the cost of executing a function. There's one, for example, on Oct 18, 2006.\n\nYou mean\nhttp://archives.postgresql.org/pgsql-performance/2006-10/msg00283.php\n? I don't see anything there that bears on Steve's suggestion.\n(The complaint is obsolete as of CVS HEAD anyway.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Mar 2007 14:37:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*) " }, { "msg_contents": "Tom Lane wrote:\n> \"Craig A. James\" <[email protected]> writes:\n>> Steve Atkins wrote:\n>>> As long as you're ordering by some row in the table then you can do that in\n>>> straight SQL.\n>>>\n>>> select a, b, ts from foo where (stuff) and foo > X order by foo limit 10\n>>>\n>>> Then, record the last value of foo you read, and plug it in as X the next\n>>> time around.\n> \n>> We've been over this before in this forum: It doesn't work as advertised.\n>> Look for postings by me regarding the fact that there is no way to tell\n>> the optimizer the cost of executing a function. There's one, for example,\n>> on Oct 18, 2006.\n> \n> You mean\n> http://archives.postgresql.org/pgsql-performance/2006-10/msg00283.php\n> ? I don't see anything there that bears on Steve's suggestion.\n> (The complaint is obsolete as of CVS HEAD anyway.)\n\nMea culpa, it's October 8, not October 18:\n\n http://archives.postgresql.org/pgsql-performance/2006-10/msg00143.php\n\nThe relevant part is this:\n\n\"My example, discussed previously in this forum, is a classic. I have a VERY expensive function (it's in the class of NP-complete problems, so there is no faster way to do it). There is no circumstance when my function should be used as a filter, and no circumstance when it should be done before a join. But PG has no way of knowing the cost of a function, and so the optimizer assigns the same cost to every function. Big disaster.\n\n\"The result? I can't use my function in any WHERE clause that involves any other conditions or joins. Only by itself. PG will occasionally decide to use my function as a filter instead of doing the join or the other WHERE conditions first, and I'm dead.\n\n\"The interesting thing is that PG works pretty well for me on big tables -- it does the join first, then applies my expensive functions. But with a SMALL (like 50K rows) table, it applies my function first, then does the join. A search that completes in 1 second on a 5,000,000 row database can take a minute or more on a 50,000 row database.\"\n\nCraig\n", "msg_date": "Thu, 22 Mar 2007 11:18:24 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "\"Craig A. James\" <[email protected]> writes:\n> Tom Lane wrote:\n>> You mean\n>> http://archives.postgresql.org/pgsql-performance/2006-10/msg00283.php\n>> ? 
I don't see anything there that bears on Steve's suggestion.\n\n> Mea culpa, it's October 8, not October 18:\n> http://archives.postgresql.org/pgsql-performance/2006-10/msg00143.php\n\nI still don't see the relevance to Steve's suggestion.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Mar 2007 15:40:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*) " }, { "msg_contents": "Michael Stone schrieb:\n> On Thu, Mar 22, 2007 at 02:24:39PM -0400, Merlin Moncure wrote:\n>> Tino was saying that rather that build a complete indexing storage\n>> management solution that lives outside the database, it is better to\n>> do intelligent session management so that you get the simplicity if a\n>> two tier client server system but the scalability of a web app.\n> \n> No, what he was saying was \"there's this thing called a cursor\". I \n> thought there was enough information in the original message to indicate \n> that the author knew about cursors. There are certainly pros and cons \n> and nuances to different approaches, but Tino's message didn't touch on \n> anything that specific.\n\nSure, the message thread sometimes loose history so I wasnt 100% sure\nwhat the framework really is - although I assumed it could be a web\nsolution. With stand alone applications you usually have a limited\nnumber of users connecting and they are connected during the session\nso you can easily use cursors there.\n\n> And even if you do use some kind of \"intelligent session management\", \n> how many simultaneous cursors can postgres sanely keep track of? \n> Thousands? Millions? Tens of Millions? I suspect there's a scalability \n> limit in there somewhere. Luckily I don't spend much time in the web \n> application space, so I don't need to know. :)\n\nDepending on the application, you can even simulate above situation\nwith a web framework if you manage session in the web framework\nwith persistent connections for a limited amount of users to work\nthe same time (certainly not feasable for a public web shop but for\ndata management systems for inhouse use). In this case, cursors\nwould be perfect too.\n\nIn any other case I fail to see the advantage in storing \"index\ndata\" outside the database with all the roundtripping involved.\n\nIf the query is complex and rerunning it for every batch is\nexpensive, fetching the whole result to the application in\ncase of users really traversing the complete batch\n(How often is that really done? I mean, who browses to an\nend of a huge result set?) is costy as well w/o really\nbenefit.\n\nIt would be much more easy and clean imho, in this case\nto really fetch the data to session and batch linked\nscratch table.\n\nIf its fast or you can prepare a batch helper table\nwith index, you can just select the batch equival\nportion of the result.\n\nYou dont need extensive session management in the\nweb application to scroll thru result sets in this\nway. This can all be encoded in forms or links.\n\nRegards\nTino\n\n\n", "msg_date": "Thu, 22 Mar 2007 20:56:50 +0100", "msg_from": "Tino Wildenhain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "Craig A. James schrieb:\n> Tino Wildenhain wrote:\n>>> You guys can correct me if I'm wrong, but the key feature that's \n>>> missing from Postgres's flexible indexing is the ability to maintain \n>>> state across queries. 
Something like this:\n>>>\n>>> select a, b, my_index_state() from foo where ...\n>>> offset 100 limit 10 using my_index(prev_my_index_state);\n>>>\n>>\n>> Yes, you are wrong :-) The technique is called \"CURSOR\"\n>> if you maintain persistent connection per session\n>> (e.g. stand allone application or clever pooling webapplication)\n> \n> That's my whole point: If relational databases had a simple mechanism \n> for storing their internal state in an external application, the need \n> for cursors, connection pools, and all those other tricks would be \n> eliminated.\n\nWell the cursor is exactly the simple handle to the internal\nstate of the relational db you are looking for.\nDo you really think transferring the whole query-tree, open index\nand data files to the client over the network would really improve\nthe situation?\n\n> As I said earlier, relational technology was invented in an earlier era, \n> and hasn't caught up with the reality of modern web apps.\n\nThere is nothing modern with todays web apps.\n\n>> If its a naive web application you just store your session\n>> in tables where you can easily maintain the scroll state\n>> as well.\n> \n> One thing I've learned in 25 years of software development is that \n> people who use my software have problems I never imagined. I've been \n> the one who was naive when I said similar things about my customers, and \n> was later embarrassed to learn that their problems were more complex \n> than I ever imagined.\n\nSure it really depends on the application how the best solution\nwould look like but I'm quite certain, counterfaiting internal\nstuff of the underlying relational database in the application\nmakes more problems then it solves. If you can't handle SQL,\ndont use SQL, you can build web applications w/o any relational\ndatabase if you want it.\n\nRegards\nTino Wildenhain\n", "msg_date": "Thu, 22 Mar 2007 21:35:50 +0100", "msg_from": "Tino Wildenhain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "Michael Stone schrieb:\n> On Thu, Mar 22, 2007 at 06:27:32PM +0100, Tino Wildenhain wrote:\n>> Craig A. James schrieb:\n>>> You guys can correct me if I'm wrong, but the key feature that's \n>>> missing from Postgres's flexible indexing is the ability to maintain \n>>> state across queries. Something like this:\n>>>\n>>> select a, b, my_index_state() from foo where ...\n>>> offset 100 limit 10 using my_index(prev_my_index_state);\n>>>\n>>\n>> Yes, you are wrong :-) The technique is called \"CURSOR\"\n>> if you maintain persistent connection per session\n>> (e.g. stand allone application or clever pooling webapplication)\n> \n> Did you read the email before correcting it? From the part you trimmed out:\n> \n>> The problem is that relational databases were invented before the web \n>> and its stateless applications. In the \"good old days\", you could \n>> connect to a database and work for hours, and in that environment \n>> cursors and such work well -- the RDBMS maintains the internal state \n>> of the indexing system. But in a web environment, state information \n>> is very difficult to maintain. There are all sorts of systems that \n>> try (Enterprise Java Beans, for example), but they're very complex.\n\nYes, but actually this is not true. They are not so complex in this\nregard. 
All you have to do is to look in the pg_cursor view if\nyour cursor is there and if not, create it in your session.\nAll you need to maintain is the cursor name which maps to your\nsession + the special query you run. This should be easy\nin any web application.\n\n> It sounds like they wrote their own middleware to handle the problem, \n> which is basically what you suggested (a \"clever pooling web \n> application\") after saying \"wrong\".\n\nI read about \"building index data outside postgres\" which still is\nthe wrong approach imho.\n\nThis discussion is a bit theoretical until we see the actual problem\nand the proposed solution here.\n\nRegards\nTino\n", "msg_date": "Fri, 23 Mar 2007 13:01:02 +0100", "msg_from": "Tino Wildenhain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "On Fri, Mar 23, 2007 at 01:01:02PM +0100, Tino Wildenhain wrote:\n>This discussion is a bit theoretical until we see the actual problem\n>and the proposed solution here.\n\nIt's good to see you back off a bit from your previous stance of \nassuming that someone doesn't know what they're doing and that their \nsolution is absolutely wrong without actually knowing anything about \nwhat they are trying to do.\n\nMike Stone\n", "msg_date": "Fri, 23 Mar 2007 09:09:12 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" }, { "msg_contents": "Michael Stone schrieb:\n> On Fri, Mar 23, 2007 at 01:01:02PM +0100, Tino Wildenhain wrote:\n>> This discussion is a bit theoretical until we see the actual problem\n>> and the proposed solution here.\n> \n> It's good to see you back off a bit from your previous stance of \n> assuming that someone doesn't know what they're doing and that their \n> solution is absolutely wrong without actually knowing anything about \n> what they are trying to do.\n\nWell I'm sure its very likely wrong :-) At least the core part of\nit with the statement of \"keeping index data outside postgres\".\n\nWhat I meant with my comment about the theoreticalness: we cannot\nmake educated suggestions about alternative solutions to the problem\nuntil we know the problem and maybe the current solution in detail.\n\nRegards\nTino\n", "msg_date": "Fri, 23 Mar 2007 17:54:05 +0100", "msg_from": "Tino Wildenhain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*)" } ]
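Two of the approximate-count techniques suggested in the thread above, written out as a minimal SQL sketch. 'agiraw' is the table from the original question; the WHERE column and value in the second statement are made-up placeholders, and both numbers are planner/ANALYZE estimates rather than transaction-exact counts.

    -- rough row count for a whole table, valid as of the last ANALYZE/VACUUM
    SELECT reltuples::bigint AS estimated_rows
    FROM pg_class
    WHERE relname = 'agiraw';

    -- for a filtered count, the planner's estimate (the rows= figure) can be
    -- read from EXPLAIN output instead of running the real count(*)
    EXPLAIN SELECT count(*) FROM agiraw WHERE some_column = 42;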
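The "remember the last key" paging technique described above (select ... where key > X order by key limit N) might look roughly like the following sketch. The table and column names are invented for illustration; appending the primary key makes the sort key unique, and the row-value form of the WHERE clause assumes 8.2 or later, where row comparisons are lexicographic.

    -- first page
    SELECT id, foo, a, b
    FROM t
    ORDER BY foo, id
    LIMIT 10;

    -- later pages: plug in the last (foo, id) pair the user has already seen
    SELECT id, foo, a, b
    FROM t
    WHERE (foo, id) > (:last_foo, :last_id)   -- placeholders bound by the app
    ORDER BY foo, id
    LIMIT 10;

    -- pre-8.2 spelling of the same condition:
    --   WHERE foo > :last_foo OR (foo = :last_foo AND id > :last_id)

An index on (foo, id) typically lets both queries come straight off the index instead of scanning and discarding the earlier rows the way LIMIT/OFFSET does.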
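And a rough sketch of the per-session cursor approach discussed at the end of the thread. The cursor name and query are placeholders, pg_cursors (called pg_cursor above) is the system view added in 8.2, and WITH HOLD keeps the cursor usable across transactions at the price of materializing the result set when the creating transaction commits.

    -- reuse the cursor if this session already created it
    SELECT name FROM pg_cursors WHERE name = 'search_1234';

    -- otherwise create it once per session/search
    BEGIN;
    DECLARE search_1234 SCROLL CURSOR WITH HOLD FOR
        SELECT a, b FROM foo ORDER BY a;   -- stand-in for the expensive query
    COMMIT;                                -- the cursor survives the commit

    -- per page request
    MOVE ABSOLUTE 100 IN search_1234;      -- jump to a row position (needs SCROLL)
    FETCH FORWARD 10 FROM search_1234;

    -- when the user is done with the result set
    CLOSE search_1234;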
[ { "msg_contents": "Hi,\n\nI recently migrated one of our large (multi-hundred GB) dbs from an \nIntel 32bit platform (Dell 1650 - running 8.1.3) to a 64bit platform \n(Dell 1950 - running 8.1.5). However I am not seeing the performance \ngains I would expect - I am suspecting that some of this is due to \ndifferences I am seeing in reported memory usage.\n\nOn the 1650 - a 'typical' postmaster process looks like this in top:\n\n5267 postgres 16 0 439m 427m 386m S 3.0 21.1 3:31.73 postmaster\n\nOn the 1940 - a 'typical' postmaster process looks like:\n\n10304 postgres 16 0 41896 13m 11m D 4 0.3 0:11.73 postmaster\n\nI currently have both systems running in parallel so the workloads will \nbe approximately equal. The configurations of the two systems in terms \nof postgresql.conf is pretty much identical between the two systems, I \ndid make some changes to logging, but nothing to buffers/shared memory \nconfig.\n\nI have never seen a postmaster process on the new system consume \nanywhere near as much RAM as the old system - I am wondering if there is \nsomething up with the shared memory config/usage that is causing my \nperformance issues. Any thoughts as to where I should go from here?\n\nThanks,\n\nDavid.\n\n\n-- \nDavid Brain - bandwidth.com\[email protected]\n", "msg_date": "Thu, 22 Mar 2007 08:08:25 -0400", "msg_from": "David Brain <[email protected]>", "msg_from_op": true, "msg_subject": "Potential memory usage issue" }, { "msg_contents": "In response to David Brain <[email protected]>:\n> \n> I recently migrated one of our large (multi-hundred GB) dbs from an \n> Intel 32bit platform (Dell 1650 - running 8.1.3) to a 64bit platform \n> (Dell 1950 - running 8.1.5). However I am not seeing the performance \n> gains I would expect\n\nWhat were you expecting? It's possible that your expectations are\nunreasonable.\n\nIn our testing, we found that 64bit on the same hardware as 32bit only\ngave us a 5% gain, in the best case. In many cases the gain was near\n0, and in some there was a small performance loss. These findings seemed\nto jive with what others have been reporting.\n\n> - I am suspecting that some of this is due to \n> differences I am seeing in reported memory usage.\n> \n> On the 1650 - a 'typical' postmaster process looks like this in top:\n> \n> 5267 postgres 16 0 439m 427m 386m S 3.0 21.1 3:31.73 postmaster\n> \n> On the 1940 - a 'typical' postmaster process looks like:\n> \n> 10304 postgres 16 0 41896 13m 11m D 4 0.3 0:11.73 postmaster\n> \n> I currently have both systems running in parallel so the workloads will \n> be approximately equal. The configurations of the two systems in terms \n> of postgresql.conf is pretty much identical between the two systems, I \n> did make some changes to logging, but nothing to buffers/shared memory \n> config.\n> \n> I have never seen a postmaster process on the new system consume \n> anywhere near as much RAM as the old system - I am wondering if there is \n> something up with the shared memory config/usage that is causing my \n> performance issues. Any thoughts as to where I should go from here?\n\nProvide more information, for one thing. I'm assuming from the top output\nthat this is some version of Linux, but more details on that are liable\nto elicit more helpful feedback.\n\nWe run everything on FreeBSD here, but I haven't seen any difference in\nthe way PostgreSQL uses memory on ia32 FreeBSD vs. amd64 FreeBSD. 
Without\nmore details on your setup, my only suggestion would be to double-verify\nthat your postgresql.conf settings are correct on the 64 bit system.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n\[email protected]\nPhone: 412-422-3463x4023\n\n****************************************************************\nIMPORTANT: This message contains confidential information and is\nintended only for the individual named. If the reader of this\nmessage is not an intended recipient (or the individual\nresponsible for the delivery of this message to an intended\nrecipient), please be advised that any re-use, dissemination,\ndistribution or copying of this message is prohibited. Please\nnotify the sender immediately by e-mail if you have received\nthis e-mail by mistake and delete this e-mail from your system.\nE-mail transmission cannot be guaranteed to be secure or\nerror-free as information could be intercepted, corrupted, lost,\ndestroyed, arrive late or incomplete, or contain viruses. The\nsender therefore does not accept liability for any errors or\nomissions in the contents of this message, which arise as a\nresult of e-mail transmission.\n****************************************************************\n", "msg_date": "Thu, 22 Mar 2007 08:50:37 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Potential memory usage issue" }, { "msg_contents": "Hi,\n\nThanks for the response.\nBill Moran wrote:\n> In response to David Brain <[email protected]>:\n>> I recently migrated one of our large (multi-hundred GB) dbs from an \n>> Intel 32bit platform (Dell 1650 - running 8.1.3) to a 64bit platform \n>> (Dell 1950 - running 8.1.5). However I am not seeing the performance \n>> gains I would expect\n> \n> What were you expecting? It's possible that your expectations are\n> unreasonable.\n> \n\nPossibly - but there is a fair step up hardware performance wise from a \n1650 (Dual 1.4 Ghz PIII with U160 SCSI) to a 1950 (Dual, Dual Core 2.3 \nGhz Xeons with SAS) - so I wasn't necessarily expecting much from the \n32->64 transition (except maybe the option to go > 4GB easily - although \ncurrently we only have 4GB in the box), but was from the hardware \nstandpoint.\n\nI am curious as to why 'top' gives such different output on the two \nsystems - the datasets are large and so I know I benefit from having \nhigh shared_buffers and effective_cache_size settings.\n\n> Provide more information, for one thing. I'm assuming from the top output\n> that this is some version of Linux, but more details on that are liable\n> to elicit more helpful feedback.\n> \nYes the OS is Linux - on the 1650 version 2.6.14, on the 1950 version 2.6.18\n\nThanks,\n\nDavid.\n\n\n\n-- \nDavid Brain - bandwidth.com\[email protected]\n919.297.1078\n", "msg_date": "Thu, 22 Mar 2007 09:06:45 -0400", "msg_from": "David Brain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Potential memory usage issue" }, { "msg_contents": "In response to David Brain <[email protected]>:\n> \n> Thanks for the response.\n> Bill Moran wrote:\n> > In response to David Brain <[email protected]>:\n> >> I recently migrated one of our large (multi-hundred GB) dbs from an \n> >> Intel 32bit platform (Dell 1650 - running 8.1.3) to a 64bit platform \n> >> (Dell 1950 - running 8.1.5). However I am not seeing the performance \n> >> gains I would expect\n> > \n> > What were you expecting? 
It's possible that your expectations are\n> > unreasonable.\n> \n> Possibly - but there is a fair step up hardware performance wise from a \n> 1650 (Dual 1.4 Ghz PIII with U160 SCSI) to a 1950 (Dual, Dual Core 2.3 \n> Ghz Xeons with SAS) - so I wasn't necessarily expecting much from the \n> 32->64 transition (except maybe the option to go > 4GB easily - although \n> currently we only have 4GB in the box), but was from the hardware \n> standpoint.\n\nAhh ... I didn't get that from your original message.\n\n> I am curious as to why 'top' gives such different output on the two \n> systems - the datasets are large and so I know I benefit from having \n> high shared_buffers and effective_cache_size settings.\n\nHave you done any actual queries on the new system? PG won't use the\nshm until it needs it -- and that doesn't occur until it gets a request\nfor data via a query.\n\nInstall the pg_bufferstats contrib module and take a look at how shared\nmemory is being use. I like to use MRTG to graph shared buffer usage\nover time, but you can just do a SELECT count(*) WHERE NOT NULL to see\nhow many buffers are actually in use.\n\n> > Provide more information, for one thing. I'm assuming from the top output\n> > that this is some version of Linux, but more details on that are liable\n> > to elicit more helpful feedback.\n> > \n> Yes the OS is Linux - on the 1650 version 2.6.14, on the 1950 version 2.6.18\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n\[email protected]\nPhone: 412-422-3463x4023\n\n****************************************************************\nIMPORTANT: This message contains confidential information and is\nintended only for the individual named. If the reader of this\nmessage is not an intended recipient (or the individual\nresponsible for the delivery of this message to an intended\nrecipient), please be advised that any re-use, dissemination,\ndistribution or copying of this message is prohibited. Please\nnotify the sender immediately by e-mail if you have received\nthis e-mail by mistake and delete this e-mail from your system.\nE-mail transmission cannot be guaranteed to be secure or\nerror-free as information could be intercepted, corrupted, lost,\ndestroyed, arrive late or incomplete, or contain viruses. The\nsender therefore does not accept liability for any errors or\nomissions in the contents of this message, which arise as a\nresult of e-mail transmission.\n****************************************************************\n", "msg_date": "Thu, 22 Mar 2007 09:17:00 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Potential memory usage issue" }, { "msg_contents": "Bill Moran <[email protected]> writes:\n> In response to David Brain <[email protected]>:\n>> I am curious as to why 'top' gives such different output on the two \n>> systems - the datasets are large and so I know I benefit from having \n>> high shared_buffers and effective_cache_size settings.\n\n> Have you done any actual queries on the new system? 
PG won't use the\n> shm until it needs it -- and that doesn't occur until it gets a request\n> for data via a query.\n\nMore accurately, top won't consider shared mem to be part of the process\naddress space until it's actually touched by that process.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Mar 2007 11:35:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Potential memory usage issue " }, { "msg_contents": "Bill Moran wrote:\n\n> \n> Install the pg_bufferstats contrib module and take a look at how shared\n> memory is being use. I like to use MRTG to graph shared buffer usage\n> over time, but you can just do a SELECT count(*) WHERE NOT NULL to see\n> how many buffers are actually in use.\n> \n\nCan you explain what you'd use as a diagnostic on this - I just \ninstalled the module - but I'm not entirely clear as to what the output \nis actually showing me and/or what would be considered good or bad.\n\nThanks,\n\nDavid.\n-- \nDavid Brain - bandwidth.com\[email protected]\n", "msg_date": "Thu, 22 Mar 2007 12:30:22 -0400", "msg_from": "David Brain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Potential memory usage issue" }, { "msg_contents": "In response to David Brain <[email protected]>:\n\n> Bill Moran wrote:\n> \n> > \n> > Install the pg_bufferstats contrib module and take a look at how shared\n> > memory is being use. I like to use MRTG to graph shared buffer usage\n> > over time, but you can just do a SELECT count(*) WHERE NOT NULL to see\n> > how many buffers are actually in use.\n> > \n> \n> Can you explain what you'd use as a diagnostic on this - I just \n> installed the module - but I'm not entirely clear as to what the output \n> is actually showing me and/or what would be considered good or bad.\n\nWell, there are different things you can do with it. See the README, which\nI found pretty comprehensive.\n\nWhat I was referring to was the ability to track how many shared_buffers\nwere actually in use, which can easily be seen at a cluster-wide view\nwith two queries:\nselect count(*) from pg_buffercache;\nselect count(*) from pg_buffercache where reldatabase is not null;\n\nThe first gives you the total number of buffers available (you could get\nthis from your postgresql.conf as well, but with automated collection and\ngraphing via mrtg, doing it this way guarantees that we'll always know\nwhat the _real_ value is) The second gives you the number of buffers\nthat are actually holding data.\n\nIf #2 is smaller than #1, that indicates that the entire working set of\nyour database is able to fit in shared memory. This might not be your\nentire database, as some tables might never be queried from (i.e. log\ntables that are only queried when stuff goes wrong ...) This means\nthat Postgres is usually able to execute queries without going to the\ndisk for data, which usually equates to fast queries. If it's\nconsistently _much_ lower, it may indicate that your shared_buffers\nvalue is too high, and the system may benefit from re-balancing memory\nusage.\n\nIf #2 is equal to #1, it probably means that your working set is larger\nthan the available shared buffers, this _may_ mean that your queries are\nusing the disk a lot, and that you _may_ benefit from increasing\nshared_buffers, adding more RAM, sacrificing a 15000 RPM SCSI drive to\nthe gods of performance, etc ...\n\nAnother great thing to track is read activity. 
I do this via the\npg_stat_database table:\nselect sum(blks_hit) from pg_stat_database;\nselect sum(blks_read) from pg_stat_database;\n\n(Note that you need block-level stats collecting enabled to make these\nusable)\n\nIf the second one is increasing particularly fast, that's a strong\nindication that more shared_memory might improve performance. If\nneither of them are increasing, that indicates that nobody's really\ndoing much with the database ;)\n\nI strongly recommend that you graph these values using mrtg or cacti\nor one of the many other programs designed to do that. It makes life\nnice when someone says, \"hey, the DB system was really slow yesterday\nwhile you where busy in meetings, can you speed it up.\"\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n", "msg_date": "Thu, 22 Mar 2007 13:03:35 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Potential memory usage issue" }, { "msg_contents": "Thanks Bill for the explanation - that really helped me out considerably.\n\nWhat this showed me was that there were only 1024 buffers configured. \nI'm not quite clear as to how this happened as the postgresql.conf files \non both systems have the shared_buffers set to ~50000. However it looks \nas though the system start script was passing in -B 1024 to postmaster \nwhich was overriding the postgresql.conf settings.\n\nThe really odd thing is that that the db start script is also the same \non both systems, so there some other difference there that I need to \ntrack down. However removing the -B 1024 allowed the settings to revert \nto the file specified values.\n\nSo now I'm back to using ~50k buffers again and things are running a \nlittle more swiftly, and according to pg_buffercache I'm using 49151 of \nthem (-:\n\nThanks again to those who helped me track this down.\n\nDavid.\n\n\n\n-- \nDavid Brain - bandwidth.com\[email protected]\n", "msg_date": "Thu, 22 Mar 2007 14:20:44 -0400", "msg_from": "David Brain <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Potential memory usage issue [resolved]" } ]
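
For reference, the buffer and block-I/O checks described in that thread can be wrapped in a small collection script for MRTG, cacti or a plain cron job. This is only a sketch: the database name "mydb" and the psql connection defaults are placeholders, and it assumes the pg_buffercache contrib module is installed in that database and that block-level stats collection is enabled.

#!/bin/sh
# Sketch: sample shared-buffer usage and block I/O counters for graphing.
# Database name and connection settings are placeholders.
DB=mydb
PSQL="psql -At -d $DB -c"

# Buffers available vs. buffers actually holding data (pg_buffercache).
TOTAL=`$PSQL "SELECT count(*) FROM pg_buffercache"`
USED=`$PSQL "SELECT count(*) FROM pg_buffercache WHERE reldatabase IS NOT NULL"`

# Cumulative blocks found in cache vs. read from disk (pg_stat_database).
HIT=`$PSQL "SELECT sum(blks_hit) FROM pg_stat_database"`
READ=`$PSQL "SELECT sum(blks_read) FROM pg_stat_database"`

echo "buffers_used=$USED buffers_total=$TOTAL blks_hit=$HIT blks_read=$READ"

Feeding those two pairs of numbers into a grapher over time gives the working-set and read-activity trends discussed in the thread.
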
[ { "msg_contents": "Folks,\n\nis there any constrains/problems/etc. to run several vacuum processes in\nparallel while each one is 'vaccuming' one different table?\n\nExample:\n\n vacuum -d db1 -t table1 &\n vacuum -d db1 -t table2 &\n vacuum -d db1 -t table3 &\n wait\n\n(sorry if it was already asked, but I did not find an explicit\nanswer in archives)\n\nThanks for any inputs!\n\nRgds,\n-Dimitri\n", "msg_date": "Thu, 22 Mar 2007 14:35:53 +0100", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Parallel Vacuum" }, { "msg_contents": "Dimitri escribi�:\n> Folks,\n> \n> is there any constrains/problems/etc. to run several vacuum processes in\n> parallel while each one is 'vaccuming' one different table?\n\nNo, no problem. Keep in mind that if one of them takes a very long\ntime, the others will not be able to remove dead tuples that were\nkilled while the long vacuum was running -- unless you are in 8.2.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Thu, 22 Mar 2007 09:52:05 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel Vacuum" }, { "msg_contents": "On Thursday 22 March 2007 14:52, Alvaro Herrera wrote:\n> Dimitri escribió:\n> > Folks,\n> >\n> > is there any constrains/problems/etc. to run several vacuum processes in\n> > parallel while each one is 'vaccuming' one different table?\n>\n> No, no problem. Keep in mind that if one of them takes a very long\n> time, the others will not be able to remove dead tuples that were\n> killed while the long vacuum was running -- unless you are in 8.2.\n\nYes, I'm using the last 8.2.3 version. So, will they *really* processing in \nparallel, or will block each other step by step?\n\nRgds,\n-Dimitri\n", "msg_date": "Thu, 22 Mar 2007 16:05:37 +0100", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parallel Vacuum" }, { "msg_contents": "Dimitri escribi�:\n> On Thursday 22 March 2007 14:52, Alvaro Herrera wrote:\n> > Dimitri escribi�:\n> > > Folks,\n> > >\n> > > is there any constrains/problems/etc. to run several vacuum processes in\n> > > parallel while each one is 'vaccuming' one different table?\n> >\n> > No, no problem. Keep in mind that if one of them takes a very long\n> > time, the others will not be able to remove dead tuples that were\n> > killed while the long vacuum was running -- unless you are in 8.2.\n> \n> Yes, I'm using the last 8.2.3 version. So, will they *really* processing in \n> parallel, or will block each other step by step?\n\nThey won't block.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Thu, 22 Mar 2007 11:12:50 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel Vacuum" }, { "msg_contents": "On Thursday 22 March 2007 16:12, Alvaro Herrera wrote:\n> Dimitri escribió:\n> > On Thursday 22 March 2007 14:52, Alvaro Herrera wrote:\n> > > Dimitri escribió:\n> > > > Folks,\n> > > >\n> > > > is there any constrains/problems/etc. to run several vacuum processes\n> > > > in parallel while each one is 'vaccuming' one different table?\n> > >\n> > > No, no problem. 
Keep in mind that if one of them takes a very long\n> > > time, the others will not be able to remove dead tuples that were\n> > > killed while the long vacuum was running -- unless you are in 8.2.\n> >\n> > Yes, I'm using the last 8.2.3 version. So, will they *really* processing\n> > in parallel, or will block each other step by step?\n>\n> They won't block.\n\nWow! Excellent! :)\nSo, in this case why not to add 'parallel' option integrated directly into \nthe 'vacuumdb' command? \n\nIn my case I have several CPU on the server and quite powerful storage box \nwhich is not really busy with a single vacuum. So, my idea is quite simple - \nspeed-up vacuum with parallel execution (just an algorithm):\n\n--------------------------------------------------------------------------\nPLL=parallel_degree\nselect tab_size, tabname, dbname from ... order by tab_size desc;\n vacuumdb -d $dbname -t $tabname 2>&1 > /tmp/vac.$dbname.$tabname.log &\n while (pgrep vacuumdb | wc -l ) >= $PLL\n sleep 1\n end\nend\nwait\n--------------------------------------------------------------------------\n\nbiggest tables are vacuumed first, etc.\n\nBut of course it will be much more cool to have something like:\n\n vacuumdb -a -P parallel_degree\n\nWhat do you think? ;)\n\nRgds,\n-Dimitri\n", "msg_date": "Thu, 22 Mar 2007 16:55:02 +0100", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parallel Vacuum" }, { "msg_contents": "Dimitri escribi�:\n\n> But of course it will be much more cool to have something like:\n> \n> vacuumdb -a -P parallel_degree\n> \n> What do you think? ;)\n\nI think our time is better spent enhancing autovacuum ... but if you\nfeel like improving vacuumdb, be my guest. This discussion belongs into\npgsql-hackers though, and any patches you may feel like submitting for\nreview should go to pgsql-patches.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Thu, 22 Mar 2007 12:09:19 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel Vacuum" }, { "msg_contents": "On Thu, Mar 22, 2007 at 04:55:02PM +0100, Dimitri wrote:\n>In my case I have several CPU on the server and quite powerful storage box \n>which is not really busy with a single vacuum. So, my idea is quite simple - \n>speed-up vacuum with parallel execution (just an algorithm):\n\nVacuum is I/O intensive, not CPU intensive. Running more of them will \nprobably make things slower rather than faster, unless each thing you're \nvacuuming has its own (separate) disks. The fact that your CPU isn't \npegged while vacuuming suggests that your disk is already your \nbottleneck--and doing multiple sequential scans on the same disk will \ndefinitely be slower than doing one.\n\nMike Stone\n", "msg_date": "Thu, 22 Mar 2007 13:10:08 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel Vacuum" }, { "msg_contents": "Mike, \n\nyou're right until you're using a single disk :) \nNow, imagine you have more disks - more I/O operations you may perform, and \nyou'll need also a CPU time to process them :) until you fully use one CPU \nper 'vacuumdb' - and then you stop... \n\nAs well, even in case when CPU is not highly used by vacuumdb - single process \nis still not able to get a max performance of the storage array, just because \nyou need several concurrent I/O running in the system to reach max \nthroughput. 
And even filesystem might help you here - it's not all... More \nconcurrent writers you have - higher performance you reach (until real \nlimit)...\n\nIn my case I have a small storage array capable to give you more than \n500MB/sec and say 5000 op/s. All my data are striped throw all array disks. \nSingle 'vacuumdb' process here become more CPU-bound rather I/O as it cannot \nfully load storage array... So, more vacuum processes I start in parallel - \nfaster I'll finish database vacuuming.\n\nBest regards!\n-Dimitri\n\n\nOn Thursday 22 March 2007 18:10, Michael Stone wrote:\n> On Thu, Mar 22, 2007 at 04:55:02PM +0100, Dimitri wrote:\n> >In my case I have several CPU on the server and quite powerful storage box\n> >which is not really busy with a single vacuum. So, my idea is quite simple\n> > - speed-up vacuum with parallel execution (just an algorithm):\n>\n> Vacuum is I/O intensive, not CPU intensive. Running more of them will\n> probably make things slower rather than faster, unless each thing you're\n> vacuuming has its own (separate) disks. The fact that your CPU isn't\n> pegged while vacuuming suggests that your disk is already your\n> bottleneck--and doing multiple sequential scans on the same disk will\n> definitely be slower than doing one.\n>\n> Mike Stone\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n", "msg_date": "Thu, 22 Mar 2007 19:24:38 +0100", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parallel Vacuum" }, { "msg_contents": "On Thu, Mar 22, 2007 at 07:24:38PM +0100, Dimitri wrote:\n>you're right until you're using a single disk :) \n>Now, imagine you have more disks\n\nI do have more disks. I maximize the I/O performance by dedicating \ndifferent sets of disks to different tables. YMMV. I do suggest watching \nyour I/O rates and wallclock time if you try this to see if your \naggregate is actually substantially faster than the single case. (I \nassume that you haven't yet gotten far enough to actually do performance \ntesting.) You may also want to look into tuning your sequential I/O \nperformance.\n\nMike Stone\n", "msg_date": "Thu, 22 Mar 2007 14:46:15 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel Vacuum" }, { "msg_contents": "On Thursday 22 March 2007 19:46, Michael Stone wrote:\n> On Thu, Mar 22, 2007 at 07:24:38PM +0100, Dimitri wrote:\n> >you're right until you're using a single disk :)\n> >Now, imagine you have more disks\n>\n> I do have more disks. I maximize the I/O performance by dedicating\n> different sets of disks to different tables. YMMV. I do suggest watching\n> your I/O rates and wallclock time if you try this to see if your\n> aggregate is actually substantially faster than the single case. (I\n> assume that you haven't yet gotten far enough to actually do performance\n> testing.) You may also want to look into tuning your sequential I/O\n> performance.\n>\n> Mike Stone\n\nMike, specially for you :)\n\nParallel Vacuum Test\n======================\n\n- Database 'db_OBJ'\n PgSQL 8.2.3\n tables: object1, object2, ... 
object8 (all the same)\n volume: 10.000.000 rows in each table, 22GB in total\n\n- Script Mono Vacuum\n $ cat vac_mono.sh\n/usr/local/pgsqlFF/bin/vacuumdb -p 5464 -d db_OBJ -t object1\n/usr/local/pgsqlFF/bin/vacuumdb -p 5464 -d db_OBJ -t object2\n/usr/local/pgsqlFF/bin/vacuumdb -p 5464 -d db_OBJ -t object3\n/usr/local/pgsqlFF/bin/vacuumdb -p 5464 -d db_OBJ -t object4\n/usr/local/pgsqlFF/bin/vacuumdb -p 5464 -d db_OBJ -t object5\n/usr/local/pgsqlFF/bin/vacuumdb -p 5464 -d db_OBJ -t object6\n/usr/local/pgsqlFF/bin/vacuumdb -p 5464 -d db_OBJ -t object7\n/usr/local/pgsqlFF/bin/vacuumdb -p 5464 -d db_OBJ -t object8\n $\n\n- Script Parallel Vacuum\n $ cat vac_pll.sh\n/usr/local/pgsqlFF/bin/vacuumdb -p 5464 -d db_OBJ -t object1 &\n/usr/local/pgsqlFF/bin/vacuumdb -p 5464 -d db_OBJ -t object2 &\n/usr/local/pgsqlFF/bin/vacuumdb -p 5464 -d db_OBJ -t object3 &\n/usr/local/pgsqlFF/bin/vacuumdb -p 5464 -d db_OBJ -t object4 &\n/usr/local/pgsqlFF/bin/vacuumdb -p 5464 -d db_OBJ -t object5 &\n/usr/local/pgsqlFF/bin/vacuumdb -p 5464 -d db_OBJ -t object6 &\n/usr/local/pgsqlFF/bin/vacuumdb -p 5464 -d db_OBJ -t object7 &\n/usr/local/pgsqlFF/bin/vacuumdb -p 5464 -d db_OBJ -t object8 &\nwait\n $\n\n\nTest 1: Cold Clean database (already previously vacuumed)\n=========================================================\n Scenario:\n - stop database\n - flush FS cache (umount/mount)\n - start database\n - execute vacuum script\n\n$ time sh vac_mono.sh\nreal 4m24.23s\nuser 0m0.00s\nsys 0m0.01s\n\n$ time sh vac_pll.sh\nreal 1m9.36s\nuser 0m0.00s\nsys 0m0.01s\n\n\nTest 2: Hot Dirty database (modified and not vacuumed)\n======================================================\n Scenario:\n - stop database\n - flush FS cache (umount/mount)\n - start database\n - execute 200.000 updates against each from 8 object' tables\n - execute vacuum script\n\n$ time sh vac_mono.sh\nreal 9m36.90s\nuser 0m0.00s\nsys 0m0.01s\n\n$ time sh vac_pll.sh\nreal 2m10.41s\nuser 0m0.00s\nsys 0m0.02s\n\n\nSpeed-up x4 is obtained just because single vacuum process reaching max \n80MB/sec in throughput, while with 8 parallel vacuum processes I'm jumping to \n360MB/sec... And speakink about Sequential I/O: while you're doing read - \nfile system may again prefetch incoming data in way once you reclaim next \nread - your data will be already in FS cache. However, file system \ncannot 'pre-write' data for you - so having more concurrent writers helps a \nlot! (Of course in case you have a storage configured to keep concurrent \nI/O :))\n\nWell, why all this staff?...\nLet's imagine once you need more performance, and you buy 10x times more \nperformant storage box, will you still able to kill it with a single-process \nI/O activity? No... :) To scale well you need to be able to split your work \nin several task executed in parallel. And personally, I'm very happy we can \ndo it with vacuum now - the one of the most critical part of PostgreSQL...\n\nBest regards!\n-Dimitri\n", "msg_date": "Fri, 23 Mar 2007 16:37:32 +0100", "msg_from": "Dimitri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parallel Vacuum" }, { "msg_contents": "On Fri, Mar 23, 2007 at 04:37:32PM +0100, Dimitri wrote:\n>Speed-up x4 is obtained just because single vacuum process reaching max \n>80MB/sec in throughput\n\nI'd look at trying to improve that, it seems very low.\n\nMike Stone\n", "msg_date": "Fri, 23 Mar 2007 13:13:02 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel Vacuum" } ]
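
A runnable version of the parallel-vacuum loop sketched earlier in this thread (largest tables first, a bounded number of concurrent vacuumdb processes) could look like the following. It is only a sketch: the database name, port and parallel degree are placeholders, and it only considers ordinary tables in the public schema.

#!/bin/sh
# Sketch: vacuum the tables of one database in parallel, biggest first.
DB=db_OBJ        # placeholder database name
PORT=5432        # placeholder port
PLL=4            # parallel degree

TABLES=`psql -At -p $PORT -d $DB -c "SELECT relname FROM pg_class WHERE relkind = 'r' AND relnamespace = (SELECT oid FROM pg_namespace WHERE nspname = 'public') ORDER BY relpages DESC"`

for T in $TABLES; do
    vacuumdb -p $PORT -d $DB -t "$T" > /tmp/vac.$DB.$T.log 2>&1 &
    # throttle: keep at most $PLL vacuumdb processes running
    while [ `pgrep vacuumdb | wc -l` -ge $PLL ]; do
        sleep 1
    done
done
wait

Note that relpages is only as fresh as the last VACUUM/ANALYZE, which is usually good enough for ordering the work.
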
[ { "msg_contents": "My company is purchasing a Sunfire x4500 to run our most I/O-bound databases, and I'd like to get some advice on configuration and tuning. We're currently looking at:\n - Solaris 10 + zfs + RAID Z\n - CentOS 4 + xfs + RAID 10\n - CentOS 4 + ext3 + RAID 10\nbut we're open to other suggestions.\n\n From previous message threads, it looks like some of you have achieved stellar performance under both Solaris 10 U2/U3 with zfs and CentOS 4.4 with xfs. Would those of you who posted such results please describe how you tuned the OS/fs to yield those figures (e.g. patches, special drivers, read-ahead, checksumming, write-through cache settings, etc.)?\n\nMost of our servers currently run CentOS/RedHat, and we have little experience with Solaris, but we're not opposed to Solaris if there's a compelling reason to switch. For example, it sounds like zfs snapshots may have a lighter performance penalty than LVM snapshots. We've heard that just using LVM (even without active snapshots) imposes a maximum sequential I/O rate of around 600 MB/s (although we haven't yet reached this limit experimentally).\n\nBy the way, we've also heard that Solaris is \"more stable\" under heavy I/O load than Linux. Have any of you experienced this? It's hard to put much stock in such a blanket statement, but naturally we don't want to introduce instabilities.\n\nThanks in advance for your thoughts!\n\nFor reference:\n\nOur database cluster will be 3-6 TB in size. The Postgres installation will be 8.1 (at least initially), compiled to use 32 KB blocks (rather than 8 KB). The workload will be predominantly OLAP. The Sunfire X4500 has 2 dual-core Opterons, 16 GB RAM, 48 SATA disks (500 GB/disk * 48 = 24 TB raw -> 12 TB usable under RAID 10).\n\nSo far, we've seen the X4500 deliver impressive but suboptimal results using the out-of-the-box installation of Solaris + zfs. 
The Linux testing is in the early stages (no xfs, yet), but so far it yeilds comparatively modest write rates and very poor read and rewrite rates.\n\n===============================\nResults under Solaris with zfs:\n===============================\n\nFour concurrent writers:\n% time dd if=/dev/zero of=/zpool1/test/50GB-zero1 bs=1024k count=51200 ; time sync\n% time dd if=/dev/zero of=/zpool1/test/50GB-zero2 bs=1024k count=51200 ; time sync\n% time dd if=/dev/zero of=/zpool1/test/50GB-zero3 bs=1024k count=51200 ; time sync\n% time dd if=/dev/zero of=/zpool1/test/50GB-zero4 bs=1024k count=51200 ; time sync\n\nSeq Write (bs = 1 MB): 128 + 122 + 131 + 124 = 505 MB/s\n\nFour concurrent readers:\n% time dd if=/zpool1/test/50GB-zero1 of=/dev/null bs=1024k\n% time dd if=/zpool1/test/50GB-zero2 of=/dev/null bs=1024k\n% time dd if=/zpool1/test/50GB-zero3 of=/dev/null bs=1024k\n% time dd if=/zpool1/test/50GB-zero4 of=/dev/null bs=1024k\n\nSeq Read (bs = 1 MB): 181 + 177 + 180 + 178 = 716 MB/s\n\n\nOne bonnie++ process:\n% bonnie++ -r 16384 -s 32g:32k -f -n0 -d /zpool1/test/bonnie_scratch\n\nVersion 1.03 ------Sequential Output------ --Sequential Input- --Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\nMachine Size:chnk K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\nthumper1 32G:32k 604173 98 268893 43 543389 59 519.2 3\nthumper1,32G:32k,,,604173,98,268893,43,,,543389,59,519.2,3,,,,,,,,,,,,,\n\n\n4 concurrent synchronized bonnie++ processes:\n% bonnie++ -p4\n% bonnie++ -r 16384 -s 32g:32k -y -f -n0 -d /zpool1/test/bonnie_scratch\n% bonnie++ -r 16384 -s 32g:32k -y -f -n0 -d /zpool1/test/bonnie_scratch\n% bonnie++ -r 16384 -s 32g:32k -y -f -n0 -d /zpool1/test/bonnie_scratch\n% bonnie++ -r 16384 -s 32g:32k -y -f -n0 -d /zpool1/test/bonnie_scratch\n% bonnie++ -p-1\n\nCombined results of 4 sessions:\nSeq Output: 124 + 124 + 124 + 140 = 512 MB/s\nRewrite: 93 + 94 + 93 + 96 = 376 MB/s\nSeq Input: 192 + 194 + 193 + 197 = 776 MB/s\nRandom Seek: 327 + 327 + 335 + 332 = 1321 seeks/s\n\n\n=========================================\nResults under CentOS 4 with ext3 and LVM:\n=========================================\n\n% bonnie++ -s 32g:32k -f -n0 -d /large_lvm_stripe/test/bonnie_scratch\nVersion 1.03 ------Sequential Output------ --Sequential Input- --Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\nMachine Size:chnk K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\nthumper1.rt 32G:32k 346595 94 59448 11 132471 12 479.4 2\nthumper1.rtkinternal,32G:32k,,,346595,94,59448,11,,,132471,12,479.4,2,,,,,,,,,,,,,\n\n\n============================\nSummary of bonnie++ results:\n============================\n\n sequential sequential sequential scattered\nTest case write MB/s rewrite MB/s read MB/s seeks/s\n------------------------- ---------- ------------ ---------- ---------\nSol10+zfs, 1 process 604 269 543 519\nSol10+zfs, 4 processes 512 376 776 1321\nCent4+ext3+LVM, 1 process 347 59 132 479\n\n\n", "msg_date": "Thu, 22 Mar 2007 19:20:17 -0700", "msg_from": "\"Matt Smiley\" <[email protected]>", "msg_from_op": true, "msg_subject": "Sunfire X4500 recommendations" }, { "msg_contents": "On Friday 23 March 2007 03:20, Matt Smiley wrote:\n> My company is purchasing a Sunfire x4500 to run our most I/O-bound\n> databases, and I'd like to get some advice on configuration and tuning. 
\n> We're currently looking at: - Solaris 10 + zfs + RAID Z\n> - CentOS 4 + xfs + RAID 10\n> - CentOS 4 + ext3 + RAID 10\n> but we're open to other suggestions.\n>\n\nMatt,\n\nfor Solaris + ZFS you may find answers to all your questions here:\n\n http://blogs.sun.com/roch/category/ZFS\n http://blogs.sun.com/realneel/entry/zfs_and_databases\n\nThink to measure log (WAL) activity and use separated pool for logs if needed. \nAlso, RAID-Z is more security-oriented rather performance, RAID-10 should be \na better choice...\n\nRgds,\n-Dimitri\n", "msg_date": "Fri, 23 Mar 2007 10:28:59 +0100", "msg_from": "Dimitri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sunfire X4500 recommendations" } ]
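
To make the RAID-10-over-ZFS suggestion concrete, a pool striped across mirrored pairs plus a small separate pool for WAL would be created roughly as below. The disk names (c0t0d0 and so on), pool names and the number of mirror pairs are placeholders for a sketch, not the actual X4500 device layout.

# Sketch: ZFS "RAID-10" data pool (stripe of mirrors) plus a WAL pool.
zpool create dbpool \
    mirror c0t0d0 c1t0d0 \
    mirror c0t1d0 c1t1d0 \
    mirror c0t2d0 c1t2d0 \
    mirror c0t3d0 c1t3d0
zpool create logpool mirror c4t0d0 c5t0d0

zfs create dbpool/pgdata      # mounts at /dbpool/pgdata by default
zfs create logpool/pgxlog

# After initdb -D /dbpool/pgdata, pg_xlog can be relocated onto the log
# pool with a symlink:
#   mv /dbpool/pgdata/pg_xlog /logpool/pgxlog/pg_xlog
#   ln -s /logpool/pgxlog/pg_xlog /dbpool/pgdata/pg_xlog

Losing both disks of one mirror pair still loses the pool, which is the security trade-off against RAID-Z noted above.
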
[ { "msg_contents": "Hi\n\nI'm going to install a new server and I'm looking for some advice\nabout how I can help my database performing at its best. I'll be using\na redhat ES 4 on a dual core opteron with 2 SCSI 10.000rpm disks in\nRAID1.\n\nThe first question I have is, since I'd strongly prefer to use\npostgresql 8.1 or 8.2 instead of the 7.4 that comes with redhat, if\nsomeone else is using with success third party rpms with success in\nbusiness critical applications.\n\nDatabase size: under 1 G\nLoad: reaches 100 r/w queries per second peak time but the average is lower\nFilesystem: I'll probably have ext3 because of the redhat support.\nRAM: I will have enough to let postgresql use a quantity about the\nsize of the db itself.\n\nI'll have just one pair of disks so Linux will live on the same disk\narray of the database, but obviously I can make any choice I like\nabout the partitioning and my idea was to put postgresql data on a\ndedicated partition. As processes I'll have some other webby things\nrunning on the server but the I/O impact at disk level of these should\nbe not that bad.\n\nNow aside of the filesystem/partitioning choice, I have experience\nwith LVM, but I've never tried to take snapshots of postgresql\ndatabases and I'd be interested in knowing how well this works/perform\nand if there's any downside. I'd like even to read which procedures\nare common to restore a db from the LVM snapshot.\nAs an alternative i was thinking about using the wal incremental\nbackup strategy.\nIn any case I'll take a pg_dump daily.\n\nI would appreciate even just interesting web links (I've already\ngoogled a bit so I'm quite aware of the most common stuff)\n\nThanks\n\nPaolo\n", "msg_date": "Fri, 23 Mar 2007 12:48:19 +0000", "msg_from": "\"Paolo Negri\" <[email protected]>", "msg_from_op": true, "msg_subject": "linux - server configuration for small database" }, { "msg_contents": "\"Paolo Negri\" <[email protected]> writes:\n> The first question I have is, since I'd strongly prefer to use\n> postgresql 8.1 or 8.2 instead of the 7.4 that comes with redhat, if\n> someone else is using with success third party rpms with success in\n> business critical applications.\n\nRed Hat does support postgres 8.1 on RHEL4:\nhttp://www.redhat.com/appstack/\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Mar 2007 11:05:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: linux - server configuration for small database " }, { "msg_contents": "Tom Lane wrote:\n> \"Paolo Negri\" <[email protected]> writes:\n>> The first question I have is, since I'd strongly prefer to use\n>> postgresql 8.1 or 8.2 instead of the 7.4 that comes with redhat, if\n>> someone else is using with success third party rpms with success in\n>> business critical applications.\n> \n> Red Hat does support postgres 8.1 on RHEL4:\n> http://www.redhat.com/appstack/\n\nAnd so does the community:\n\nhttp://ftp9.us.postgresql.org/pub/mirrors/postgresql/binary/v8.1.8/linux/rpms/redhat/\nhttp://ftp9.us.postgresql.org/pub/mirrors/postgresql/binary/v8.2.3/linux/rpms/redhat/\n\n\nSincerely,\n\nJoshua D. Drake\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. 
===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Fri, 23 Mar 2007 08:51:25 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: linux - server configuration for small database" } ]
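
On the LVM snapshot question raised at the start of this thread, a common snapshot-based backup of the data partition looks roughly like the sketch below. The volume group and logical volume names (vg0/pgdata), sizes and backup path are placeholders, and it assumes the whole data directory, including pg_xlog, lives on that one logical volume so the snapshot is crash-consistent by itself.

#!/bin/sh
# Sketch: back up the PostgreSQL data directory via an LVM snapshot.
# Volume names, sizes and paths are placeholders.

# 1. Snapshot the LV holding the data directory; reserve room for the
#    changes made while the copy runs.
lvcreate --size 2G --snapshot --name pgdata_snap /dev/vg0/pgdata

# 2. Mount the snapshot and copy it off.
mkdir -p /mnt/pgdata_snap
mount -o ro /dev/vg0/pgdata_snap /mnt/pgdata_snap
tar czf /backup/pgdata-`date +%Y%m%d`.tar.gz -C /mnt/pgdata_snap .

# 3. Discard the snapshot.
umount /mnt/pgdata_snap
lvremove -f /dev/vg0/pgdata_snap

Restoring means stopping the server, unpacking the archive into an empty data directory and starting the server again; it then performs crash recovery exactly as if the machine had lost power at snapshot time.
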
[ { "msg_contents": "Thanks Dimitri! That was very educational material! I'm going to think out loud here, so please correct me if you see any errors.\n\nThe section on tuning for OLTP transactions was interesting, although my OLAP workload will be predominantly bulk I/O over large datasets of mostly-sequential blocks.\n\nThe NFS+ZFS section talked about the zil_disable control for making zfs ignore commits/fsyncs. Given that Postgres' executor does single-threaded synchronous I/O like the tar example, it seems like it might benefit significantly from setting zil_disable=1, at least in the case of frequently flushed/committed writes. However, zil_disable=1 sounds unsafe for the datafiles' filesystem, and would probably only be acceptible for the xlogs if they're stored on a separate filesystem and you're willing to loose recently committed transactions. This sounds pretty similar to just setting fsync=off in postgresql.conf, which is easier to change later, so I'll skip the zil_disable control.\n\nThe RAID-Z section was a little surprising. It made RAID-Z sound just like RAID 50, in that you can customize the trade-off between iops versus usable diskspace and fault-tolerance by adjusting the number/size of parity-protected disk groups. The only difference I noticed was that RAID-Z will apparently set the stripe size across vdevs (RAID-5s) to be as close as possible to the filesystem's block size, to maximize the number of disks involved in concurrently fetching each block. Does that sound about right?\n\nSo now I'm wondering what RAID-Z offers that RAID-50 doesn't. I came up with 2 things: an alleged affinity for full-stripe writes and (under RAID-Z2) the added fault-tolerance of RAID-6's 2nd parity bit (allowing 2 disks to fail per zpool). It wasn't mentioned in this blog, but I've heard that under certain circumstances, RAID-Z will magically decide to mirror a block instead of calculating parity on it. I'm not sure how this would happen, and I don't know the circumstances that would trigger this behavior, but I think the goal (if it really happens) is to avoid the performance penalty of having to read the rest of the stripe required to calculate parity. As far as I know, this is only an issue affecting small writes (e.g. single-row updates in an OLTP workload), but not large writes (compared to the RAID's stripe size). Anyway, when I saw the filesystem's intent log mentioned, I thought maybe the small writes are converted to full-stripe writes by deferring their commit until a full stripe's worth of data had been accumulated. Does that sound plausible?\n\nAre there any other noteworthy perks to RAID-Z, rather than RAID-50? If not, I'm inclined to go with your suggestion, Dimitri, and use zfs like RAID-10 to stripe a zpool over a bunch of RAID-1 vdevs. Even though many of our queries do mostly sequential I/O, getting higher seeks/second is more important to us than the sacrificed diskspace.\n\nFor the record, those blogs also included a link to a very helpful ZFS Best Practices Guide:\nhttp://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide\n\nTo sum up, so far the short list of tuning suggestions for ZFS includes:\n - Use a separate zpool and filesystem for xlogs if your apps write often.\n - Consider setting zil_disable=1 on the xlogs' dedicated filesystem. ZIL is the intent log, and it sounds like disabling it may be like disabling journaling. 
Previous message threads in the Postgres archives debate whether this is safe for the xlogs, but it didn't seem like a conclusive answer was reached.\n - Make filesystem block size (zfs record size) match the Postgres block size.\n - Manually adjust vdev_cache. I think this sets the read-ahead size. It defaults to 64 KB. For OLTP workload, reduce it; for DW/OLAP maybe increase it.\n - Test various settings for vq_max_pending (until zfs can auto-tune it). See http://blogs.sun.com/erickustarz/entry/vq_max_pending\n - A zpool of mirrored disks should support more seeks/second than RAID-Z, just like RAID 10 vs. RAID 50. However, no single Postgres backend will see better than a single disk's seek rate, because the executor currently dispatches only 1 logical I/O request at a time.\n\n\n>>> Dimitri <[email protected]> 03/23/07 2:28 AM >>>\nOn Friday 23 March 2007 03:20, Matt Smiley wrote:\n> My company is purchasing a Sunfire x4500 to run our most I/O-bound\n> databases, and I'd like to get some advice on configuration and tuning. \n> We're currently looking at: - Solaris 10 + zfs + RAID Z\n> - CentOS 4 + xfs + RAID 10\n> - CentOS 4 + ext3 + RAID 10\n> but we're open to other suggestions.\n>\n\nMatt,\n\nfor Solaris + ZFS you may find answers to all your questions here:\n\n http://blogs.sun.com/roch/category/ZFS\n http://blogs.sun.com/realneel/entry/zfs_and_databases\n\nThink to measure log (WAL) activity and use separated pool for logs if needed. \nAlso, RAID-Z is more security-oriented rather performance, RAID-10 should be \na better choice...\n\nRgds,\n-Dimitri\n\n\n", "msg_date": "Fri, 23 Mar 2007 06:32:55 -0700", "msg_from": "\"Matt Smiley\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sunfire X4500 recommendations" }, { "msg_contents": "On Friday 23 March 2007 14:32, Matt Smiley wrote:\n> Thanks Dimitri! That was very educational material! I'm going to think\n> out loud here, so please correct me if you see any errors.\n\nYour mail is so long - I was unable to answer all questions same day :))\n\n>\n> The section on tuning for OLTP transactions was interesting, although my\n> OLAP workload will be predominantly bulk I/O over large datasets of\n> mostly-sequential blocks.\n\nI supposed mostly READ operations, right?\n\n>\n> The NFS+ZFS section talked about the zil_disable control for making zfs\n> ignore commits/fsyncs. Given that Postgres' executor does single-threaded\n> synchronous I/O like the tar example, it seems like it might benefit\n> significantly from setting zil_disable=1, at least in the case of\n> frequently flushed/committed writes. However, zil_disable=1 sounds unsafe\n> for the datafiles' filesystem, and would probably only be acceptible for\n> the xlogs if they're stored on a separate filesystem and you're willing to\n> loose recently committed transactions. This sounds pretty similar to just\n> setting fsync=off in postgresql.conf, which is easier to change later, so\n> I'll skip the zil_disable control.\n\nyes, you don't need it for PostgreSQL, it may be useful for other database \nvendors, but not here.\n\n>\n> The RAID-Z section was a little surprising. It made RAID-Z sound just like\n> RAID 50, in that you can customize the trade-off between iops versus usable\n> diskspace and fault-tolerance by adjusting the number/size of\n> parity-protected disk groups. 
The only difference I noticed was that\n> RAID-Z will apparently set the stripe size across vdevs (RAID-5s) to be as\n> close as possible to the filesystem's block size, to maximize the number of\n> disks involved in concurrently fetching each block. Does that sound about\n> right?\n\nWell, look at RAID-Z just as wide RAID solution. More you have disks in your \nsystem - more high is probability you may loose 2 disks on the same time, and \nin this case wide RAID-10 will simply make loose you whole the data set (and \nagain if you loose both disks in mirror pair). So, RAID-Z brings you more \nsecurity as you may use wider parity, but the price for it is I/O \nperformance...\n\n>\n> So now I'm wondering what RAID-Z offers that RAID-50 doesn't. I came up\n> with 2 things: an alleged affinity for full-stripe writes and (under\n> RAID-Z2) the added fault-tolerance of RAID-6's 2nd parity bit (allowing 2\n> disks to fail per zpool). It wasn't mentioned in this blog, but I've heard\n> that under certain circumstances, RAID-Z will magically decide to mirror a\n> block instead of calculating parity on it. I'm not sure how this would\n> happen, and I don't know the circumstances that would trigger this\n> behavior, but I think the goal (if it really happens) is to avoid the\n> performance penalty of having to read the rest of the stripe required to\n> calculate parity. As far as I know, this is only an issue affecting small\n> writes (e.g. single-row updates in an OLTP workload), but not large writes\n> (compared to the RAID's stripe size). Anyway, when I saw the filesystem's\n> intent log mentioned, I thought maybe the small writes are converted to\n> full-stripe writes by deferring their commit until a full stripe's worth of\n> data had been accumulated. Does that sound plausible?\n\nThe problem here that within the same workload you're able to do less I/O \noperations with RAID-Z then in RAID-10. So, bigger your I/O block size or \nsmaller - you'll still obtain lower throughput, no? :)\n\n>\n> Are there any other noteworthy perks to RAID-Z, rather than RAID-50? If\n> not, I'm inclined to go with your suggestion, Dimitri, and use zfs like\n> RAID-10 to stripe a zpool over a bunch of RAID-1 vdevs. Even though many\n> of our queries do mostly sequential I/O, getting higher seeks/second is\n> more important to us than the sacrificed diskspace.\n\nThere is still one point to check: if you do mostly READ on your database \nprobably RAID-Z will be not *too* bad and will give you more used space. \nHowever, if you need to update your data or load frequently - RAID-10 will be \nbetter...\n\n>\n> For the record, those blogs also included a link to a very helpful ZFS Best\n> Practices Guide:\n> http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide\n\noh yes, it's constantly growing wiki, good start for any Solaris questions as \nwell performance points :)\n\n>\n> To sum up, so far the short list of tuning suggestions for ZFS includes:\n> - Use a separate zpool and filesystem for xlogs if your apps write often.\n> - Consider setting zil_disable=1 on the xlogs' dedicated filesystem. ZIL\n> is the intent log, and it sounds like disabling it may be like disabling\n> journaling. Previous message threads in the Postgres archives debate\n> whether this is safe for the xlogs, but it didn't seem like a conclusive\n> answer was reached. - Make filesystem block size (zfs record size) match\n> the Postgres block size. - Manually adjust vdev_cache. I think this sets\n> the read-ahead size. 
It defaults to 64 KB. For OLTP workload, reduce it;\n> for DW/OLAP maybe increase it. - Test various settings for vq_max_pending\n> (until zfs can auto-tune it). See\n> http://blogs.sun.com/erickustarz/entry/vq_max_pending - A zpool of mirrored\n> disks should support more seeks/second than RAID-Z, just like RAID 10 vs.\n> RAID 50. However, no single Postgres backend will see better than a single\n> disk's seek rate, because the executor currently dispatches only 1 logical\n> I/O request at a time.\n\nI'm currently just doing OLTP benchmark on ZFS and quite surprising it's \nreally *doing* several concurrent I/O operations on multi-user workload! :)\nEven vacuum seems to run much more faster (or probably it's just my \nimpression :))\nBut keep in mind - ZFS is a very young file systems and doing only its first \nsteps in database workload. So, current goal here is to bring ZFS performance \nat least at the same level as UFS is reaching in the same conditions...\nPositive news: PostgreSQL seems to me performing much more better than other \ndatabase vendors (currently I'm getting at least 80% of UFS performance)...\nAll tuning points already mentioned previously by you are correct, and I \npromise you to publish all other details/findings once I've finished my \ntests! (it's too early to get conclusions yet :))\n\nBest regards!\n-Dimitri\n\n>\n> >>> Dimitri <[email protected]> 03/23/07 2:28 AM >>>\n>\n> On Friday 23 March 2007 03:20, Matt Smiley wrote:\n> > My company is purchasing a Sunfire x4500 to run our most I/O-bound\n> > databases, and I'd like to get some advice on configuration and tuning.\n> > We're currently looking at: - Solaris 10 + zfs + RAID Z\n> > - CentOS 4 + xfs + RAID 10\n> > - CentOS 4 + ext3 + RAID 10\n> > but we're open to other suggestions.\n>\n> Matt,\n>\n> for Solaris + ZFS you may find answers to all your questions here:\n>\n> http://blogs.sun.com/roch/category/ZFS\n> http://blogs.sun.com/realneel/entry/zfs_and_databases\n>\n> Think to measure log (WAL) activity and use separated pool for logs if\n> needed. Also, RAID-Z is more security-oriented rather performance, RAID-10\n> should be a better choice...\n>\n> Rgds,\n> -Dimitri\n", "msg_date": "Mon, 26 Mar 2007 09:35:57 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sunfire X4500 recommendations" } ]
[ { "msg_contents": "I'm posting this to performance in case our workaround may be of benefit to someone with a similar issue. I'm posting to hackers because I hope we can improve our planner in this area so that a workaround is not necessary. (It might make sense to reply to one group or the other, depending on reply content.)\n \nWe are converting from a commercial database (which shall remain unnamed here, due to license restrictions on publishing benchmarks). Most queries run faster on PostgreSQL; a small number choose very poor plans and run much longer. This particular query runs on the commercial product in 6.1s first time, 1.4s cached. In PostgreSQL it runs in about 144s both first time and cached. I was able to use an easy but fairly ugly rewrite (getting duplicate rows and eliminating them with DISTINCT) which runs on the commercial product in 9.2s/3.0s and in PostgreSQL in 2.0s/0.7s.\n \nHere are the tables:\n \n Table \"public.TranHeader\"\n Column | Type | Modifiers\n---------------+------------------+-----------\n tranNo | \"TranNoT\" | not null\n countyNo | \"CountyNoT\" | not null\n acctPd | \"DateT\" | not null\n date | \"DateT\" | not null\n isComplete | boolean | not null\n tranId | \"TranIdT\" | not null\n tranType | \"TranTypeT\" | not null\n userId | \"UserIdT\" | not null\n workstationId | \"WorkstationIdT\" | not null\n time | \"TimeT\" |\nIndexes:\n \"TranHeader_pkey\" PRIMARY KEY, btree (\"tranNo\", \"countyNo\")\n \"TranHeader_TranAcctPeriod\" UNIQUE, btree (\"acctPd\", \"tranNo\", \"countyNo\")\n \"TranHeader_TranDate\" UNIQUE, btree (date, \"tranNo\", \"countyNo\")\n\n Table \"public.TranDetail\"\n Column | Type | Modifiers\n-----------------+--------------------+-----------\n tranNo | \"TranNoT\" | not null\n tranDetailSeqNo | \"TranDetailSeqNoT\" | not null\n countyNo | \"CountyNoT\" | not null\n acctCode | \"AcctCodeT\" | not null\n amt | \"MoneyT\" | not null\n assessNo | \"TranIdT\" |\n caseNo | \"CaseNoT\" |\n citnNo | \"CitnNoT\" |\n citnViolDate | \"DateT\" |\n issAgencyNo | \"IssAgencyNoT\" |\n partyNo | \"PartyNoT\" |\n payableNo | \"PayableNoT\" |\n rcvblNo | \"RcvblNoT\" |\nIndexes:\n \"TranDetail_pkey\" PRIMARY KEY, btree (\"tranNo\", \"tranDetailSeqNo\", \"countyNo\")\n \"TranDetail_TranDetCaseNo\" UNIQUE, btree (\"caseNo\", \"tranNo\", \"tranDetailSeqNo\", \"countyNo\")\n \"TranDetail_TranDetPay\" UNIQUE, btree (\"payableNo\", \"tranNo\", \"tranDetailSeqNo\", \"countyNo\")\n \"TranDetail_TranDetRcvbl\" UNIQUE, btree (\"rcvblNo\", \"tranNo\", \"tranDetailSeqNo\", \"countyNo\")\n \"TranDetail_TranDetAcct\" btree (\"acctCode\", \"citnNo\", \"countyNo\")\n\n Table \"public.Adjustment\"\n Column | Type | Modifiers\n-----------------+-----------------------+-----------\n adjustmentNo | \"TranIdT\" | not null\n countyNo | \"CountyNoT\" | not null\n date | \"DateT\" | not null\n isTranVoided | boolean | not null\n reasonCode | \"ReasonCodeT\" | not null\n tranNo | \"TranNoT\" | not null\n adjustsTranId | \"TranIdT\" |\n adjustsTranNo | \"TranNoT\" |\n adjustsTranType | \"TranTypeT\" |\n explanation | character varying(50) |\nIndexes:\n \"Adjustment_pkey\" PRIMARY KEY, btree (\"adjustmentNo\", \"countyNo\")\n \"Adjustment_AdjustsTranId\" btree (\"adjustsTranId\", \"adjustsTranType\", \"tranNo\", \"countyNo\")\n \"Adjustment_AdjustsTranNo\" btree (\"adjustsTranNo\", \"tranNo\", \"countyNo\")\n \"Adjustment_Date\" btree (date, \"countyNo\")\n \nAdmittedly, the indexes are optimized for our query load under the commercial product, which can use the 
\"covering index\" optimization.\n \nexplain analyze\nSELECT \"A\".\"adjustmentNo\", \"A\".\"tranNo\", \"A\".\"countyNo\", \"H\".\"date\", \"H\".\"userId\", \"H\".\"time\"\n FROM \"Adjustment\" \"A\"\n JOIN \"TranHeader\" \"H\" ON (\"H\".\"tranId\" = \"A\".\"adjustmentNo\" AND \"H\".\"countyNo\" = \"A\".\"countyNo\" AND \"H\".\"tranNo\" = \"A\".\"tranNo\")\n WHERE \"H\".\"tranType\" = 'A'\n AND \"A\".\"date\" > DATE '2006-01-01'\n AND \"H\".\"countyNo\" = 66\n AND \"A\".\"countyNo\" = 66\n AND EXISTS\n (\n SELECT 1 FROM \"TranDetail\" \"D\"\n WHERE \"D\".\"tranNo\" = \"H\".\"tranNo\"\n AND \"D\".\"countyNo\" = \"H\".\"countyNo\"\n AND \"D\".\"caseNo\" LIKE '2006TR%'\n )\n;\n\n Nested Loop (cost=182.56..72736.37 rows=1 width=46) (actual time=6398.108..143631.427 rows=2205 loops=1)\n Join Filter: ((\"H\".\"tranId\")::bpchar = (\"A\".\"adjustmentNo\")::bpchar)\n -> Bitmap Heap Scan on \"Adjustment\" \"A\" (cost=182.56..1535.69 rows=11542 width=22) (actual time=38.098..68.324 rows=12958 loops=1)\n Recheck Cond: (((date)::date > '2006-01-01'::date) AND ((\"countyNo\")::smallint = 66))\n -> Bitmap Index Scan on \"Adjustment_Date\" (cost=0.00..179.67 rows=11542 width=0) (actual time=32.958..32.958 rows=12958 loops=1)\n Index Cond: (((date)::date > '2006-01-01'::date) AND ((\"countyNo\")::smallint = 66))\n -> Index Scan using \"TranHeader_pkey\" on \"TranHeader\" \"H\" (cost=0.00..6.15 rows=1 width=46) (actual time=11.073..11.074 rows=0 loops=12958)\n Index Cond: (((\"H\".\"tranNo\")::integer = (\"A\".\"tranNo\")::integer) AND ((\"H\".\"countyNo\")::smallint = 66))\n Filter: (((\"tranType\")::bpchar = 'A'::bpchar) AND (subplan))\n SubPlan\n -> Index Scan using \"TranDetail_TranDetCaseNo\" on \"TranDetail\" \"D\" (cost=0.00..4.73 rows=1 width=0) (actual time=11.038..11.038 rows=0 loops=12958)\n Index Cond: (((\"caseNo\")::bpchar >= '2006TR'::bpchar) AND ((\"caseNo\")::bpchar < '2006TS'::bpchar) AND ((\"tranNo\")::integer = ($0)::integer) AND ((\"countyNo\")::smallint = ($1)::smallint))\n Filter: ((\"caseNo\")::bpchar ~~ '2006TR%'::text)\n Total runtime: 143633.838 ms\n \nThe commercial product scans the index on caseNo in TranDetail to build a work table of unique values, then uses indexed access to the TranHeader and then to Adjustment. 
I was able to get approximately the same plan (except the duplicates are eliminated at the end) in PostgreSQL by rewriting to this:\n \nSELECT DISTINCT \"A\".\"adjustmentNo\", \"A\".\"tranNo\", \"A\".\"countyNo\", \"H\".\"date\", \"H\".\"userId\", \"H\".\"time\"\n FROM \"Adjustment\" \"A\"\n JOIN \"TranHeader\" \"H\" ON (\"H\".\"tranId\" = \"A\".\"adjustmentNo\" AND \"H\".\"countyNo\" = \"A\".\"countyNo\" AND \"H\".\"tranNo\" = \"A\".\"tranNo\")\n JOIN \"TranDetail\" \"D\" ON (\"D\".\"tranNo\" = \"H\".\"tranNo\" AND \"D\".\"countyNo\" = \"H\".\"countyNo\" AND \"D\".\"caseNo\" LIKE '2006TR%')\n WHERE \"H\".\"tranType\" = 'A'\n AND \"A\".\"date\" > DATE '2006-01-01'\n AND \"H\".\"countyNo\" = 66\n AND \"A\".\"countyNo\" = 66\n;\n\n Unique (cost=130.96..130.98 rows=1 width=46) (actual time=694.591..715.008 rows=2205 loops=1)\n -> Sort (cost=130.96..130.96 rows=1 width=46) (actual time=694.586..701.808 rows=16989 loops=1)\n Sort Key: \"A\".\"adjustmentNo\", \"A\".\"tranNo\", \"A\".\"countyNo\", \"H\".date, \"H\".\"userId\", \"H\".\"time\"\n -> Nested Loop (cost=0.00..130.95 rows=1 width=46) (actual time=0.157..636.779 rows=16989 loops=1)\n Join Filter: ((\"H\".\"tranNo\")::integer = (\"A\".\"tranNo\")::integer)\n -> Nested Loop (cost=0.00..113.76 rows=4 width=50) (actual time=0.131..452.544 rows=16989 loops=1)\n -> Index Scan using \"TranDetail_TranDetCaseNo\" on \"TranDetail\" \"D\" (cost=0.00..27.57 rows=20 width=6) (actual time=0.049..83.005 rows=46293 loops=1)\n Index Cond: (((\"caseNo\")::bpchar >= '2006TR'::bpchar) AND ((\"caseNo\")::bpchar < '2006TS'::bpchar) AND (66 = (\"countyNo\")::smallint))\n Filter: ((\"caseNo\")::bpchar ~~ '2006TR%'::text)\n -> Index Scan using \"TranHeader_pkey\" on \"TranHeader\" \"H\" (cost=0.00..4.30 rows=1 width=46) (actual time=0.006..0.007 rows=0 loops=46293)\n Index Cond: (((\"D\".\"tranNo\")::integer = (\"H\".\"tranNo\")::integer) AND ((\"H\".\"countyNo\")::smallint = 66))\n Filter: ((\"tranType\")::bpchar = 'A'::bpchar)\n -> Index Scan using \"Adjustment_pkey\" on \"Adjustment\" \"A\" (cost=0.00..4.28 rows=1 width=22) (actual time=0.007..0.008 rows=1 loops=16989)\n Index Cond: (((\"H\".\"tranId\")::bpchar = (\"A\".\"adjustmentNo\")::bpchar) AND ((\"A\".\"countyNo\")::smallint = 66))\n Filter: ((date)::date > '2006-01-01'::date)\n Total runtime: 715.932 ms\n \nI can't see any reason that PostgreSQL can't catch up to the other product on this optimization issue. 
This usage of DISTINCT seems a bit sloppy; I usually try to dissuade the application programmers from accumulating duplicates during the joins and then eliminating them in this way.\n \n-Kevin\n \n\n", "msg_date": "Fri, 23 Mar 2007 12:01:40 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "EXISTS optimization" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> explain analyze\n> SELECT \"A\".\"adjustmentNo\", \"A\".\"tranNo\", \"A\".\"countyNo\", \"H\".\"date\", \"H\".\"userId\", \"H\".\"time\"\n> FROM \"Adjustment\" \"A\"\n> JOIN \"TranHeader\" \"H\" ON (\"H\".\"tranId\" = \"A\".\"adjustmentNo\" AND \"H\".\"countyNo\" = \"A\".\"countyNo\" AND \"H\".\"tranNo\" = \"A\".\"tranNo\")\n> WHERE \"H\".\"tranType\" = 'A'\n> AND \"A\".\"date\" > DATE '2006-01-01'\n> AND \"H\".\"countyNo\" = 66\n> AND \"A\".\"countyNo\" = 66\n> AND EXISTS\n> (\n> SELECT 1 FROM \"TranDetail\" \"D\"\n> WHERE \"D\".\"tranNo\" = \"H\".\"tranNo\"\n> AND \"D\".\"countyNo\" = \"H\".\"countyNo\"\n> AND \"D\".\"caseNo\" LIKE '2006TR%'\n> )\n> ;\n\n> The commercial product scans the index on caseNo in TranDetail to build a work table of unique values, then uses indexed access to the TranHeader and then to Adjustment.\n\nIf you want that, try rewriting the EXISTS to an IN:\n\n AND (\"H\".\"tranNo\", \"H\".\"countyNo\") IN\n (\n SELECT \"D\".\"tranNo\", \"D\".\"countyNo\" FROM \"TranDetail\" \"D\"\n WHERE \"D\".\"caseNo\" LIKE '2006TR%'\n )\n\nWe don't currently try to flatten EXISTS into a unique/join plan as we\ndo for IN. I seem to recall not doing so when I rewrote IN planning\nbecause I didn't think it would be exactly semantically equivalent,\nbut that was awhile ago. Right at the moment it seems like it ought\nto be equivalent as long as the comparison operators are strict.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Mar 2007 17:49:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: EXISTS optimization " }, { "msg_contents": "\n\n>>> On Fri, Mar 23, 2007 at 4:49 PM, in message <[email protected]>,\nTom Lane <[email protected]> wrote: \n> \"Kevin Grittner\" <[email protected]> writes:\n>> explain analyze\n>> SELECT \"A\".\"adjustmentNo\", \"A\".\"tranNo\", \"A\".\"countyNo\", \"H\".\"date\", \n> \"H\".\"userId\", \"H\".\"time\"\n>> FROM \"Adjustment\" \"A\"\n>> JOIN \"TranHeader\" \"H\" ON (\"H\".\"tranId\" = \"A\".\"adjustmentNo\" AND \n> \"H\".\"countyNo\" = \"A\".\"countyNo\" AND \"H\".\"tranNo\" = \"A\".\"tranNo\")\n>> WHERE \"H\".\"tranType\" = 'A'\n>> AND \"A\".\"date\" > DATE '2006- 01- 01'\n>> AND \"H\".\"countyNo\" = 66\n>> AND \"A\".\"countyNo\" = 66\n>> AND EXISTS\n>> (\n>> SELECT 1 FROM \"TranDetail\" \"D\"\n>> WHERE \"D\".\"tranNo\" = \"H\".\"tranNo\"\n>> AND \"D\".\"countyNo\" = \"H\".\"countyNo\"\n>> AND \"D\".\"caseNo\" LIKE '2006TR%'\n>> )\n>> ;\n> \n>> The commercial product scans the index on caseNo in TranDetail to build a \n> work table of unique values, then uses indexed access to the TranHeader and \n> then to Adjustment.\n> \n> If you want that, try rewriting the EXISTS to an IN:\n> \n> AND (\"H\".\"tranNo\", \"H\".\"countyNo\") IN\n> (\n> SELECT \"D\".\"tranNo\", \"D\".\"countyNo\" FROM \"TranDetail\" \"D\"\n> WHERE \"D\".\"caseNo\" LIKE '2006TR%'\n> )\n\nNice. 
I get this:\n \nexplain analyze\nSELECT \"A\".\"adjustmentNo\", \"A\".\"tranNo\", \"A\".\"countyNo\", \"H\".\"date\", \"H\".\"userId\", \"H\".\"time\"\n FROM \"Adjustment\" \"A\"\n JOIN \"TranHeader\" \"H\" ON (\"H\".\"tranId\" = \"A\".\"adjustmentNo\" AND \"H\".\"countyNo\" = \"A\".\"countyNo\" AND \"H\".\"tranNo\" = \"A\".\"tranNo\")\n WHERE \"H\".\"tranType\" = 'A'\n AND \"A\".\"date\" > DATE '2006- 01- 01'\n AND \"H\".\"countyNo\" = 66\n AND \"A\".\"countyNo\" = 66\n AND (\"H\".\"tranNo\", \"H\".\"countyNo\") IN\n (\n SELECT \"D\".\"tranNo\", \"D\".\"countyNo\" FROM \"TranDetail\" \"D\"\n WHERE \"D\".\"caseNo\" LIKE '2006TR%'\n )\n;\n \n Nested Loop (cost=27.76..36.38 rows=1 width=46) (actual time=92.999..200.398 rows=2209 loops=1)\n Join Filter: ((\"H\".\"tranNo\")::integer = (\"A\".\"tranNo\")::integer)\n -> Nested Loop (cost=27.76..32.08 rows=1 width=50) (actual time=92.970..176.472 rows=2209 loops=1)\n -> HashAggregate (cost=27.76..27.77 rows=1 width=6) (actual time=92.765..100.810 rows=9788 loops=1)\n -> Index Scan using \"TranDetail_TranDetCaseNo\" on \"TranDetail\" \"D\" (cost=0.00..27.66 rows=20 width=6) (actual time=0.059..60.967 rows=46301 loops=1)\n Index Cond: (((\"caseNo\")::bpchar >= '2006TR'::bpchar) AND ((\"caseNo\")::bpchar < '2006TS'::bpchar) AND ((\"countyNo\")::smallint = 66))\n Filter: ((\"caseNo\")::bpchar ~~ '2006TR%'::text)\n -> Index Scan using \"TranHeader_pkey\" on \"TranHeader\" \"H\" (cost=0.00..4.30 rows=1 width=46) (actual time=0.006..0.006 rows=0 loops=9788)\n Index Cond: (((\"H\".\"tranNo\")::integer = (\"D\".\"tranNo\")::integer) AND ((\"H\".\"countyNo\")::smallint = 66))\n Filter: ((\"tranType\")::bpchar = 'A'::bpchar)\n -> Index Scan using \"Adjustment_pkey\" on \"Adjustment\" \"A\" (cost=0.00..4.28 rows=1 width=22) (actual time=0.008..0.009 rows=1 loops=2209)\n Index Cond: (((\"H\".\"tranId\")::bpchar = (\"A\".\"adjustmentNo\")::bpchar) AND ((\"A\".\"countyNo\")::smallint = 66))\n Filter: ((date)::date > '2006-01-01'::date)\n Total runtime: 201.306 ms\n \nThat's the good news. The bad news is that I operate under a management portability dictate which doesn't currently allow that syntax, since not all of the products they want to cover support it. I tried something which seems equivalent, but it is running for a very long time. 
I'll show it with just the explain while I wait to see how long the explain analyze takes.\n \nexplain\nSELECT \"A\".\"adjustmentNo\", \"A\".\"tranNo\", \"A\".\"countyNo\", \"H\".\"date\", \"H\".\"userId\", \"H\".\"time\"\n FROM \"Adjustment\" \"A\"\n JOIN \"TranHeader\" \"H\" ON (\"H\".\"tranId\" = \"A\".\"adjustmentNo\" AND \"H\".\"countyNo\" = \"A\".\"countyNo\" AND \"H\".\"tranNo\" = \"A\".\"tranNo\")\n WHERE \"H\".\"tranType\" = 'A'\n AND \"A\".\"date\" > DATE '2006- 01- 01'\n AND \"H\".\"countyNo\" = 66\n AND \"A\".\"countyNo\" = 66\n AND \"H\".\"tranNo\" IN\n (\n SELECT \"D\".\"tranNo\" FROM \"TranDetail\" \"D\"\n WHERE \"D\".\"caseNo\" LIKE '2006TR%'\n AND \"D\".\"countyNo\" = \"H\".\"countyNo\"\n )\n;\n\n Nested Loop (cost=0.00..181673.08 rows=1 width=46)\n Join Filter: ((\"H\".\"tranId\")::bpchar = (\"A\".\"adjustmentNo\")::bpchar)\n -> Seq Scan on \"Adjustment\" \"A\" (cost=0.00..2384.27 rows=11733 width=22)\n Filter: (((date)::date > '2006-01-01'::date) AND ((\"countyNo\")::smallint = 66))\n -> Index Scan using \"TranHeader_pkey\" on \"TranHeader\" \"H\" (cost=0.00..15.27 rows=1 width=46)\n Index Cond: (((\"H\".\"tranNo\")::integer = (\"A\".\"tranNo\")::integer) AND ((\"H\".\"countyNo\")::smallint = 66))\n Filter: (((\"tranType\")::bpchar = 'A'::bpchar) AND (subplan))\n SubPlan\n -> Index Scan using \"TranDetail_TranDetCaseNo\" on \"TranDetail\" \"D\" (cost=0.00..27.66 rows=20 width=4)\n Index Cond: (((\"caseNo\")::bpchar >= '2006TR'::bpchar) AND ((\"caseNo\")::bpchar < '2006TS'::bpchar) AND ((\"countyNo\")::smallint = ($0)::smallint))\n Filter: ((\"caseNo\")::bpchar ~~ '2006TR%'::text)\n\n> We don't currently try to flatten EXISTS into a unique/join plan as we\n> do for IN. I seem to recall not doing so when I rewrote IN planning\n> because I didn't think it would be exactly semantically equivalent,\n> but that was awhile ago. Right at the moment it seems like it ought\n> to be equivalent as long as the comparison operators are strict.\n \nThere are a great many situations where they are exactly semantically equivalent. In fact, the commercial database product usually generates an identical plan. I could try to work out (or better yet find) a formal description of when that equivalence holds, if someone would be up for implementing it. Barring that, I could see if management would approve some time for me to look at submitting a patch, but I haven't looked at the code involved, so I have no idea of the scale of effort involved yet.\n \n-Kevin\n \n\n", "msg_date": "Fri, 23 Mar 2007 17:26:04 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: EXISTS optimization" }, { "msg_contents": "On Fri, Mar 23, 2007 at 05:49:42PM -0400, Tom Lane wrote:\n> We don't currently try to flatten EXISTS into a unique/join plan as we\n> do for IN. I seem to recall not doing so when I rewrote IN planning\n> because I didn't think it would be exactly semantically equivalent,\n> but that was awhile ago. Right at the moment it seems like it ought\n> to be equivalent as long as the comparison operators are strict.\n\nWasn't it due to the fact that IN needs to scan through all\npossibilites anyway because of its interaction with NULL, whereas\nEXISTS can stop at the first row?\n\nThat would mean the subquery to be materialised would not be equivalent\nif it called any non-immutable functions. It's also much less clear to\nbe a win in the EXISTs case. 
But then, that's a costs issue the planner\ncan deal with...\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> From each according to his ability. To each according to his ability to litigate.", "msg_date": "Fri, 23 Mar 2007 23:26:41 +0100", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: EXISTS optimization" }, { "msg_contents": "I don't understand -- TRUE OR UNKNOWN evaluates to TRUE, so why would the IN need to continue? I'm not quite following the rest; could you elaborate or give an example? (Sorry if I'm lagging behind the rest of the class here.)\n \n-Kevin\n \n \n>>> Martijn van Oosterhout <[email protected]> 03/23/07 5:26 PM >>> \nOn Fri, Mar 23, 2007 at 05:49:42PM -0400, Tom Lane wrote:\n> We don't currently try to flatten EXISTS into a unique/join plan as we\n> do for IN. I seem to recall not doing so when I rewrote IN planning\n> because I didn't think it would be exactly semantically equivalent,\n> but that was awhile ago. Right at the moment it seems like it ought\n> to be equivalent as long as the comparison operators are strict.\n\nWasn't it due to the fact that IN needs to scan through all\npossibilites anyway because of its interaction with NULL, whereas\nEXISTS can stop at the first row?\n\nThat would mean the subquery to be materialised would not be equivalent\nif it called any non-immutable functions. It's also much less clear to\nbe a win in the EXISTs case. But then, that's a costs issue the planner\ncan deal with...\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> From each according to his ability. To each according to his ability to litigate.\n\n\n", "msg_date": "Fri, 23 Mar 2007 17:30:27 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: EXISTS optimization" }, { "msg_contents": "\n\n>>> On Fri, Mar 23, 2007 at 5:26 PM, in message\n<[email protected]>, \"Kevin Grittner\"\n<[email protected]> wrote: \n\n> I tried something which seems \n> equivalent, but it is running for a very long time. 
I'll show it with just \n> the explain while I wait to see how long the explain analyze takes.\n> \n> explain\n> SELECT \"A\".\"adjustmentNo\", \"A\".\"tranNo\", \"A\".\"countyNo\", \"H\".\"date\", \n> \"H\".\"userId\", \"H\".\"time\"\n> FROM \"Adjustment\" \"A\"\n> JOIN \"TranHeader\" \"H\" ON (\"H\".\"tranId\" = \"A\".\"adjustmentNo\" AND \n> \"H\".\"countyNo\" = \"A\".\"countyNo\" AND \"H\".\"tranNo\" = \"A\".\"tranNo\")\n> WHERE \"H\".\"tranType\" = 'A'\n> AND \"A\".\"date\" > DATE '2006- 01- 01'\n> AND \"H\".\"countyNo\" = 66\n> AND \"A\".\"countyNo\" = 66\n> AND \"H\".\"tranNo\" IN\n> (\n> SELECT \"D\".\"tranNo\" FROM \"TranDetail\" \"D\"\n> WHERE \"D\".\"caseNo\" LIKE '2006TR%'\n> AND \"D\".\"countyNo\" = \"H\".\"countyNo\"\n> )\n> ;\n\nexplain analyze results:\n \n Nested Loop (cost=0.00..181673.08 rows=1 width=46) (actual time=42224.077..964266.969 rows=2209 loops=1)\n Join Filter: ((\"H\".\"tranId\")::bpchar = (\"A\".\"adjustmentNo\")::bpchar)\n -> Seq Scan on \"Adjustment\" \"A\" (cost=0.00..2384.27 rows=11733 width=22) (actual time=15.355..146.620 rows=13003 loops=1)\n Filter: (((date)::date > '2006-01-01'::date) AND ((\"countyNo\")::smallint = 66))\n -> Index Scan using \"TranHeader_pkey\" on \"TranHeader\" \"H\" (cost=0.00..15.27 rows=1 width=46) (actual time=74.141..74.141 rows=0 loops=13003)\n Index Cond: (((\"H\".\"tranNo\")::integer = (\"A\".\"tranNo\")::integer) AND ((\"H\".\"countyNo\")::smallint = 66))\n Filter: (((\"tranType\")::bpchar = 'A'::bpchar) AND (subplan))\n SubPlan\n -> Index Scan using \"TranDetail_TranDetCaseNo\" on \"TranDetail\" \"D\" (cost=0.00..27.66 rows=20 width=4) (actual time=0.039..58.234 rows=42342 loops=13003)\n Index Cond: (((\"caseNo\")::bpchar >= '2006TR'::bpchar) AND ((\"caseNo\")::bpchar < '2006TS'::bpchar) AND ((\"countyNo\")::smallint = ($0)::smallint))\n Filter: ((\"caseNo\")::bpchar ~~ '2006TR%'::text)\n Total runtime: 964269.555 ms\n\n", "msg_date": "Fri, 23 Mar 2007 17:37:16 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] EXISTS optimization" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Tom Lane <[email protected]> wrote: \n>> If you want that, try rewriting the EXISTS to an IN:\n>> \n>> AND (\"H\".\"tranNo\", \"H\".\"countyNo\") IN\n>> (\n>> SELECT \"D\".\"tranNo\", \"D\".\"countyNo\" FROM \"TranDetail\" \"D\"\n>> WHERE \"D\".\"caseNo\" LIKE '2006TR%'\n>> )\n\n> That's the good news. The bad news is that I operate under a\n> management portability dictate which doesn't currently allow that\n> syntax, since not all of the products they want to cover support it.\n\nWhich part of it don't they like --- the multiple IN-comparisons?\n\n> I tried something which seems equivalent, but it is running for a very\n> long time.\n> AND \"H\".\"tranNo\" IN\n> (\n> SELECT \"D\".\"tranNo\" FROM \"TranDetail\" \"D\"\n> WHERE \"D\".\"caseNo\" LIKE '2006TR%'\n> AND \"D\".\"countyNo\" = \"H\".\"countyNo\"\n> )\n\nNo, that's not gonna accomplish a darn thing, because you've still got\na correlated subquery (ie, a reference to outer \"H\") and so turning the\nIN into a join doesn't work.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Mar 2007 19:04:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: EXISTS optimization " }, { "msg_contents": "On 3/23/07, Kevin Grittner <[email protected]> wrote:\n[...]\n> That's the good news. 
The bad news is that I operate under a management portability dictate which doesn't currently allow that syntax, since not all of the products they want to\n\nIt doesn't really touch the substance, but I am curious: are you not\neven allowed to discriminate between products in your code like:\nif db is 'postresql' then\n...\nelse\n...\n?\n\nWhat would be the rationale for that?\n\nThanks\nPeter\n\ncover support it. I tried something which seems equivalent, but it is\nrunning for a very long time. I'll show it with just the explain\nwhile I wait to see how long the explain analyze takes.\n>\n[...]\n", "msg_date": "Sat, 24 Mar 2007 00:04:44 +0100", "msg_from": "\"Peter Kovacs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] EXISTS optimization" }, { "msg_contents": ">>> On Fri, Mar 23, 2007 at 6:04 PM, in message\n<[email protected]>, \"Peter Kovacs\"\n<[email protected]> wrote: \n> On 3/23/07, Kevin Grittner <[email protected]> wrote:\n> [...]\n>> That's the good news. The bad news is that I operate under a management \n> portability dictate which doesn't currently allow that syntax, since not all \n> of the products they want to\n> \n> It doesn't really touch the substance, but I am curious: are you not\n> even allowed to discriminate between products in your code like:\n> if db is 'postresql' then\n> ...\n> else\n> ...\n> ?\n> \n> What would be the rationale for that?\n \nAnybody who's not curious about that should skip the rest of this email.\n \nManagement has simply given a mandate that the software be independent of OS and database vendor, and to use Java to help with the OS independence. I have to admit that I am the architect of the database independence solution that was devised. (The choice of Java for the OS independence has been very successful. We have run our bytecode on HP-UX, Windows, Sun Solaris, and various flavors of Linux without having to compile different versions of the bytecode. Other than when people get careless with case sensitivity on file names or with path separators, it just drops right in and runs.\n \nFor the data side, we write all of our queries in ANSI SQL in our own query tool, parse it, and generate Java classes to run it. The ANSI source is broken down to \"lowest common denominator\" queries, with all procedural code covered in the Java query classes. So we have stored procedures which can be called, triggers that fire, etc. in Java, issuing SELECT, INSERT, UPDATE, DELETE statements to the database. This allows us to funnel all DML through a few \"primitive\" routines which capture before and after images and save them in our own transaction image tables. We use this to replicate from our 72 county databases, which are the official court record, to multiple central databases, and a transaction repository, used for auditing case activity and assisting with failure recovery.\n \nThe problem with burying 'if db is MaxDB', 'if db is SQLServer', 'if db is PostgreSQL' everywhere is that you have no idea what to do when you then want to drop in some different product. We have a plugin layer to manage known areas of differences which aren't handled cleanly by JDBC, where the default behavior is ANSI-compliant, and a few dozen to a few hundred lines need to be written to modify that default support a new database product. 
(Of course, each one so far has brought in a few surprises, making the plugin layer just a little bit thicker.)\n \nSo, to support some new syntax, we have to update our parser, and have a way to generate code which runs on all the candidate database products, either directly or through a plugin layer. If any of the products don't support multi-value row value constructors, I have a hard time seeing a good way to cover that with the plugin. On the subject issue, I'm pretty sure it would actually be less work for me to modify the PostgreSQL optimizer to efficiently handle the syntax we do support than to try to bend row value constructors to a syntax that is supported on other database products.\n \nAnd, by the way, I did take a shot on getting them to commit to PostgreSQL as the long-term solution, and relax the portability rules. No sale. Perhaps when everything is converted to PostgreSQL and working for a while they may reconsider.\n \n-Kevin\n \n\n", "msg_date": "Fri, 23 Mar 2007 21:28:36 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] EXISTS optimization" }, { "msg_contents": "On Fri, Mar 23, 2007 at 05:30:27PM -0500, Kevin Grittner wrote:\n> I don't understand -- TRUE OR UNKNOWN evaluates to TRUE, so why would\n> the IN need to continue? I'm not quite following the rest; could you\n> elaborate or give an example? (Sorry if I'm lagging behind the rest\n> of the class here.)\n\nYou're right, I'm getting confused with the interaction of NULL and NOT\nIN.\n\nThe multiple evaluation thing still applies, but that's minor.\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> From each according to his ability. To each according to his ability to litigate.", "msg_date": "Sat, 24 Mar 2007 14:07:59 +0100", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: EXISTS optimization" }, { "msg_contents": "Kevin Grittner wrote:\n> Management has simply given a mandate that the software be independent\n> of OS and database vendor, and to use Java to help with the OS independence.\n> ... we write all of our queries in ANSI SQL in our own query tool, parse it,\n> and generate Java classes to run it.\n\nA better solution, and one I've used for years, is to use OS- or database-specific features, but carefully encapsulate them in a single module, for example, \"database_specific.java\".\n\nFor example, when I started supporting both Oracle and Postgres, I encountered the MAX() problem, which (at the time) was very slow in Postgres, but could be replaced by \"select X from MYTABLE order by X desc limit 1\". So I created a function, \"GetColumnMax()\" that encapsulates the database-specific code for this. Similar functions encapsulate and a number of other database-specific optimizations.\n\nAnother excellent example: I have a function called \"TableExists(name)\". To the best of my knowledge, there simply is no ANSI SQL for this, so what do you do? Encapsulate it in one place.\n\nThe result? When I port to a new system, I know exactly where to find all of the non-ANSI SQL. I started this habit years ago with C/C++ code, which has the same problem: System calls are not consistent across the varients of Unix, Windows, and other OS's. 
So you put them all in one file called \"machine_dependent.c\".\n\nRemember the old adage: There is no such thing as portable code, only code that has been ported.\n\nCheers,\nCraig\n\n\n", "msg_date": "Tue, 03 Apr 2007 14:47:30 -0700", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] EXISTS optimization" } ]
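A side-by-side sketch of the three WHERE-clause forms discussed in this thread may make the planner behaviour easier to see. The table and column names are the ones from Kevin's query; the pared-down select list and the comments are editorial, and the timings referred to are the ones reported above, not new measurements.

    -- Form 1: correlated EXISTS. At the time of this thread the planner did not
    -- flatten this into a join, so the subplan runs once per outer row.
    SELECT "H"."tranNo"
      FROM "TranHeader" "H"
     WHERE "H"."tranType" = 'A'
       AND EXISTS (SELECT 1
                     FROM "TranDetail" "D"
                    WHERE "D"."tranNo"   = "H"."tranNo"
                      AND "D"."countyNo" = "H"."countyNo"
                      AND "D"."caseNo"   LIKE '2006TR%');

    -- Form 2: uncorrelated, multi-column IN. With no reference to "H" inside the
    -- subquery, the planner can flatten it into the unique/join (HashAggregate)
    -- plan shown earlier, which ran in about 200 ms.
    SELECT "H"."tranNo"
      FROM "TranHeader" "H"
     WHERE "H"."tranType" = 'A'
       AND ("H"."tranNo", "H"."countyNo") IN
           (SELECT "D"."tranNo", "D"."countyNo"
              FROM "TranDetail" "D"
             WHERE "D"."caseNo" LIKE '2006TR%');

    -- Form 3: single-column IN that still references "H" inside the subquery.
    -- The correlation brings back the per-row subplan, which is why the last
    -- EXPLAIN ANALYZE above takes roughly 964 seconds.
    SELECT "H"."tranNo"
      FROM "TranHeader" "H"
     WHERE "H"."tranType" = 'A'
       AND "H"."tranNo" IN
           (SELECT "D"."tranNo"
              FROM "TranDetail" "D"
             WHERE "D"."caseNo" LIKE '2006TR%'
               AND "D"."countyNo" = "H"."countyNo");

In short, it is the absence of any reference to the outer query inside the subquery, not IN versus EXISTS as such, that lets the planner build the unique/join plan.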
[ { "msg_contents": "Hi,\n\nI have two queries that are very similar, that run on the same table \nwith slightly different conditions. However, despite a similar number \nof rows returned, the query planner is insisting on a different \nordering and different join algorithm, causing a huge performance \nhit. I'm not sure why the planner is doing the merge join the way it \nis in the slow case, rather than following a similar plan to the fast \ncase.\n\nNotice that the difference in the query is near the very end, where \nit's supplier_alias_id vs. buyer_alias_id and company_type = \n'Supplier' vs 'Buyer'.\n\nWhat I don't get is why, in the slow (supplier) case, the index scan \non customs_records is done first without the index condition of \ncr.supplier_alias_id = \"outer\".id, which means selecting 1.7 million \nrows; why wouldn't it do a nested loop left join and have the index \ncondition use that alias id the way the fast ('buyer') query is done?\n\nI'd appreciate any help -- thanks!\n\nSLOW:\n\nselect a.id as alias_id, a.company_type as alias_company_type, a.name \nas alias_name, cr.shipper as customs_record_shipper, cr.saddr1 as \ncustoms_record_saddr1, cr.saddr2 as customs_record_saddr2, cr.saddr3 \nas customs_record_saddr3, cr.consignee as customs_record_consignee, \ncr.caddr1 as customs_record_caddr1, cr.caddr2 as \ncustoms_record_caddr2, cr.caddr3 as customs_record_caddr3, \ncr.notify_party as customs_record_notify_party, cr.naddr1 as \ncustoms_record_naddr1, cr.naddr2 as customs_record_naddr2, cr.naddr3 \nas customs_record_naddr3, cr.also_notify_party as \ncustoms_record_also_notify_party, cr.anaddr1 as \ncustoms_record_anaddr1, cr.anaddr2 as customs_record_anaddr2, \ncr.anaddr3 as customs_record_addr3, cr.id as customs_record_id, \ncr.buyer_field as customs_record_buyer_field from aliases a left \nouter join customs_records cr on cr.supplier_alias_id = a.id where \na.company_type = 'Supplier' and a.company_id is NULL\n\n\nMerge Right Join (cost=1138.78..460482.84 rows=2993 width=405) \n(actual time=1244745.427..1245714.571 rows=39 loops=1)\n Merge Cond: (\"outer\".supplier_alias_id = \"inner\".id)\n -> Index Scan using index_customs_records_on_supplier_alias_id on \ncustoms_records cr (cost=0.00..6717806.37 rows=1704859 width=363) \n(actual time=54.567..1245210.707 rows=117424 loops=1)\n -> Sort (cost=1138.78..1139.53 rows=300 width=46) (actual \ntime=24.093..24.161 rows=39 loops=1)\n Sort Key: a.id\n -> Index Scan using index_aliases_company_type_company_id \non aliases a (cost=0.00..1126.44 rows=300 width=46) (actual \ntime=22.400..23.959 rows=10 loops=1)\n Index Cond: ((company_type)::text = 'Supplier'::text)\n Filter: (company_id IS NULL)\nTotal runtime: 1245714.752 ms\n\nFAST:\n\nNested Loop Left Join (cost=0.00..603052.46 rows=3244 width=405) \n(actual time=68.526..3115.407 rows=1355 loops=1)\n -> Index Scan using index_aliases_company_type_company_id on \naliases a (cost=0.00..639.56 rows=165 width=46) (actual \ntime=32.419..132.286 rows=388 loops=1)\n Index Cond: ((company_type)::text = 'Buyer'::text)\n Filter: (company_id IS NULL)\n -> Index Scan using index_customs_records_on_buyer_alias_id on \ncustoms_records cr (cost=0.00..3639.55 rows=915 width=363) (actual \ntime=2.133..7.649 rows=3 loops=388)\n Index Cond: (cr.buyer_alias_id = \"outer\".id)\nTotal runtime: 3117.713 ms\n(7 rows)\n\nselect a.id as alias_id, a.company_type as alias_company_type, a.name \nas alias_name, cr.shipper as customs_record_shipper, cr.saddr1 as \ncustoms_record_saddr1, cr.saddr2 as 
customs_record_saddr2, cr.saddr3 \nas customs_record_saddr3, cr.consignee as customs_record_consignee, \ncr.caddr1 as customs_record_caddr1, cr.caddr2 as \ncustoms_record_caddr2, cr.caddr3 as customs_record_caddr3, \ncr.notify_party as customs_record_notify_party, cr.naddr1 as \ncustoms_record_naddr1, cr.naddr2 as customs_record_naddr2, cr.naddr3 \nas customs_record_naddr3, cr.also_notify_party as \ncustoms_record_also_notify_party, cr.anaddr1 as \ncustoms_record_anaddr1, cr.anaddr2 as customs_record_anaddr2, \ncr.anaddr3 as customs_record_addr3, cr.id as customs_record_id, \ncr.buyer_field as customs_record_buyer_field from aliases a left \nouter join customs_records cr on cr.buyer_alias_id = a.id where \na.company_type = 'Buyer' and a.company_id is NULL\n\n", "msg_date": "Fri, 23 Mar 2007 15:44:44 -0400", "msg_from": "\"Noah M. Daniels\" <[email protected]>", "msg_from_op": true, "msg_subject": "Strange left outer join performance issue" }, { "msg_contents": "Run VACUUM ANALYZE and see if the cost estimates became close to the\neffective rows. This could make it faster.\n\n2007/3/23, Noah M. Daniels <[email protected]>:\n> SLOW:\n> Merge Right Join (cost=1138.78..460482.84 rows=2993 width=405)\n> (actual time=1244745.427..1245714.571 rows=39 loops=1)\n> Merge Cond: (\"outer\".supplier_alias_id = \"inner\".id)\n> -> Index Scan using index_customs_records_on_supplier_alias_id on\n> customs_records cr (cost=0.00..6717806.37 rows=1704859 width=363)\n> (actual time=54.567..1245210.707 rows=117424 loops=1)\n> -> Sort (cost=1138.78..1139.53 rows=300 width=46) (actual\n> time=24.093..24.161 rows=39 loops=1)\n> Sort Key: a.id\n> -> Index Scan using index_aliases_company_type_company_id\n> on aliases a (cost=0.00..1126.44 rows=300 width=46) (actual\n> time=22.400..23.959 rows=10 loops=1)\n> Index Cond: ((company_type)::text = 'Supplier'::text)\n> Filter: (company_id IS NULL)\n> Total runtime: 1245714.752 ms\n>\n> FAST:\n>\n> Nested Loop Left Join (cost=0.00..603052.46 rows=3244 width=405)\n> (actual time=68.526..3115.407 rows=1355 loops=1)\n> -> Index Scan using index_aliases_company_type_company_id on\n> aliases a (cost=0.00..639.56 rows=165 width=46) (actual\n> time=32.419..132.286 rows=388 loops=1)\n> Index Cond: ((company_type)::text = 'Buyer'::text)\n> Filter: (company_id IS NULL)\n> -> Index Scan using index_customs_records_on_buyer_alias_id on\n> customs_records cr (cost=0.00..3639.55 rows=915 width=363) (actual\n> time=2.133..7.649 rows=3 loops=388)\n> Index Cond: (cr.buyer_alias_id = \"outer\".id)\n> Total runtime: 3117.713 ms\n> (7 rows)\n\n\n-- \nDaniel Cristian Cruz\nAnalista de Sistemas\n\nRun VACUUM ANALYZE and see if the cost estimates became close to the effective rows. This could make it faster.2007/3/23, Noah M. 
Daniels <[email protected]>:> SLOW:\n> Merge Right Join  (cost=1138.78..460482.84 rows=2993 width=405)> (actual time=1244745.427..1245714.571 rows=39 loops=1)\n>    Merge Cond: (\"outer\".supplier_alias_id = \"inner\".id)>    ->  Index Scan using index_customs_records_on_supplier_alias_id on> customs_records cr  (cost=0.00..6717806.37 \nrows=1704859 width=363)> (actual time=54.567..1245210.707 rows=117424 loops=1)>    ->  Sort  (cost=1138.78..1139.53 \nrows=300 width=46) (actual> time=24.093..24.161 rows=39 loops=1)>          Sort Key: a.id>          ->  Index Scan using index_aliases_company_type_company_id\n> on aliases a  (cost=0.00..1126.44 rows=300 width=46) (actual> time=22.400..23.959 rows=10 loops=1)>                Index Cond: ((company_type)::text = 'Supplier'::text)\n>                Filter: (company_id IS NULL)> Total runtime: 1245714.752 ms> > FAST:> > Nested Loop Left Join  (cost=0.00..603052.46 rows=3244\n width=405)> (actual time=68.526..3115.407 rows=1355 loops=1)>     ->  Index Scan using index_aliases_company_type_company_id on> aliases a  (cost=\n0.00..639.56 rows=165 width=46) (actual> time=32.419..132.286 rows=388 loops=1)>           Index Cond: ((company_type)::text = 'Buyer'::text)\n>           Filter: (company_id IS NULL)>     ->  Index Scan using index_customs_records_on_buyer_alias_id on> customs_records cr  (cost=0.00..3639.55 rows=915\n width=363) (actual> time=2.133..7.649 rows=3 loops=388)>           Index Cond: (cr.buyer_alias_id = \"outer\".id)> Total runtime: 3117.713 ms> (7 rows)\n-- Daniel Cristian CruzAnalista de Sistemas", "msg_date": "Fri, 23 Mar 2007 17:04:51 -0300", "msg_from": "\"Daniel Cristian Cruz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange left outer join performance issue" }, { "msg_contents": "Not much of a difference, unfortunately... I still wonder why it's \ndoing the 'supplier' (slow) query using the merge right join.\n\nthe 'fast' query:\n\nNested Loop Left Join (cost=0.00..423342.71 rows=2481 width=410) \n(actual time=100.076..6380.865 rows=1355 loops=1)\n -> Index Scan using index_aliases_company_type_company_id on \naliases a (cost=0.00..462.33 rows=118 width=46) (actual \ntime=24.811..143.690 rows=388 loops=1)\n Index Cond: ((company_type)::text = 'Buyer'::text)\n Filter: (company_id IS NULL)\n -> Index Scan using index_customs_records_on_buyer_alias_id on \ncustoms_records cr (cost=0.00..3572.61 rows=890 width=368) (actual \ntime=5.526..16.042 rows=3 loops=388)\n Index Cond: (cr.buyer_alias_id = \"outer\".id)\nTotal runtime: 6382.940 ms\n(7 rows)\n\nthe 'slow' one:\n\nMerge Right Join (cost=842.53..479378.17 rows=2281 width=410) \n(actual time=554713.506..555584.825 rows=39 loops=1)\n Merge Cond: (\"outer\".supplier_alias_id = \"inner\".id)\n -> Index Scan using index_customs_records_on_supplier_alias_id \non customs_records cr (cost=0.00..6673133.76 rows=1704859 width=368) \n(actual time=42.327..555225.588 rows=117424 loops=1)\n -> Sort (cost=842.53..843.07 rows=218 width=46) (actual \ntime=0.109..0.164 rows=39 loops=1)\n Sort Key: a.id\n -> Index Scan using index_aliases_company_type_company_id \non aliases a (cost=0.00..834.06 rows=218 width=46) (actual \ntime=0.033..0.074 rows=10 loops=1)\n Index Cond: ((company_type)::text = 'Supplier'::text)\n Filter: (company_id IS NULL)\nTotal runtime: 555584.978 ms\n(9 rows)\n\n\nOn Mar 23, 2007, at 4:04 PM, Daniel Cristian Cruz wrote:\n\n> Run VACUUM ANALYZE and see if the cost estimates became close to \n> the effective rows. 
This could make it faster.\n>\n> 2007/3/23, Noah M. Daniels <[email protected]>:\n> > SLOW:\n> > Merge Right Join (cost=1138.78..460482.84 rows=2993 width=405)\n> > (actual time=1244745.427..1245714.571 rows=39 loops=1)\n> > Merge Cond: (\"outer\".supplier_alias_id = \"inner\".id)\n> > -> Index Scan using \n> index_customs_records_on_supplier_alias_id on\n> > customs_records cr (cost=0.00..6717806.37 rows=1704859 width=363)\n> > (actual time=54.567..1245210.707 rows=117424 loops=1)\n> > -> Sort (cost=1138.78..1139.53 rows=300 width=46) (actual\n> > time=24.093..24.161 rows=39 loops=1)\n> > Sort Key: a.id\n> > -> Index Scan using index_aliases_company_type_company_id\n> > on aliases a (cost=0.00..1126.44 rows=300 width=46) (actual\n> > time=22.400..23.959 rows=10 loops=1)\n> > Index Cond: ((company_type)::text = 'Supplier'::text)\n> > Filter: (company_id IS NULL)\n> > Total runtime: 1245714.752 ms\n> >\n> > FAST:\n> >\n> > Nested Loop Left Join (cost=0.00..603052.46 rows=3244 width=405)\n> > (actual time=68.526..3115.407 rows=1355 loops=1)\n> > -> Index Scan using index_aliases_company_type_company_id on\n> > aliases a (cost= 0.00..639.56 rows=165 width=46) (actual\n> > time=32.419..132.286 rows=388 loops=1)\n> > Index Cond: ((company_type)::text = 'Buyer'::text)\n> > Filter: (company_id IS NULL)\n> > -> Index Scan using index_customs_records_on_buyer_alias_id on\n> > customs_records cr (cost=0.00..3639.55 rows=915 width=363) (actual\n> > time=2.133..7.649 rows=3 loops=388)\n> > Index Cond: (cr.buyer_alias_id = \"outer\".id)\n> > Total runtime: 3117.713 ms\n> > (7 rows)\n>\n>\n> -- \n> Daniel Cristian Cruz\n> Analista de Sistemas\n\n", "msg_date": "Fri, 23 Mar 2007 17:16:48 -0400", "msg_from": "\"Noah M. Daniels\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange left outer join performance issue" }, { "msg_contents": "\"Noah M. Daniels\" <[email protected]> writes:\n> I have two queries that are very similar, that run on the same table \n> with slightly different conditions. However, despite a similar number \n> of rows returned, the query planner is insisting on a different \n> ordering and different join algorithm, causing a huge performance \n> hit. I'm not sure why the planner is doing the merge join the way it \n> is in the slow case, rather than following a similar plan to the fast \n> case.\n\nIt likes the merge join because it predicts (apparently correctly) that\nonly about 1/14th of the table will need to be scanned. This'd be an\nartifact of the relative ranges of supplier ids in the two tables.\n\nWhat PG version is this? 8.2 understands about repeated indexscans\nbeing cheaper than standalone ones, but I get the impression from the\nexplain estimates that you may be using something older that's\noverestimating the cost of the nestloop way.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Mar 2007 18:13:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange left outer join performance issue " }, { "msg_contents": "Tom,\n\nYou're right; this is postgres 8.0.8. Perhaps upgrading will solve \nthis issue. Is there any way to get this query to perform better in \npostgres 8.0.8?\n\nthanks!\n\nOn Mar 23, 2007, at 6:13 PM, Tom Lane wrote:\n\n> \"Noah M. Daniels\" <[email protected]> writes:\n>> I have two queries that are very similar, that run on the same table\n>> with slightly different conditions. 
However, despite a similar number\n>> of rows returned, the query planner is insisting on a different\n>> ordering and different join algorithm, causing a huge performance\n>> hit. I'm not sure why the planner is doing the merge join the way it\n>> is in the slow case, rather than following a similar plan to the fast\n>> case.\n>\n> It likes the merge join because it predicts (apparently correctly) \n> that\n> only about 1/14th of the table will need to be scanned. This'd be an\n> artifact of the relative ranges of supplier ids in the two tables.\n>\n> What PG version is this? 8.2 understands about repeated indexscans\n> being cheaper than standalone ones, but I get the impression from the\n> explain estimates that you may be using something older that's\n> overestimating the cost of the nestloop way.\n>\n> \t\t\tregards, tom lane\n\n", "msg_date": "Fri, 23 Mar 2007 18:18:34 -0400", "msg_from": "\"Noah M. Daniels\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange left outer join performance issue" }, { "msg_contents": "\"Noah M. Daniels\" <[email protected]> writes:\n> You're right; this is postgres 8.0.8. Perhaps upgrading will solve \n> this issue. Is there any way to get this query to perform better in \n> postgres 8.0.8?\n\nYou could try reducing random_page_cost, but I'm not sure that will\nhelp much.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Mar 2007 18:22:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange left outer join performance issue " } ]
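For anyone wanting to try Tom's random_page_cost suggestion on 8.0.x, the planner settings can be overridden for a single session rather than edited in postgresql.conf. A minimal sketch, assuming the slow 'Supplier' query from the first message is at hand; the values are only starting points, and enable_mergejoin is a diagnostic aid rather than something to leave off in production:

    SET random_page_cost = 2;      -- 8.0 default is 4; lower values favour index scans and nested loops
    EXPLAIN ANALYZE SELECT ... ;   -- re-run the slow 'Supplier' query here

    SET enable_mergejoin = off;    -- purely diagnostic: steers the planner away from the merge join
    EXPLAIN ANALYZE SELECT ... ;
    RESET enable_mergejoin;
    RESET random_page_cost;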
[ { "msg_contents": "I try to change my database server from the older one ie. 2Cpu Xeon 2.4 32\nbit 4Gb SDram Hdd SCSI RAID 5 and FC 3 ix86 with 7..4.7 PG to the newer one\nwith 2CPU Xeon 3.0 64 Bit 4Gb DDRram SCSI Raid5 and FC6 X64 PG 8.14 and try\nto use rather the same parameter from the previous postgresql.conf :-\nthe older server config:\nshared_buffers = 31744\n#sort_mem = 1024 # min 64, size in KB\nsort_mem = 8192\n#vacuum_mem = 8064 # min 1024, size in KB\nvacuum_mem = 32768\n\n# - Free Space Map -\n\n#max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each\n#max_fsm_relations = 1000 # min 100, ~50 bytes each\n\n# - Kernel Resource Usage -\n\n#max_files_per_process = 1000 # min 25\n#preload_libraries = ''\n\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\n\n#fsync = true # turns forced synchronization on or off\n\n#wal_sync_method = fsync # the default varies across platforms:\n # fsync, fdatasync, open_sync, or\nopen_datasync\n#wal_buffers = 8 # min 4, 8KB each\n\n# - Checkpoints -\n\n#checkpoint_segments = 3 # in logfile segments, min 1, 16MB each\ncheckpoint_segments = 8\n#checkpoint_timeout = 300 # range 30-3600, in seconds\n#checkpoint_warning = 30 # 0 is off, in seconds\n#commit_delay = 0 # range 0-100000, in microseconds\ncommit_delay = 20\n#commit_siblings = 5 # range 1-1000\n\n#effective_cache_size = 153600\neffective_cache_size = 307800\n\n\nI use pgbench to test the speed of my older database server and the result\nis\n\nbash-3.00$ pgbench test -t 20 -c 30 -s 50\n\nstarting vacuum...end.\n\ntransaction type: TPC-B (sort of)\n\nscaling factor: 50\n\nnumber of clients: 30\n\nnumber of transactions per client: 20\n\nnumber of transactions actually processed: 600/600\n\ntps = 337.196481 (including connections establishing)\n\ntps = 375.478735 (excluding connections establishing)\n\nBut my newer database server configuration is somewhat like this;-\n\n\nmax_connections = 200\n\n#shared_buffers = 2000 # min 16 or max_connections*2, 8KB each\n\nshared_buffers = 31744\n\n#temp_buffers = 1000 # min 100, 8KB each\n\n#max_prepared_transactions = 5 # can be 0 or more\n\n# note: increasing max_prepared_transactions costs ~600 bytes of shared\nmemory\n\n# per transaction slot, plus lock space (see max_locks_per_transaction).\n\n#work_mem = 1024 # min 64, size in KB\n\nwork_mem = 8192\n\n#maintenance_work_mem = 16384 # min 1024, size in KB\n\nmaintenance_work_mem = 131078\n\n#max_stack_depth = 2048 # min 100, size in KB\n\n\n\n#commit_delay = 0 # range 0-100000, in microseconds\n\ncommit_delay = 20\n\n#commit_siblings = 5 # range 1-1000\n\n# - Checkpoints -\n\n#checkpoint_segments = 3 # in logfile segments, min 1, 16MB each\n\ncheckpoint_segments = 8\n\n#checkpoint_timeout = 300 # range 30-3600, in seconds\n\n#checkpoint_warning = 30 # in seconds, 0 is off\n\n#effective_cache_size = 1000 # typically 8KB each\n\neffective_cache_size = 307800\n\n#autovacuum = off # enable autovacuum subprocess?\n\n#autovacuum_naptime = 60 # time between autovacuum runs, in secs\n\n#autovacuum_vacuum_threshold = 1000 # min # of tuple updates before\n\n# vacuum\n\n#autovacuum_analyze_threshold = 500 # min # of tuple updates before\n\n# analyze\n\n#autovacuum_vacuum_scale_factor = 0.4 # fraction of rel size before\n\n# vacuum\n\n#autovacuum_analyze_scale_factor = 0.2 # fraction of rel size before\n\n# analyze\n\n#autovacuum_vacuum_cost_delay = -1 # 
default vacuum cost delay for\n\n# autovac, -1 means use\n\n# vacuum_cost_delay\n\n#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for\n\n# autovac, -1 means use\n\n# vacuum_cost_limit\nand the result of pgbench from my new server is only\n\n-- pgbench test -t 20 -c 30 -s 50\n\ntps = 197 (including connections establishing)\n\ntps = 212\n\n\n\n1. How should I adjust my new configuration to improve the performance ?\n\n2. And should I set autovaccum = on and if it is on what is the other proper\nparameter to be set?\n\n\nThank a lot for your help.....\n\n\nAmrit Angsusingh\nThailand\n\nI try to change my database server from the older one ie. 2Cpu Xeon 2.4 32 bit 4Gb SDram Hdd SCSI RAID 5 and FC 3 ix86 with 7..4.7 PG to the newer one with 2CPU Xeon 3.0 64 Bit 4Gb DDRram SCSI Raid5 and FC6 X64 PG 8.14\n and try to use  rather the same parameter from the previous postgresql.conf :-\nthe older server config:\nshared_buffers = 31744 \n#sort_mem = 1024                # min 64, size in KBsort_mem = 8192\n#vacuum_mem = 8064              # min 1024, size in KBvacuum_mem = 32768\n \n# - Free Space Map -\n \n#max_fsm_pages = 20000          # min max_fsm_relations*16, 6 bytes each#max_fsm_relations = 1000       # min 100, ~50 bytes each\n \n# - Kernel Resource Usage -\n \n#max_files_per_process = 1000   # min 25#preload_libraries = ''\n \n#---------------------------------------------------------------------------# WRITE AHEAD LOG#---------------------------------------------------------------------------\n \n# - Settings -\n \n#fsync = true                   # turns forced synchronization on or off\n \n#wal_sync_method = fsync        # the default varies across platforms:                                # fsync, fdatasync, open_sync, or open_datasync#wal_buffers = 8                # min 4, 8KB each\n \n# - Checkpoints -\n \n#checkpoint_segments = 3        # in logfile segments, min 1, 16MB eachcheckpoint_segments = 8#checkpoint_timeout = 300       # range 30-3600, in seconds#checkpoint_warning = 30        # 0 is off, in seconds\n#commit_delay = 0               # range 0-100000, in microsecondscommit_delay = 20#commit_siblings = 5            # range 1-1000 \n#effective_cache_size = 153600effective_cache_size = 307800 \n \nI use pgbench to test the speed of my older database server and the result is\nbash-3.00$ pgbench  test -t 20 -c 30 -s 50\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 50\nnumber of clients: 30\nnumber of transactions per client: 20\nnumber of transactions actually processed: 600/600\ntps = 337.196481 (including connections establishing)\ntps = 375.478735 (excluding connections establishing)\n \nBut my newer database server configuration is somewhat like this;-\n \n\nmax_connections = 200\n#shared_buffers = 2000 # min 16 or max_connections*2, 8KB each\nshared_buffers = 31744\n#temp_buffers = 1000 # min 100, 8KB each\n#max_prepared_transactions = 5 # can be 0 or more\n# note: increasing max_prepared_transactions costs ~600 bytes of shared memory\n# per transaction slot, plus lock space (see max_locks_per_transaction).\n#work_mem = 1024 # min 64, size in KB\nwork_mem = 8192\n#maintenance_work_mem = 16384 # min 1024, size in KB\nmaintenance_work_mem = 131078\n#max_stack_depth = 2048 # min 100, size in KB\n \n#commit_delay = 0 # range 0-100000, in microseconds\ncommit_delay = 20\n#commit_siblings = 5 # range 1-1000\n# - Checkpoints -\n#checkpoint_segments = 3 # in logfile segments, min 1, 16MB each\ncheckpoint_segments = 8\n#checkpoint_timeout 
= 300 # range 30-3600, in seconds\n#checkpoint_warning = 30 # in seconds, 0 is off\n#effective_cache_size = 1000 # typically 8KB each\neffective_cache_size = 307800\n#autovacuum = off # enable autovacuum subprocess?\n#autovacuum_naptime = 60 # time between autovacuum runs, in secs\n#autovacuum_vacuum_threshold = 1000 # min # of tuple updates before\n# vacuum\n#autovacuum_analyze_threshold = 500 # min # of tuple updates before \n# analyze\n#autovacuum_vacuum_scale_factor = 0.4 # fraction of rel size before \n# vacuum\n#autovacuum_analyze_scale_factor = 0.2 # fraction of rel size before \n# analyze\n#autovacuum_vacuum_cost_delay = -1 # default vacuum cost delay for \n# autovac, -1 means use \n# vacuum_cost_delay\n#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for \n# autovac, -1 means use\n# vacuum_cost_limit\nand the result of pgbench from my new server is only\n-- \npgbench  test -t 20 -c 30 -s 50\n\ntps = 197 (including connections establishing)\ntps = 212 \n \n1. How should I adjust my new configuration to improve the performance ? \n2. And should I set autovaccum = on and if it is on what is the other proper parameter to be set?\n \n\nThank a lot for your help.....\n\n \nAmrit AngsusinghThailand", "msg_date": "Sat, 24 Mar 2007 03:50:16 +0700", "msg_from": "\"amrit angsusingh\" <[email protected]>", "msg_from_op": true, "msg_subject": "Optimization pg 8.14 and postgresql.conf" } ]
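On question 2 (autovacuum): on 8.1.x the autovacuum daemon also needs the row-level statistics collector enabled, so turning it on involves more than the single autovacuum line. An illustrative starting point only -- the thresholds are workload-dependent, and the scale factors below are simply a little more aggressive than the 8.1 defaults of 0.4 and 0.2 shown commented out in the file above:

    # prerequisites for autovacuum on 8.1.x
    stats_start_collector = on
    stats_row_level = on

    autovacuum = on
    autovacuum_naptime = 60                  # seconds between autovacuum runs
    autovacuum_vacuum_scale_factor = 0.2     # vacuum once ~20% of a table has changed
    autovacuum_analyze_scale_factor = 0.1    # analyze once ~10% of a table has changed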
[ { "msg_contents": "I try to change my database server from the older one ie. 2Cpu Xeon 2.4 32\nbit 4Gb SDram Hdd SCSI RAID 5 and FC 3 ix86 with 7..4.7 PG to the newer one\nwith 2CPU Xeon 3.0 64 Bit 4Gb DDRram SCSI Raid5 and FC6 X64 PG 8.14 and try\nto use rather the same parameter from the previous postgresql.conf :-\nthe older server config --\nshared_buffers = 31744\n#sort_mem = 1024 # min 64, size in KB\nsort_mem = 8192\n#vacuum_mem = 8064 # min 1024, size in KB\nvacuum_mem = 32768\n\n# - Free Space Map -\n\n#max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each\n#max_fsm_relations = 1000 # min 100, ~50 bytes each\n\n# - Kernel Resource Usage -\n\n#max_files_per_process = 1000 # min 25\n#preload_libraries = ''\n\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\n\n#fsync = true # turns forced synchronization on or off\n\n#wal_sync_method = fsync # the default varies across platforms:\n # fsync, fdatasync, open_sync, or\nopen_datasync\n#wal_buffers = 8 # min 4, 8KB each\n\n# - Checkpoints -\n\n#checkpoint_segments = 3 # in logfile segments, min 1, 16MB each\ncheckpoint_segments = 8\n#checkpoint_timeout = 300 # range 30-3600, in seconds\n#checkpoint_warning = 30 # 0 is off, in seconds\n#commit_delay = 0 # range 0-100000, in microseconds\ncommit_delay = 20\n#commit_siblings = 5 # range 1-1000\n\n#effective_cache_size = 153600\neffective_cache_size = 307800\n\n\nI use pgbench to test the speed of my older database server and the result\nis\n\nbash-3.00$ pgbench test -t 20 -c 30 -s 50\n\nstarting vacuum...end.\n\ntransaction type: TPC-B (sort of)\n\nscaling factor: 50\n\nnumber of clients: 30\n\nnumber of transactions per client: 20\n\nnumber of transactions actually processed: 600/600\n\ntps = 337.196481 (including connections establishing)\n\ntps = 375.478735 (excluding connections establishing)\n\nBut my newer database server configuration is somewhat like this;-\n\n\nmax_connections = 200\n\n#shared_buffers = 2000 # min 16 or max_connections*2, 8KB each\n\nshared_buffers = 31744\n\n#temp_buffers = 1000 # min 100, 8KB each\n\n#max_prepared_transactions = 5 # can be 0 or more\n\n# note: increasing max_prepared_transactions costs ~600 bytes of shared\nmemory\n\n# per transaction slot, plus lock space (see max_locks_per_transaction).\n\n#work_mem = 1024 # min 64, size in KB\n\nwork_mem = 8192\n\n#maintenance_work_mem = 16384 # min 1024, size in KB\n\nmaintenance_work_mem = 131078\n\n#max_stack_depth = 2048 # min 100, size in KB\n\n\n\n#commit_delay = 0 # range 0-100000, in microseconds\n\ncommit_delay = 20\n\n#commit_siblings = 5 # range 1-1000\n\n# - Checkpoints -\n\n#checkpoint_segments = 3 # in logfile segments, min 1, 16MB each\n\ncheckpoint_segments = 8\n\n#checkpoint_timeout = 300 # range 30-3600, in seconds\n\n#checkpoint_warning = 30 # in seconds, 0 is off\n\n#effective_cache_size = 1000 # typically 8KB each\n\neffective_cache_size = 307800\n\n#autovacuum = off # enable autovacuum subprocess?\n\n#autovacuum_naptime = 60 # time between autovacuum runs, in secs\n\n#autovacuum_vacuum_threshold = 1000 # min # of tuple updates before\n\n# vacuum\n\n#autovacuum_analyze_threshold = 500 # min # of tuple updates before\n\n# analyze\n\n#autovacuum_vacuum_scale_factor = 0.4 # fraction of rel size before\n\n# vacuum\n\n#autovacuum_analyze_scale_factor = 0.2 # fraction of rel size before\n\n# analyze\n\n#autovacuum_vacuum_cost_delay = -1 # 
default vacuum cost delay for\n\n# autovac, -1 means use\n\n# vacuum_cost_delay\n\n#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for\n\n# autovac, -1 means use\n\n# vacuum_cost_limit\nand the result of pgbench from my new server is only\n\n-- pgbench test -t 20 -c 30 -s 50\n\ntps = 197 (including connections establishing)\n\ntps = 212\n\n\n\n1. How should I adjust my new configuration to improve the performance ?\n\n2. And should I set autovaccum = on and if it is on what is the other proper\nparameter to be set?\n\n\nThank a lot for your help.....\n\n\nAmrit Angsusingh\nThailand\n\nI try to change my database server from the older one ie. 2Cpu Xeon 2.4 32 bit 4Gb SDram Hdd SCSI RAID 5 and FC 3 ix86 with 7..4.7 PG to the newer one with 2CPU Xeon 3.0 64 Bit 4Gb DDRram SCSI Raid5 and FC6 X64 PG 8.14\n and try to use  rather the same parameter from the previous postgresql.conf :-\nthe older server config --\nshared_buffers = 31744 \n#sort_mem = 1024                # min 64, size in KBsort_mem = 8192\n#vacuum_mem = 8064              # min 1024, size in KBvacuum_mem = 32768\n \n# - Free Space Map -\n \n#max_fsm_pages = 20000          # min max_fsm_relations*16, 6 bytes each#max_fsm_relations = 1000       # min 100, ~50 bytes each\n \n# - Kernel Resource Usage -\n \n#max_files_per_process = 1000   # min 25#preload_libraries = ''\n \n#---------------------------------------------------------------------------# WRITE AHEAD LOG#---------------------------------------------------------------------------\n \n# - Settings -\n \n#fsync = true                   # turns forced synchronization on or off\n \n#wal_sync_method = fsync        # the default varies across platforms:                                # fsync, fdatasync, open_sync, or open_datasync#wal_buffers = 8                # min 4, 8KB each\n \n# - Checkpoints -\n \n#checkpoint_segments = 3        # in logfile segments, min 1, 16MB eachcheckpoint_segments = 8#checkpoint_timeout = 300       # range 30-3600, in seconds#checkpoint_warning = 30        # 0 is off, in seconds \n#commit_delay = 0               # range 0-100000, in microsecondscommit_delay = 20#commit_siblings = 5            # range 1-1000 \n#effective_cache_size = 153600effective_cache_size = 307800 \n \nI use pgbench to test the speed of my older database server and the result is\nbash-3.00$ pgbench  test -t 20 -c 30 -s 50\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 50\nnumber of clients: 30\nnumber of transactions per client: 20\nnumber of transactions actually processed: 600/600\ntps = 337.196481 (including connections establishing)\ntps = 375.478735 (excluding connections establishing)\n \nBut my newer database server configuration is somewhat like this;-\n \n\nmax_connections = 200\n#shared_buffers = 2000 # min 16 or max_connections*2, 8KB each\nshared_buffers = 31744\n#temp_buffers = 1000 # min 100, 8KB each\n#max_prepared_transactions = 5 # can be 0 or more\n# note: increasing max_prepared_transactions costs ~600 bytes of shared memory\n# per transaction slot, plus lock space (see max_locks_per_transaction).\n#work_mem = 1024 # min 64, size in KB\nwork_mem = 8192\n#maintenance_work_mem = 16384 # min 1024, size in KB\nmaintenance_work_mem = 131078\n#max_stack_depth = 2048 # min 100, size in KB\n \n#commit_delay = 0 # range 0-100000, in microseconds\ncommit_delay = 20\n#commit_siblings = 5 # range 1-1000\n# - Checkpoints -\n#checkpoint_segments = 3 # in logfile segments, min 1, 16MB each\ncheckpoint_segments = 
8\n#checkpoint_timeout = 300 # range 30-3600, in seconds\n#checkpoint_warning = 30 # in seconds, 0 is off\n#effective_cache_size = 1000 # typically 8KB each\neffective_cache_size = 307800\n#autovacuum = off # enable autovacuum subprocess?\n#autovacuum_naptime = 60 # time between autovacuum runs, in secs\n#autovacuum_vacuum_threshold = 1000 # min # of tuple updates before\n# vacuum\n#autovacuum_analyze_threshold = 500 # min # of tuple updates before \n# analyze\n#autovacuum_vacuum_scale_factor = 0.4 # fraction of rel size before \n# vacuum\n#autovacuum_analyze_scale_factor = 0.2 # fraction of rel size before \n# analyze\n#autovacuum_vacuum_cost_delay = -1 # default vacuum cost delay for \n# autovac, -1 means use \n# vacuum_cost_delay\n#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for \n# autovac, -1 means use\n# vacuum_cost_limit\nand the result of pgbench from my new server is only\n-- \npgbench  test -t 20 -c 30 -s 50\n\ntps = 197 (including connections establishing)\ntps = 212 \n \n1. How should I adjust my new configuration to improve the performance ? \n2. And should I set autovaccum = on and if it is on what is the other proper parameter to be set?\n \n\nThank a lot for your help.....\n\n \nAmrit AngsusinghThailand", "msg_date": "Sat, 24 Mar 2007 13:15:15 +0700", "msg_from": "\"amrit angsusingh\" <[email protected]>", "msg_from_op": true, "msg_subject": "Optimization postgresql 8.1.4 FC 6 X64 ?" }, { "msg_contents": "amrit angsusingh wrote:\n> I try to change my database server from the older one ie. 2Cpu Xeon 2.4 32\n> bit 4Gb SDram Hdd SCSI RAID 5 and FC 3 ix86 with 7..4.7 PG to the newer one\n> with 2CPU Xeon 3.0 64 Bit 4Gb DDRram SCSI Raid5 and FC6 X64 PG 8.14 and try\n> to use rather the same parameter from the previous postgresql.conf :-\n> ...\n> I use pgbench to test the speed of my older database server and the result\n> is\n> \n> bash-3.00$ pgbench test -t 20 -c 30 -s 50\n> ...\n\n-t 20 is not enough to give repeatable results. Try something like -t 1000.\n\nThe speed of pgbench in that configuration (scaling factor 50, fsync \nenabled) is limited by the speed you can fsync the WAL. There isn't much \nyou can do in postgresql.conf for that. If you get similar results with \nhigher -t setting, it may be because your new RAID and drives have \nslightly higher latency.\n\nYou're better off testing with real queries with your real database.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Sat, 24 Mar 2007 10:44:47 +0000", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimization postgresql 8.1.4 FC 6 X64 ?" }, { "msg_contents": "I also think there have been changes in pgbench itself.\n\nMake sure you run the same pgbench on both servers.\n\nDave\nOn 24-Mar-07, at 6:44 AM, Heikki Linnakangas wrote:\n\n> amrit angsusingh wrote:\n>> I try to change my database server from the older one ie. 2Cpu \n>> Xeon 2.4 32\n>> bit 4Gb SDram Hdd SCSI RAID 5 and FC 3 ix86 with 7..4.7 PG to the \n>> newer one\n>> with 2CPU Xeon 3.0 64 Bit 4Gb DDRram SCSI Raid5 and FC6 X64 PG \n>> 8.14 and try\n>> to use rather the same parameter from the previous \n>> postgresql.conf :-\n>> ...\n>> I use pgbench to test the speed of my older database server and \n>> the result\n>> is\n>> bash-3.00$ pgbench test -t 20 -c 30 -s 50\n>> ...\n>\n> -t 20 is not enough to give repeatable results. 
Try something like - \n> t 1000.\n>\n> The speed of pgbench in that configuration (scaling factor 50, \n> fsync enabled) is limited by the speed you can fsync the WAL. There \n> isn't much you can do in postgresql.conf for that. If you get \n> similar results with higher -t setting, it may be because your new \n> RAID and drives have slightly higher latency.\n>\n> You're better off testing with real queries with your real database.\n>\n> -- \n> Heikki Linnakangas\n> EnterpriseDB http://www.enterprisedb.com\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n>\n\n", "msg_date": "Sat, 24 Mar 2007 09:31:11 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimization postgresql 8.1.4 FC 6 X64 ?" } ]
[ { "msg_contents": "Hi List,\n\nhow to speedup nested loop queries and by which parameters.\n-- \nRegards\nGauri\n\nHi List,\n \nhow to speedup nested loop queries and by which parameters.\n-- RegardsGauri", "msg_date": "Mon, 26 Mar 2007 17:34:39 +0530", "msg_from": "\"Gauri Kanekar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Nested Loop" }, { "msg_contents": "On Mon, Mar 26, 2007 at 05:34:39PM +0530, Gauri Kanekar wrote:\n> how to speedup nested loop queries and by which parameters.\n\nPlease post a query you're trying to tune and the EXPLAIN ANALYZE\noutput, as well as any changes you've already made in postgresql.conf\nor configuration variables you've set in a particular session.\nWithout more information we can't give much advice other than to\nmake sure you're vacuuming and analyzing the tables often enough\nto keep them from becoming bloated with dead rows and to keep the\nstatistics current, and to review a configuration checklist such\nas this one:\n\nhttp://www.powerpostgresql.com/PerfList\n\n-- \nMichael Fuhr\n", "msg_date": "Mon, 26 Mar 2007 08:05:49 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nested Loop" }, { "msg_contents": "Sorry,\n\nthis are the Confg Setting\nmax_connections = 100 # (change requires restart)\nshared_buffers = 300MB\nwork_mem = 256MB\nmax_fsm_pages = 400000\nmax_fsm_relations = 500\nwal_buffers = 512\ncheckpoint_segments = 20\ncheckpoint_timeout = 900\nenable_bitmapscan = on\nenable_seqscan = off\nenable_tidscan = on\nrandom_page_cost = 2\ncpu_index_tuple_cost = 0.001\neffective_cache_size = 800MB\njoin_collapse_limit = 1 # JOINs\ndatestyle = 'iso, mdy'\nlc_messages = 'en_US.UTF-8' # locale for system error\nmessage\nlc_monetary = 'en_US.UTF-8' # locale for monetary\nformatting\nlc_numeric = 'en_US.UTF-8' # locale for number\nformatting\nlc_time = 'en_US.UTF-8' # locale for time formatting\n\nall other are the default values.\n\n\n\nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=1116330.73..1116432.34 rows=6774 width=128) (actual\ntime=438565.297..440455.386 rows=646881 loops=1)\n -> Hash Join (cost=10802.93..1116093.64 rows=6774 width=128) (actual\ntime=1904.797..377717.036 rows=10438694 loops=1)\n Hash Cond: (rm.ck = rc.k)\n -> Hash Join (cost=10651.73..1115840.83 rows=6774 width=105)\n(actual time=1890.765..347169.113 rows=10438694 loops=1)\n Hash Cond: (rm.chk = rc.ky)\n -> Hash Join (cost=9835.35..1114905.90 rows=6774 width=83)\n(actual time=1873.463..317623.437 rows=10438694 loops=1)\n Hash Cond: (rm.ckey = rc.k)\n -> Hash Join (cost=615.77..1105533.91 rows=6774\nwidth=85) (actual time=1842.309..288198.666 rows=10438694 loops=1)\n Hash Cond: (rm.sk = rs.k)\n -> Hash Join (cost=77.32..1104885.39 rows=6774\nwidth=58) (actual time=1831.908..259147.154 rows=10438694 loops=1)\n Hash Cond: (rm.advk = ra.k)\n -> Nested Loop\n(cost=0.00..1104714.83rows=6801 width=44) (actual time=\n1820.153..229779.814 rows=10945938 loops=1)\n Join Filter: (rm.nk = rn.k)\n -> Index Scan using r_idx on rn\n(cost=0.00..4.27 rows=1 width=4) (actual time=0.093..0.095 rows=1 loops=1)\n Index Cond: (id = 607)\n -> Nested Loop (cost=\n0.00..1104370.50 rows=27205 width=48) (actual\ntime=7.920..202878.054rows=10945998 loops=1)\n -> Index Scan using\nrpts_ldt_idx on rd (cost=0.00..4.27 rows=1 width=12) (actual 
time=\n0.097..0.352 rows=30 loops=1)\n Index Cond: ((sdt >=\n'2006-12-01 00:00:00'::timestamp without time zone) AND (sd <= '2006-12-30\n00:00:00'::timestamp without time zone))\n -> Index Scan using rmidx on\nrm (cost=0.00..1100192.24 rows=333919 width=44) (actual time=\n3.109..5835.861 rows=364867 loops=30)\n Index Cond: (rmdkey =\nrd.k)\n -> Hash (cost=68.15..68.15 rows=734\nwidth=22) (actual time=11.692..11.692 rows=734 loops=1)\n -> Index Scan using radvki on radvt\n(cost=0.00..68.15 rows=734 width=22) (actual time=9.112..10.517 rows=734\nloops=1)\n Filter: ((name)::text <>\n'SYSTEM'::text)\n -> Hash (cost=500.35..500.35 rows=3048\nwidth=35) (actual time=10.377..10.377 rows=3048 loops=1)\n -> Index Scan using rskidx on rs (cost=\n0.00..500.35 rows=3048 width=35) (actual time=0.082..5.589 rows=3048\nloops=1)\n -> Hash (cost=9118.63..9118.63 rows=8076 width=6)\n(actual time=31.124..31.124 rows=8076 loops=1)\n -> Index Scan using rcridx on rcr (cost=\n0.00..9118.63 rows=8076 width=6) (actual time=2.036..19.218 rows=8076\nloops=1)\n -> Hash (cost=769.94..769.94 rows=3715 width=30) (actual\ntime=17.275..17.275 rows=3715 loops=1)\n -> Index Scan using ridx on rcl\n(cost=0.00..769.94rows=3715 width=30) (actual time=\n4.238..11.432 rows=3715 loops=1)\n -> Hash (cost=120.38..120.38 rows=2466 width=31) (actual time=\n14.010..14.010 rows=2466 loops=1)\n -> Index Scan using rckdx on rcpn\n(cost=0.00..120.38rows=2466 width=31) (actual time=\n4.564..9.926 rows=2466 loops=1)\n Total runtime: 441153.878 ms\n(32 rows)\n\n\nwe are using 8.2 version\n\n\nOn 3/26/07, Michael Fuhr <[email protected]> wrote:\n>\n> On Mon, Mar 26, 2007 at 05:34:39PM +0530, Gauri Kanekar wrote:\n> > how to speedup nested loop queries and by which parameters.\n>\n> Please post a query you're trying to tune and the EXPLAIN ANALYZE\n> output, as well as any changes you've already made in postgresql.conf\n> or configuration variables you've set in a particular session.\n> Without more information we can't give much advice other than to\n> make sure you're vacuuming and analyzing the tables often enough\n> to keep them from becoming bloated with dead rows and to keep the\n> statistics current, and to review a configuration checklist such\n> as this one:\n>\n> http://www.powerpostgresql.com/PerfList\n>\n> --\n> Michael Fuhr\n>\n\n\n\n-- \nRegards\nGauri\n\nSorry,\n \nthis are the Confg Setting \nmax_connections = 100                   # (change requires restart)shared_buffers = 300MBwork_mem = 256MBmax_fsm_pages = 400000max_fsm_relations = 500wal_buffers = 512checkpoint_segments = 20\ncheckpoint_timeout = 900enable_bitmapscan = onenable_seqscan = offenable_tidscan = onrandom_page_cost = 2cpu_index_tuple_cost = 0.001effective_cache_size = 800MBjoin_collapse_limit = 1                 # JOINs\ndatestyle = 'iso, mdy'lc_messages = 'en_US.UTF-8'                     # locale for system error messagelc_monetary = 'en_US.UTF-8'                     # locale for monetary formatting\nlc_numeric = 'en_US.UTF-8'                      # locale for number formattinglc_time = 'en_US.UTF-8'                         # locale for time formatting \nall other are the default values.\n \n \n                                                                                                  QUERY PLAN                 ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate  
(cost=1116330.73..1116432.34 rows=6774 width=128) (actual time=438565.297..440455.386 rows=646881 loops=1)   ->  Hash Join  (cost=10802.93..1116093.64 rows=6774 width=128) (actual time=1904.797..377717.036\n rows=10438694 loops=1)         Hash Cond: (rm.ck = rc.k)         ->  Hash Join  (cost=10651.73..1115840.83 rows=6774 width=105) (actual time=1890.765..347169.113 rows=10438694 loops=1)\n               Hash Cond: (rm.chk = rc.ky)               ->  Hash Join  (cost=9835.35..1114905.90 rows=6774 width=83) (actual time=1873.463..317623.437 rows=10438694 loops=1)                     Hash Cond: (\nrm.ckey = rc.k)                     ->  Hash Join  (cost=615.77..1105533.91 rows=6774 width=85) (actual time=1842.309..288198.666 rows=10438694 loops=1)                           Hash Cond: (\nrm.sk = rs.k)                           ->  Hash Join  (cost=77.32..1104885.39 rows=6774 width=58) (actual time=1831.908..259147.154 rows=10438694 loops=1)                                 Hash Cond: (rm.advk\n = ra.k)                                 ->  Nested Loop  (cost=0.00..1104714.83 rows=6801 width=44) (actual time=1820.153..229779.814 rows=10945938 loops=1)                                       Join Filter: (\nrm.nk = rn.k)                                       ->  Index Scan using r_idx on rn  (cost=0.00..4.27 rows=1 width=4) (actual time=0.093..0.095 rows=1 loops=1)                                             Index Cond: (id = 607)\n                                       ->  Nested Loop  (cost=0.00..1104370.50 rows=27205 width=48) (actual time=7.920..202878.054 rows=10945998 loops=1)                                             ->  Index Scan using rpts_ldt_idx on rd  (cost=\n0.00..4.27 rows=1 width=12) (actual time=0.097..0.352 rows=30 loops=1)                                                   Index Cond: ((sdt >= '2006-12-01 00:00:00'::timestamp without time zone) AND (sd <= '2006-12-30 00:00:00'::timestamp without time zone))\n                                             ->  Index Scan using rmidx on rm  (cost=0.00..1100192.24 rows=333919 width=44) (actual time=3.109..5835.861 rows=364867 loops=30)                                                   Index Cond: (rmdkey = \nrd.k)                                 ->  Hash  (cost=68.15..68.15 rows=734 width=22) (actual time=11.692..11.692 rows=734 loops=1)                                       ->  Index Scan using radvki on radvt  (cost=\n0.00..68.15 rows=734 width=22) (actual time=9.112..10.517 rows=734 loops=1)                                             Filter: ((name)::text <> 'SYSTEM'::text)                           ->  Hash  (cost=\n500.35..500.35 rows=3048 width=35) (actual time=10.377..10.377 rows=3048 loops=1)                                 ->  Index Scan using rskidx on rs  (cost=0.00..500.35 rows=3048 width=35) (actual time=0.082..5.589 rows=3048 loops=1)\n                     ->  Hash  (cost=9118.63..9118.63 rows=8076 width=6) (actual time=31.124..31.124 rows=8076 loops=1)                           ->  Index Scan using rcridx on rcr  (cost=0.00..9118.63 rows=8076 width=6) (actual time=\n2.036..19.218 rows=8076 loops=1)               ->  Hash  (cost=769.94..769.94 rows=3715 width=30) (actual time=17.275..17.275 rows=3715 loops=1)                     ->  Index Scan using ridx on rcl  (cost=0.00..769.94\n rows=3715 width=30) (actual time=4.238..11.432 rows=3715 loops=1)         ->  Hash  (cost=120.38..120.38 rows=2466 width=31) (actual time=14.010..14.010 rows=2466 loops=1)               ->  Index Scan using rckdx on rcpn  (cost=\n0.00..120.38 
rows=2466 width=31) (actual time=4.564..9.926 rows=2466 loops=1) Total runtime: 441153.878 ms(32 rows) \n \nwe are using 8.2 version \nOn 3/26/07, Michael Fuhr <[email protected]> wrote:\nOn Mon, Mar 26, 2007 at 05:34:39PM +0530, Gauri Kanekar wrote:> how to speedup nested loop queries and by which parameters.\nPlease post a query you're trying to tune and the EXPLAIN ANALYZEoutput, as well as any changes you've already made in postgresql.confor configuration variables you've set in a particular session.\nWithout more information we can't give much advice other than tomake sure you're vacuuming and analyzing the tables often enoughto keep them from becoming bloated with dead rows and to keep thestatistics current, and to review a configuration checklist such\nas this one:http://www.powerpostgresql.com/PerfList--Michael Fuhr-- RegardsGauri", "msg_date": "Mon, 26 Mar 2007 20:33:34 +0530", "msg_from": "\"Gauri Kanekar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Nested Loop" }, { "msg_contents": "-----Original Message-----\n>From: [email protected] On Behalf Of Gauri Kanekar\n>Subject: Re: [PERFORM] Nested Loop\n>\n>join_collapse_limit = 1 # JOINs \n\nIs there a reason you have this set to 1? Postgres can't consider multiple\njoin orders when you do that. I would try setting that back to the default\nand seeing if this query is any faster.\n\nOther than that it looked like the problems with the query might be bad\nestimates of rows. One is that postgres expects there to be 1 matching row\nfrom rd when there are actually 30. You might try increasing the statistics\ntargets on rd.sd and rd.sdt, reanalyzing, and seeing if that helps. Also\npostgres expects the join of rd and rm to return about 27205 rows when it\nactually returns 10 million. I'm not sure what you can do about that.\nMaybe if Postgres gets a better estimate for rd it would then estimate the\njoin better.\n\nDave\n\n\n\n", "msg_date": "Mon, 26 Mar 2007 10:38:12 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nested Loop" }, { "msg_contents": "On m�n, 2007-03-26 at 20:33 +0530, Gauri Kanekar wrote:\n\nyou did not show your query, nor did you answer whather you had vacuumed\nand analyzed.\n\n> enable_seqscan = off\n\nwhy this? this is unlikely to help\n\n\n> \n> QUERY PLAN\n> ...\n> -> Nested Loop\n> (cost=0.00..1104714.83 rows=6801 width=44) (actual\n> time=1820.153..229779.814 rows=10945938 loops=1)\n\nthe estimates are way off here. 
you sure you have analyzed?\n\ngnari\n\n> \n\n", "msg_date": "Mon, 26 Mar 2007 16:22:33 +0000", "msg_from": "Ragnar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nested Loop" }, { "msg_contents": "Hi,\n\nhere is the query\n\nSELECT rs.id AS sid, rs.name AS sname, rc.id AS campid, rc.name AS campname,\nrc.rev_type AS revtype, rc.act_type AS actntype, ra.id AS advid, ra.name AS\nadvname, rpt_chn.id AS chanid, rpt_chn.name AS channame, rpt_cre.dn AS dn,\nSUM(rm.imdel) AS impression, SUM(rm.cdel) AS click, rd.sqldate AS date FROM\nrm, rn CROSS JOIN rd, ra, rs, rc, rpt_chn, rpt_cre WHERE rm.date_key =\nrd.key AND rm.net_key = rn.key AND rm.adv_key = ra.key AND rm.camp_key =\nrc.key AND rm.s_key = rs.key AND rm.chn_key = rpt_chn.key AND rm.cre_key =\nrpt_cre.key AND ra.name != 'SYSTEM' AND rd.sqldate BETWEEN '12/1/2006' AND\n'12/30/2006' AND ( rn.id IN ( 607 ) ) GROUP BY rd.sqldate, rs.id, rs.name,\nra.id, ra.name, rc.id, rc.name, rc.rev_type, rc.act_type, rpt_chn.id,\nrpt_chn.name, rpt_cre.dn;\n\n\n\nOn 3/26/07, Ragnar <[email protected]> wrote:\n>\n> On mán, 2007-03-26 at 20:33 +0530, Gauri Kanekar wrote:\n>\n> you did not show your query, nor did you answer whather you had vacuumed\n> and analyzed.\n>\n> > enable_seqscan = off\n>\n> why this? this is unlikely to help\n>\n>\n> >\n> > QUERY PLAN\n> > ...\n> > -> Nested Loop\n> > (cost=0.00..1104714.83 rows=6801 width=44) (actual\n> > time=1820.153..229779.814 rows=10945938 loops=1)\n>\n> the estimates are way off here. you sure you have analyzed?\n>\n> gnari\n>\n> >\n>\n>\n\n\n-- \nRegards\nGauri\n\nHi,\n \nhere is the query\n \nSELECT rs.id AS sid, rs.name AS sname, rc.id AS campid, rc.name AS campname, rc.rev_type AS revtype, rc.act_type\n AS actntype, ra.id AS advid, ra.name AS advname, rpt_chn.id AS chanid, rpt_chn.name AS channame, rpt_cre.dn AS dn, SUM(rm.imdel) AS impression, SUM(rm.cdel) AS click, \nrd.sqldate AS date FROM rm, rn CROSS JOIN rd, ra, rs, rc, rpt_chn, rpt_cre WHERE rm.date_key = rd.key AND rm.net_key = rn.key AND rm.adv_key = ra.key AND rm.camp_key = rc.key AND rm.s_key = rs.key AND rm.chn_key = rpt_chn.key AND \nrm.cre_key = rpt_cre.key AND ra.name != 'SYSTEM' AND rd.sqldate BETWEEN '12/1/2006' AND '12/30/2006' AND ( rn.id IN ( 607 ) ) GROUP BY rd.sqldate\n, rs.id, rs.name, ra.id, ra.name, rc.id, rc.name, rc.rev_type\n, rc.act_type, rpt_chn.id, rpt_chn.name, rpt_cre.dn; \n \nOn 3/26/07, Ragnar <[email protected]> wrote:\nOn mán, 2007-03-26 at 20:33 +0530, Gauri Kanekar wrote:you did not show your query, nor did you answer whather you had vacuumed\nand analyzed.> enable_seqscan = offwhy this? this is unlikely to help>> QUERY PLAN> ...>                                  ->  Nested Loop> (cost=0.00..1104714.83\n rows=6801 width=44) (actual> time=1820.153..229779.814 rows=10945938 loops=1)the estimates are way off here. 
you sure you have analyzed?gnari>\n-- RegardsGauri", "msg_date": "Tue, 27 Mar 2007 16:13:31 +0530", "msg_from": "\"Gauri Kanekar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Nested Loop" }, { "msg_contents": "On �ri, 2007-03-27 at 16:13 +0530, Gauri Kanekar wrote:\n> \n> SELECT rs.id AS sid, rs.name AS sname, rc.id AS campid, rc.name AS\n> campname, rc.rev_type AS revtype, rc.act_type AS actntype, ra.id AS\n> advid, ra.name AS advname, rpt_chn.id AS chanid, rpt_chn.name AS\n> channame, rpt_cre.dn AS dn, SUM(rm.imdel) AS impression, SUM(rm.cdel)\n> AS click, rd.sqldate AS date FROM rm, rn CROSS JOIN rd, ra, rs, rc,\n> rpt_chn, rpt_cre WHERE rm.date_key = rd.key AND rm.net_key = rn.key\n> AND rm.adv_key = ra.key AND rm.camp_key = rc.key AND rm.s_key = rs.key\n> AND rm.chn_key = rpt_chn.key AND rm.cre_key = rpt_cre.key AND\n> ra.name != 'SYSTEM' AND rd.sqldate BETWEEN '12/1/2006' AND\n> '12/30/2006' AND ( rn.id IN ( 607 ) ) GROUP BY rd.sqldate , rs.id,\n> rs.name, ra.id, ra.name, rc.id, rc.name, rc.rev_type , rc.act_type,\n> rpt_chn.id, rpt_chn.name, rpt_cre.dn;\n\nyou did not answer other questions, so do this:\n1) VACUUM ANALYZE your database\n2) set these in your postgresql.conf:\nenable_seqscan = true\njoin_collapse_limit = 8\n3) restart postgresql\n4) do the EXPLAIN ANALYZE again, and send us it's output\n\ngnari\n\n\n\n", "msg_date": "Tue, 27 Mar 2007 11:47:13 +0000", "msg_from": "Ragnar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nested Loop" } ]
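The advice in the thread above boils down to a few concrete statements. A minimal sketch of them follows, assuming the abbreviated identifiers from the posted plan (table rd, columns sd and sdt) stand in for the real table and column names, and that a statistics target of 100 is only a first guess to experiment with:

-- Let the planner consider more than one join order again; these can be set
-- per session for testing, or changed in postgresql.conf as suggested.
SET enable_seqscan = on;
SET join_collapse_limit = 8;

-- Give the planner a better estimate for the date-range filter that is
-- currently expected to match 1 row but actually matches 30.
ALTER TABLE rd ALTER COLUMN sd SET STATISTICS 100;
ALTER TABLE rd ALTER COLUMN sdt SET STATISTICS 100;
ANALYZE rd;

-- Then re-run EXPLAIN ANALYZE on the reporting query to see whether the
-- row estimates, and the chosen join plan, improve.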
[ { "msg_contents": "Hello!\n\n \n\n I have to manage an application written in java which call another module\nwritten in java which uses Postgre DBMS in a Linux environment. I'm new to\nPostgres. The problem is that for large amounts of data the application\nthrows an:\n\n org.postgresql.util.PSQLException: ERROR: out of shared memory\n\n \n\nPlease, have you any idea why this error appears and what can I do in order\nto fix this?\n\nAre there some Postgre related parameters I should tune (if yes what\nparameters) or is something related to the Linux OS?\n\n \n\nThank you very much\n\nWith best regards,\n\nSorin \n\n\n\n\n\n\n\n\n\n\n \n    Hello!\n \n   I have to manage an application written\nin java which call another module written in java which uses Postgre DBMS in a\nLinux environment. I’m new to Postgres. The problem is that for large\namounts of data the application throws an:\n org.postgresql.util.PSQLException: ERROR: out\nof shared memory\n \nPlease, have you any idea why this error appears and\nwhat can I do in order to fix this?\nAre there some Postgre related parameters I should\ntune (if yes what parameters) or is something related to the Linux OS?\n \nThank you very much\nWith best regards,\nSorin", "msg_date": "Mon, 26 Mar 2007 15:32:08 +0300", "msg_from": "\"Sorin N. Ciolofan\" <[email protected]>", "msg_from_op": true, "msg_subject": "ERROR: out of shared memory" }, { "msg_contents": "\"Sorin N. Ciolofan\" <[email protected]> writes:\n> I have to manage an application written in java which call another module\n> written in java which uses Postgre DBMS in a Linux environment. I'm new to\n> Postgres. The problem is that for large amounts of data the application\n> throws an:\n> org.postgresql.util.PSQLException: ERROR: out of shared memory\n\nAFAIK the only very likely way to cause that is to touch enough\ndifferent tables in one transaction that you run out of lock entries.\nWhile you could postpone the problem by increasing the\nmax_locks_per_transaction setting, I suspect there may be some basic\napplication misdesign involved here. How many tables have you got?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Mar 2007 23:37:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] ERROR: out of shared memory " }, { "msg_contents": "On 3/26/07, Tom Lane <[email protected]> wrote:\n> \"Sorin N. Ciolofan\" <[email protected]> writes:\n> > I have to manage an application written in java which call another module\n> > written in java which uses Postgre DBMS in a Linux environment. I'm new to\n> > Postgres. The problem is that for large amounts of data the application\n> > throws an:\n> > org.postgresql.util.PSQLException: ERROR: out of shared memory\n>\n> AFAIK the only very likely way to cause that is to touch enough\n> different tables in one transaction that you run out of lock entries.\n> While you could postpone the problem by increasing the\n> max_locks_per_transaction setting, I suspect there may be some basic\n> application misdesign involved here. How many tables have you got?\n\nor advisory locks...these are easy to spot. query pg_locks and look\nfor entries of locktype 'advisory'. I've already seen some apps in\nthe wild that use them, openads is one.\n\nmerlin\n", "msg_date": "Tue, 27 Mar 2007 09:06:38 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] ERROR: out of shared memory" }, { "msg_contents": "Dear Mr. 
Tom Lane,\n\nThank you very much for your answer.\nIt seems that the legacy application creates tables dynamically and the\nnumber of the created tables depends on the size of the input of the\napplication. For the specific input which generated that error I've\nestimated a number of created tables of about 4000. \nCould be this the problem?\n\nWith best regards,\nSorin\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Tuesday, March 27, 2007 6:37 AM\nTo: Sorin N. Ciolofan\nCc: [email protected]; [email protected];\[email protected]\nSubject: Re: [GENERAL] ERROR: out of shared memory \n\n\"Sorin N. Ciolofan\" <[email protected]> writes:\n> I have to manage an application written in java which call another\nmodule\n> written in java which uses Postgre DBMS in a Linux environment. I'm new to\n> Postgres. The problem is that for large amounts of data the application\n> throws an:\n> org.postgresql.util.PSQLException: ERROR: out of shared memory\n\nAFAIK the only very likely way to cause that is to touch enough\ndifferent tables in one transaction that you run out of lock entries.\nWhile you could postpone the problem by increasing the\nmax_locks_per_transaction setting, I suspect there may be some basic\napplication misdesign involved here. How many tables have you got?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 27 Mar 2007 16:13:42 +0300", "msg_from": "\"Sorin N. Ciolofan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] ERROR: out of shared memory " }, { "msg_contents": "\"Sorin N. Ciolofan\" <[email protected]> writes:\n> It seems that the legacy application creates tables dynamically and the\n> number of the created tables depends on the size of the input of the\n> application. For the specific input which generated that error I've\n> estimated a number of created tables of about 4000. \n> Could be this the problem?\n\nIf you have transactions that touch many of them within one transaction,\nthen yup, you could be out of locktable space. Try increasing\nmax_locks_per_transaction.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Mar 2007 09:59:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] ERROR: out of shared memory " }, { "msg_contents": " Dear Mr. Tom Lane,\n\n From what I've read from the postgresql.conf file I've understood that\nwhich each unit increasing of the \"max_locks_per_transaction\" parameter the\nshared memory used is also increased.\n But the shared memory looks to be already fully consumed according to the\nerror message, or is the error message irrelevant and improper in this\nsituation?\n\nWith best regards,\nSorin\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Tuesday, March 27, 2007 4:59 PM\nTo: Sorin N. Ciolofan\nCc: [email protected]; [email protected];\[email protected]\nSubject: Re: [GENERAL] ERROR: out of shared memory \n\n\"Sorin N. Ciolofan\" <[email protected]> writes:\n> It seems that the legacy application creates tables dynamically and the\n> number of the created tables depends on the size of the input of the\n> application. For the specific input which generated that error I've\n> estimated a number of created tables of about 4000. \n> Could be this the problem?\n\nIf you have transactions that touch many of them within one transaction,\nthen yup, you could be out of locktable space. 
Try increasing\nmax_locks_per_transaction.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 30 Mar 2007 16:20:09 +0300", "msg_from": "\"Sorin N. Ciolofan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ERROR: out of shared memory " }, { "msg_contents": "Try doing select * from pg_locks to see how many locks you have out.\n", "msg_date": "Thu, 05 Apr 2007 20:25:11 -0400", "msg_from": "Joseph S <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ERROR: out of shared memory" } ]
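Since the thread above ends on the question of how the lock table relates to shared memory, here is a short, generic sketch of how to inspect the lock table from a second session while the failing transaction is running; it assumes nothing application-specific and uses only the pg_locks view and SHOW:

-- How full is the lock table, and what kinds of locks are held?
SELECT locktype, mode, granted, count(*)
FROM pg_locks
GROUP BY locktype, mode, granted
ORDER BY count(*) DESC;

-- Advisory locks, if an application uses them, show up as locktype 'advisory'.
SELECT count(*) AS advisory_locks FROM pg_locks WHERE locktype = 'advisory';

-- The shared lock table holds roughly
--   max_locks_per_transaction * (max_connections + max_prepared_transactions)
-- entries, so raising max_locks_per_transaction in postgresql.conf (followed
-- by a server restart) does enlarge the shared memory segment allocated at
-- startup; the "out of shared memory" error here means that lock table,
-- not the whole segment, was exhausted.
SHOW max_locks_per_transaction;
SHOW max_connections;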
[ { "msg_contents": "Hi,\n\nIn postgres 7.4.* I had to pass --with-java to the configure script\nfor jdbc support.\n\nDoes postgres 8.2* include it by default? If not, how do I enable it?\n\nThanks\n\nMiguel\n", "msg_date": "Tue, 27 Mar 2007 18:16:28 -0400", "msg_from": "\"Michael Dengler\" <[email protected]>", "msg_from_op": true, "msg_subject": "How to enable jdbc???" }, { "msg_contents": "Michael Dengler wrote:\n> Hi,\n> \n> In postgres 7.4.* I had to pass --with-java to the configure script\n> for jdbc support.\n> \n> Does postgres 8.2* include it by default? If not, how do I enable it?\n\nJust download the driver from jdbc.postgresql.org\n\n> \n> Thanks\n> \n> Miguel\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Tue, 27 Mar 2007 15:20:14 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to enable jdbc???" } ]
[ { "msg_contents": "Hi Dimitri,\n\nFirst of all, thanks again for the great feedback!\n\nYes, my I/O load is mostly read operations. There are some bulk writes done in the background periodically throughout the day, but these are not as time-sensitive. I'll have to do some testing to find the best balance of read vs. write speed and tolerance of disk failure vs. usable diskspace.\n\nI'm looking forward to seeing the results of your OLTP tests! Good luck! Since I won't be doing that myself, it'll be all new to me.\n\nAbout disk failure, I certainly agree that increasing the number of disks will decrease the average time between disk failures. Apart from any performance considerations, I wanted to get a clear idea of the risk of data loss under various RAID configurations. It's a handy reference, so I thought I'd share it:\n\n--------\n\nThe goal is to calculate the probability of data loss when we loose a certain number of disks within a short timespan (e.g. loosing a 2nd disk before replacing+rebuilding the 1st one). For RAID 10, 50, and Z, we will loose data if any disk group (i.e. mirror or parity-group) looses 2 disks. For RAID 60 and Z2, we will loose data if 3 disks die in the same parity group. The parity groups can include arbitrarily many disks. Having larger groups gives us more usable diskspace but less protection. (Naturally we're more likely to loose 2 disks in a group of 50 than in a group of 5.)\n\n g = number of disks in each group (e.g. mirroring = 2; single-parity = 3 or more; dual-parity = 4 or more)\n n = total number of disks\n risk of loosing any 1 disk = 1/n\n risk of loosing 1 disk from a particular group = g/n\n risk of loosing 2 disks in the same group = g/n * (g-1)/(n-1)\n risk of loosing 3 disks in the same group = g/n * (g-1)/(n-1) * (g-2)/(n-2)\n\nFor the x4500, we have 48 disks. If we stripe our data across all those disks, then these are our configuration options:\n\nRAID 10 or 50 -- Mirroring or single-parity must loose 2 disks from the same group to loose data:\ndisks_per_group num_groups total_disks usable_disks risk_of_data_loss\n 2 24 48 24 0.09%\n 3 16 48 32 0.27%\n 4 12 48 36 0.53%\n 6 8 48 40 1.33%\n 8 6 48 42 2.48%\n 12 4 48 44 5.85%\n 24 2 48 46 24.47%\n 48 1 48 47 100.00%\n\nRAID 60 or Z2 -- Double-parity must loose 3 disks from the same group to loose data:\ndisks_per_group num_groups total_disks usable_disks risk_of_data_loss\n 2 24 48 n/a n/a\n 3 16 48 16 0.01%\n 4 12 48 24 0.02%\n 6 8 48 32 0.12%\n 8 6 48 36 0.32%\n 12 4 48 40 1.27%\n 24 2 48 44 11.70%\n 48 1 48 46 100.00%\n\nSo, in terms of fault tolerance:\n - RAID 60 and Z2 always beat RAID 10, since they never risk data loss when only 2 disks fail.\n - RAID 10 always beats RAID 50 and Z, since it has the largest number of disk groups across which to spread the risk.\n - Having more parity groups increases fault tolerance but decreases usable diskspace.\n\nThat's all assuming each disk has an equal chance of failure, which is probably true since striping should distribute the workload evenly. And again, these probabilities are only describing the case where we don't have enough time between disk failures to recover the array.\n\nIn terms of performance, I think RAID 10 should always be best for write speed. (Since it doesn't calculate parity, writing a new block doesn't require reading the rest of the RAID stripe just to recalculate the parity bits.) 
I think it's also normally just as fast for reading, since the controller can load-balance the pending read requests to both sides of each mirror.\n\n--------\n\n\n", "msg_date": "Tue, 27 Mar 2007 21:44:42 -0700", "msg_from": "\"Matt Smiley\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sunfire X4500 recommendations" }, { "msg_contents": "On Tue, 27 Mar 2007, Matt Smiley wrote:\n\n> --------\n>\n> The goal is to calculate the probability of data loss when we loose a \n> certain number of disks within a short timespan (e.g. loosing a 2nd disk \n> before replacing+rebuilding the 1st one). For RAID 10, 50, and Z, we \n> will loose data if any disk group (i.e. mirror or parity-group) looses 2 \n> disks. For RAID 60 and Z2, we will loose data if 3 disks die in the \n> same parity group. The parity groups can include arbitrarily many \n> disks. Having larger groups gives us more usable diskspace but less \n> protection. (Naturally we're more likely to loose 2 disks in a group of \n> 50 than in a group of 5.)\n>\n> g = number of disks in each group (e.g. mirroring = 2; single-parity = 3 or more; dual-parity = 4 or more)\n> n = total number of disks\n> risk of loosing any 1 disk = 1/n\n\nplease explain why you are saying that the risk of loosing any 1 disk is \n1/n. shouldn't it be probability of failure * n instead?\n\n> risk of loosing 1 disk from a particular group = g/n\n> risk of loosing 2 disks in the same group = g/n * (g-1)/(n-1)\n> risk of loosing 3 disks in the same group = g/n * (g-1)/(n-1) * (g-2)/(n-2)\n\nfollowing this logic the risk of loosing all 48 disks in a single group of \n48 would be 100%\n\nalso what you are looking for is the probability of the second (and third) \ndisks failing in time X (where X is the time nessasary to notice the \nfailure, get a replacement, and rebuild the disk)\n\nthe killer is the time needed to rebuild the disk, with multi-TB arrays \nis't sometimes faster to re-initialize the array and reload from backup \nthen it is to do a live rebuild (the kernel.org servers had a raid failure \nrecently and HPA mentioned that it took a week to rebuild the array, but \nit would have only taken a couple days to do a restore from backup)\n\nadd to this the fact that disk failures do not appear to be truely \nindependant from each other statisticly (see the recent studies released \nby google and cmu), and I wouldn't bother with single-parity for a \nmulti-TB array. If the data is easy to recreate (including from backup) or \nshort lived (say a database of log data that cycles every month or so) I \nwould just do RAID-0 and plan on loosing the data on drive failure (this \nassumes that you can afford the loss of service when this happens). if the \ndata is more important then I'd do dual-parity or more, along with a hot \nspare so that the rebuild can start as soon as the first failure is \nnoticed by the system to give myself a fighting chance to save things.\n\n\n> In terms of performance, I think RAID 10 should always be best for write \n> speed. (Since it doesn't calculate parity, writing a new block doesn't \n> require reading the rest of the RAID stripe just to recalculate the \n> parity bits.) I think it's also normally just as fast for reading, \n> since the controller can load-balance the pending read requests to both \n> sides of each mirror.\n\nthis depends on your write pattern. if you are doing sequential writes \n(say writing a log archive) then RAID 5 can be faster then RAID 10. 
since \nthere is no data there to begin with the system doesn't have to read \nanything to calculate the parity, and with the data spread across more \nspindles you have a higher potential throughput.\n\nif your write pattern is is more random, and especially if you are \noverwriting existing data then the reads needed to calculate the parity \nwill slow you down.\n\nas for read speed, it all depends on your access pattern and stripe size. \nif you are reading data that spans disks (larger then your stripe size) \nyou end up with a single read tieing up multiple spindles. with Raid 1 \n(and varients) you can read from either disk of the set if you need \ndifferent data within the same stripe that's on different disk tracks (if \nit's on the same track you'll get it just as fast reading from a single \ndrive, or so close to it that it doesn't matter). beyond that the question \nis how many spindles can you keep busy reading (as opposed to seeking to \nnew data or sitting idle becouse you don't need their data)\n\nthe worst case for reading is to be jumping through your data in strides \nof stripe*# disks available (accounting for RAID type) as all your reads \nwill end up hitting the same disk.\n\nDavid Lang\n", "msg_date": "Tue, 27 Mar 2007 22:34:38 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Sunfire X4500 recommendations" } ]
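The probability tables in the thread above are easy to re-derive. As a cross-check, the risk columns can be computed straight from the formulas given, with n = 48 disks; the SQL below is pure illustration and assumes nothing beyond the group sizes listed in the post:

-- Risk of losing 2 (or 3) disks in the same group, for 48 disks total.
SELECT g AS disks_per_group,
       48 / g AS num_groups,
       round(100 * (g / 48.0) * ((g - 1) / 47.0), 2)                    AS pct_risk_2_disks_same_group,
       round(100 * (g / 48.0) * ((g - 1) / 47.0) * ((g - 2) / 46.0), 2) AS pct_risk_3_disks_same_group
FROM (VALUES (2), (3), (4), (6), (8), (12), (24), (48)) AS t(g);

-- The two risk columns reproduce the RAID 10/50 and RAID 60/Z2 tables above
-- (e.g. 0.09% for mirrored pairs, 24.47% for two 24-disk groups); a group of
-- 2 gives 0.00 in the last column, matching the "n/a" entry.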
[ { "msg_contents": "Hi all.\n\nI would like to speed up this query:\n\nEXPLAIN ANALYZE\n SELECT\nrelid,schemaname,relname,seq_scan,seq_tup_read,idx_scan,idx_tup_fetch,n_tup_ins,n_tup_upd,n_tup_del\n FROM pg_stat_user_tables;\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------\n Subquery Scan pg_stat_all_tables (cost=747.72..791.10 rows=195 width=236)\n(actual time=11.582..13.632 rows=200 loops=1)\n -> HashAggregate (cost=747.72..752.10 rows=195 width=136) (actual time=\n11.571..12.813 rows=200 loops=1)\n -> Hash Join (cost=209.32..745.28 rows=195 width=136) (actual\ntime=1.780..6.477 rows=453 loops=1)\n Hash Cond: (\"outer\".relnamespace = \"inner\".oid)\n -> Hash Left Join (cost=206.87..702.69 rows=227 width=76)\n(actual time=1.729..5.392 rows=507 loops=1)\n Hash Cond: (\"outer\".oid = \"inner\".indrelid)\n -> Seq Scan on pg_class c (cost=0.00..465.22 rows=227\nwidth=72) (actual time=0.013..2.552 rows=228 loops=1)\n Filter: (relkind = 'r'::\"char\")\n -> Hash (cost=205.40..205.40 rows=587 width=8)\n(actual time=1.698..1.698 rows=0 loops=1)\n -> Seq Scan on pg_index i\n(cost=0.00..205.40rows=587 width=8) (actual time=\n0.004..1.182 rows=593 loops=1)\n -> Hash (cost=2.44..2.44 rows=6 width=68) (actual time=\n0.035..0.035 rows=0 loops=1)\n -> Seq Scan on pg_namespace n (cost=0.00..2.44 rows=6\nwidth=68) (actual time=0.013..0.028 rows=6 loops=1)\n Filter: ((nspname <> 'pg_catalog'::name) AND\n(nspname <> 'pg_toast'::name))\n Total runtime: 13.844 ms\n\nI think there would be good to create an index on pg_class.relkind and\npg_class.relnamespace, but its impossible since its a catalog table.\n\nAny way to make it a default index (system index)?\n\nIts an old PostgreSQL server:\n\nSELECT version();\n version\n------------------------------------------------------------------------------------------------------------------------------\n PostgreSQL 7.4.13 on x86_64-redhat-linux-gnu, compiled by GCC\nx86_64-redhat-linux-gcc (GCC) 3.4.6 20060404 (Red Hat 3.4.6-2)\n\n\n\n-- \nDaniel Cristian Cruz\nAnalista de Sistemas\n\nHi all.I would like to speed up this query:EXPLAIN ANALYZE SELECT relid,schemaname,relname,seq_scan,seq_tup_read,idx_scan,idx_tup_fetch,n_tup_ins,n_tup_upd,n_tup_del FROM pg_stat_user_tables;\n                                                               QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------------- Subquery Scan pg_stat_all_tables  (cost=\n747.72..791.10 rows=195 width=236) (actual time=11.582..13.632 rows=200 loops=1)   ->  HashAggregate  (cost=747.72..752.10 rows=195 width=136) (actual time=11.571..12.813 rows=200 loops=1)         ->  Hash Join  (cost=\n209.32..745.28 rows=195 width=136) (actual time=1.780..6.477 rows=453 loops=1)               Hash Cond: (\"outer\".relnamespace = \"inner\".oid)               ->  Hash Left Join  (cost=206.87..702.69\n rows=227 width=76) (actual time=1.729..5.392 rows=507 loops=1)                     Hash Cond: (\"outer\".oid = \"inner\".indrelid)                     ->  Seq Scan on pg_class c  (cost=0.00..465.22\n rows=227 width=72) (actual time=0.013..2.552 rows=228 loops=1)                           Filter: (relkind = 'r'::\"char\")                     ->  Hash  (cost=205.40..205.40 rows=587 width=8) (actual time=\n1.698..1.698 rows=0 loops=1)                           ->  Seq Scan on pg_index i  (cost=0.00..205.40 rows=587 width=8) (actual 
time=0.004..1.182 rows=593 loops=1)               ->  Hash  (cost=2.44..2.44 rows=6 width=68) (actual time=\n0.035..0.035 rows=0 loops=1)                     ->  Seq Scan on pg_namespace n  (cost=0.00..2.44 rows=6 width=68) (actual time=0.013..0.028 rows=6 loops=1)                           Filter: ((nspname <> 'pg_catalog'::name) AND (nspname <> 'pg_toast'::name))\n Total runtime: 13.844 msI think there would be good to create an index on pg_class.relkind and pg_class.relnamespace, but its impossible since its a catalog table.Any way to make it a default index (system index)?\nIts an old PostgreSQL server:SELECT version();                                                           version------------------------------------------------------------------------------------------------------------------------------\n PostgreSQL 7.4.13 on x86_64-redhat-linux-gnu, compiled by GCC x86_64-redhat-linux-gcc (GCC) 3.4.6 20060404 (Red Hat 3.4.6-2)-- Daniel Cristian CruzAnalista de Sistemas", "msg_date": "Wed, 28 Mar 2007 09:59:18 -0300", "msg_from": "\"Daniel Cristian Cruz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Improving performance on system catalog" }, { "msg_contents": "> I would like to speed up this query:\n\n<snip>\n\n> Total runtime: 13.844 ms\n\nWhy bother?\n\nIt's running in less than 14 milliseconds.\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n", "msg_date": "Wed, 28 Mar 2007 23:24:41 +1000", "msg_from": "\"chris smith\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving performance on system catalog" }, { "msg_contents": "2007/3/28, chris smith <[email protected]>:\n>\n> > Total runtime: 13.844 ms\n>\n> Why bother?\n\n\nBecause faster could be better in a very busy system.\n\n-- \nDaniel Cristian Cruz\nAnalista de Sistemas\n\n2007/3/28, chris smith <[email protected]>:\n>  Total runtime: 13.844 msWhy bother?Because faster could be better in a very busy system. -- Daniel Cristian CruzAnalista de Sistemas", "msg_date": "Wed, 28 Mar 2007 12:52:10 -0300", "msg_from": "\"Daniel Cristian Cruz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving performance on system catalog" }, { "msg_contents": "Daniel Cristian Cruz wrote:\n> Hi all.\n> \n> I would like to speed up this query:\n> \n> EXPLAIN ANALYZE\n> SELECT\n> relid,schemaname,relname,seq_scan,seq_tup_read,idx_scan,idx_tup_fetch,n_tup_ins,n_tup_upd,n_tup_del \n> \n> FROM pg_stat_user_tables;\n> \n\n\nAlthough optimizing for 13ms is a little silly imo, you could probably \ngain from calling the specific query underneath instead of calling the \nview pg_stat_user_tables.\n\nJoshua D. Drake\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Wed, 28 Mar 2007 08:57:56 -0700", "msg_from": "\"Joshua D. 
Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving performance on system catalog" }, { "msg_contents": "\"Daniel Cristian Cruz\" <[email protected]> writes:\n> 2007/3/28, chris smith <[email protected]>:\n>>> Total runtime: 13.844 ms\n>> \n>> Why bother?\n\n> Because faster could be better in a very busy system.\n\nIf you are concerned about global performance improvement, quit worrying\nabout this micro-detail and get yourself onto a more modern Postgres.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Mar 2007 12:28:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving performance on system catalog " }, { "msg_contents": "2007/3/28, Tom Lane <[email protected]>:\n>\n> \"Daniel Cristian Cruz\" <[email protected]> writes:\n> > 2007/3/28, chris smith <[email protected]>:\n> >>> Total runtime: 13.844 ms\n> >>\n> >> Why bother?\n>\n> > Because faster could be better in a very busy system.\n>\n> If you are concerned about global performance improvement, quit worrying\n> about this micro-detail and get yourself onto a more modern Postgres.\n\n\nGot it. We just planned move to 8.2.3 in about two weeks.\n\n-- \nDaniel Cristian Cruz\nAnalista de Sistemas\n\n2007/3/28, Tom Lane <[email protected]>:\n\"Daniel Cristian Cruz\" <[email protected]> writes:> 2007/3/28, chris smith <[email protected]>:>>> Total runtime: \n13.844 ms>>>> Why bother?> Because faster could be better in a very busy system.If you are concerned about global performance improvement, quit worryingabout this micro-detail and get yourself onto a more modern Postgres.\nGot it. We just planned move to  8.2.3 in about two weeks.-- Daniel Cristian CruzAnalista de Sistemas", "msg_date": "Wed, 28 Mar 2007 14:08:27 -0300", "msg_from": "\"Daniel Cristian Cruz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving performance on system catalog" } ]
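One way to act on the suggestion in the thread above, of calling the query underneath the view rather than the view itself, is sketched below; 'my_table' is a placeholder for a table of interest, and the per-table statistics functions shown are the ones the pg_stat_* views are built from, so calling them directly avoids joining and aggregating across the whole catalog when only a few known tables matter:

-- The view definition, i.e. the catalog query that runs on every call:
SELECT pg_get_viewdef('pg_stat_user_tables'::regclass);

-- Direct lookup for one known table (placeholder name), skipping the
-- catalog-wide join done by the view:
SELECT pg_stat_get_numscans('my_table'::regclass)        AS seq_scan,
       pg_stat_get_tuples_returned('my_table'::regclass) AS seq_tup_read,
       pg_stat_get_tuples_inserted('my_table'::regclass) AS n_tup_ins,
       pg_stat_get_tuples_updated('my_table'::regclass)  AS n_tup_upd,
       pg_stat_get_tuples_deleted('my_table'::regclass)  AS n_tup_del;

-- The idx_scan / idx_tup_fetch columns of the view are sums of the same
-- functions over the table's indexes, which is where most of the join cost
-- in the original plan comes from.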
[ { "msg_contents": "Greetings,\n\nWe've recently made a couple changes to our system that have resulted \nin a drastic increase in performance as well as some very confusing \nchanges to the database statistics, specifically \npg_stat_database.xact_commit. Here's the details:\n\nOS: Solaris10 x86\nServer: Sunfire X4100, 8GB Memory, 2 Dual Core Opterons\nPostgres 8.2.3\nDisk array:\nSun STK 6130 + CSM100 SATA tray, dual channel MPXIO, 15k drives, \nRAID5 across 14 disks\nWAL logs on SATA RAID10\nSAN architecture, 2 brocade FABRIC switches\n\nThe changes we made were:\n\nIncrease shared buffers from 150000 to 200000\nSet the disk mount for the data directory to use forcedirectio (added \nthat mount option that to the /etc/vfstab entry (ufs fs))\n\nSo, the reason we did this was that for months now we'd been \nexperiencing extremely high IO load from both the perspective of the \nOS and the database, specifically where writes were concerned. \nDuring peak hourse it wasn't unheard of for pg_stat_database to \nreport anywhere from 500000 to 1000000 transactions committed in an \nhour. iostat's %b (disk busy) sat at 100% for longer than we'd care \nto think about with the wait percentage going from a few percent on \nup to 50% at times and the cpu load almost never rising from around a \n2 avg., i.e. we were extremely IO bound in all cases.\n\nAs soon as we restarted postgres after making those changes the IO \nload was gone. While we the number and amount of disk reads have \nstayed pretty much the same and the number of disk writes have stayed \nthe same, the amount of data being written has dropped by about a \nfactor of 10, which is huge. The cpu load shot way up to around a 20 \navg. and stayed that way up and stayed that way for about two days \n(we're thinking that was autovacuum \"catching up\"). In addition, and \nthis is the truly confusing part, the xact_commit and xact_rollback \nstats from pg_stat_database both dropped by an order of magnitude \n(another factor of 10). So, we are now doing 50000 to 100000 commits \nper hour during peak hours.\n\nSo, where were all of those extra transactions coming from? Are \ntransactions reported on in pg_stat_database anything but SQL \nstatements? What was causing all of the excess(?!) data being \nwritten to the disk (it seems that there's a 1:1 correspondence \nbetween the xacts and volume of data being written)? Given that we \nhave the bgwriter on, could it have been the culprit and one of the \nchanges allowed it to now operate more efficiently and/or correctly?\n\nerik jones <[email protected]>\nsoftware developer\n615-296-0838\nemma(r)\n\n\n\n\nGreetings,We've recently made a couple changes to our system that have resulted in a drastic increase in performance as well as some very confusing changes to the database statistics, specifically pg_stat_database.xact_commit.  Here's the details:OS: Solaris10 x86Server: Sunfire X4100, 8GB Memory, 2 Dual Core OpteronsPostgres 8.2.3Disk array: Sun STK 6130 + CSM100 SATA tray, dual channel MPXIO, 15k drives, RAID5 across 14 disksWAL logs on SATA RAID10SAN architecture, 2 brocade FABRIC switchesThe changes we made were:Increase shared buffers from 150000 to 200000Set the disk mount for the data directory to use forcedirectio (added that mount option that to the /etc/vfstab entry (ufs fs))So, the reason we did this was that for months now we'd been experiencing extremely high IO load from both the perspective of the OS and the database, specifically where writes were concerned.  
During peak hourse it wasn't unheard of for pg_stat_database to report anywhere from 500000 to 1000000 transactions committed in an hour.  iostat's %b (disk busy) sat at 100% for longer than we'd care to think about with the wait percentage going from a few percent on up to 50% at times and the cpu load almost never rising from around a 2 avg., i.e. we were extremely IO bound in all cases.  As soon as we restarted postgres after making those changes the IO load was gone.  While we the number and amount of disk reads have stayed pretty much the same and the number of disk writes have stayed the same, the amount of data being written has dropped by about a factor of 10, which is huge.  The cpu load shot way up to around a 20 avg. and stayed that way up and stayed that way for about two days (we're thinking that was autovacuum \"catching up\").  In addition, and this is the truly confusing part, the xact_commit and xact_rollback stats from pg_stat_database both dropped by an order of magnitude (another factor of 10).  So, we are now doing 50000 to 100000 commits per hour during peak hours.So, where were all of those extra transactions coming from?  Are transactions reported on in pg_stat_database anything but SQL statements?  What was causing all of the excess(?!) data being written to the disk (it seems that there's a 1:1 correspondence between the xacts and volume of data being written)?  Given that we have the bgwriter on, could it have been the culprit and one of the changes allowed it to now operate more efficiently and/or correctly? erik jones <[email protected]>software developer615-296-0838emma(r)", "msg_date": "Wed, 28 Mar 2007 14:56:47 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": true, "msg_subject": "Shared buffers, db transactions commited, and write IO on Solaris" }, { "msg_contents": "Erik Jones <[email protected]> writes:\n> We've recently made a couple changes to our system that have resulted \n> in a drastic increase in performance as well as some very confusing \n> changes to the database statistics, specifically \n> pg_stat_database.xact_commit. Here's the details:\n\nI'm kinda boggled too. I can see how increasing shared buffers could\nresult in a drastic reduction in write rate, if the working set of your\nqueries fits in the new space but didn't fit in the old. I have no idea\nhow that leads to a drop in number of transactions committed though.\nIt doesn't make sense that autovac would run less frequently, because\nit's driven by number of tuples changed not number of disk writes; and\nthat could hardly account for a 10x drop anyway.\n\nDid you by any chance take note of exactly which processes were\ngenerating all the I/O or the CPU load?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Mar 2007 12:16:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shared buffers, db transactions commited,\n and write IO on Solaris " }, { "msg_contents": "On Mar 29, 2007, at 11:16 AM, Tom Lane wrote:\n\n> Erik Jones <[email protected]> writes:\n>> We've recently made a couple changes to our system that have resulted\n>> in a drastic increase in performance as well as some very confusing\n>> changes to the database statistics, specifically\n>> pg_stat_database.xact_commit. Here's the details:\n>\n> I'm kinda boggled too. I can see how increasing shared buffers could\n> result in a drastic reduction in write rate, if the working set of \n> your\n> queries fits in the new space but didn't fit in the old. 
I have no \n> idea\n> how that leads to a drop in number of transactions committed though.\n> It doesn't make sense that autovac would run less frequently, because\n> it's driven by number of tuples changed not number of disk writes; and\n> that could hardly account for a 10x drop anyway.\n>\n> Did you by any chance take note of exactly which processes were\n> generating all the I/O or the CPU load?\n\nWell, wrt to the CPU load, as I said, we're pretty sure that's \nautovac as we still get spikes that hit about the same threshold, \nafter which cache hits go up dramatically and the spikes just don't \nlast two days anymore.\n\nAs far as the procs responsible for the writes go, we were unable to \nsee that from the OS level as the guy we had as a systems admin last \nyear totally screwed us with the way he set up the SunCluster on the \nboxes and we have been unable to run Dtrace which has left us \nwatching a lot of iostat. However, we did notice a direct \ncorrelation between write spikes and \"write intensive\" queries like \nlarge COPYs, UPDATEs, and INSERTs.\n\nOne very important thing to note here is that the number, or rather \nrate, of disk writes has not changed. It's the volume of data in \nthose writes that has dropped, along with those transaction \nmysterious counts. Could the bgwriter be the culprit here? Does \nanything it does get logged as a transaction?\n\nerik jones <[email protected]>\nsoftware developer\n615-296-0838\nemma(r)\n\n\n\n\nOn Mar 29, 2007, at 11:16 AM, Tom Lane wrote:Erik Jones <[email protected]> writes: We've recently made a couple changes to our system that have resulted  in a drastic increase in performance as well as some very confusing  changes to the database statistics, specifically  pg_stat_database.xact_commit.  Here's the details: I'm kinda boggled too.  I can see how increasing shared buffers couldresult in a drastic reduction in write rate, if the working set of yourqueries fits in the new space but didn't fit in the old.  I have no ideahow that leads to a drop in number of transactions committed though.It doesn't make sense that autovac would run less frequently, becauseit's driven by number of tuples changed not number of disk writes; andthat could hardly account for a 10x drop anyway.Did you by any chance take note of exactly which processes weregenerating all the I/O or the CPU load? Well, wrt to the CPU load, as I said, we're pretty sure that's autovac as we still get spikes that hit about the same threshold, after which cache hits go up dramatically and the spikes just don't last two days anymore.As far as the procs responsible for the writes go, we were unable to see that from the OS level as the guy we had as a systems admin last year totally screwed us with the way he set up the SunCluster on the boxes and we have been unable to run Dtrace which has left us watching a lot of iostat.  However, we did notice a direct correlation between write spikes and \"write intensive\" queries like large COPYs, UPDATEs, and INSERTs.One very important thing to note here is that the number, or rather rate, of disk writes has not changed.  It's the volume of data in those writes that has dropped, along with those transaction mysterious counts.  Could the bgwriter be the culprit here?  Does anything it does get logged as a transaction? 
erik jones <[email protected]>software developer615-296-0838emma(r)", "msg_date": "Thu, 29 Mar 2007 11:45:57 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Shared buffers, db transactions commited,\n and write IO on Solaris " }, { "msg_contents": "Erik,\n\nusing 'forcedirectio' simply brings your write operations to the\n*real* volume - means while you need to write 10 bytes you'll write 10\nbytes (instead of UFS block size (8K)). So it explains me why your\nwrite volume became slower.\n\nNow, why TX number is reduced - is a small mystery :)\n\nOptions:\n - you really do 10 times less commits, means you work 10 times slower? ;)\n what about users? how do you measure your work performance?\n\n - TX reported in pg_* tables are not exact, but I don't believe at all :)\n\nRgds,\n-Dimitri\n\nOn 3/29/07, Erik Jones <[email protected]> wrote:\n> On Mar 29, 2007, at 11:16 AM, Tom Lane wrote:\n>\n> > Erik Jones <[email protected]> writes:\n> >> We've recently made a couple changes to our system that have resulted\n> >> in a drastic increase in performance as well as some very confusing\n> >> changes to the database statistics, specifically\n> >> pg_stat_database.xact_commit. Here's the details:\n> >\n> > I'm kinda boggled too. I can see how increasing shared buffers could\n> > result in a drastic reduction in write rate, if the working set of\n> > your\n> > queries fits in the new space but didn't fit in the old. I have no\n> > idea\n> > how that leads to a drop in number of transactions committed though.\n> > It doesn't make sense that autovac would run less frequently, because\n> > it's driven by number of tuples changed not number of disk writes; and\n> > that could hardly account for a 10x drop anyway.\n> >\n> > Did you by any chance take note of exactly which processes were\n> > generating all the I/O or the CPU load?\n>\n> Well, wrt to the CPU load, as I said, we're pretty sure that's\n> autovac as we still get spikes that hit about the same threshold,\n> after which cache hits go up dramatically and the spikes just don't\n> last two days anymore.\n>\n> As far as the procs responsible for the writes go, we were unable to\n> see that from the OS level as the guy we had as a systems admin last\n> year totally screwed us with the way he set up the SunCluster on the\n> boxes and we have been unable to run Dtrace which has left us\n> watching a lot of iostat. However, we did notice a direct\n> correlation between write spikes and \"write intensive\" queries like\n> large COPYs, UPDATEs, and INSERTs.\n>\n> One very important thing to note here is that the number, or rather\n> rate, of disk writes has not changed. It's the volume of data in\n> those writes that has dropped, along with those transaction\n> mysterious counts. Could the bgwriter be the culprit here? 
Does\n> anything it does get logged as a transaction?\n>\n> erik jones <[email protected]>\n> software developer\n> 615-296-0838\n> emma(r)\n>\n>\n>\n>\n", "msg_date": "Thu, 29 Mar 2007 19:41:05 +0200", "msg_from": "\"dimitri k\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shared buffers, db transactions commited, and write IO on Solaris" }, { "msg_contents": "On Mar 29, 2007, at 12:41 PM, dimitri k wrote:\n\n> On 3/29/07, Erik Jones <[email protected]> wrote:\n>> On Mar 29, 2007, at 11:16 AM, Tom Lane wrote:\n>>\n>> > Erik Jones <[email protected]> writes:\n>> >> We've recently made a couple changes to our system that have \n>> resulted\n>> >> in a drastic increase in performance as well as some very \n>> confusing\n>> >> changes to the database statistics, specifically\n>> >> pg_stat_database.xact_commit. Here's the details:\n>> >\n>> > I'm kinda boggled too. I can see how increasing shared buffers \n>> could\n>> > result in a drastic reduction in write rate, if the working set of\n>> > your\n>> > queries fits in the new space but didn't fit in the old. I have no\n>> > idea\n>> > how that leads to a drop in number of transactions committed \n>> though.\n>> > It doesn't make sense that autovac would run less frequently, \n>> because\n>> > it's driven by number of tuples changed not number of disk \n>> writes; and\n>> > that could hardly account for a 10x drop anyway.\n>> >\n>> > Did you by any chance take note of exactly which processes were\n>> > generating all the I/O or the CPU load?\n>>\n>> Well, wrt to the CPU load, as I said, we're pretty sure that's\n>> autovac as we still get spikes that hit about the same threshold,\n>> after which cache hits go up dramatically and the spikes just don't\n>> last two days anymore.\n>>\n>> As far as the procs responsible for the writes go, we were unable to\n>> see that from the OS level as the guy we had as a systems admin last\n>> year totally screwed us with the way he set up the SunCluster on the\n>> boxes and we have been unable to run Dtrace which has left us\n>> watching a lot of iostat. However, we did notice a direct\n>> correlation between write spikes and \"write intensive\" queries like\n>> large COPYs, UPDATEs, and INSERTs.\n>>\n>> One very important thing to note here is that the number, or rather\n>> rate, of disk writes has not changed. It's the volume of data in\n>> those writes that has dropped, along with those transaction\n>> mysterious counts. Could the bgwriter be the culprit here? Does\n>> anything it does get logged as a transaction?\n>>\n>> erik jones <[email protected]>\n>> software developer\n>> 615-296-0838\n>> emma(r)\n>>\n>>\n>>\n>>\n> Erik,\n>\n> using 'forcedirectio' simply brings your write operations to the\n> *real* volume - means while you need to write 10 bytes you'll write 10\n> bytes (instead of UFS block size (8K)). So it explains me why your\n> write volume became slower.\n\nSorry, that's not true. Google \"ufs forcedirectio\" go to the first \nlink and you will find:\n\n\"forcedirectio\n\nThe forcedirectio (read \"force direct IO\") UFS option causes data to \nbe buffered in kernel address whenever data is transferred between \nuser address space and the disk. In other words, it bypasses the file \nsystem cache. For certain types of applications -- primarily database \nsystems -- this option can dramatically improve performance. 
In fact, \nsome database experts have argued that a file using the forcedirectio \noption will outperform a raw partition, though this opinion seems \nfairly controversial.\n\nThe forcedirectio improves file system performance by eliminating \ndouble buffering, providing a small, efficient code path for file \nsystem reads and writes and removing pressure on memory.\"\n\nHowever, what this does mean is that writes will be at the actual \nfilesystem block size and not the cache block size (8K v. 512K).\n\n>\n> Now, why TX number is reduced - is a small mystery :)\n>\n> Options:\n> - you really do 10 times less commits, means you work 10 times \n> slower? ;)\n> what about users? how do you measure your work performance?\n\nWe are an email marketing service provider with a web front end \napplication. We measure work performance via web requests (counts, \ntypes, etc...), mailer activity and the resulting database activity. \nWe are doing as much or more work now than previously, and faster.\n\n>\n> - TX reported in pg_* tables are not exact, but I don't believe \n> at all :)\n\nEven if they aren't exact, being off by a factor of 10 wouldn't be \nbelievable. the forcedirectio mount setting for ufs can definitely \nexplain the drop in data written volume, but doesn't do much to \nexplain the difference in xact commits.\n\nerik jones <[email protected]>\nsoftware developer\n615-296-0838\nemma(r)\n\n\n\n\nOn Mar 29, 2007, at 12:41 PM, dimitri k wrote:On 3/29/07, Erik Jones <[email protected]> wrote: On Mar 29, 2007, at 11:16 AM, Tom Lane wrote:> Erik Jones <[email protected]> writes:>> We've recently made a couple changes to our system that have resulted>> in a drastic increase in performance as well as some very confusing>> changes to the database statistics, specifically>> pg_stat_database.xact_commit.  Here's the details:>> I'm kinda boggled too.  I can see how increasing shared buffers could> result in a drastic reduction in write rate, if the working set of> your> queries fits in the new space but didn't fit in the old.  I have no> idea> how that leads to a drop in number of transactions committed though.> It doesn't make sense that autovac would run less frequently, because> it's driven by number of tuples changed not number of disk writes; and> that could hardly account for a 10x drop anyway.>> Did you by any chance take note of exactly which processes were> generating all the I/O or the CPU load?Well, wrt to the CPU load, as I said, we're pretty sure that'sautovac as we still get spikes that hit about the same threshold,after which cache hits go up dramatically and the spikes just don'tlast two days anymore.As far as the procs responsible for the writes go, we were unable tosee that from the OS level as the guy we had as a systems admin lastyear totally screwed us with the way he set up the SunCluster on theboxes and we have been unable to run Dtrace which has left uswatching a lot of iostat.  However, we did notice a directcorrelation between write spikes and \"write intensive\" queries likelarge COPYs, UPDATEs, and INSERTs.One very important thing to note here is that the number, or ratherrate, of disk writes has not changed.  It's the volume of data inthose writes that has dropped, along with those transactionmysterious counts.  Could the bgwriter be the culprit here?  
Doesanything it does get logged as a transaction?erik jones <[email protected]>software developer615-296-0838emma(r) Erik,using 'forcedirectio' simply brings your write operations to the*real* volume - means while you need to write 10 bytes you'll write 10bytes (instead of UFS block size (8K)). So it explains me why yourwrite volume became slower.Sorry, that's not true.  Google \"ufs forcedirectio\" go to the first link and you will find:\"forcedirectioThe forcedirectio (read \"force direct IO\") UFS option causes data to be buffered in kernel address whenever data is transferred between user address space and the disk. In other words, it bypasses the file system cache. For certain types of applications -- primarily database systems -- this option can dramatically improve performance. In fact, some database experts have argued that a file using the forcedirectio option will outperform a raw partition, though this opinion seems fairly controversial.The forcedirectio improves file system performance by eliminating double buffering, providing a small, efficient code path for file system reads and writes and removing pressure on memory.\"However, what this does mean is that writes will be at the actual filesystem block size and not the cache block size (8K v. 512K).Now, why TX number is reduced - is a small mystery :)Options:  - you really do 10 times less commits, means you work 10 times slower? ;)    what about users? how do you measure your work performance?We are an email marketing service provider with a web front end application.  We measure work performance via web requests (counts, types, etc...), mailer activity and the resulting database activity.  We are doing as much or more work now than previously, and faster.  - TX reported in pg_* tables are not exact, but I don't believe at all :)Even if they aren't exact, being off by a factor of 10 wouldn't be believable.  the forcedirectio mount setting for ufs can definitely explain the drop in data written volume, but doesn't do much to explain the difference in xact commits. erik jones <[email protected]>software developer615-296-0838emma(r)", "msg_date": "Thu, 29 Mar 2007 13:58:13 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Shared buffers, db transactions commited, and write IO on Solaris" }, { "msg_contents": "Erik Jones <[email protected]> writes:\n> One very important thing to note here is that the number, or rather \n> rate, of disk writes has not changed. It's the volume of data in \n> those writes that has dropped, along with those transaction \n> mysterious counts.\n\nHmm. I'm suddenly thinking about the stats collector: in existing 8.2.x\nreleases it's got a bug that causes it to write the collected-stats file\nmuch too often. If you had done something that would shrink the size\nof the stats file, that might explain this observation. Do you have \nstats_reset_on_server_start turned on?\n\nThe drop in reported transaction rate is still baffling though. Are you\nsure you're really doing the same amount of work? 
Can you estimate what\nyou think the transaction rate *should* be from a what-are-your-clients-\ndoing perspective?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Mar 2007 15:19:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shared buffers, db transactions commited,\n and write IO on Solaris " }, { "msg_contents": "On Mar 29, 2007, at 2:19 PM, Tom Lane wrote:\n\n> Erik Jones <[email protected]> writes:\n>> One very important thing to note here is that the number, or rather\n>> rate, of disk writes has not changed. It's the volume of data in\n>> those writes that has dropped, along with those transaction\n>> mysterious counts.\n>\n> Hmm. I'm suddenly thinking about the stats collector: in existing \n> 8.2.x\n> releases it's got a bug that causes it to write the collected-stats \n> file\n> much too often. If you had done something that would shrink the size\n> of the stats file, that might explain this observation. Do you have\n> stats_reset_on_server_start turned on?\n\nNope.\n\n>\n> The drop in reported transaction rate is still baffling though. \n> Are you\n> sure you're really doing the same amount of work? Can you estimate \n> what\n> you think the transaction rate *should* be from a what-are-your- \n> clients-\n> doing perspective?\n\nUnfortunately, I can't think of any way to do that. Our app is made \nup of a lot of different components and not all of them are even \ndirectly client driven. For the client driven portions of the app \nany given web request can contain anywhere from around 10 to \nsometimes over 50 different xacts (and, that just a guesstimate). \nAlso, we didn't start tracking xact counts via pg_stat_database until \nabout two months ago when we were in IO bound hell and we actually \nthought that the really big xact #s were normal for our app as that \nwas the first and, thus, only numbers we had to work with.\n\nAlso, another metric we track is to take a count from \npg_stat_activity of queries running longer than 1 second every five \nminutes. Before these recent changes it wasn't uncommon to see that \ncount start to seriously stack up to over 200 at times with write \nintensive queries hanging out for sometimes 30 minutes or more (we'd \noften end having to kill them...). Since we upped the shared buffers \nand turned on forcedirectio for our fs mount, that number has stayed \nunder 50 and has only crossed 20 once.\n\nerik jones <[email protected]>\nsoftware developer\n615-296-0838\nemma(r)\n\n\n\n\nOn Mar 29, 2007, at 2:19 PM, Tom Lane wrote:Erik Jones <[email protected]> writes: One very important thing to note here is that the number, or rather  rate, of disk writes has not changed.  It's the volume of data in  those writes that has dropped, along with those transaction  mysterious counts. Hmm.  I'm suddenly thinking about the stats collector: in existing 8.2.xreleases it's got a bug that causes it to write the collected-stats filemuch too often.  If you had done something that would shrink the sizeof the stats file, that might explain this observation.  Do you have stats_reset_on_server_start turned on?Nope.The drop in reported transaction rate is still baffling though.  Are yousure you're really doing the same amount of work?  Can you estimate whatyou think the transaction rate *should* be from a what-are-your-clients-doing perspective? Unfortunately, I can't think of any way to do that.  Our app is made up of a lot of different components and not all of them are even directly client driven.  
For the client driven portions of the app any given web request can contain anywhere from around 10 to sometimes over 50 different xacts (and, that just a guesstimate).  Also, we didn't start tracking xact counts via pg_stat_database until about two months ago when we were in IO bound hell and we actually thought that the really big xact #s were normal for our app as that was the first and, thus, only numbers we had to work with.  Also, another metric we track is to take a count from pg_stat_activity of queries running longer than 1 second every five minutes.  Before these recent changes it wasn't uncommon to see that count start to seriously stack up to over 200 at times with write intensive queries hanging out for sometimes 30 minutes or more (we'd often end having to kill them...).  Since we upped the shared buffers and turned on forcedirectio for our fs mount, that number has stayed under 50 and has only crossed 20 once.  erik jones <[email protected]>software developer615-296-0838emma(r)", "msg_date": "Thu, 29 Mar 2007 14:49:48 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Shared buffers, db transactions commited,\n and write IO on Solaris " }, { "msg_contents": "> >>\n> > Erik,\n> >\n> > using 'forcedirectio' simply brings your write operations to the\n> > *real* volume - means while you need to write 10 bytes you'll write 10\n> > bytes (instead of UFS block size (8K)). So it explains me why your\n> > write volume became slower.\n\nI men 'lower' (not slower)\n\n>\n> Sorry, that's not true. Google \"ufs forcedirectio\" go to the first\n> link and you will find:\n>\n> \"forcedirectio\n>\n> The forcedirectio (read \"force direct IO\") UFS option causes data to\n> be buffered in kernel address whenever data is transferred between\n> user address space and the disk. In other words, it bypasses the file\n> system cache. For certain types of applications -- primarily database\n> systems -- this option can dramatically improve performance. In fact,\n> some database experts have argued that a file using the forcedirectio\n> option will outperform a raw partition, though this opinion seems\n> fairly controversial.\n>\n> The forcedirectio improves file system performance by eliminating\n> double buffering, providing a small, efficient code path for file\n> system reads and writes and removing pressure on memory.\"\n\nErik, please, don't take me wrong, but reading Google (or better man pages)\ndon't replace brain and basic practice... Direct IO option is not a silver\nbullet which will solve all your problems (try to do 'cp' on the mounted in\n'forcedirectio' filesystem, or use your mailbox on it - you'll quickly\nunderstand impact)...\n\n>\n> However, what this does mean is that writes will be at the actual\n> filesystem block size and not the cache block size (8K v. 512K).\n\nwhile UFS filesystem mounted normally, it uses its own cache for all\noperations (read and write) and saves data modifications on per\npage basis, means: when a process writes 200 bytes there will be 200\nbytes modified in cache, then whole page is written (8K) once data\ndemanded to be flushed (and WAL is writing per each commit)...\n\nNow, mounted with 'forcedirectio' option UFS is free of page size constraint\nand will write like a raw device an exactly demanded amount of data, means:\nwhen a process writes 200 bytes it'll write exactly 200 bytes to the disk. 
For\nWAL it may be very benefit, because you'll be able to perform more I/O\noperations/sec, means more commit/sec. But on the same time it may\ndramatically slow down all your read operations (no more data prefetch\nnor dynamic cache)... The best solution probably is to separate WAL from\ndata (BTW, it'll be nice to have such an option as WAL_PATH in conf file),\nit may be resolved by simple use of tablespace or at least directory links, etc.\nBut if your major activity is writing - probably it's already ok for you.\n\nHowever, to understand TX number mystery I think the only possible solution\nis to reproduce a small live test:\n\n(I'm sure you're aware you can mount/unmount forcedirectio dynamically?)\n\nduring stable workload do:\n\n # mount -o remount,logging /path_to_your_filesystem\n\nand check if I/O volume is increasing as well TX numbers\nthan come back:\n\n # mount -o remount,forcedirectio /path_to_your_filesystem\n\nand see if I/O volume is decreasing as well TX numbers...\n\nBest regards!\n-Dimitri\n\n\n>\n> >\n> > Now, why TX number is reduced - is a small mystery :)\n> >\n> > Options:\n> > - you really do 10 times less commits, means you work 10 times\n> > slower? ;)\n> > what about users? how do you measure your work performance?\n>\n> We are an email marketing service provider with a web front end\n> application. We measure work performance via web requests (counts,\n> types, etc...), mailer activity and the resulting database activity.\n> We are doing as much or more work now than previously, and faster.\n>\n> >\n> > - TX reported in pg_* tables are not exact, but I don't believe\n> > at all :)\n>\n> Even if they aren't exact, being off by a factor of 10 wouldn't be\n> believable. the forcedirectio mount setting for ufs can definitely\n> explain the drop in data written volume, but doesn't do much to\n> explain the difference in xact commits.\n>\n> erik jones <[email protected]>\n> software developer\n> 615-296-0838\n> emma(r)\n>\n>\n>\n>\n", "msg_date": "Fri, 30 Mar 2007 00:15:11 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shared buffers, db transactions commited, and write IO on Solaris" }, { "msg_contents": "Erik,\n\nWow, thanks for the post.\n\nWe've just started testing the option of sizing shared_buffers bigger than \nthe database, and using forcedirectio in benchmarks at Sun. So far, our \nexperience has been *equal* performance in that configuration, so it's \n*very* interesting to see you're getting a gain.\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Thu, 29 Mar 2007 17:23:23 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shared buffers, db transactions commited, and write IO on Solaris" }, { "msg_contents": "On Mar 29, 2007, at 7:23 PM, Josh Berkus wrote:\n\n> Erik,\n>\n> Wow, thanks for the post.\n>\n> We've just started testing the option of sizing shared_buffers \n> bigger than\n> the database, and using forcedirectio in benchmarks at Sun. So \n> far, our\n> experience has been *equal* performance in that configuration, so it's\n> *very* interesting to see you're getting a gain.\n>\n> -- \n> --Josh\n\nJosh,\n\nYou'er welcome! However, I believe our situation is very different \nfrom what you're testing if I understand you correctly. Are you \nsaying that you're entire database will fit in memory? If so, then \nthese are very different situations as there is no way ours could \never do that. 
In fact, I'm not sure that forcedirectio would really \nnet you any gain in that situation as the IO service time will be \nbasically nil if the filesystem cache doesn't have to page which I \nwould think is why your seeing what you are.\n\nerik jones <[email protected]>\nsoftware developer\n615-296-0838\nemma(r)\n\n\n\n\nOn Mar 29, 2007, at 7:23 PM, Josh Berkus wrote:Erik,Wow, thanks for the post.We've just started testing the option of sizing shared_buffers bigger than the database, and using forcedirectio in benchmarks at Sun.  So far, our experience has been *equal* performance in that configuration, so it's *very* interesting to see you're getting a gain.-- --JoshJosh,You'er welcome!  However, I believe our situation is very different from what you're testing if I understand you correctly.  Are you saying that you're entire database will fit in memory?  If so, then these are very different situations as there is no way ours could ever do that.  In fact, I'm not sure that forcedirectio would really net you any gain in that situation as the IO service time will be basically nil if the filesystem cache doesn't have to page which I would think is why your seeing what you are. erik jones <[email protected]>software developer615-296-0838emma(r)", "msg_date": "Thu, 29 Mar 2007 23:27:06 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Shared buffers, db transactions commited, and write IO on Solaris" }, { "msg_contents": "On Thu, 29 Mar 2007, Erik Jones wrote:\n\n> As far as the procs responsible for the writes go, we were unable to see that \n> from the OS level as the guy we had as a systems admin last year totally \n> screwed us with the way he set up the SunCluster on the boxes and we have \n> been unable to run Dtrace which has left us watching a lot of iostat.\n\nThere are two processes spawned by Postgres that handle collecting \nstatistics and doing the background writing. You don't need any fancy \ntools (you Solaris guys and your Dtrace, sheesh) to see if they're busy. \nJust run top and switch the display to show the full command line instead \nof just the process name (on Linux the 'c' key does this) and you'll see \nthe processes that had just been \"postgres\" before label themselves.\n\nThe system I saw get nailed by the bug Tom mentioned was also doing an \nupdate-heavy workload. It manifested itself as one CPU spending almost \nall its time running the statistics collector. That process's issues \nkicked up I/O waits from minimal to >25% and the background writer was \nincredibly sluggish as well. The problem behavior was intermittant in \nthat it would be crippling at times, but merely a moderate slowdown at \nothers. Your case sounds similar in several ways.\n\nIf you see the stats collector process taking up any significant amount of \nCPU time in top, you should strongly consider the possibility that you're \nsuffering from this bug. It's only a few characters to patch the bug if \nyou don't want to wait for the next packaged release. 
In your situation, \nI'd do it just eliminate this problem from your list of possible causes \nASAP.\n\nTo fix, edit src/backend/postmaster/pgstat.c\nAround line 1650 you'll find:\n\n write_timeout.it_value.tv_usec = PGSTAT_STAT_INTERVAL % 1000;\n\nChange it to match the current line in the CVS tree for 8.3:\n\n write_timeout.it_value.tv_usec = (PGSTAT_STAT_INTERVAL % 1000) * 1000;\n\nThat's all it took to resolve things for me.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 30 Mar 2007 00:48:00 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shared buffers, db transactions commited,\n and write IO on Solaris " }, { "msg_contents": "On Mar 29, 2007, at 5:15 PM, Dimitri wrote:\n\n>> >>\n>> > Erik,\n>> >\n>> > using 'forcedirectio' simply brings your write operations to the\n>> > *real* volume - means while you need to write 10 bytes you'll \n>> write 10\n>> > bytes (instead of UFS block size (8K)). So it explains me why your\n>> > write volume became slower.\n>\n> I men 'lower' (not slower)\n>\n>>\n>> Sorry, that's not true. Google \"ufs forcedirectio\" go to the first\n>> link and you will find:\n>>\n>> \"forcedirectio\n>>\n>> The forcedirectio (read \"force direct IO\") UFS option causes data to\n>> be buffered in kernel address whenever data is transferred between\n>> user address space and the disk. In other words, it bypasses the file\n>> system cache. For certain types of applications -- primarily database\n>> systems -- this option can dramatically improve performance. In fact,\n>> some database experts have argued that a file using the forcedirectio\n>> option will outperform a raw partition, though this opinion seems\n>> fairly controversial.\n>>\n>> The forcedirectio improves file system performance by eliminating\n>> double buffering, providing a small, efficient code path for file\n>> system reads and writes and removing pressure on memory.\"\n>\n> Erik, please, don't take me wrong, but reading Google (or better \n> man pages)\n> don't replace brain and basic practice... Direct IO option is not a \n> silver\n> bullet which will solve all your problems (try to do 'cp' on the \n> mounted in\n> 'forcedirectio' filesystem, or use your mailbox on it - you'll quickly\n> understand impact)...\n>\n>>\n>> However, what this does mean is that writes will be at the actual\n>> filesystem block size and not the cache block size (8K v. 512K).\n>\n> while UFS filesystem mounted normally, it uses its own cache for all\n> operations (read and write) and saves data modifications on per\n> page basis, means: when a process writes 200 bytes there will be 200\n> bytes modified in cache, then whole page is written (8K) once data\n> demanded to be flushed (and WAL is writing per each commit)...\n>\n> Now, mounted with 'forcedirectio' option UFS is free of page size \n> constraint\n> and will write like a raw device an exactly demanded amount of \n> data, means:\n> when a process writes 200 bytes it'll write exactly 200 bytes to \n> the disk. =\n\nYou are right in that the page size constraint is lifted in that \ndirectio cuts out the VM filesystem cache. However, the Solaris \nkernel still issues io ops in terms of its logical block size (which \nwe have at the default 8K). 
It can issue io ops for fragments as \nsmall as 1/8th of the block size, but Postgres issues its io requests \nin terms of the block size which means that io ops from Postgres will \nbe in 8K chunks which is exactly what we see when we look at our \nsystem io stats. In fact, if any io request is made that isn't a \nmultiple of 512 bytes (the disk sector size), the file system \nswitches back to the buffered io.\n\n>\n> However, to understand TX number mystery I think the only possible \n> solution\n> is to reproduce a small live test:\n>\n> (I'm sure you're aware you can mount/unmount forcedirectio \n> dynamically?)\n>\n> during stable workload do:\n>\n> # mount -o remount,logging /path_to_your_filesystem\n>\n> and check if I/O volume is increasing as well TX numbers\n> than come back:\n>\n> # mount -o remount,forcedirectio /path_to_your_filesystem\n>\n> and see if I/O volume is decreasing as well TX numbers...\n\nThat's an excellent idea and I'll run it by the rest of our team \ntomorrow.\n\nerik jones <[email protected]>\nsoftware developer\n615-296-0838\nemma(r)\n\n\n\n\nOn Mar 29, 2007, at 5:15 PM, Dimitri wrote:>>> Erik,>> using 'forcedirectio' simply brings your write operations to the> *real* volume - means while you need to write 10 bytes you'll write 10> bytes (instead of UFS block size (8K)). So it explains me why your> write volume became slower. I men 'lower' (not slower) Sorry, that's not true.  Google \"ufs forcedirectio\" go to the firstlink and you will find:\"forcedirectioThe forcedirectio (read \"force direct IO\") UFS option causes data tobe buffered in kernel address whenever data is transferred betweenuser address space and the disk. In other words, it bypasses the filesystem cache. For certain types of applications -- primarily databasesystems -- this option can dramatically improve performance. In fact,some database experts have argued that a file using the forcedirectiooption will outperform a raw partition, though this opinion seemsfairly controversial.The forcedirectio improves file system performance by eliminatingdouble buffering, providing a small, efficient code path for filesystem reads and writes and removing pressure on memory.\" Erik, please, don't take me wrong, but reading Google (or better man pages)don't replace brain and basic practice... Direct IO option is not a silverbullet which will solve all your problems (try to do 'cp' on the mounted in'forcedirectio' filesystem, or use your mailbox on it - you'll quicklyunderstand impact)... However, what this does mean is that writes will be at the actualfilesystem block size and not the cache block size (8K v. 512K). while UFS filesystem mounted normally, it uses its own cache for alloperations (read and write) and saves data modifications on perpage basis, means: when a process writes 200 bytes there will be 200bytes modified in cache, then whole page is written (8K) once datademanded to be flushed (and WAL is writing per each commit)...Now, mounted with 'forcedirectio' option UFS is free of page size constraintand will write like a raw device an exactly demanded amount of data, means:when a process writes 200 bytes it'll write exactly 200 bytes to the disk. =You are right in that the page size constraint is lifted in that directio cuts out the VM filesystem cache.  However, the Solaris kernel still issues io ops in terms of its logical block size (which we have at the default 8K).  
It can issue io ops for fragments as small as 1/8th of the block size, but Postgres issues its io requests in terms of the block size which means that io ops from Postgres will be in 8K chunks which is exactly what we see when we look at our system io stats.  In fact, if any io request is made that isn't a multiple of 512 bytes (the disk sector size), the file system switches back to the buffered io.However, to understand TX number mystery I think the only possible solutionis to reproduce a small live test:(I'm sure you're aware you can mount/unmount forcedirectio dynamically?)during stable workload do:  # mount -o remount,logging  /path_to_your_filesystemand check if I/O volume is increasing as well TX numbersthan come back:  # mount -o remount,forcedirectio  /path_to_your_filesystemand see if I/O volume is decreasing as well TX numbers... That's an excellent idea and I'll run it by the rest of our team tomorrow. erik jones <[email protected]>software developer615-296-0838emma(r)", "msg_date": "Fri, 30 Mar 2007 00:22:52 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Shared buffers, db transactions commited, and write IO on Solaris" }, { "msg_contents": ">\n> You are right in that the page size constraint is lifted in that\n> directio cuts out the VM filesystem cache. However, the Solaris\n> kernel still issues io ops in terms of its logical block size (which\n> we have at the default 8K). It can issue io ops for fragments as\n> small as 1/8th of the block size, but Postgres issues its io requests\n> in terms of the block size which means that io ops from Postgres will\n> be in 8K chunks which is exactly what we see when we look at our\n> system io stats. In fact, if any io request is made that isn't a\n> multiple of 512 bytes (the disk sector size), the file system\n> switches back to the buffered io.\n\nOh, yes, of course! yes, you still need to respect multiple of 512\nbytes block size on read and write - sorry, I was tired :)\n\nThen it's seems to be true - default XLOG block size is 8K, means for\nevery even small auto-committed transaction we should write 8K?... Is\nthere any reason to use so big default block size?...\n\nProbably it may be a good idea to put it as 'initdb' parameter? and\nhave such value per database server?\n\nRgds,\n-Dimitri\n\n>\n> >\n> > However, to understand TX number mystery I think the only possible\n> > solution\n> > is to reproduce a small live test:\n> >\n> > (I'm sure you're aware you can mount/unmount forcedirectio\n> > dynamically?)\n> >\n> > during stable workload do:\n> >\n> > # mount -o remount,logging /path_to_your_filesystem\n> >\n> > and check if I/O volume is increasing as well TX numbers\n> > than come back:\n> >\n> > # mount -o remount,forcedirectio /path_to_your_filesystem\n> >\n> > and see if I/O volume is decreasing as well TX numbers...\n>\n> That's an excellent idea and I'll run it by the rest of our team\n> tomorrow.\n>\n> erik jones <[email protected]>\n> software developer\n> 615-296-0838\n> emma(r)\n>\n>\n>\n>\n", "msg_date": "Fri, 30 Mar 2007 15:14:35 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shared buffers, db transactions commited, and write IO on Solaris" }, { "msg_contents": "On Mar 30, 2007, at 8:14 AM, Dimitri wrote:\n\n>>\n>> You are right in that the page size constraint is lifted in that\n>> directio cuts out the VM filesystem cache. 
However, the Solaris\n>> kernel still issues io ops in terms of its logical block size (which\n>> we have at the default 8K). It can issue io ops for fragments as\n>> small as 1/8th of the block size, but Postgres issues its io requests\n>> in terms of the block size which means that io ops from Postgres will\n>> be in 8K chunks which is exactly what we see when we look at our\n>> system io stats. In fact, if any io request is made that isn't a\n>> multiple of 512 bytes (the disk sector size), the file system\n>> switches back to the buffered io.\n>\n> Oh, yes, of course! yes, you still need to respect multiple of 512\n> bytes block size on read and write - sorry, I was tired :)\n>\n> Then it's seems to be true - default XLOG block size is 8K, means for\n> every even small auto-committed transaction we should write 8K?... Is\n> there any reason to use so big default block size?...\n>\n> Probably it may be a good idea to put it as 'initdb' parameter? and\n> have such value per database server?\n\nI believe it's because that is a pretty normal Unix kernal block size \nand you want the two to match.\n\nerik jones <[email protected]>\nsoftware developer\n615-296-0838\nemma(r)\n\n\n\n\nOn Mar 30, 2007, at 8:14 AM, Dimitri wrote:You are right in that the page size constraint is lifted in thatdirectio cuts out the VM filesystem cache.  However, the Solariskernel still issues io ops in terms of its logical block size (whichwe have at the default 8K).  It can issue io ops for fragments assmall as 1/8th of the block size, but Postgres issues its io requestsin terms of the block size which means that io ops from Postgres willbe in 8K chunks which is exactly what we see when we look at oursystem io stats.  In fact, if any io request is made that isn't amultiple of 512 bytes (the disk sector size), the file systemswitches back to the buffered io. Oh, yes, of course! yes, you still need to respect multiple of 512bytes block size on read and write - sorry, I was tired :)Then it's seems to be true - default XLOG block size is 8K, means forevery even small auto-committed transaction we should write 8K?... Isthere any reason to use so big default block size?...Probably it may be a good idea to put it as 'initdb' parameter? andhave such value per database server?I believe it's because that is a pretty normal Unix kernal block size and you want the two to match. erik jones <[email protected]>software developer615-296-0838emma(r)", "msg_date": "Fri, 30 Mar 2007 09:14:11 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Shared buffers, db transactions commited, and write IO on Solaris" }, { "msg_contents": "The problem is while your goal is to commit as fast as possible - it's\npity to vast I/O operation speed just keeping common block size...\nLet's say if your transaction modification entering into 512K - you'll\nbe able to write much more 512K blocks per second rather 8K per second\n(for the same amount of data)... Even we rewrite probably several\ntimes the same block with incoming transactions - it still costs on\ntraffic, and we will process slower even H/W can do better. Don't\nthink it's good, no? ;)\n\nRgds,\n-Dimitri\n\nOn 3/30/07, Erik Jones <[email protected]> wrote:\n>\n> On Mar 30, 2007, at 8:14 AM, Dimitri wrote:\n>\n> >>\n> >> You are right in that the page size constraint is lifted in that\n> >> directio cuts out the VM filesystem cache. 
However, the Solaris\n> >> kernel still issues io ops in terms of its logical block size (which\n> >> we have at the default 8K). It can issue io ops for fragments as\n> >> small as 1/8th of the block size, but Postgres issues its io requests\n> >> in terms of the block size which means that io ops from Postgres will\n> >> be in 8K chunks which is exactly what we see when we look at our\n> >> system io stats. In fact, if any io request is made that isn't a\n> >> multiple of 512 bytes (the disk sector size), the file system\n> >> switches back to the buffered io.\n> >\n> > Oh, yes, of course! yes, you still need to respect multiple of 512\n> > bytes block size on read and write - sorry, I was tired :)\n> >\n> > Then it's seems to be true - default XLOG block size is 8K, means for\n> > every even small auto-committed transaction we should write 8K?... Is\n> > there any reason to use so big default block size?...\n> >\n> > Probably it may be a good idea to put it as 'initdb' parameter? and\n> > have such value per database server?\n>\n> I believe it's because that is a pretty normal Unix kernal block size\n> and you want the two to match.\n>\n> erik jones <[email protected]>\n> software developer\n> 615-296-0838\n> emma(r)\n>\n>\n>\n>\n", "msg_date": "Fri, 30 Mar 2007 16:25:16 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shared buffers, db transactions commited, and write IO on Solaris" }, { "msg_contents": "On Mar 30, 2007, at 10:05 AM, Kenneth Marshall wrote:\n\n> On Fri, Mar 30, 2007 at 04:25:16PM +0200, Dimitri wrote:\n>> The problem is while your goal is to commit as fast as possible - \n>> it's\n>> pity to vast I/O operation speed just keeping common block size...\n>> Let's say if your transaction modification entering into 512K - \n>> you'll\n>> be able to write much more 512K blocks per second rather 8K per \n>> second\n>> (for the same amount of data)... Even we rewrite probably several\n>> times the same block with incoming transactions - it still costs on\n>> traffic, and we will process slower even H/W can do better. Don't\n>> think it's good, no? ;)\n>>\n>> Rgds,\n>> -Dimitri\n>>\n> With block sizes you are always trading off overhead versus space\n> efficiency. Most OS write only in 4k/8k to the underlying hardware\n> regardless of the size of the write you issue. Issuing 16 512byte\n> writes has much more overhead than 1 8k write. On the light \n> transaction\n> end, there is no real benefit to a small write and it will slow\n> performance for high throughput environments. It would be better to,\n> and I think that someone is looking into, batching I/O.\n>\n> Ken\n\nTrue, and really, considering that data is only written to disk by \nthe bgwriter and at checkpoints, writes are already somewhat \nbatched. Also, Dimitri, I feel I should backtrack a little and point \nout that it is possible to have postgres write in 512byte blocks (at \nleast for UFS which is what's in my head right now) if you set the \nsystems logical block size to 4K and fragment size to 512 bytes and \nthen set postgres's BLCKSZ to 512bytes. 
However, as Ken has just \npointed out, what you gain in space efficiency you lose in \nperformance so if you're working with a high traffic database this \nwouldn't be a good idea.\n\nerik jones <[email protected]>\nsoftware developer\n615-296-0838\nemma(r)\n\n\n\n\nOn Mar 30, 2007, at 10:05 AM, Kenneth Marshall wrote:On Fri, Mar 30, 2007 at 04:25:16PM +0200, Dimitri wrote: The problem is while your goal is to commit as fast as possible - it'spity to vast I/O operation speed just keeping common block size...Let's say if your transaction modification entering into 512K - you'llbe able to write much more 512K blocks per second rather 8K per second(for the same amount of data)... Even we rewrite probably severaltimes the same block with incoming transactions - it still costs ontraffic, and we will process slower even H/W can do better. Don'tthink it's good, no? ;)Rgds,-Dimitri With block sizes you are always trading off overhead versus spaceefficiency. Most OS write only in 4k/8k to the underlying hardwareregardless of the size of the write you issue. Issuing 16 512bytewrites has much more overhead than 1 8k write. On the light transactionend, there is no real benefit to a small write and it will slowperformance for high throughput environments. It would be better to,and I think that someone is looking into, batching I/O.Ken True, and really, considering that data is only written to disk by the bgwriter and at checkpoints, writes are already somewhat batched.  Also, Dimitri, I feel I should backtrack a little and point out that it is possible to have postgres write in 512byte blocks (at least for UFS which is what's in my head right now) if you set the systems logical block size to 4K and fragment size to 512 bytes and then set postgres's BLCKSZ to 512bytes.  However, as Ken has just pointed out, what you gain in space efficiency you lose in performance so if you're working with a high traffic database this wouldn't be a good idea. erik jones <[email protected]>software developer615-296-0838emma(r)", "msg_date": "Fri, 30 Mar 2007 11:19:09 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Shared buffers, db transactions commited, and write IO on Solaris" }, { "msg_contents": "Erik,\n\n> You'er welcome! However, I believe our situation is very different\n> from what you're testing if I understand you correctly. Are you\n> saying that you're entire database will fit in memory? If so, then\n> these are very different situations as there is no way ours could\n> ever do that. In fact, I'm not sure that forcedirectio would really\n> net you any gain in that situation as the IO service time will be\n> basically nil if the filesystem cache doesn't have to page which I\n> would think is why your seeing what you are.\n\nEven more interesting. I guess we've been doing too much work with \nbenchmark workloads, which tend to be smaller databases. \n\nThing is, there's *always* I/O for a read/write database. If nothing else, \nupdates have to be synched to disk.\n\nAnyway ... regarding the mystery transactions ... are you certain that it's \nnot your application? 
I can imagine that, if your app has a fairly tight \nretry interval for database non-response, that I/O sluggishness could \nresult in commit attempts spinning out of control.\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Fri, 30 Mar 2007 14:46:35 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shared buffers, db transactions commited, and write IO on Solaris" }, { "msg_contents": "On Mar 30, 2007, at 4:46 PM, Josh Berkus wrote:\n\n> Erik,\n>\n>> You'er welcome! However, I believe our situation is very different\n>> from what you're testing if I understand you correctly. Are you\n>> saying that you're entire database will fit in memory? If so, then\n>> these are very different situations as there is no way ours could\n>> ever do that. In fact, I'm not sure that forcedirectio would really\n>> net you any gain in that situation as the IO service time will be\n>> basically nil if the filesystem cache doesn't have to page which I\n>> would think is why your seeing what you are.\n>\n> Even more interesting. I guess we've been doing too much work with\n> benchmark workloads, which tend to be smaller databases.\n>\n> Thing is, there's *always* I/O for a read/write database. If \n> nothing else,\n> updates have to be synched to disk.\n\nRight. But, how *much* I/O?\n\n>\n> Anyway ... regarding the mystery transactions ... are you certain \n> that it's\n> not your application? I can imagine that, if your app has a fairly \n> tight\n> retry interval for database non-response, that I/O sluggishness could\n> result in commit attempts spinning out of control.\n\nWell, our application code itself doesn't retry queries if the db is \ntaking a long time to respond. However, we do have a number of our \nservers making db connections via pgpool so you may be on to \nsomething here. While I will be taking these questions to the pgpool \nlists, I'll posit them here as well: If a pgpool child process \nreaches it's connection lifetime while waiting on a query to \ncomplete, does pgpool retry the query with another child? If a \nconnection thus dies, does the transaction complete normally on the \nserver? If the answers to these questions are both yes, this could \ndefinitely be what was happening.\n\nerik jones <[email protected]>\nsoftware developer\n615-296-0838\nemma(r)\n\n\n\n\nOn Mar 30, 2007, at 4:46 PM, Josh Berkus wrote:Erik, You'er welcome!  However, I believe our situation is very differentfrom what you're testing if I understand you correctly.  Are yousaying that you're entire database will fit in memory?  If so, thenthese are very different situations as there is no way ours couldever do that.  In fact, I'm not sure that forcedirectio would reallynet you any gain in that situation as the IO service time will bebasically nil if the filesystem cache doesn't have to page which Iwould think is why your seeing what you are. Even more interesting.  I guess we've been doing too much work with benchmark workloads, which tend to be smaller databases.  Thing is, there's *always* I/O for a read/write database.  If nothing else, updates have to be synched to disk.Right.  But, how *much* I/O?Anyway ... regarding the mystery transactions ... are you certain that it's not your application?  I can imagine that, if your app has a fairly tight retry interval for database non-response, that I/O sluggishness could result in commit attempts spinning out of control. 
Well, our application code itself doesn't retry queries if the db is taking a long time to respond.  However, we do have a number of our servers making db connections via pgpool so you may be on to something here.  While I will be taking these questions to the pgpool lists, I'll posit them here as well:  If a pgpool child process reaches it's connection lifetime while waiting on a query to complete, does pgpool retry the query with another child?  If a connection thus dies, does the transaction complete normally on the server?  If the answers to these questions are both yes, this could definitely be what was happening. erik jones <[email protected]>software developer615-296-0838emma(r)", "msg_date": "Sat, 31 Mar 2007 10:56:40 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Shared buffers, db transactions commited, and write IO on Solaris" }, { "msg_contents": "Erik,\n\n> Well, our application code itself doesn't retry queries if the db is\n> taking a long time to respond. However, we do have a number of our\n> servers making db connections via pgpool so you may be on to\n> something here. While I will be taking these questions to the pgpool\n> lists, I'll posit them here as well: If a pgpool child process\n> reaches it's connection lifetime while waiting on a query to\n> complete, does pgpool retry the query with another child? If a\n> connection thus dies, does the transaction complete normally on the\n> server? If the answers to these questions are both yes, this could\n> definitely be what was happening.\n\nIt's been a while since I used pgpool with load balancing turned on, so you \nshould probably try the pgpool lists. What version?\n\n-- \nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Tue, 3 Apr 2007 09:26:02 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shared buffers, db transactions commited, and write IO on Solaris" }, { "msg_contents": "Folks,\n\nto close topic with \"LOG block size=1K\" idea - I took a time to test\nit (yes) and in best cases there is only 15% gain comparing to 8K -\nstorage protocol is quite heavy itself, so less or more data sent\nwithin it doesn't reduce service time too much... As well even this\ngain is quickly decreasing with growing workload! So, yes 8K is good\nenough and probably the most optimal choice for LOG (as well data)\nblock size.\n\nRgds,\n-Dimitri\n\n\n> Well, to check if there is a real potential gain all we need is a\n> small comparing test using PgSQL compiled with LOG block size equal to\n> say 1K and direct IO enabled.\n>\n> Rgds,\n> -Dimitri\n>\n>\n> On 3/30/07, Kenneth Marshall <[email protected]> wrote:\n> > On Fri, Mar 30, 2007 at 04:25:16PM +0200, Dimitri wrote:\n> > > The problem is while your goal is to commit as fast as possible - it's\n> > > pity to vast I/O operation speed just keeping common block size...\n> > > Let's say if your transaction modification entering into 512K - you'll\n> > > be able to write much more 512K blocks per second rather 8K per second\n> > > (for the same amount of data)... Even we rewrite probably several\n> > > times the same block with incoming transactions - it still costs on\n> > > traffic, and we will process slower even H/W can do better. Don't\n> > > think it's good, no? ;)\n> > >\n> > > Rgds,\n> > > -Dimitri\n> > >\n> > With block sizes you are always trading off overhead versus space\n> > efficiency. 
Most OS write only in 4k/8k to the underlying hardware\n> > regardless of the size of the write you issue. Issuing 16 512byte\n> > writes has much more overhead than 1 8k write. On the light transaction\n> > end, there is no real benefit to a small write and it will slow\n> > performance for high throughput environments. It would be better to,\n> > and I think that someone is looking into, batching I/O.\n> >\n> > Ken\n> >\n>\n", "msg_date": "Tue, 3 Apr 2007 18:51:56 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shared buffers, db transactions commited, and write IO on Solaris" }, { "msg_contents": "On Apr 3, 2007, at 11:51 AM, Dimitri wrote:\n\n>> Well, to check if there is a real potential gain all we need is a\n>> small comparing test using PgSQL compiled with LOG block size \n>> equal to\n>> say 1K and direct IO enabled.\n>>\n>> Rgds,\n>> -Dimitri\n>>\n>>\n>> On 3/30/07, Kenneth Marshall <[email protected]> wrote:\n>> > On Fri, Mar 30, 2007 at 04:25:16PM +0200, Dimitri wrote:\n>> > > The problem is while your goal is to commit as fast as \n>> possible - it's\n>> > > pity to vast I/O operation speed just keeping common block \n>> size...\n>> > > Let's say if your transaction modification entering into 512K \n>> - you'll\n>> > > be able to write much more 512K blocks per second rather 8K \n>> per second\n>> > > (for the same amount of data)... Even we rewrite probably several\n>> > > times the same block with incoming transactions - it still \n>> costs on\n>> > > traffic, and we will process slower even H/W can do better. Don't\n>> > > think it's good, no? ;)\n>> > >\n>> > > Rgds,\n>> > > -Dimitri\n>> > >\n>> > With block sizes you are always trading off overhead versus space\n>> > efficiency. Most OS write only in 4k/8k to the underlying hardware\n>> > regardless of the size of the write you issue. Issuing 16 512byte\n>> > writes has much more overhead than 1 8k write. On the light \n>> transaction\n>> > end, there is no real benefit to a small write and it will slow\n>> > performance for high throughput environments. It would be better \n>> to,\n>> > and I think that someone is looking into, batching I/O.\n>> >\n>> > Ken\n>> >\n>>\n> Folks,\n>\n> to close topic with \"LOG block size=1K\" idea - I took a time to test\n> it (yes) and in best cases there is only 15% gain comparing to 8K -\n> storage protocol is quite heavy itself, so less or more data sent\n> within it doesn't reduce service time too much... As well even this\n> gain is quickly decreasing with growing workload! So, yes 8K is good\n> enough and probably the most optimal choice for LOG (as well data)\n> block size.\n>\n> Rgds,\n> -Dimitri\n\nHey, man, thanks for taking the time to profile that!\n\n\nerik jones <[email protected]>\nsoftware developer\n615-296-0838\nemma(r)\n\n\n\n\nOn Apr 3, 2007, at 11:51 AM, Dimitri wrote: Well, to check if there is a real potential gain all we need is asmall comparing test using PgSQL compiled with LOG block size equal tosay 1K and direct IO enabled.Rgds,-DimitriOn 3/30/07, Kenneth Marshall <[email protected]> wrote:> On Fri, Mar 30, 2007 at 04:25:16PM +0200, Dimitri wrote:> > The problem is while your goal is to commit as fast as possible - it's> > pity to vast I/O operation speed just keeping common block size...> > Let's say if your transaction modification entering into 512K - you'll> > be able to write much more 512K blocks per second rather 8K per second> > (for the same amount of data)... 
Even we rewrite probably several> > times the same block with incoming transactions - it still costs on> > traffic, and we will process slower even H/W can do better. Don't> > think it's good, no? ;)> >> > Rgds,> > -Dimitri> >> With block sizes you are always trading off overhead versus space> efficiency. Most OS write only in 4k/8k to the underlying hardware> regardless of the size of the write you issue. Issuing 16 512byte> writes has much more overhead than 1 8k write. On the light transaction> end, there is no real benefit to a small write and it will slow> performance for high throughput environments. It would be better to,> and I think that someone is looking into, batching I/O.>> Ken>Folks,to close topic with \"LOG block size=1K\" idea - I took a time to testit (yes) and in best cases there is only 15% gain comparing to 8K -storage protocol is quite heavy itself, so less or more data sentwithin it doesn't reduce service time too much... As well even thisgain is quickly decreasing with growing workload! So, yes 8K is goodenough and probably the most optimal choice for LOG (as well data)block size.Rgds,-DimitriHey, man, thanks for taking the time to profile that! erik jones <[email protected]>software developer615-296-0838emma(r)", "msg_date": "Tue, 3 Apr 2007 14:24:35 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Shared buffers, db transactions commited, and write IO on Solaris" }, { "msg_contents": "On Fri, Mar 30, 2007 at 11:19:09AM -0500, Erik Jones wrote:\n> >On Fri, Mar 30, 2007 at 04:25:16PM +0200, Dimitri wrote:\n> >>The problem is while your goal is to commit as fast as possible - \n> >>it's\n> >>pity to vast I/O operation speed just keeping common block size...\n> >>Let's say if your transaction modification entering into 512K - \n> >>you'll\n> >>be able to write much more 512K blocks per second rather 8K per \n> >>second\n> >>(for the same amount of data)... Even we rewrite probably several\n> >>times the same block with incoming transactions - it still costs on\n> >>traffic, and we will process slower even H/W can do better. Don't\n> >>think it's good, no? ;)\n> >>\n> >>Rgds,\n> >>-Dimitri\n> >>\n> >With block sizes you are always trading off overhead versus space\n> >efficiency. Most OS write only in 4k/8k to the underlying hardware\n> >regardless of the size of the write you issue. Issuing 16 512byte\n> >writes has much more overhead than 1 8k write. On the light \n> >transaction\n> >end, there is no real benefit to a small write and it will slow\n> >performance for high throughput environments. It would be better to,\n> >and I think that someone is looking into, batching I/O.\n> >\n> >Ken\n> \n> True, and really, considering that data is only written to disk by \n> the bgwriter and at checkpoints, writes are already somewhat \n> batched. Also, Dimitri, I feel I should backtrack a little and point \n> out that it is possible to have postgres write in 512byte blocks (at \n> least for UFS which is what's in my head right now) if you set the \n> systems logical block size to 4K and fragment size to 512 bytes and \n> then set postgres's BLCKSZ to 512bytes. However, as Ken has just \n> pointed out, what you gain in space efficiency you lose in \n> performance so if you're working with a high traffic database this \n> wouldn't be a good idea.\n\nSorry for the late reply, but I was on vacation...\n\nFolks have actually benchmarked filesystem block size on linux and found\nthat block sizes larger than 8k can actually be faster. 
I suppose if you\nhad a workload that *always* worked with only individual pages it would\nbe a waste, but it doesn't take much sequential reading to tip the\nscales.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n", "msg_date": "Wed, 18 Apr 2007 12:19:59 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shared buffers, db transactions commited, and write IO on Solaris" } ]
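Since the thread keeps returning to whether pg_stat_database.xact_commit can be trusted, one independent check is to sample the counter directly and compare the resulting rate against application-level request counts. The query below is an untested sketch restricted to columns present in 8.2-era pg_stat_database; run it twice from separate sessions (or at least separate transactions, since the stats views are snapshotted per transaction) a known interval apart and divide the xact_commit delta by the elapsed seconds.

-- Sketch: snapshot the counters discussed in this thread for the current
-- database; diff two samples taken a known interval apart to get an
-- independent commits-per-second figure.
SELECT now()        AS sampled_at,
       datname,
       numbackends,
       xact_commit,
       xact_rollback,
       blks_read,
       blks_hit
  FROM pg_stat_database
 WHERE datname = current_database();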
[ { "msg_contents": "8.0.3 - Linux 2.6.18..\nFreshly vacuumed and analyzed\n\nThis database has been humming along fine for a while now, but I've got one of those sticky queries that is taking\nmuch too long to finish.\n\nAfter some digging, I've found that the planner is choosing to apply a necessary seq scan to the table. Unfortunately,\nit's scanning the whole table, when it seems that it could have joined it to a smaller table first and reduce the\namount of rows it would have to scan dramatically ( 70 million to about 5,000 ).\n\nThe table \"eventactivity\" has about 70million rows in it, index on \"incidentid\"\nThe table \"keyword_incidents\" is a temporary table and has incidentid as its primary key. It contains\n5125 rows. eventmain and eventgeo both have 2.1 million. My hope is that I can convince the planner to do the\n join to keyword_incidents *first* and then do the seq scan for the LIKE condition. Instead, it seems that it's seqscanning the \nwhole 70 million rows first and then doing the join, which takes a lot longer than I'd like to wait for it. Or, maybe I'm\nmisreading the explain output?\n\nThanks again\n\n-Dan\n---------------------------------\nHere's the query:\n\nexplain analyze \n\nselect \n *\nfrom \n\n keyword_incidents, \n\n eventactivity, \n\n eventmain, \n\n eventgeo \n\n where \n\n eventmain.incidentid = keyword_incidents.incidentid and \n\n eventgeo.incidentid = keyword_incidents.incidentid and \n\n eventactivity.incidentid = keyword_incidents.incidentid \n\n and ( recordtext like '%JOSE CHAVEZ%' )\norder by eventmain.entrydate limit 10000;\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=2388918.07..2388918.08 rows=1 width=455) (actual time=81771.186..81771.292 rows=26 loops=1)\n -> Sort (cost=2388918.07..2388918.08 rows=1 width=455) (actual time=81771.180..81771.215 rows=26 loops=1)\n Sort Key: eventmain.entrydate\n -> Nested Loop (cost=0.00..2388918.06 rows=1 width=455) (actual time=357.389..81770.982 rows=26 loops=1)\n -> Nested Loop (cost=0.00..2388913.27 rows=1 width=230) (actual time=357.292..81767.385 rows=26 loops=1)\n -> Nested Loop (cost=0.00..2388909.33 rows=1 width=122) (actual time=357.226..81764.501 rows=26 loops=1)\n -> Seq Scan on eventactivity (cost=0.00..2388874.46 rows=7 width=84) (actual time=357.147..81762.582 \nrows=27 loops=1)\n Filter: ((recordtext)::text ~~ '%JOSE CHAVEZ%'::text)\n -> Index Scan using keyword_incidentid_pkey on keyword_incidents (cost=0.00..4.97 rows=1 width=38) \n(actual time=0.034..0.036 rows=1 loops=27)\n Index Cond: ((\"outer\".incidentid)::text = (keyword_incidents.incidentid)::text)\n -> Index Scan using eventgeo_incidentid_idx on eventgeo (cost=0.00..3.93 rows=1 width=108) (actual \ntime=0.076..0.081 rows=1 loops=26)\n Index Cond: ((\"outer\".incidentid)::text = (eventgeo.incidentid)::text)\n -> Index Scan using eventmain_incidentid_idx on eventmain (cost=0.00..4.78 rows=1 width=225) (actual \ntime=0.069..0.075 rows=1 loops=26)\n Index Cond: ((\"outer\".incidentid)::text = (eventmain.incidentid)::text)\n Total runtime: 81771.529 ms\n(15 rows)\n", "msg_date": "Wed, 28 Mar 2007 20:22:25 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Planner doing seqscan before indexed join" }, { "msg_contents": "> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Dan Harris\n> \n> After some digging, I've found that the 
planner is choosing \n> to apply a necessary seq scan to the table. Unfortunately,\n> it's scanning the whole table, when it seems that it could \n> have joined it to a smaller table first and reduce the\n> amount of rows it would have to scan dramatically ( 70 \n> million to about 5,000 ).\n> \n\nJoining will reduce the amount of rows to scan for the filter, but\nperforming the join is non-trivial. If postgres is going to join two tables\ntogether without applying any filter first then it will have to do a seqscan\nof one of the tables, and if it chooses the table with 5000 rows, then it\nwill have to do 5000 index scans on a table with 70 million records. I\ndon't know which way would be faster. \n\nI wonder if you could find a way to use an index to do the text filter.\nMaybe tsearch2? I haven't used anything like that myself, maybe someone\nelse has more input.\n\n", "msg_date": "Thu, 29 Mar 2007 19:15:23 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner doing seqscan before indexed join" }, { "msg_contents": "You may try to change the planner's opinion using sub queries. Something\nlike:\n\n\nselect * from \n \n eventactivity, \n\n (select * from \n keyword_incidents, \n eventmain, \n eventgeo \n where \n eventmain.incidentid = keyword_incidents.incidentid \n and eventgeo.incidentid = keyword_incidents.incidentid \n and ( recordtext like '%JOSE CHAVEZ%' )\n )foo\n \n where eventactivity.incidentid = foo.incidentid \n order by foo.entrydate limit 10000;\n\n\nHTH,\n\nMarc\n\n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Dan Harris\nSent: Thursday, March 29, 2007 4:22 AM\nTo: PostgreSQL Performance\nSubject: [PERFORM] Planner doing seqscan before indexed join\n\n8.0.3 - Linux 2.6.18..\nFreshly vacuumed and analyzed\n\nThis database has been humming along fine for a while now, but I've got\none of those sticky queries that is taking much too long to finish.\n\nAfter some digging, I've found that the planner is choosing to apply a\nnecessary seq scan to the table. Unfortunately, it's scanning the whole\ntable, when it seems that it could have joined it to a smaller table\nfirst and reduce the amount of rows it would have to scan dramatically (\n70 million to about 5,000 ).\n\nThe table \"eventactivity\" has about 70million rows in it, index on\n\"incidentid\"\nThe table \"keyword_incidents\" is a temporary table and has incidentid as\nits primary key. It contains\n5125 rows. eventmain and eventgeo both have 2.1 million. My hope is that\nI can convince the planner to do the\n join to keyword_incidents *first* and then do the seq scan for the\nLIKE condition. Instead, it seems that it's seqscanning the whole 70\nmillion rows first and then doing the join, which takes a lot longer\nthan I'd like to wait for it. 
Or, maybe I'm misreading the explain\noutput?\n\nThanks again\n\n-Dan\n---------------------------------\nHere's the query:\n\nexplain analyze \n\nselect \n * from \n\n keyword_incidents, \n\n eventactivity, \n\n eventmain, \n\n eventgeo \n\n where \n\n eventmain.incidentid = keyword_incidents.incidentid and \n\n eventgeo.incidentid = keyword_incidents.incidentid and \n\n eventactivity.incidentid = keyword_incidents.incidentid \n\n and ( recordtext like '%JOSE CHAVEZ%' )\norder by eventmain.entrydate limit 10000;\n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n---------------------------\n Limit (cost=2388918.07..2388918.08 rows=1 width=455) (actual\ntime=81771.186..81771.292 rows=26 loops=1)\n -> Sort (cost=2388918.07..2388918.08 rows=1 width=455) (actual\ntime=81771.180..81771.215 rows=26 loops=1)\n Sort Key: eventmain.entrydate\n -> Nested Loop (cost=0.00..2388918.06 rows=1 width=455)\n(actual time=357.389..81770.982 rows=26 loops=1)\n -> Nested Loop (cost=0.00..2388913.27 rows=1\nwidth=230) (actual time=357.292..81767.385 rows=26 loops=1)\n -> Nested Loop (cost=0.00..2388909.33 rows=1\nwidth=122) (actual time=357.226..81764.501 rows=26 loops=1)\n -> Seq Scan on eventactivity\n(cost=0.00..2388874.46 rows=7 width=84) (actual time=357.147..81762.582\nrows=27 loops=1)\n Filter: ((recordtext)::text ~~ '%JOSE\nCHAVEZ%'::text)\n -> Index Scan using keyword_incidentid_pkey\non keyword_incidents (cost=0.00..4.97 rows=1 width=38) (actual\ntime=0.034..0.036 rows=1 loops=27)\n Index Cond:\n((\"outer\".incidentid)::text = (keyword_incidents.incidentid)::text)\n -> Index Scan using eventgeo_incidentid_idx on\neventgeo (cost=0.00..3.93 rows=1 width=108) (actual\ntime=0.076..0.081 rows=1 loops=26)\n Index Cond: ((\"outer\".incidentid)::text =\n(eventgeo.incidentid)::text)\n -> Index Scan using eventmain_incidentid_idx on\neventmain (cost=0.00..4.78 rows=1 width=225) (actual\ntime=0.069..0.075 rows=1 loops=26)\n Index Cond: ((\"outer\".incidentid)::text =\n(eventmain.incidentid)::text)\n Total runtime: 81771.529 ms\n(15 rows)\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n", "msg_date": "Fri, 30 Mar 2007 09:39:31 +0200", "msg_from": "\"Marc Mamin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner doing seqscan before indexed join" } ]
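One caveat on the subquery rewrite suggested above: the planner will normally flatten a plain subselect back into the outer query, so by itself it may produce exactly the same plan. A commonly used fence is to add OFFSET 0 to the subquery, which prevents that pull-up and makes the small join get evaluated first. The sketch below is untested against this schema; it keeps the recordtext filter on eventactivity, where the original plan's seq-scan filter shows that column lives, and it only encourages (does not force) an index probe on eventactivity.incidentid for the outer join.

-- Sketch only: join the small tables first, fence the subquery with OFFSET 0,
-- then probe eventactivity by incidentid and apply the LIKE filter there.
SELECT *
  FROM eventactivity,
       (SELECT keyword_incidents.incidentid, eventmain.entrydate
          FROM keyword_incidents, eventmain, eventgeo
         WHERE eventmain.incidentid = keyword_incidents.incidentid
           AND eventgeo.incidentid  = keyword_incidents.incidentid
        OFFSET 0) foo
 WHERE eventactivity.incidentid = foo.incidentid
   AND eventactivity.recordtext LIKE '%JOSE CHAVEZ%'
 ORDER BY foo.entrydate
 LIMIT 10000;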
[ { "msg_contents": "Hi David,\n\nThanks for your feedback! I'm rather a newbie at this, and I do appreciate the critique.\n\nFirst, let me correct myself: The formulas for the risk of loosing data when you loose 2 and 3 disks shouldn't have included the first term (g/n). I'll give the corrected formulas and tables at the end of the email.\n\n\n> please explain why you are saying that the risk of loosing any 1 disk is \n> 1/n. shouldn't it be probability of failure * n instead?\n\n1/n represents the assumption that all disks have an equal probability of being the next one to fail. This seems like a fair assumption in general for the active members of a stripe (not including hot spares). A possible exception would be the parity disks (because reads always skip them and writes always hit them), but that's only a consideration if the RAID configuration used dedicated disks for parity instead of distributing it across the RAID 5/6 group members. Apart from that, whether the workload is write-heavy or read-heavy, sequential or scattered, the disks in the stripe ought to handle a roughly equivalent number of iops over their lifetime.\n\n\n> following this logic the risk of loosing all 48 disks in a single group of \n> 48 would be 100%\n\nExactly. Putting all disks in one group is RAID 0 -- no data protection. If you loose even 1 active member of the stripe, the probability of loosing your data is 100%.\n\n\n> also what you are looking for is the probability of the second (and third) \n> disks failing in time X (where X is the time nessasary to notice the \n> failure, get a replacement, and rebuild the disk)\n\nYep, that's exactly what I'm looking for. That's why I said, \"these probabilities are only describing the case where we don't have enough time between disk failures to recover the array.\" My goal wasn't to estimate how long time X is. (It doesn't seem like a generalizable quantity; due partly to logistical and human factors, it's unique to each operating environment.) Instead, I start with the assumption that time X has been exceeded, and we've lost a 2nd (or 3rd) disk in the array. Given that assumption, I wanted to show the probability that the loss of the 2nd disk has caused the stripe to become unrecoverable.\n\nWe know that RAID 10 and 50 can tolerate the loss of anywhere between 1 and n/g disks, depending on how lucky you are. I wanted to quantify the amount of luck required, as a risk management tool. The duration of time X can be minimized with hot spares and attentive administrators, but the risk after exceeding time X can only be minimized (as far as I know) by configuring the RAID stripe with small enough underlying failure groups.\n\n\n> the killer is the time needed to rebuild the disk, with multi-TB arrays \n> is't sometimes faster to re-initialize the array and reload from backup \n> then it is to do a live rebuild (the kernel.org servers had a raid failure \n> recently and HPA mentioned that it took a week to rebuild the array, but \n> it would have only taken a couple days to do a restore from backup)\n\nThat's very interesting. I guess the rebuild time also would depend on how large the damaged failure group was. Under RAID 10, for example, I think you'd still only have to rebuild 1 disk from its mirror, regardless of how many other disks were in the stripe, right? 
So shortening the rebuild time may be another good motivation to keep the failure groups small.\n\n\n> add to this the fact that disk failures do not appear to be truely \n> independant from each other statisticly (see the recent studies released \n> by google and cmu), and I wouldn't bother with single-parity for a \n\nI don't think I've seen the studies you mentioned. Would you cite them please? This may not be typical of everyone's experience, but what I've seen during in-house load tests is an equal I/O rate for each disk in my stripe, using short-duration sampling intervals to avoid long-term averaging effects. This is what I expected to find, so I didn't delve deeper.\n\nCertainly it's true that some disks may be more heavily burdened than others for hours or days, but I wouldn't expect any bias from an application-driven access pattern to persist for a significant fraction of a disk's lifespan. The only influence I'd expect to bias the cumulative I/O handled by a disk over its entire life would be its role in the RAID configuration. Hot spares will have minimal wear-and-tear until they're activated. Dedicated parity disks will probably live longer than data disks, unless the workload is very heavily oriented towards small writes (e.g. logging).\n\n\n> multi-TB array. If the data is easy to recreate (including from backup) or \n> short lived (say a database of log data that cycles every month or so) I \n> would just do RAID-0 and plan on loosing the data on drive failure (this \n> assumes that you can afford the loss of service when this happens). if the \n> data is more important then I'd do dual-parity or more, along with a hot \n> spare so that the rebuild can start as soon as the first failure is \n> noticed by the system to give myself a fighting chance to save things.\n\nThat sounds like a fine plan. In my case, downtime is unacceptible (which is, of course, why I'm interested in quantifying the probabilities of data loss).\n\n\nHere are the corrected formulas:\n\nLet:\n g = number of disks in each group (e.g. mirroring = 2; single-parity = 3 or more; dual-parity = 4 or more)\n n = total number of disks\n risk of loosing any 1 disk = 1/n\nThen we have:\n risk of loosing 1 disk from a particular group = g/n\n risk of loosing 2 disks in the same group = (g-1)/(n-1)\n risk of loosing 3 disks in the same group = (g-1)/(n-1) * (g-2)/(n-2)\n\nFor the x4500, we have 48 disks. If we stripe our data across all those disks, then these are our configuration options:\n\nRAID 10 or 50 -- Mirroring or single-parity must loose 2 disks from the same group to loose data:\ndisks_per_group num_groups total_disks usable_disks risk_of_data_loss\n 2 24 48 24 2.13%\n 3 16 48 32 4.26%\n 4 12 48 36 6.38%\n 6 8 48 40 10.64%\n 8 6 48 42 14.89%\n 12 4 48 44 23.40%\n 16 3 48 45 31.91%\n 24 2 48 46 48.94%\n 48 1 48 47 100.00%\n\nRAID 60 or Z2 -- Double-parity must loose 3 disks from the same group to loose data:\ndisks_per_group num_groups total_disks usable_disks risk_of_data_loss\n 2 24 48 n/a n/a\n 3 16 48 16 0.09%\n 4 12 48 24 0.28%\n 6 8 48 32 0.93%\n 8 6 48 36 1.94%\n 12 4 48 40 5.09%\n 16 3 48 42 9.71%\n 24 2 48 44 23.40%\n 48 1 48 46 100.00%\n\n\n", "msg_date": "Thu, 29 Mar 2007 00:59:15 -0700", "msg_from": "\"Matt Smiley\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sunfire X4500 recommendations" }, { "msg_contents": "On Thu, 29 Mar 2007, Matt Smiley wrote:\n\n> Hi David,\n>\n> Thanks for your feedback! 
I'm rather a newbie at this, and I do appreciate the critique.\n>\n> First, let me correct myself: The formulas for the risk of loosing data when you loose 2 and 3 disks shouldn't have included the first term (g/n). I'll give the corrected formulas and tables at the end of the email.\n>\n>\n>> please explain why you are saying that the risk of loosing any 1 disk is\n>> 1/n. shouldn't it be probability of failure * n instead?\n>\n> 1/n represents the assumption that all disks have an equal probability of being the next one to fail. This seems like a fair assumption in general for the active members of a stripe (not including hot spares). A possible exception would be the parity disks (because reads always skip them and writes always hit them), but that's only a consideration if the RAID configuration used dedicated disks for parity instead of distributing it across the RAID 5/6 group members. Apart from that, whether the workload is write-heavy or read-heavy, sequential or scattered, the disks in the stripe ought to handle a roughly equivalent number of iops over their lifetime.\n>\n\nonly assuming that you have a 100% chance of some disk failing. if you \nhave 15 disks in one array and 60 disks in another array the chances of \nhaving _some_ failure in the 15 disk array is only 1/4 the chance of \nhaving a failure of _some_ disk in the 60 disk array\n\n>\n>> following this logic the risk of loosing all 48 disks in a single group of\n>> 48 would be 100%\n>\n> Exactly. Putting all disks in one group is RAID 0 -- no data protection. If you loose even 1 active member of the stripe, the probability of loosing your data is 100%.\n\nbut by your math, the chance of failure with dual parity if a 48 disk \nraid5 was also 100%, this is just wrong.\n\n>\n>> also what you are looking for is the probability of the second (and third)\n>> disks failing in time X (where X is the time nessasary to notice the\n>> failure, get a replacement, and rebuild the disk)\n>\n> Yep, that's exactly what I'm looking for. That's why I said, \"these \n> probabilities are only describing the case where we don't have enough \n> time between disk failures to recover the array.\" My goal wasn't to \n> estimate how long time X is. (It doesn't seem like a generalizable \n> quantity; due partly to logistical and human factors, it's unique to \n> each operating environment.) Instead, I start with the assumption that \n> time X has been exceeded, and we've lost a 2nd (or 3rd) disk in the \n> array. Given that assumption, I wanted to show the probability that the \n> loss of the 2nd disk has caused the stripe to become unrecoverable.\n\nOk, this is the chance that if you loose that N disks without replacing \nany of them how much data are you likly to loose in different arrays.\n\n> We know that RAID 10 and 50 can tolerate the loss of anywhere between 1 \n> and n/g disks, depending on how lucky you are. I wanted to quantify the \n> amount of luck required, as a risk management tool. 
The duration of \n> time X can be minimized with hot spares and attentive administrators, \n> but the risk after exceeding time X can only be minimized (as far as I \n> know) by configuring the RAID stripe with small enough underlying \n> failure groups.\n\nbut I don't think this is the question anyone is really asking.\n\nwhat people want to know isn't 'how many disks can I loose without \nreplacing them before I loose data' what they want to know is ' with this \nconfiguration (including a drive replacement time of Y for the first N \ndrives and Z for drives after that), what are the odds of loosing data'\n\nand for the second question the chance of failure of additional disks \nisn't 100%.\n\n>\n>> the killer is the time needed to rebuild the disk, with multi-TB arrays\n>> is't sometimes faster to re-initialize the array and reload from backup\n>> then it is to do a live rebuild (the kernel.org servers had a raid failure\n>> recently and HPA mentioned that it took a week to rebuild the array, but\n>> it would have only taken a couple days to do a restore from backup)\n>\n> That's very interesting. I guess the rebuild time also would depend on \n> how large the damaged failure group was. Under RAID 10, for example, I \n> think you'd still only have to rebuild 1 disk from its mirror, \n> regardless of how many other disks were in the stripe, right? So \n> shortening the rebuild time may be another good motivation to keep the \n> failure groups small.\n>\n\ncorrect, however you have to decide how much this speed is worth to you. \nif you are building a ~20TB array you can do this with ~30 drives with \nsingle or dual parity, or ~60 drives with RAID 10.\n\nremember the big cost of arrays like this isn't even the cost of the \ndrives (although you are talking an extra $20,000 or so there), but the \ncost of the power and cooling to run all those extra drives\n\n>> add to this the fact that disk failures do not appear to be truely\n>> independant from each other statisticly (see the recent studies released\n>> by google and cmu), and I wouldn't bother with single-parity for a\n>\n> I don't think I've seen the studies you mentioned. Would you cite them \n> please?\n\nhttp://labs.google.com/papers/disk_failures.pdf\n\nhttp://www.usenix.org/events/fast07/tech/schroeder/schroeder_html/index.html\n\n> This may not be typical of everyone's experience, but what I've \n> seen during in-house load tests is an equal I/O rate for each disk in my \n> stripe, using short-duration sampling intervals to avoid long-term \n> averaging effects. This is what I expected to find, so I didn't delve \n> deeper.\n>\n> Certainly it's true that some disks may be more heavily burdened than \n> others for hours or days, but I wouldn't expect any bias from an \n> application-driven access pattern to persist for a significant fraction \n> of a disk's lifespan. The only influence I'd expect to bias the \n> cumulative I/O handled by a disk over its entire life would be its role \n> in the RAID configuration. Hot spares will have minimal wear-and-tear \n> until they're activated. Dedicated parity disks will probably live \n> longer than data disks, unless the workload is very heavily oriented \n> towards small writes (e.g. logging).\n>\n>\n>> multi-TB array. 
If the data is easy to recreate (including from backup) or\n>> short lived (say a database of log data that cycles every month or so) I\n>> would just do RAID-0 and plan on loosing the data on drive failure (this\n>> assumes that you can afford the loss of service when this happens). if the\n>> data is more important then I'd do dual-parity or more, along with a hot\n>> spare so that the rebuild can start as soon as the first failure is\n>> noticed by the system to give myself a fighting chance to save things.\n>\n> That sounds like a fine plan. In my case, downtime is unacceptible \n> (which is, of course, why I'm interested in quantifying the \n> probabilities of data loss).\n>\n>\n> Here are the corrected formulas:\n>\n> Let:\n> g = number of disks in each group (e.g. mirroring = 2; single-parity = 3 or more; dual-parity = 4 or more)\n> n = total number of disks\n> risk of loosing any 1 disk = 1/n\n> Then we have:\n> risk of loosing 1 disk from a particular group = g/n\n\nassuming you loose one disk\n\n> risk of loosing 2 disks in the same group = (g-1)/(n-1)\n\nassuming that you loose two disks without replaceing either one (including \nnot having a hot-spare)\n\n> risk of loosing 3 disks in the same group = (g-1)/(n-1) * (g-2)/(n-2)\n\nassuming that you loose three disks without replacing any of them \n(including not having a hot spare)\n\n> For the x4500, we have 48 disks. If we stripe our data across all those \n> disks, then these are our configuration options:\n\n> RAID 10 or 50 -- Mirroring or single-parity must loose 2 disks from the same group to loose data:\n> disks_per_group num_groups total_disks usable_disks risk_of_data_loss\n> 2 24 48 24 2.13%\n> 3 16 48 32 4.26%\n> 4 12 48 36 6.38%\n> 6 8 48 40 10.64%\n> 8 6 48 42 14.89%\n> 12 4 48 44 23.40%\n> 16 3 48 45 31.91%\n> 24 2 48 46 48.94%\n> 48 1 48 47 100.00%\n\nhowever, back in the real world, the chances of loosing three disks is \nconsiderably less then the chance of loosing two disks. so to compare \napples to apples you need to add the following\n\nchance of data loss if useing double-parity 0% in all configurations.\n\n> RAID 60 or Z2 -- Double-parity must loose 3 disks from the same group to loose data:\n> disks_per_group num_groups total_disks usable_disks risk_of_data_loss\n> 2 24 48 n/a n/a\n> 3 16 48 16 0.09%\n> 4 12 48 24 0.28%\n> 6 8 48 32 0.93%\n> 8 6 48 36 1.94%\n> 12 4 48 40 5.09%\n> 16 3 48 42 9.71%\n> 24 2 48 44 23.40%\n> 48 1 48 46 100.00%\n\nagain, to compare apples to apples you would need to add the following \n(calculating the odds for each group, they will be scareily larger then \nthe 2-drive failure chart)\n\n> RAID 10 or 50 -- Mirroring or single-parity must loose 2 disks from the same group to loose data:\n> disks_per_group num_groups total_disks usable_disks risk_of_data_loss\n> 2 24 48 24\n> 3 16 48 32\n> 4 12 48 36\n> 6 8 48 40\n> 8 6 48 42\n> 12 4 48 44\n> 16 3 48 45\n> 24 2 48 46\n> 48 1 48 47\n\nhowever, since it's easy to add a hot-spare drive, you really need to \naccount for it. 
there's still a chance of all the drives going bad before \nthe hot-spare can be built to replace the first one, but it's a lot lower \nthen if you don't have a hot-spare and require the admins to notice and \nreplace the failed disk.\n\nif you say that there is a 10% chance of a disk failing each year \n(significnatly higher then the studies listed above, but close enough) \nthen this works out to ~0.001% chance of a drive failing per hour (a \nreasonably round number to work with)\n\nto write 750G at ~45MB/sec takes 5 hours of 100% system throughput, or ~50 \nhours at 10% of the system throughput (background rebuilding)\n\nif we cut this in half to account for inefficiancies in retrieving data \nfrom other disks to calculate pairity it can take 100 hours (just over \nfour days) to do a background rebuild, or about 0.1% chance for each disk \nof loosing a seond disk. with 48 drives this is ~5% chance of loosing \neverything with single-parity, however the odds of loosing two disks \nduring this time are .25% so double-parity is _well_ worth it.\n\nchance of loosing data before hotspare is finished rebuilding (assumes one \nhotspare per group, you may be able to share a hotspare between multiple \ngroups to get slightly higher capacity)\n\n> RAID 60 or Z2 -- Double-parity must loose 3 disks from the same group to loose data:\n> disks_per_group num_groups total_disks usable_disks risk_of_data_loss\n> 2 24 48 n/a n/a\n> 3 16 48 n/a (0.0001% with manual replacement of drive)\n> 4 12 48 12 0.0009%\n> 6 8 48 24 0.003%\n> 8 6 48 30 0.006%\n> 12 4 48 36 0.02%\n> 16 3 48 39 0.03%\n> 24 2 48 42 0.06%\n> 48 1 48 45 0.25%\n\n> RAID 10 or 50 -- Mirroring or single-parity must loose 2 disks from the same group to loose data:\n> disks_per_group num_groups total_disks usable_disks risk_of_data_loss\n> 2 24 48 n/a (~0.1% with manual replacement of drive)\n> 3 16 48 16 0.2%\n> 4 12 48 24 0.3%\n> 6 8 48 32 0.5%\n> 8 6 48 36 0.8%\n> 12 4 48 40 1.3%\n> 16 3 48 42 1.7%\n> 24 2 48 44 2.5%\n> 48 1 48 46 5%\n\nso if I've done the math correctly the odds of losing data with the \nworst-case double-parity (one large array including hotspare) are about \nthe same as the best case single parity (mirror+ hotspare), but with \nalmost triple the capacity.\n\nDavid Lang\n\n", "msg_date": "Thu, 29 Mar 2007 18:41:20 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Sunfire X4500 recommendations" } ]
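As a cross-check of the tables in this thread, the corrected formulas reduce to a one-line calculation once n = 48 is fixed. A minimal sketch, written as plain SQL only because the arithmetic is trivial; the figures are the conditional probabilities that a 2nd (or 3rd) failed disk lands in the same group as the first, assuming the extra failures happen before the array is repaired:

-- group sizes that divide the 48 disks evenly, as in the tables above
SELECT g                                        AS disks_per_group,
       48 / g                                   AS num_groups,
       round(100.0 * (g - 1) / (48 - 1), 2)     AS pct_risk_2nd_failure,
       round(100.0 * (g - 1) * (g - 2)
                   / ((48 - 1) * (48 - 2)), 2)  AS pct_risk_3rd_failure
  FROM generate_series(2, 48) AS g
 WHERE 48 % g = 0;

This reproduces the 2.13% .. 100.00% column for mirroring/single parity and the 0.09% .. 100.00% column for double parity (the g = 2 row comes out as 0 rather than n/a for the three-disk case). As the follow-up points out, these are still conditional figures: to get an absolute risk they have to be combined with the chance of the additional failures actually occurring inside the rebuild window.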
[ { "msg_contents": "Hi all.\n\nI'm running PostgreSQL v8.1.8 (under Linux Ubuntu).\n\nA function body is written as \"language sql stable\". There is just a select\nfor a search in a view with two arguments to do the search. The search is done\nwith equality comparisons.\nBoth the function call and the select alone run very fast thanks to the\nindexes on the right columns I presume.\n\nThen I create a twin function where part of the comparison is done with\nthe \"like\" operator on one of the very same columns as the previous case.\nWhile the function call is very slow, the select alone runs almost as fast\nas in the case of equality comparison.\n\nI thought that the query planner usually did a bad job on function bodies\nbecause they'd appear opaque to it.\nIn this case it seems to me that the body is opaque only if I use the \"like\"\noperator.\n\nAny hint?\n\n-- \nVincenzo Romano\n----\nMaybe Computers will never become as intelligent as Humans.\nFor sure they won't ever become so stupid. [VR-1987]\n", "msg_date": "Thu, 29 Mar 2007 20:56:07 +0200", "msg_from": "Vincenzo Romano <[email protected]>", "msg_from_op": true, "msg_subject": "Weird performance drop" }, { "msg_contents": "> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Vincenzo Romano\n>\n> I thought that the query planner usually did a bad job on \n> function bodies\n> because they'd appear opaque to it.\n> In this case it seems to me that the body is opaque only if I \n> use the \"like\"\n> operator.\n\nIf you run explain on a query that looks like \"select * from a_table where\na_column like 'foo%'\" (and you have the appropriate index) you will see that\npostgres rewrites the where clause as \"a_column >= 'foo' and a_column <\n'fop'\". I think your problem is that the query is planned when the function\nis created, and at that time postgres doesn't know the value you are\ncomparing against when you use the like operator, so postgres can't rewrite\nthe query using >= and <. The problem doesn't happen for plain equality\nbecause postgres doesn't need to know anything about what you are comparing\nagainst in order to use equality.\n\nSomebody else can correct me if I'm wrong.\n\nDave\n\n", "msg_date": "Thu, 29 Mar 2007 18:12:58 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird performance drop" }, { "msg_contents": "On Friday 30 March 2007 01:12 Dave Dutcher wrote:\n> > From: [email protected]\n> > [mailto:[email protected]] On Behalf Of\n> > Vincenzo Romano\n> >\n> > I thought that the query planner usually did a bad job on\n> > function bodies\n> > because they'd appear opaque to it.\n> > In this case it seems to me that the body is opaque only if I\n> > use the \"like\"\n> > operator.\n>\n> If you run explain on a query that looks like \"select * from a_table where\n> a_column like 'foo%'\" (and you have the appropriate index) you will see\n> that postgres rewrites the where clause as \"a_column >= 'foo' and a_column\n> < 'fop'\". I think your problem is that the query is planned when the\n> function is created, and at that time postgres doesn't know the value you\n> are comparing against when you use the like operator, so postgres can't\n> rewrite the query using >= and <. 
The problem doesn't happen for plain\n> equality because postgres doesn't need to know anything about what you are\n> comparing against in order to use equality.\n>\n> Somebody else can correct me if I'm wrong.\n>\n> Dave\n\nIs there any \"workaround\"?\n\nIn my opinion the later the query planner decisions are taken the more\neffective they can be.\nIt could be an option for the function (body) to delay any query planner\ndecision.\n\n-- \nVincenzo Romano\n----\nMaybe Computers will never become as intelligent as Humans.\nFor sure they won't ever become so stupid. [VR-1987]\n", "msg_date": "Fri, 30 Mar 2007 08:36:16 +0200", "msg_from": "Vincenzo Romano <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Weird performance drop" }, { "msg_contents": "> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Vincenzo Romano\n> \n> Is there any \"workaround\"?\n> \n> In my opinion the later the query planner decisions are taken the more\n> effective they can be.\n> It could be an option for the function (body) to delay any \n> query planner\n> decision.\n\nI think a possible workaround is to use a plpgsql function and the execute\nstatement. The docs will have more info.\n\nDave\n\n\n", "msg_date": "Fri, 30 Mar 2007 09:34:10 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird performance drop" }, { "msg_contents": "On Friday 30 March 2007 16:34 Dave Dutcher wrote:\n> > From: [email protected]\n> > [mailto:[email protected]] On Behalf Of\n> > Vincenzo Romano\n> >\n> > Is there any \"workaround\"?\n> >\n> > In my opinion the later the query planner decisions are taken the more\n> > effective they can be.\n> > It could be an option for the function (body) to delay any\n> > query planner\n> > decision.\n>\n> I think a possible workaround is to use a plpgsql function and the execute\n> statement. The docs will have more info.\n>\n> Dave\n\nAye sir. It works.\n\nThere's not much details into the documentation but the real point is\nthat the execute command of the PLPg/SQL actually says the DB to delay\nthe query planning process as late as possible,\n\nI have also managed to build search functions at runtime using the execute\ncommand with a dynamically built text variable.\nComplex, error prone but really fast.\n\n-- \nVincenzo Romano\n----\nMaybe Computers will never become as intelligent as Humans.\nFor sure they won't ever become so stupid. [VR-1987]\n", "msg_date": "Thu, 5 Apr 2007 23:52:45 +0200", "msg_from": "Vincenzo Romano <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Weird performance drop" } ]
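For reference, a minimal sketch of the workaround settled on here: a plpgsql function that assembles the statement with EXECUTE, so planning happens at call time with the concrete pattern known. The object names (search_view, value) are placeholders rather than the original poster's schema, and the loop form targets 8.1-era plpgsql:

CREATE OR REPLACE FUNCTION search_by_pattern(p_pattern text)
RETURNS SETOF search_view AS $$
DECLARE
    r search_view%ROWTYPE;
BEGIN
    -- The query text is built at call time, so the planner sees the actual
    -- pattern and can still turn LIKE 'foo%' into an index range scan.
    FOR r IN EXECUTE 'SELECT * FROM search_view WHERE value ILIKE '
                     || quote_literal(p_pattern)
    LOOP
        RETURN NEXT r;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;

Calling it looks the same as before (SELECT * FROM search_by_pattern('foo%')), and quote_literal() keeps the dynamically built string safe. The trade-off is that the statement is planned afresh on every call, and, as noted above, the dynamically built function body is more complex and error prone to maintain.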
[ { "msg_contents": "Hello,\n\nI am looking to use PostgreSQL for storing some very simple flat data\nmostly in a single table. The amount of data will be in the hundreds\nof gigabytes range. Each row is on the order of 100-300 bytes in size;\nin other words, small enough that I am expecting disk I/O to be seek\nbound (even if PostgreSQL reads a full pg page at a time, since a page\nis significantly smaller than the stripe size of the volume).\n\nThe only important performance characteristics are insertion/deletion\nperformance, and the performance of trivial SELECT queries whose WHERE\nclause tests equality on one of the columns.\n\nOther than absolute performance, an important goal is to be able to\nscale fairly linearly with the number of underlying disk drives. We\nare fully willing to take a disk seek per item selected, as long as it\nscales.\n\nTo this end I have been doing some benchmarking to see whether the\nplan is going to be feasable. On a 12 disk hardware stripe, insertion\nperformance does scale somewhat with concurrent inserters. However, I\nam seeing surprising effects with SELECT:s: a single selecter\ngenerates the same amount of disk activity as two concurrent selecters\n(I was easily expecting about twice).\n\nThe query is simple:\n\nSELECT * FROM test WHERE value = 'xxx' LIMIT 1000;\n\nNo ordering, no joins, no nothing. Selecting concurrently with two\ndifferent values of 'xxx' yields the same amount of disk activity\n(never any significant CPU activity). Note that the total amount of\ndata is too large to fit in RAM (> 500 million rows), and the number\nof distinct values in the value column is 10000. The column in the\nWHERE clause is indexed.\n\nSo my first question is - why am I not seeing this scaling? The\nabsolute amount of disk activity with a single selecter is consistent\nwith what I would expect from a SINGLE disk, which is completely\nexpected since I never thought PostgreSQL would introduce disk I/O\nconcurrency on its own. But this means that adding additional readers\ndoing random-access reads *should* scale very well with 12 underlying\ndisks in a stripe.\n\n(Note that I have seen fairly similar results on other RAID variants\ntoo, including software RAID5 (yes yes I know), in addition to the\nhardware stripe.)\n\nThese tests have been done Linux 2.6.19.3 and PostgreSQL 8.1.\n\nSecondly, I am seeing a query plan switch after a certain\nthreshold. Observe:\n\nperftest=# explain select * from test where val='7433' limit 1000; \n QUERY PLAN \n-----------------------------------------------------------------------------------------\n Limit (cost=0.00..4016.50 rows=1000 width=143)\n -> Index Scan using test_val_ix on test (cost=0.00..206620.88 rows=51443 width=143)\n Index Cond: ((val)::text = '7433'::text)\n(3 rows)\n\nNow increasing to a limit of 10000:\n\nperftest=# explain select * from test where val='7433' limit 10000;\n QUERY PLAN \n--------------------------------------------------------------------------------------\n Limit (cost=360.05..38393.36 rows=10000 width=143)\n -> Bitmap Heap Scan on test (cost=360.05..196014.82 rows=51443 width=143)\n Recheck Cond: ((val)::text = '7433'::text)\n -> Bitmap Index Scan on test_val_ix (cost=0.00..360.05 rows=51443 width=0)\n Index Cond: ((val)::text = '7433'::text)\n(5 rows)\n\nThe interesting part is that the latter query is entirely CPU bound\n(no disk I/O at all) for an extended period of time before even\nbeginning to read data from disk. 
And when it *does* start performing\ndisk I/O, the performance is about the same as for the other case. In\nother words, the change in query plan seems to do nothing but add\noverhead.\n\nWhat is the bitmap heap scan supposed to be doing that would increase\nperformance above a \"seek once per matching row\" plan? I haven't been\nable to Google my way to what the intended benefit is of a heap scan\nvs. a plain index scan.\n\n-- \n/ Peter Schuller\n\nPGP userID: 0xE9758B7D or 'Peter Schuller <[email protected]>'\nKey retrieval: Send an E-Mail to [email protected]\nE-Mail: [email protected] Web: http://www.scode.org", "msg_date": "Fri, 30 Mar 2007 07:16:45 +0200", "msg_from": "Peter Schuller <[email protected]>", "msg_from_op": true, "msg_subject": "Scaling SELECT:s with the number of disks on a stripe" }, { "msg_contents": "Hello Peter,\n\nIf you are dealing with timed data or similar, you may consider to\npartition your table(s).\n\nIn order to deal with large data, I've built a \"logical\" partition\nsystem, \nwhereas the target partition is defined by the date of my data (the date\nis part of the filenames that I import...).\n\nInstead of using the Postgres partitioning framework, I keep the tables\nboundaries within a refererence table.\nThen I've built a function that takes the different query parameters as\nargument (column list, where clause...). \nThis functions retrieve the list of tables to query from my reference\ntable and build the final query, binding \nthe different subqueries from each partition with \"UNION ALL\". \nIt also requires an additional reference table that describes the table\ncolumns (data type, behaviour , e.g. groupable,summable...)\n\n\nThis allowed me to replace many \"delete\" with \"drop table\" statements,\nwhis is probably the main advantage of the solution.\n\n\nThe biggest issue was the implementation time ;-) but I'm really happy\nwith the resulting performances.\n\nHTH,\n\nMarc\n\n\n\n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Peter\nSchuller\nSent: Friday, March 30, 2007 7:17 AM\nTo: [email protected]\nSubject: [PERFORM] Scaling SELECT:s with the number of disks on a stripe\n\nHello,\n\nI am looking to use PostgreSQL for storing some very simple flat data\nmostly in a single table. The amount of data will be in the hundreds of\ngigabytes range. Each row is on the order of 100-300 bytes in size; in\nother words, small enough that I am expecting disk I/O to be seek bound\n(even if PostgreSQL reads a full pg page at a time, since a page is\nsignificantly smaller than the stripe size of the volume).\n\nThe only important performance characteristics are insertion/deletion\nperformance, and the performance of trivial SELECT queries whose WHERE\nclause tests equality on one of the columns.\n\nOther than absolute performance, an important goal is to be able to\nscale fairly linearly with the number of underlying disk drives. We are\nfully willing to take a disk seek per item selected, as long as it\nscales.\n\nTo this end I have been doing some benchmarking to see whether the plan\nis going to be feasable. On a 12 disk hardware stripe, insertion\nperformance does scale somewhat with concurrent inserters. However, I am\nseeing surprising effects with SELECT:s: a single selecter generates the\nsame amount of disk activity as two concurrent selecters (I was easily\nexpecting about twice).\n\nThe query is simple:\n\nSELECT * FROM test WHERE value = 'xxx' LIMIT 1000;\n\nNo ordering, no joins, no nothing. 
Selecting concurrently with two\ndifferent values of 'xxx' yields the same amount of disk activity (never\nany significant CPU activity). Note that the total amount of data is too\nlarge to fit in RAM (> 500 million rows), and the number of distinct\nvalues in the value column is 10000. The column in the WHERE clause is\nindexed.\n\nSo my first question is - why am I not seeing this scaling? The absolute\namount of disk activity with a single selecter is consistent with what I\nwould expect from a SINGLE disk, which is completely expected since I\nnever thought PostgreSQL would introduce disk I/O concurrency on its\nown. But this means that adding additional readers doing random-access\nreads *should* scale very well with 12 underlying disks in a stripe.\n\n(Note that I have seen fairly similar results on other RAID variants\ntoo, including software RAID5 (yes yes I know), in addition to the\nhardware stripe.)\n\nThese tests have been done Linux 2.6.19.3 and PostgreSQL 8.1.\n\nSecondly, I am seeing a query plan switch after a certain threshold.\nObserve:\n\nperftest=# explain select * from test where val='7433' limit 1000; \n QUERY PLAN\n\n------------------------------------------------------------------------\n-----------------\n Limit (cost=0.00..4016.50 rows=1000 width=143)\n -> Index Scan using test_val_ix on test (cost=0.00..206620.88\nrows=51443 width=143)\n Index Cond: ((val)::text = '7433'::text)\n(3 rows)\n\nNow increasing to a limit of 10000:\n\nperftest=# explain select * from test where val='7433' limit 10000;\n QUERY PLAN\n\n------------------------------------------------------------------------\n--------------\n Limit (cost=360.05..38393.36 rows=10000 width=143)\n -> Bitmap Heap Scan on test (cost=360.05..196014.82 rows=51443\nwidth=143)\n Recheck Cond: ((val)::text = '7433'::text)\n -> Bitmap Index Scan on test_val_ix (cost=0.00..360.05\nrows=51443 width=0)\n Index Cond: ((val)::text = '7433'::text)\n(5 rows)\n\nThe interesting part is that the latter query is entirely CPU bound (no\ndisk I/O at all) for an extended period of time before even beginning to\nread data from disk. And when it *does* start performing disk I/O, the\nperformance is about the same as for the other case. In other words, the\nchange in query plan seems to do nothing but add overhead.\n\nWhat is the bitmap heap scan supposed to be doing that would increase\nperformance above a \"seek once per matching row\" plan? I haven't been\nable to Google my way to what the intended benefit is of a heap scan vs.\na plain index scan.\n\n--\n/ Peter Schuller\n\nPGP userID: 0xE9758B7D or 'Peter Schuller <[email protected]>'\nKey retrieval: Send an E-Mail to [email protected]\nE-Mail: [email protected] Web: http://www.scode.org\n\n", "msg_date": "Fri, 30 Mar 2007 09:07:53 +0200", "msg_from": "\"Marc Mamin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scaling SELECT:s with the number of disks on a stripe" }, { "msg_contents": "On 2007-03-30, Peter Schuller <[email protected]> wrote:\n[...]\n> Other than absolute performance, an important goal is to be able to\n> scale fairly linearly with the number of underlying disk drives. We\n> are fully willing to take a disk seek per item selected, as long as it\n> scales.\n>\n> To this end I have been doing some benchmarking to see whether the\n> plan is going to be feasable. On a 12 disk hardware stripe, insertion\n> performance does scale somewhat with concurrent inserters. 
However, I\n> am seeing surprising effects with SELECT:s: a single selecter\n> generates the same amount of disk activity as two concurrent selecters\n> (I was easily expecting about twice).\n>\n> The query is simple:\n>\n> SELECT * FROM test WHERE value = 'xxx' LIMIT 1000;\n\nI tested this on a 14-way software raid10 on freebsd, using pg 8.1.6, and\ncouldn't reproduce anything like it. With one client I get about 200 disk\nrequests per second, scaling almost exactly linearly for the first 5 or so\nclients, as expected. At 14 clients it was down to about 150 reqs/sec per\nclient, but the total throughput continued to increase with additional\nconcurrency up to about 60 clients, giving about 3600 reqs/sec (260 per\ndisk, which is about right for 10krpm scsi disks under highly concurrent\nrandom loads).\n\n> So my first question is - why am I not seeing this scaling?\n\nA good question. Have you tried testing the disks directly? e.g. create\nsome huge files, and run a few concurrent random readers on them? That\nwould test the array and the filesystem without involving postgres.\n\n> Secondly, I am seeing a query plan switch after a certain\n> threshold. Observe:\n[snip index scan changing to bitmapscan]\n\nThis is entirely expected. With the larger row count, it is more likely\n(or so the planner estimates) that rows will need to be fetched from\nadjacent or at least nearby blocks, thus a plan which fetches rows in\nphysical table order rather than index order would be expected to be\nsuperior. The planner takes into account the estimated startup cost and\nper-row cost when planning LIMIT queries; therefore it is no surprise\nthat for larger limits, it switches to a plan with a higher startup cost\nbut lower per-row cost.\n\n> The interesting part is that the latter query is entirely CPU bound\n> (no disk I/O at all) for an extended period of time before even\n> beginning to read data from disk.\n\nMost likely your index is small enough that large parts of it will be\ncached in RAM, so that the scan of the index to build the bitmap does\nnot need to hit the disk much if at all.\n\n> And when it *does* start performing\n> disk I/O, the performance is about the same as for the other case. In\n> other words, the change in query plan seems to do nothing but add\n> overhead.\n\nThis is going to depend quite a bit on the physical ordering of the data\nrelative to the indexed values.\n\n> What is the bitmap heap scan supposed to be doing that would increase\n> performance above a \"seek once per matching row\" plan? I haven't been\n> able to Google my way to what the intended benefit is of a heap scan\n> vs. a plain index scan.\n\nThe bitmap scan visits the heap in heap order, rather than index order,\nthus enabling it to take advantage of prefetch and other sequential-read\noptimizations (in the underlying OS and disk subsystem, rather than in\npg itself).\n\n-- \nAndrew, Supernews\nhttp://www.supernews.com - individual and corporate NNTP services\n", "msg_date": "Sat, 31 Mar 2007 22:55:48 -0000", "msg_from": "Andrew - Supernews <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scaling SELECT:s with the number of disks on a stripe" }, { "msg_contents": "Hello,\n\n> If you are dealing with timed data or similar, you may consider to\n> partition your table(s).\n\nUnfortunately this is not the case; the insertion is more or less\nrandom (not quite, but for the purpose of this problem it is).\n\nThanks for the pointers though. 
That is sure to be useful in some\nother context down the road.\n\n-- \n/ Peter Schuller\n\nPGP userID: 0xE9758B7D or 'Peter Schuller <[email protected]>'\nKey retrieval: Send an E-Mail to [email protected]\nE-Mail: [email protected] Web: http://www.scode.org", "msg_date": "Mon, 2 Apr 2007 12:34:44 +0200", "msg_from": "Peter Schuller <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Scaling SELECT:s with the number of disks on a stripe" }, { "msg_contents": "Hello,\n\n> > SELECT * FROM test WHERE value = 'xxx' LIMIT 1000;\n> \n> I tested this on a 14-way software raid10 on freebsd, using pg 8.1.6, and\n> couldn't reproduce anything like it. With one client I get about 200 disk\n> requests per second, scaling almost exactly linearly for the first 5 or so\n> clients, as expected. At 14 clients it was down to about 150 reqs/sec per\n> client, but the total throughput continued to increase with additional\n> concurrency up to about 60 clients, giving about 3600 reqs/sec (260 per\n> disk, which is about right for 10krpm scsi disks under highly concurrent\n> random loads).\n\nOk. That is very intersting; so there is definitely nothing\nfundamental in PG that prevents the scaling (even if on FreeBSD).\n\n> A good question. Have you tried testing the disks directly? e.g. create\n> some huge files, and run a few concurrent random readers on them? That\n> would test the array and the filesystem without involving postgres.\n\nI have confirmed that I am seeing expected performance for random\nshort and highly concurrent reads in one large (> 200 GB) file. The\nI/O is done using libaio however, so depending on implementation I\nsuppose the I/O scheduling behavior of the fs/raid driver might be\naffected compared to having a number of concurrent threads doing\nsynchronous reads. I will try to confirm performance in a way that\nwill more closely match PostgreSQL's behavior.\n\nI have to say though that I will be pretty surprised if the\nperformance is not matched in that test.\n\nIs there any chance there is some operation system conditional code in\npg itself that might affect this behavior? Some kind of purposeful\nserialization of I/O for example (even if that sounds like an\nextremely strange thing to do)?\n\n> This is entirely expected. With the larger row count, it is more likely\n> (or so the planner estimates) that rows will need to be fetched from\n> adjacent or at least nearby blocks, thus a plan which fetches rows in\n> physical table order rather than index order would be expected to be\n> superior. The planner takes into account the estimated startup cost and\n> per-row cost when planning LIMIT queries; therefore it is no surprise\n> that for larger limits, it switches to a plan with a higher startup cost\n> but lower per-row cost.\n\nRoger that, makes sense. I had misunderstood the meaning of the heap\nscan.\n\n> Most likely your index is small enough that large parts of it will be\n> cached in RAM, so that the scan of the index to build the bitmap does\n> not need to hit the disk much if at all.\n\nEven so however, several seconds of CPU activity to scan the index for\na few tens of thousands of entries sounds a bit excessive. Or does it\nnot? Because at that level, the CPU bound period alone is approaching\nthe time it would take to seek for each entry instead. 
But then I\npresume the amount of work is similar/the same for the other case,\nexcept it's being done at the beginning of the query instead of before\neach seek.\n\n-- \n/ Peter Schuller\n\nPGP userID: 0xE9758B7D or 'Peter Schuller <[email protected]>'\nKey retrieval: Send an E-Mail to [email protected]\nE-Mail: [email protected] Web: http://www.scode.org", "msg_date": "Mon, 2 Apr 2007 12:53:55 +0200", "msg_from": "Peter Schuller <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Scaling SELECT:s with the number of disks on a stripe" }, { "msg_contents": "On 2007-04-02, Peter Schuller <[email protected]> wrote:\n> I have confirmed that I am seeing expected performance for random\n> short and highly concurrent reads in one large (> 200 GB) file. The\n> I/O is done using libaio however, so depending on implementation I\n> suppose the I/O scheduling behavior of the fs/raid driver might be\n> affected compared to having a number of concurrent threads doing\n> synchronous reads. I will try to confirm performance in a way that\n> will more closely match PostgreSQL's behavior.\n>\n> I have to say though that I will be pretty surprised if the\n> performance is not matched in that test.\n\nThe next question then is whether anything in your postgres configuration\nis preventing it getting useful performance from the OS. What settings\nhave you changed in postgresql.conf? Are you using any unusual settings\nwithin the OS itself?\n\n> Is there any chance there is some operation system conditional code in\n> pg itself that might affect this behavior?\n\nUnlikely.\n\n>> Most likely your index is small enough that large parts of it will be\n>> cached in RAM, so that the scan of the index to build the bitmap does\n>> not need to hit the disk much if at all.\n>\n> Even so however, several seconds of CPU activity to scan the index for\n> a few tens of thousands of entries sounds a bit excessive. Or does it\n> not? Because at that level, the CPU bound period alone is approaching\n> the time it would take to seek for each entry instead. But then I\n> presume the amount of work is similar/the same for the other case,\n> except it's being done at the beginning of the query instead of before\n> each seek.\n\nYou're forgetting the LIMIT clause. For the straight index scan, the\nquery aborts when the LIMIT is reached having scanned only the specified\nnumber of index rows (plus any index entries that turned out to be dead\nin the heap). For the bitmap scan case, the limit can be applied only after\nthe heap scan is under way, therefore the index scan to build the bitmap\nwill need to scan ~50k rows, not the 10k specified in the limit, so the\namount of time spent scanning the index is 50 times larger than in the\nstraight index scan case.\n\nHowever, I do suspect you have a problem here somewhere, because in my\ntests the time taken to do the bitmap index scan on 50k rows, with the\nindex in cache, is on the order of 30ms (where the data is cached in\nshared_buffers) to 60ms (where the data is cached by the OS). That's on\na 2.8GHz xeon.\n\n-- \nAndrew, Supernews\nhttp://www.supernews.com - individual and corporate NNTP services\n", "msg_date": "Mon, 02 Apr 2007 17:00:28 -0000", "msg_from": "Andrew - Supernews <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scaling SELECT:s with the number of disks on a stripe" }, { "msg_contents": "Hello,\n\n> The next question then is whether anything in your postgres configuration\n> is preventing it getting useful performance from the OS. 
What settings\n> have you changed in postgresql.conf?\n\nThe only options not commented out are the following (it's not even\ntweaked for buffer sizes and such, since in this case I am not\ninterested in things like sort performance and cache locality other\nthan as an afterthought):\n\nhba_file = '/etc/postgresql/8.1/main/pg_hba.conf'\nident_file = '/etc/postgresql/8.1/main/pg_ident.conf'\nexternal_pid_file = '/var/run/postgresql/8.1-main.pid'\nlisten_addresses = '*'\nport = 5432\nmax_connections = 100\nunix_socket_directory = '/var/run/postgresql'\nssl = true\nshared_buffers = 1000\nlog_line_prefix = '%t '\nstats_command_string = on\nstats_row_level = on\nautovacuum = on\nlc_messages = 'C'\nlc_monetary = 'C'\nlc_numeric = 'C'\nlc_time = 'C'\n\n> Are you using any unusual settings within the OS itself?\n\nNo. It's a pretty standard kernel. The only local tweaking done is\nenabling/disabling various things; there are no special patches used\nor attempts to create a minimalistic kernel or anything like that.\n\n> You're forgetting the LIMIT clause. For the straight index scan, the\n> query aborts when the LIMIT is reached having scanned only the specified\n> number of index rows (plus any index entries that turned out to be dead\n> in the heap). For the bitmap scan case, the limit can be applied only after\n> the heap scan is under way, therefore the index scan to build the bitmap\n> will need to scan ~50k rows, not the 10k specified in the limit, so the\n> amount of time spent scanning the index is 50 times larger than in the\n> straight index scan case.\n\nOk - makes sense that it has to scan the entire subset of the index\nfor the value in question. I will have to tweak the CPU/disk costs\nsettings (which I have, on purpose, not yet done).\n\n> However, I do suspect you have a problem here somewhere, because in my\n> tests the time taken to do the bitmap index scan on 50k rows, with the\n> index in cache, is on the order of 30ms (where the data is cached in\n> shared_buffers) to 60ms (where the data is cached by the OS). That's on\n> a 2.8GHz xeon.\n\nThis is on a machine with 2.33GHz xeons and I wasn't trying to\nexaggerate. I timed it and it is CPU bound (in userspace; next to no\nsystem CPU usage at all) for about 15 seconds for the case of\nselecting with a limit of 10000.\n\nGiven that there is no disk activity I can't imagine any buffer sizes\nor such affecting this other than userspace vs. kernelspace CPU\nconcerns (since obviously the data being worked on is in RAM). Or am I\nmissing something?\n\nIt is worth noting that the SELECT of fewer entries is entirely disk\nbound; there is almost no CPU usage whatsoever. Even taking the\ncumulative CPU usage into account (gut feeling calculation, nothing\nscientific) and multiplying by 50 you are nowhere near 15 seconds of\nCPU boundness. So it is indeed strange.\n\n-- \n/ Peter Schuller\n\nPGP userID: 0xE9758B7D or 'Peter Schuller <[email protected]>'\nKey retrieval: Send an E-Mail to [email protected]\nE-Mail: [email protected] Web: http://www.scode.org", "msg_date": "Wed, 4 Apr 2007 08:01:16 +0200", "msg_from": "Peter Schuller <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Scaling SELECT:s with the number of disks on a stripe" }, { "msg_contents": "\nOn 4-Apr-07, at 2:01 AM, Peter Schuller wrote:\n\n> Hello,\n>\n>> The next question then is whether anything in your postgres \n>> configuration\n>> is preventing it getting useful performance from the OS. 
What \n>> settings\n>> have you changed in postgresql.conf?\n>\n> The only options not commented out are the following (it's not even\n> tweaked for buffer sizes and such, since in this case I am not\n> interested in things like sort performance and cache locality other\n> than as an afterthought):\n>\n> hba_file = '/etc/postgresql/8.1/main/pg_hba.conf'\n> ident_file = '/etc/postgresql/8.1/main/pg_ident.conf'\n> external_pid_file = '/var/run/postgresql/8.1-main.pid'\n> listen_addresses = '*'\n> port = 5432\n> max_connections = 100\n> unix_socket_directory = '/var/run/postgresql'\n> ssl = true\n> shared_buffers = 1000\nThis is way too low, if this 8.x then set it to 25% of available \nmemory, and effective cache should be 3x that\n> log_line_prefix = '%t '\n> stats_command_string = on\n> stats_row_level = on\n> autovacuum = on\n> lc_messages = 'C'\n> lc_monetary = 'C'\n> lc_numeric = 'C'\n> lc_time = 'C'\n>\n>> Are you using any unusual settings within the OS itself?\n>\n> No. It's a pretty standard kernel. The only local tweaking done is\n> enabling/disabling various things; there are no special patches used\n> or attempts to create a minimalistic kernel or anything like that.\n>\n>> You're forgetting the LIMIT clause. For the straight index scan, the\n>> query aborts when the LIMIT is reached having scanned only the \n>> specified\n>> number of index rows (plus any index entries that turned out to be \n>> dead\n>> in the heap). For the bitmap scan case, the limit can be applied \n>> only after\n>> the heap scan is under way, therefore the index scan to build the \n>> bitmap\n>> will need to scan ~50k rows, not the 10k specified in the limit, \n>> so the\n>> amount of time spent scanning the index is 50 times larger than in \n>> the\n>> straight index scan case.\n>\n> Ok - makes sense that it has to scan the entire subset of the index\n> for the value in question. I will have to tweak the CPU/disk costs\n> settings (which I have, on purpose, not yet done).\n>\n>> However, I do suspect you have a problem here somewhere, because \n>> in my\n>> tests the time taken to do the bitmap index scan on 50k rows, with \n>> the\n>> index in cache, is on the order of 30ms (where the data is cached in\n>> shared_buffers) to 60ms (where the data is cached by the OS). \n>> That's on\n>> a 2.8GHz xeon.\n>\n> This is on a machine with 2.33GHz xeons and I wasn't trying to\n> exaggerate. I timed it and it is CPU bound (in userspace; next to no\n> system CPU usage at all) for about 15 seconds for the case of\n> selecting with a limit of 10000.\n>\n> Given that there is no disk activity I can't imagine any buffer sizes\n> or such affecting this other than userspace vs. kernelspace CPU\n> concerns (since obviously the data being worked on is in RAM). Or am I\n> missing something?\n>\n> It is worth noting that the SELECT of fewer entries is entirely disk\n> bound; there is almost no CPU usage whatsoever. Even taking the\n> cumulative CPU usage into account (gut feeling calculation, nothing\n> scientific) and multiplying by 50 you are nowhere near 15 seconds of\n> CPU boundness. 
So it is indeed strange.\n>\n> -- \n> / Peter Schuller\n>\n> PGP userID: 0xE9758B7D or 'Peter Schuller \n> <[email protected]>'\n> Key retrieval: Send an E-Mail to [email protected]\n> E-Mail: [email protected] Web: http://www.scode.org\n>\n\n", "msg_date": "Wed, 4 Apr 2007 07:52:48 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scaling SELECT:s with the number of disks on a stripe" }, { "msg_contents": "On 2007-04-04, Peter Schuller <[email protected]> wrote:\n>> The next question then is whether anything in your postgres configuration\n>> is preventing it getting useful performance from the OS. What settings\n>> have you changed in postgresql.conf?\n>\n> The only options not commented out are the following (it's not even\n> tweaked for buffer sizes and such, since in this case I am not\n> interested in things like sort performance and cache locality other\n> than as an afterthought):\n>\n> shared_buffers = 1000\n\nI'd always do benchmarks with a realistic value of shared_buffers (i.e.\nmuch higher than that).\n\nAnother thought that comes to mind is that the bitmap index scan does\ndepend on the size of work_mem.\n\nTry increasing your shared_buffers to a reasonable working value (say\n10%-15% of RAM - I was testing on a machine with 4GB of RAM, using a\nshared_buffers setting of 50000), and increase work_mem to 16364, and\nsee if there are any noticable changes in behaviour.\n\n-- \nAndrew, Supernews\nhttp://www.supernews.com - individual and corporate NNTP services\n", "msg_date": "Wed, 04 Apr 2007 12:03:20 -0000", "msg_from": "Andrew - Supernews <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scaling SELECT:s with the number of disks on a stripe" }, { "msg_contents": "Hello,\n\n> I'd always do benchmarks with a realistic value of shared_buffers (i.e.\n> much higher than that).\n> \n> Another thought that comes to mind is that the bitmap index scan does\n> depend on the size of work_mem.\n> \n> Try increasing your shared_buffers to a reasonable working value (say\n> 10%-15% of RAM - I was testing on a machine with 4GB of RAM, using a\n> shared_buffers setting of 50000), and increase work_mem to 16364, and\n> see if there are any noticable changes in behaviour.\n\nIncreasing the buffer size and work_mem did have a significant\neffect. I can understand it in the case of the heap scan, but I am\nstill surprised at the index scan. Could pg be serializing the entire\nquery as a result of insufficient buffers/work_mem to satisfy multiple\nconcurrent queries?\n\nWith both turned up, not only is the heap scan no longer visibly CPU\nbound, I am seeing some nice scaling in terms of disk I/O. I have not\nyet benchmarked to the point of being able to say whether it's\nentirely linear, but it certainly seems to at least be approaching the\nballpark.\n\nThank you for the help! I guess I made a bad call not tweaking\nthis. My thinking was that I explicitly did not want to turn it up so\nthat I could benchmark the raw performance of disk I/O, rather than\nhaving things be cached in memory more than it would already be. 
But\napparantly it had other side-effects I did not consider.\n\nThanks again,\n\n-- \n/ Peter Schuller\n\nPGP userID: 0xE9758B7D or 'Peter Schuller <[email protected]>'\nKey retrieval: Send an E-Mail to [email protected]\nE-Mail: [email protected] Web: http://www.scode.org", "msg_date": "Wed, 4 Apr 2007 19:44:48 +0200", "msg_from": "Peter Schuller <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Scaling SELECT:s with the number of disks on a stripe" } ]
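For reference, the postgresql.conf changes that made the difference in this thread, written out. The numbers are illustrative rather than prescriptive: the advice above amounted to shared_buffers at roughly 10-25% of RAM (counted in 8 kB pages on 8.1), a work_mem large enough for the bitmap index scans, and an effective_cache_size reflecting what the OS page cache is likely holding (about 3x shared_buffers was suggested):

shared_buffers = 50000          # 8 kB pages, ~400 MB; the slow runs used 1000
work_mem = 16384                # kB available per sort/hash/bitmap operation
effective_cache_size = 150000   # 8 kB pages, rough size of the OS cache

As reported above, raising shared_buffers and work_mem from their near-default values was what removed the CPU-bound behaviour and brought back the expected I/O scaling across the disks in the stripe.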
[ { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nHi all,\ntake a look at those plans:\n\n\ntest=# explain analyze SELECT COUNT(id) FROM t_oa_2_00_card WHERE pvcp in (select id from l_pvcp where value ilike '%pi%');\n QUERY PLAN\n- ---------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=154279.01..154279.01 rows=1 width=8) (actual time=4010.094..4010.096 rows=1 loops=1)\n -> Hash IN Join (cost=2.22..153835.49 rows=177404 width=8) (actual time=2.908..4001.814 rows=7801 loops=1)\n Hash Cond: (\"outer\".pvcp = \"inner\".id)\n -> Seq Scan on t_oa_2_00_card (cost=0.00..147670.82 rows=877682 width=12) (actual time=0.030..2904.522 rows=877682 loops=1)\n -> Hash (cost=2.17..2.17 rows=19 width=4) (actual time=0.093..0.093 rows=1 loops=1)\n -> Seq Scan on l_pvcp (cost=0.00..2.17 rows=19 width=4) (actual time=0.066..0.081 rows=1 loops=1)\n Filter: (value ~~* '%pi%'::text)\n Total runtime: 4010.413 ms\n(8 rows)\n\ntest=# explain analyze SELECT COUNT(id) FROM t_oa_2_00_card WHERE pvcp in (select id from l_pvcp where value ilike 'pi');\n QUERY PLAN\n- ----------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=93540.82..93540.83 rows=1 width=8) (actual time=55.333..55.334 rows=1 loops=1)\n -> Nested Loop (cost=84.60..93447.44 rows=37348 width=8) (actual time=2.730..46.770 rows=7801 loops=1)\n -> HashAggregate (cost=2.18..2.22 rows=4 width=4) (actual time=0.089..0.092 rows=1 loops=1)\n -> Seq Scan on l_pvcp (cost=0.00..2.17 rows=4 width=4) (actual time=0.065..0.081 rows=1 loops=1)\n Filter: (value ~~* 'pi'::text)\n -> Bitmap Heap Scan on t_oa_2_00_card (cost=82.42..23216.95 rows=11548 width=12) (actual time=2.633..29.566 rows=7801 loops=1)\n Recheck Cond: (t_oa_2_00_card.pvcp = \"outer\".id)\n -> Bitmap Index Scan on i3_t_oa_2_00_card (cost=0.00..82.42 rows=11548 width=0) (actual time=2.050..2.050 rows=7801 loops=1)\n Index Cond: (t_oa_2_00_card.pvcp = \"outer\".id)\n Total runtime: 55.454 ms\n(10 rows)\n\n\nIsn't too much choose a sequential scan due to 19 estimated rows when with 4 estimated does a correct index scan ?\n\n\nRegards\nGaetano Mendola\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.5 (MingW32)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFGDNlB7UpzwH2SGd4RAjY8AJ9yrIaQe297m3Lh7+ZVM4i9hoqlYQCeJFGL\nz00RLwJ5yR/7bOT2TVx+JVA=\n=1lOI\n-----END PGP SIGNATURE-----\n", "msg_date": "Fri, 30 Mar 2007 11:32:49 +0200", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "Wrong plan sequential scan instead of an index one" }, { "msg_contents": "Gaetano Mendola wrote:\n> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1\n> \n> Hi all, take a look at those plans:\n> \n> \n> test=# explain analyze SELECT COUNT(id) FROM t_oa_2_00_card WHERE\n> pvcp in (select id from l_pvcp where value ilike '%pi%');\n\n> -> Hash IN Join (cost=2.22..153835.49 rows=177404 width=8) (actual\n> time=2.908..4001.814 rows=7801 loops=1) Hash Cond: (\"outer\".pvcp =\n> \"inner\".id)\n\n> Isn't too much choose a sequential scan due to 19 estimated rows when\n> with 4 estimated does a correct index scan ?\n\nI don't think it's the matches on l_pvcp that's the problem, it's the \nfact that it thinks its getting 177404 rows matching the IN.\n\nNow, why 19 rows from the subquery should produce such a large estimate \nin the outer query I'm not sure. 
Any strange distribution of values on pvcp?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 30 Mar 2007 10:48:12 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong plan sequential scan instead of an index one" }, { "msg_contents": "> Hi all,\n> take a look at those plans:\n\nTry changing random_page_cost from the default 4 to 2 in postgresql.conf:\n\nrandom_page_cost = 2\n\nThe default in postgresql is somewhat conservative. This setting\nindicates for postgresql how fast your disks are, the lower the\nfaster.\n\nCould this setting be changed to 2 as default rather than 4?\n\nregards\nClaus\n", "msg_date": "Fri, 30 Mar 2007 11:51:03 +0200", "msg_from": "\"Claus Guttesen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong plan sequential scan instead of an index one" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nClaus Guttesen wrote:\n>> Hi all,\n>> take a look at those plans:\n> \n> Try changing random_page_cost from the default 4 to 2 in postgresql.conf:\n> \n> random_page_cost = 2\n> \n> The default in postgresql is somewhat conservative. This setting\n> indicates for postgresql how fast your disks are, the lower the\n> faster.\n> \n> Could this setting be changed to 2 as default rather than 4?\n\nI have tuned that number already at 2.5, lowering it to 2 doesn't change\nthe plan.\n\nRegards\nGaetano Mendola\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.5 (MingW32)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFGDOGa7UpzwH2SGd4RAjvaAKDAbz/vxwyOBPCILGpw8rBSvTFMtACfRPBe\nyMge0RFfww0ef7xrGBLal7o=\n=k+RM\n-----END PGP SIGNATURE-----\n", "msg_date": "Fri, 30 Mar 2007 12:08:26 +0200", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wrong plan sequential scan instead of an index one" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nRichard Huxton wrote:\n> Gaetano Mendola wrote:\n>> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1\n>>\n>> Hi all, take a look at those plans:\n>>\n>>\n>> test=# explain analyze SELECT COUNT(id) FROM t_oa_2_00_card WHERE\n>> pvcp in (select id from l_pvcp where value ilike '%pi%');\n> \n>> -> Hash IN Join (cost=2.22..153835.49 rows=177404 width=8) (actual\n>> time=2.908..4001.814 rows=7801 loops=1) Hash Cond: (\"outer\".pvcp =\n>> \"inner\".id)\n> \n>> Isn't too much choose a sequential scan due to 19 estimated rows when\n>> with 4 estimated does a correct index scan ?\n> \n> I don't think it's the matches on l_pvcp that's the problem, it's the\n> fact that it thinks its getting 177404 rows matching the IN.\n> \n> Now, why 19 rows from the subquery should produce such a large estimate\n> in the outer query I'm not sure. 
Any strange distribution of values on\n> pvcp?\n\nI don't know what do you mean for strange, this is the distribution:\n\ntest=# select count(*) from t_oa_2_00_card;\n count\n- --------\n 877682\n(1 row)\n\ntest=# select count(*), pvcp from t_oa_2_00_card group by pvcp;\n count | pvcp\n- -------+------\n 13 |\n 2 | 94\n 57 | 93\n 250 | 90\n 8158 | 89\n 4535 | 88\n 3170 | 87\n 13711 | 86\n 5442 | 85\n 2058 | 84\n 44 | 83\n 1 | 82\n 4 | 80\n 1 | 79\n 14851 | 78\n 12149 | 77\n 149 | 76\n 9 | 75\n 4 | 74\n 2 | 73\n 5 | 72\n 28856 | 71\n 12847 | 70\n 8183 | 69\n 11246 | 68\n 9232 | 67\n 14433 | 66\n 13970 | 65\n 3616 | 64\n 2996 | 63\n 7801 | 62\n 3329 | 61\n 949 | 60\n 35168 | 59\n 18752 | 58\n 1719 | 57\n 1031 | 56\n 1585 | 55\n 2125 | 54\n 9007 | 53\n 22060 | 52\n 2800 | 51\n 5629 | 50\n 16970 | 49\n 8254 | 48\n 11448 | 47\n 20253 | 46\n 3637 | 45\n 13876 | 44\n 19002 | 43\n 17940 | 42\n 5022 | 41\n 24478 | 40\n 2374 | 39\n 4885 | 38\n 3779 | 37\n 3532 | 36\n 11783 | 35\n 15843 | 34\n 14546 | 33\n 29171 | 32\n 5048 | 31\n 13411 | 30\n 6746 | 29\n 375 | 28\n 9244 | 27\n 10577 | 26\n 36096 | 25\n 3827 | 24\n 29497 | 23\n 20362 | 22\n 8068 | 21\n 2936 | 20\n 661 | 19\n 8224 | 18\n 3016 | 17\n 7731 | 16\n 8792 | 15\n 4486 | 14\n 3 | 13\n 6859 | 12\n 4576 | 11\n 13377 | 10\n 14578 | 9\n 6991 | 8\n 52714 | 7\n 6477 | 6\n 11445 | 5\n 24690 | 4\n 10522 | 3\n 2917 | 2\n 34694 | 1\n(92 rows)\n\n\nI think that estimate is something like: 877682 / 92 * 19\n\n\nRegards\nGaetano Mendola\n\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.5 (MingW32)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFGDONZ7UpzwH2SGd4RAhs3AKCYWgyn3vkzDvhWl/tF1TRs/nDT7QCeJDZu\nk9hQ0WBS1cFHcCjIs3jca0Y=\n=RIDE\n-----END PGP SIGNATURE-----\n", "msg_date": "Fri, 30 Mar 2007 12:15:53 +0200", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wrong plan sequential scan instead of an index one" }, { "msg_contents": "Gaetano Mendola wrote:\n> \n> Richard Huxton wrote:\n>>\n>> Now, why 19 rows from the subquery should produce such a large estimate\n>> in the outer query I'm not sure. Any strange distribution of values on\n>> pvcp?\n> \n> I don't know what do you mean for strange, this is the distribution:\n> \n> test=# select count(*) from t_oa_2_00_card;\n> count\n> - --------\n> 877682\n> (1 row)\n> \n> test=# select count(*), pvcp from t_oa_2_00_card group by pvcp;\n> count | pvcp\n> - -------+------\n> (92 rows)\n> \n> \n> I think that estimate is something like: 877682 / 92 * 19\n\nSo if you actually had 19 matches for '%pi%' it might be a sensible plan \nthen. I'm afraid I don't know of any way to improve PG's prediction on \nhow many matches you'll get for a substring pattern though.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 30 Mar 2007 11:33:25 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong plan sequential scan instead of an index one" }, { "msg_contents": "\nI don't know about postgres, but in oracle it could be better to write:\n\nSELECT COUNT(distinct c.id)\nFROM t_oa_2_00_card c,l_pvcp l\nWHERE l.value ilike '%pi%' and c.pvcp=l.id;\n\nor\n\nSELECT COUNT(c.id) \nFROM t_oa_2_00_card c,\n(select distinct id from l_pvcp where value ilike '%pi%') l\nWHERE c.pvcp=l.id;\n\ndepending how many rows, what kind of rows, ... 
are in l_pvcp table.\n\nhaving index in t_oa_2_00_card.pvcp can slow queries in oracle.\n\nIsmo\n\nOn Fri, 30 Mar 2007, Gaetano Mendola wrote:\n\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n> \n> Richard Huxton wrote:\n> > Gaetano Mendola wrote:\n> >> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1\n> >>\n> >> Hi all, take a look at those plans:\n> >>\n> >>\n> >> test=# explain analyze SELECT COUNT(id) FROM t_oa_2_00_card WHERE\n> >> pvcp in (select id from l_pvcp where value ilike '%pi%');\n> > \n> >> -> Hash IN Join (cost=2.22..153835.49 rows=177404 width=8) (actual\n> >> time=2.908..4001.814 rows=7801 loops=1) Hash Cond: (\"outer\".pvcp =\n> >> \"inner\".id)\n> > \n> >> Isn't too much choose a sequential scan due to 19 estimated rows when\n> >> with 4 estimated does a correct index scan ?\n> > \n> > I don't think it's the matches on l_pvcp that's the problem, it's the\n> > fact that it thinks its getting 177404 rows matching the IN.\n> > \n> > Now, why 19 rows from the subquery should produce such a large estimate\n> > in the outer query I'm not sure. Any strange distribution of values on\n> > pvcp?\n> \n> I don't know what do you mean for strange, this is the distribution:\n> \n> test=# select count(*) from t_oa_2_00_card;\n> count\n> - --------\n> 877682\n> (1 row)\n> \n> test=# select count(*), pvcp from t_oa_2_00_card group by pvcp;\n> count | pvcp\n> - -------+------\n> 13 |\n> 2 | 94\n> 57 | 93\n> 250 | 90\n> 8158 | 89\n> 4535 | 88\n> 3170 | 87\n> 13711 | 86\n> 5442 | 85\n> 2058 | 84\n> 44 | 83\n> 1 | 82\n> 4 | 80\n> 1 | 79\n> 14851 | 78\n> 12149 | 77\n> 149 | 76\n> 9 | 75\n> 4 | 74\n> 2 | 73\n> 5 | 72\n> 28856 | 71\n> 12847 | 70\n> 8183 | 69\n> 11246 | 68\n> 9232 | 67\n> 14433 | 66\n> 13970 | 65\n> 3616 | 64\n> 2996 | 63\n> 7801 | 62\n> 3329 | 61\n> 949 | 60\n> 35168 | 59\n> 18752 | 58\n> 1719 | 57\n> 1031 | 56\n> 1585 | 55\n> 2125 | 54\n> 9007 | 53\n> 22060 | 52\n> 2800 | 51\n> 5629 | 50\n> 16970 | 49\n> 8254 | 48\n> 11448 | 47\n> 20253 | 46\n> 3637 | 45\n> 13876 | 44\n> 19002 | 43\n> 17940 | 42\n> 5022 | 41\n> 24478 | 40\n> 2374 | 39\n> 4885 | 38\n> 3779 | 37\n> 3532 | 36\n> 11783 | 35\n> 15843 | 34\n> 14546 | 33\n> 29171 | 32\n> 5048 | 31\n> 13411 | 30\n> 6746 | 29\n> 375 | 28\n> 9244 | 27\n> 10577 | 26\n> 36096 | 25\n> 3827 | 24\n> 29497 | 23\n> 20362 | 22\n> 8068 | 21\n> 2936 | 20\n> 661 | 19\n> 8224 | 18\n> 3016 | 17\n> 7731 | 16\n> 8792 | 15\n> 4486 | 14\n> 3 | 13\n> 6859 | 12\n> 4576 | 11\n> 13377 | 10\n> 14578 | 9\n> 6991 | 8\n> 52714 | 7\n> 6477 | 6\n> 11445 | 5\n> 24690 | 4\n> 10522 | 3\n> 2917 | 2\n> 34694 | 1\n> (92 rows)\n> \n> \n> I think that estimate is something like: 877682 / 92 * 19\n> \n> \n> Regards\n> Gaetano Mendola\n> \n> \n> \n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1.2.5 (MingW32)\n> Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n> \n> iD8DBQFGDONZ7UpzwH2SGd4RAhs3AKCYWgyn3vkzDvhWl/tF1TRs/nDT7QCeJDZu\n> k9hQ0WBS1cFHcCjIs3jca0Y=\n> =RIDE\n> -----END PGP SIGNATURE-----\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n", "msg_date": "Fri, 30 Mar 2007 13:43:53 +0300 (EEST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Wrong plan sequential scan instead of an index one" }, { "msg_contents": "On Fri, Mar 30, 2007 at 12:08:26PM +0200, Gaetano Mendola wrote:\n> Claus Guttesen wrote:\n> > Try changing random_page_cost 
from the default 4 to 2 in postgresql.conf:\n> > \n> > random_page_cost = 2\n> \n> I have tuned that number already at 2.5, lowering it to 2 doesn't change\n> the plan.\n\nThe following 19-fold overestimate is influencing the rest of the\nplan:\n\n -> Seq Scan on l_pvcp (cost=0.00..2.17 rows=19 width=4) (actual time=0.066..0.081 rows=1 loops=1)\n Filter: (value ~~* '%pi%'::text)\n\nHave you tried increasing the statistics target on l_pvcp.value?\nI ran your queries against canned data in 8.2.3 and better statistics\nresulted in more accurate row count estimates for this and other\nparts of the plan. I don't recall if estimates for non-leading-character\nmatches in earlier versions can benefit from better statistics.\n\n-- \nMichael Fuhr\n", "msg_date": "Fri, 30 Mar 2007 04:46:11 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong plan sequential scan instead of an index one" }, { "msg_contents": "On Fri, Mar 30, 2007 at 04:46:11AM -0600, Michael Fuhr wrote:\n> Have you tried increasing the statistics target on l_pvcp.value?\n> I ran your queries against canned data in 8.2.3 and better statistics\n> resulted in more accurate row count estimates for this and other\n> parts of the plan. I don't recall if estimates for non-leading-character\n> matches in earlier versions can benefit from better statistics.\n\nThis might work only in 8.2. I see the following in the Release Notes:\n\n* Improve the optimizer's selectivity estimates for LIKE, ILIKE,\n and regular expression operations (Tom)\n\n-- \nMichael Fuhr\n", "msg_date": "Fri, 30 Mar 2007 04:55:34 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong plan sequential scan instead of an index one" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nMichael Fuhr wrote:\n> On Fri, Mar 30, 2007 at 12:08:26PM +0200, Gaetano Mendola wrote:\n>> Claus Guttesen wrote:\n>>> Try changing random_page_cost from the default 4 to 2 in postgresql.conf:\n>>>\n>>> random_page_cost = 2\n>> I have tuned that number already at 2.5, lowering it to 2 doesn't change\n>> the plan.\n> \n> The following 19-fold overestimate is influencing the rest of the\n> plan:\n> \n> -> Seq Scan on l_pvcp (cost=0.00..2.17 rows=19 width=4) (actual time=0.066..0.081 rows=1 loops=1)\n> Filter: (value ~~* '%pi%'::text)\n> \n> Have you tried increasing the statistics target on l_pvcp.value?\n> I ran your queries against canned data in 8.2.3 and better statistics\n> resulted in more accurate row count estimates for this and other\n> parts of the plan. 
I don't recall if estimates for non-leading-character\n> matches in earlier versions can benefit from better statistics.\n> \n\n\ntest=# alter table l_pvcp alter column value set statistics 1000;\nALTER TABLE\ntest=# analyze l_pvcp;\nANALYZE\ntest=# explain analyze SELECT COUNT(id) FROM t_oa_2_00_card WHERE pvcp in (select id from l_pvcp where value ilike '%pi%');\n QUERY PLAN\n- ---------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=154321.83..154321.84 rows=1 width=8) (actual time=4948.627..4948.628 rows=1 loops=1)\n -> Hash IN Join (cost=2.22..153877.08 rows=177898 width=8) (actual time=2.262..4940.395 rows=7801 loops=1)\n Hash Cond: (\"outer\".pvcp = \"inner\".id)\n -> Seq Scan on t_oa_2_00_card (cost=0.00..147695.25 rows=880125 width=12) (actual time=0.040..3850.074 rows=877682 loops=1)\n -> Hash (cost=2.17..2.17 rows=19 width=4) (actual time=0.073..0.073 rows=1 loops=1)\n -> Seq Scan on l_pvcp (cost=0.00..2.17 rows=19 width=4) (actual time=0.052..0.067 rows=1 loops=1)\n Filter: (value ~~* '%pi%'::text)\n Total runtime: 4948.717 ms\n(8 rows)\n\n\nand nothing changed.\n\n\nRegards\nGaetano Mendola\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.5 (MingW32)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFGDPVS7UpzwH2SGd4RAp+DAJ9Z5HdDcKx9rOQDbm+uAdb8uEc8OgCgjGmM\nZ351j5icCHT4yMOLEu3ZcJY=\n=CY1c\n-----END PGP SIGNATURE-----\n", "msg_date": "Fri, 30 Mar 2007 13:32:34 +0200", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wrong plan sequential scan instead of an index one" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nMichael Fuhr wrote:\n> On Fri, Mar 30, 2007 at 04:46:11AM -0600, Michael Fuhr wrote:\n>> Have you tried increasing the statistics target on l_pvcp.value?\n>> I ran your queries against canned data in 8.2.3 and better statistics\n>> resulted in more accurate row count estimates for this and other\n>> parts of the plan. I don't recall if estimates for non-leading-character\n>> matches in earlier versions can benefit from better statistics.\n> \n> This might work only in 8.2. I see the following in the Release Notes:\n> \n> * Improve the optimizer's selectivity estimates for LIKE, ILIKE,\n> and regular expression operations (Tom)\n\n\nI will try same select on a 8.2 ( that one was a 8.1 ) and I'll let you\nknow.\n\nRegards\nGaetano Mendola\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.5 (MingW32)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFGDPXk7UpzwH2SGd4RAsQcAKCs5sh3mYuE2TMdbtdxxgSOs989JACglT1H\n44s1hJZJ5upBzIPwLigoxa4=\n=Aas2\n-----END PGP SIGNATURE-----\n", "msg_date": "Fri, 30 Mar 2007 13:35:00 +0200", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wrong plan sequential scan instead of an index one" }, { "msg_contents": "Gaetano Mendola wrote:\n> \n> The match 19 for '%pi%' is estimated, the real matches are:\n> \n> test=# select id from l_pvcp where value ilike '%pi%';\n> id\n> - ----\n> 62\n> (1 row)\n> \n> \n> test=# select id from l_pvcp where value ilike 'pi';\n> id\n> - ----\n> 62\n> (1 row)\n> \n> so one row in both cases, that's why I expect for both same plan.\n\nAh, but it's got no way of knowing what matches you'll get for \n'%anything%'. 
There's no easy way to get statistics for matching substrings.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 30 Mar 2007 12:48:18 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong plan sequential scan instead of an index one" }, { "msg_contents": "Gaetano Mendola wrote:\n> Michael Fuhr wrote:\n>> On Fri, Mar 30, 2007 at 04:46:11AM -0600, Michael Fuhr wrote:\n>>> Have you tried increasing the statistics target on l_pvcp.value?\n>>> I ran your queries against canned data in 8.2.3 and better statistics\n>>> resulted in more accurate row count estimates for this and other\n>>> parts of the plan. I don't recall if estimates for non-leading-character\n>>> matches in earlier versions can benefit from better statistics.\n>> This might work only in 8.2. I see the following in the Release Notes:\n>>\n>> * Improve the optimizer's selectivity estimates for LIKE, ILIKE,\n>> and regular expression operations (Tom)\n> \n> \n> I will try same select on a 8.2 ( that one was a 8.1 ) and I'll let you\n> know.\n\nYou will also need to set statistics for the column to at least 100 to \ntrigger the improved selectivity estimate if memory serves.\n\nNot enough time to check the code, but Tom could better advise.\n\n\nBest regards\n--\nMatteo Beccati\nhttp://phpadsnew.com\nhttp://phppgads.com\n", "msg_date": "Fri, 30 Mar 2007 15:36:26 +0200", "msg_from": "Matteo Beccati <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong plan sequential scan instead of an index one" }, { "msg_contents": "Richard Huxton <[email protected]> writes:\n> Ah, but it's got no way of knowing what matches you'll get for \n> '%anything%'. There's no easy way to get statistics for matching substrings.\n\n8.2 actually tries the match on the most-common-values list, if said\nlist is big enough (I think the threshold is stats target = 100).\nNot sure if that will help here, though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Mar 2007 12:13:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong plan sequential scan instead of an index one " }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nTom Lane wrote:\n> Richard Huxton <[email protected]> writes:\n>> Ah, but it's got no way of knowing what matches you'll get for \n>> '%anything%'. 
There's no easy way to get statistics for matching substrings.\n> \n> 8.2 actually tries the match on the most-common-values list, if said\n> list is big enough (I think the threshold is stats target = 100).\n> Not sure if that will help here, though.\n\nI didn't change the stats target and I obtain on a 8.2 engine the result I\nwas expecting.\n\n\ntest=# explain analyze SELECT COUNT(id) FROM t_oa_2_00_card WHERE pvcp in (select id from l_pvcp where value ilike '%pi%');\n QUERY PLAN\n- ---------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=163228.76..163228.77 rows=1 width=8) (actual time=23.398..23.398 rows=1 loops=1)\n -> Nested Loop (cost=74.71..163020.31 rows=83380 width=8) (actual time=2.237..18.580 rows=7801 loops=1)\n -> HashAggregate (cost=2.22..2.41 rows=19 width=4) (actual time=0.043..0.045 rows=1 loops=1)\n -> Seq Scan on l_pvcp (cost=0.00..2.17 rows=19 width=4) (actual time=0.028..0.037 rows=1 loops=1)\n Filter: (value ~~* '%pi%'::text)\n -> Bitmap Heap Scan on t_oa_2_00_card (cost=72.49..8525.04 rows=4388 width=12) (actual time=2.188..9.204 rows=7801 loops=1)\n Recheck Cond: (t_oa_2_00_card.pvcp = l_pvcp.id)\n -> Bitmap Index Scan on i3_t_oa_2_00_card (cost=0.00..71.39 rows=4388 width=0) (actual time=1.768..1.768 rows=7801 loops=1)\n Index Cond: (t_oa_2_00_card.pvcp = l_pvcp.id)\n Total runtime: 23.503 ms\n(10 rows)\n\ntest=# explain analyze SELECT COUNT(id) FROM t_oa_2_00_card WHERE pvcp in (select id from l_pvcp where value ilike 'pi');\n QUERY PLAN\n- ---------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=38343.44..38343.45 rows=1 width=8) (actual time=23.386..23.387 rows=1 loops=1)\n -> Nested Loop (cost=76.52..38299.55 rows=17554 width=8) (actual time=2.246..18.576 rows=7801 loops=1)\n -> HashAggregate (cost=2.18..2.22 rows=4 width=4) (actual time=0.041..0.043 rows=1 loops=1)\n -> Seq Scan on l_pvcp (cost=0.00..2.17 rows=4 width=4) (actual time=0.026..0.035 rows=1 loops=1)\n Filter: (value ~~* 'pi'::text)\n -> Bitmap Heap Scan on t_oa_2_00_card (cost=74.33..9519.48 rows=4388 width=12) (actual time=2.198..9.161 rows=7801 loops=1)\n Recheck Cond: (t_oa_2_00_card.pvcp = l_pvcp.id)\n -> Bitmap Index Scan on i3_t_oa_2_00_card (cost=0.00..73.24 rows=4388 width=0) (actual time=1.779..1.779 rows=7801 loops=1)\n Index Cond: (t_oa_2_00_card.pvcp = l_pvcp.id)\n Total runtime: 23.491 ms\n(10 rows)\n\n\nI had to lower the random_page_cost = 2.5 in order to avoid the sequential scan on the big table t_oa_2_00_card.\n\nthis is a +1 to update our engines to a 8.2.\n\n\nRegards\nGaetano Mendola\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.5 (MingW32)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFGEN237UpzwH2SGd4RAo9yAJ9K7bTa5eEUjvPjk/OcAMgt+AncmQCfbkBH\nFlomqoY1ASv3TDkd9L5hgG4=\n=ZLS8\n-----END PGP SIGNATURE-----\n", "msg_date": "Mon, 02 Apr 2007 12:40:55 +0200", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wrong plan sequential scan instead of an index one [8.2 solved\n it]" } ]
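A minimal sketch of the remedy this thread converges on, using the thread's own table and column names: raise the per-column statistics target to at least 100 so the 8.2 planner can try the pattern against the most-common-values list, re-analyze, and compare the estimates. The random_page_cost value is the one the original poster reported settling on; treat both numbers as starting points rather than recommendations.

ALTER TABLE l_pvcp ALTER COLUMN value SET STATISTICS 100;
ANALYZE l_pvcp;

-- Check the row estimate for the pattern match on its own ...
EXPLAIN ANALYZE SELECT id FROM l_pvcp WHERE value ILIKE '%pi%';

-- ... and for the full query, lowering random_page_cost for this
-- session only instead of editing postgresql.conf:
SET random_page_cost = 2.5;
EXPLAIN ANALYZE
SELECT count(id)
  FROM t_oa_2_00_card
 WHERE pvcp IN (SELECT id FROM l_pvcp WHERE value ILIKE '%pi%');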
[ { "msg_contents": "Hi all,\n\nWhen I run multiple TPC-H queries (DBT3) on postgresql, I found the system\nis not scalable. My machine has 8GB memory, and 4 Xeon Dual Core processor\n( 8 cores in total). OS kernel is linux 2.6.9. Postgresql is 7.3.18. I \nrun multiple\nq2 queries simultaneously. The results are:\n\n1 process takes 0.65 second to finish.\n2 processes take 1.07 seconds.\n4 processes take 4.93 seconds.\n8 processes take 16.95 seconds.\n\nFor 4-process case and 8-process case, queries takes even more time than\nthey are executed serially one after another. Because the system has 8GB\nmemory, which is much bigger than the DB size(SF=1), and I warmed the cache\nbefore I run the test, I do not think the problem was caused by disk I/O.\n\nI think it might be caused by some contentions. But I do not know postgresql\nmuch. May anybody give me some clue to find the reasons?\n\nThanks!\n\nXiaoning\n", "msg_date": "Fri, 30 Mar 2007 16:25:04 -0400", "msg_from": "Xiaoning Ding <[email protected]>", "msg_from_op": true, "msg_subject": "scalablility problem" }, { "msg_contents": "> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Xiaoning Ding\n> \n> \n> Hi all,\n> \n> When I run multiple TPC-H queries (DBT3) on postgresql, I \n> found the system\n> is not scalable. My machine has 8GB memory, and 4 Xeon Dual \n> Core processor\n> ( 8 cores in total). OS kernel is linux 2.6.9. Postgresql is \n> 7.3.18.\n\nIs there anyway you can upgrade to 8.2? There have been a lot of\nperformance and scalability enhancements.\n\n", "msg_date": "Fri, 30 Mar 2007 15:56:17 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scalablility problem" }, { "msg_contents": "Xiaoning Ding <[email protected]> writes:\n> When I run multiple TPC-H queries (DBT3) on postgresql, I found the system\n> is not scalable. My machine has 8GB memory, and 4 Xeon Dual Core processor\n> ( 8 cores in total). OS kernel is linux 2.6.9. Postgresql is 7.3.18.\n\nIf you are not running PG 8.1 or later, it's really not worth your time\nto test this. Multiprocessor scalability was hardly even on the radar\nin 7.3 days.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Mar 2007 17:12:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scalablility problem " }, { "msg_contents": "Thanks guys,\n\nI update PG to 8.2.3. The results are much better now.\n1 process : 0.94 second\n2 processes: 1.32 seconds\n4 processes: 2.03 seconds\n8 processes: 2.54 seconds\n\nDo you think they are good enough?\nBTW where can I found some info on what 8.2.3 did to improve\nscalability compared with pre 8.1 versions?\n\n\nXiaoning\n\nTom Lane wrote:\n> Xiaoning Ding <[email protected]> writes:\n>> When I run multiple TPC-H queries (DBT3) on postgresql, I found the system\n>> is not scalable. My machine has 8GB memory, and 4 Xeon Dual Core processor\n>> ( 8 cores in total). OS kernel is linux 2.6.9. Postgresql is 7.3.18.\n> \n> If you are not running PG 8.1 or later, it's really not worth your time\n> to test this. 
Multiprocessor scalability was hardly even on the radar\n> in 7.3 days.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n> \n\n", "msg_date": "Fri, 30 Mar 2007 17:38:57 -0400", "msg_from": "Xiaoning Ding <[email protected]>", "msg_from_op": true, "msg_subject": "Re: scalablility problem" }, { "msg_contents": "On Fri, 2007-03-30 at 15:25, Xiaoning Ding wrote:\n> Hi all,\n> \n> When I run multiple TPC-H queries (DBT3) on postgresql, I found the system\n> is not scalable. My machine has 8GB memory, and 4 Xeon Dual Core processor\n> ( 8 cores in total). OS kernel is linux 2.6.9. Postgresql is 7.3.18. I \n> run multiple\n> q2 queries simultaneously. The results are:\n> \n> 1 process takes 0.65 second to finish.\n> 2 processes take 1.07 seconds.\n> 4 processes take 4.93 seconds.\n> 8 processes take 16.95 seconds.\n> \n> For 4-process case and 8-process case, queries takes even more time than\n> they are executed serially one after another. Because the system has 8GB\n> memory, which is much bigger than the DB size(SF=1), and I warmed the cache\n> before I run the test, I do not think the problem was caused by disk I/O.\n\nYou may be right, you may be wrong. What did top / vmstat have to say\nabout IO wait / disk idle time?\n\nPostgreSQL has to commit transactions to disk. TPC-H does both business\ndecision mostly read queries, as well as mixing in writes. If you have\none hard drive, it may well be that activity is stacking up waiting on\nthose writes.\n\n> I think it might be caused by some contentions. But I do not know postgresql\n> much. May anybody give me some clue to find the reasons?\n\nOthers have mentioned your version of postgresql. 7.3 is quite old, as\nit came out at the end of 2002. Seeing as 7.3 is the standard pgsql\nversion supported by RHEL3, and RHEL came with a 2.6.9 kernel, I'm gonna\nguess your OS is about that old too.\n\npgsql 7.3 cannot take advantage of lots of shared memory, and has some\nissues scaling to lots of CPUs / processes.\n\nWhile RHEL won't be EOLed for a few more years (redhat promises 7 years\nI think) it's really not a great choice for getting started today. \nRHEL5 just released and RHEL4 is very stable.\n\nThere are several things to look at to get better performance.\n\n1: Late model PostgreSQL. Go with 8.2.3 or as a minimum 8.1.8\n2: Late model Unix.\n3: RAID controller with battery backed cache\n4: Plenty of memory.\n5: Lots of hard drives\n6: 4 to 8 CPUs.\n\nThen, google postgresql performance tuning. There are three or four good\ntuning guides that pop up right at the top.\n", "msg_date": "Fri, 30 Mar 2007 16:41:43 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scalablility problem" }, { "msg_contents": "On Fri, 2007-03-30 at 16:38, Xiaoning Ding wrote:\n> Thanks guys,\n> \n> I update PG to 8.2.3. The results are much better now.\n> 1 process : 0.94 second\n> 2 processes: 1.32 seconds\n> 4 processes: 2.03 seconds\n> 8 processes: 2.54 seconds\n> \n> Do you think they are good enough?\n> BTW where can I found some info on what 8.2.3 did to improve\n> scalability compared with pre 8.1 versions?\n\nVery nice, eh? \n\nI'd say look through -hackers and -perform to see some of it, but as\nusual, the source code is the reference. 
You'd be surprised how well\ncommented it is.\n", "msg_date": "Fri, 30 Mar 2007 16:44:39 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scalablility problem" }, { "msg_contents": "Quoth [email protected] (Xiaoning Ding):\n> When I run multiple TPC-H queries (DBT3) on postgresql, I found the system\n> is not scalable. My machine has 8GB memory, and 4 Xeon Dual Core processor\n> ( 8 cores in total). OS kernel is linux 2.6.9. Postgresql is 7.3.18. I\n\n> I think it might be caused by some contentions. But I do not know postgresql\n> much. May anybody give me some clue to find the reasons?\n\nTwo primary issues:\n\n1. You're running a horrendously ancient version of PostgreSQL. The\n7.3 series is Really Old. Newer versions have *enormous*\nimprovements that are likely to be *enormously* relevant.\n\nUpgrade to 8.2.\n\n2. There are known issues with the combination of Xeon processors and\nPAE memory addressing; that sort of hardware tends to be *way* less\nspeedy than the specs would suggest.\n\nThere have been \"context switching\" issues on this sort of hardware\nthat are enormously worsened if you're running on a version of\nPostgreSQL that is 4 major releases out of date.\n-- \nwm(X,Y):-write(X),write('@'),write(Y). wm('cbbrowne','gmail.com').\nhttp://cbbrowne.com/info/x.html\nI am not a number!\nI am a free man!\n", "msg_date": "Fri, 30 Mar 2007 21:18:18 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scalablility problem" }, { "msg_contents": "On 30.03.2007, at 19:18, Christopher Browne wrote:\n\n> 2. There are known issues with the combination of Xeon processors and\n> PAE memory addressing; that sort of hardware tends to be *way* less\n> speedy than the specs would suggest.\n\nThat is not true as the current series of processors (Woodcrest and \nthe like) are also called Xeon. You probably mean the Pentium IV era \nXeons.\n\ncug\n", "msg_date": "Fri, 30 Mar 2007 22:00:30 -0600", "msg_from": "Guido Neitzer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scalablility problem" }, { "msg_contents": "On Fri, Mar 30, 2007 at 10:00:30PM -0600, Guido Neitzer wrote:\n>On 30.03.2007, at 19:18, Christopher Browne wrote:\n>>2. There are known issues with the combination of Xeon processors and\n>>PAE memory addressing; that sort of hardware tends to be *way* less\n>>speedy than the specs would suggest.\n>\n>That is not true as the current series of processors (Woodcrest and \n>the like) are also called Xeon. You probably mean the Pentium IV era \n>Xeons.\n\nWell, the newer ones can address large amounts of memory directly, \nwithout using PAE, but the original comment was correct--PAE is slow \nregardless of what processor implements it.\n\nMike Stone\n", "msg_date": "Sat, 31 Mar 2007 07:41:06 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scalablility problem" }, { "msg_contents": "Scott Marlowe wrote:\n> On Fri, 2007-03-30 at 15:25, Xiaoning Ding wrote:\n>> Hi all,\n>>\n>> When I run multiple TPC-H queries (DBT3) on postgresql, I found the system\n>> is not scalable. My machine has 8GB memory, and 4 Xeon Dual Core processor\n>> ( 8 cores in total). OS kernel is linux 2.6.9. Postgresql is 7.3.18. I \n>> run multiple\n>> q2 queries simultaneously. 
The results are:\n>>\n>> 1 process takes 0.65 second to finish.\n>> 2 processes take 1.07 seconds.\n>> 4 processes take 4.93 seconds.\n>> 8 processes take 16.95 seconds.\n>>\n>> For 4-process case and 8-process case, queries takes even more time than\n>> they are executed serially one after another. Because the system has 8GB\n>> memory, which is much bigger than the DB size(SF=1), and I warmed the cache\n>> before I run the test, I do not think the problem was caused by disk I/O.\n> \n> You may be right, you may be wrong. What did top / vmstat have to say\n> about IO wait / disk idle time?\n> \n> PostgreSQL has to commit transactions to disk. TPC-H does both business\n> decision mostly read queries, as well as mixing in writes. If you have\n> one hard drive, it may well be that activity is stacking up waiting on\n> those writes.\n\nShouldn't writes be asynchronous in linux ?\n\n>> I think it might be caused by some contentions. But I do not know postgresql\n>> much. May anybody give me some clue to find the reasons?\n> \n> Others have mentioned your version of postgresql. 7.3 is quite old, as\n> it came out at the end of 2002. Seeing as 7.3 is the standard pgsql\n> version supported by RHEL3, and RHEL came with a 2.6.9 kernel, I'm gonna\n> guess your OS is about that old too.\n>\n> pgsql 7.3 cannot take advantage of lots of shared memory, and has some\n> issues scaling to lots of CPUs / processes.\n\nI use RHEL 4. I can not understand how the scalability related with \nshared memory?\n\n> While RHEL won't be EOLed for a few more years (redhat promises 7 years\n> I think) it's really not a great choice for getting started today. \n> RHEL5 just released and RHEL4 is very stable.\n> \n> There are several things to look at to get better performance.\n> \n> 1: Late model PostgreSQL. Go with 8.2.3 or as a minimum 8.1.8\n> 2: Late model Unix.\n> 3: RAID controller with battery backed cache\n> 4: Plenty of memory.\n> 5: Lots of hard drives\n> 6: 4 to 8 CPUs.\n> \n> Then, google postgresql performance tuning. There are three or four good\n> tuning guides that pop up right at the top.\n> \n> \n\n", "msg_date": "Sat, 31 Mar 2007 11:43:20 -0400", "msg_from": "Xiaoning Ding <[email protected]>", "msg_from_op": true, "msg_subject": "Re: scalablility problem" }, { "msg_contents": "\n>> pgsql 7.3 cannot take advantage of lots of shared memory, and has some\n>> issues scaling to lots of CPUs / processes.\n> \n> I use RHEL 4. I can not understand how the scalability related with \n> shared memory?\n\nIt isn't RHEL4 and shared memory. It is PostgreSQL and shared memory. \nThings have changed with PostgreSQL since 7.3 (7.3 is really god awful \nold) that allow it to more effectively access shared memory and thus \nprovide better performance.\n\nJoshua D. Drake\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Sat, 31 Mar 2007 08:49:01 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scalablility problem" }, { "msg_contents": "Christopher Browne wrote:\n> Quoth [email protected] (Xiaoning Ding):\n>> When I run multiple TPC-H queries (DBT3) on postgresql, I found the system\n>> is not scalable. 
My machine has 8GB memory, and 4 Xeon Dual Core processor\n>> ( 8 cores in total). OS kernel is linux 2.6.9. Postgresql is 7.3.18. I\n> \n>> I think it might be caused by some contentions. But I do not know postgresql\n>> much. May anybody give me some clue to find the reasons?\n> \n> Two primary issues:\n> \n> 1. You're running a horrendously ancient version of PostgreSQL. The\n> 7.3 series is Really Old. Newer versions have *enormous*\n> improvements that are likely to be *enormously* relevant.\n>\n> Upgrade to 8.2.\n8.2 is really much better.\n> \n> 2. There are known issues with the combination of Xeon processors and\n> PAE memory addressing; that sort of hardware tends to be *way* less\n> speedy than the specs would suggest.\nI think PAE slows each query process. It would not affect scalability.\n\n> There have been \"context switching\" issues on this sort of hardware\n> that are enormously worsened if you're running on a version of\n> PostgreSQL that is 4 major releases out of date.\nHow does PG 8.2 address this issue? by setting processor affinity?\n\nThanks!\n\nXiaoning\n", "msg_date": "Sat, 31 Mar 2007 11:54:15 -0400", "msg_from": "Xiaoning Ding <[email protected]>", "msg_from_op": true, "msg_subject": "Re: scalablility problem" }, { "msg_contents": "Michael Stone wrote:\n> On Fri, Mar 30, 2007 at 10:00:30PM -0600, Guido Neitzer wrote:\n>> On 30.03.2007, at 19:18, Christopher Browne wrote:\n>>> 2. There are known issues with the combination of Xeon processors and\n>>> PAE memory addressing; that sort of hardware tends to be *way* less\n>>> speedy than the specs would suggest.\n>>\n>> That is not true as the current series of processors (Woodcrest and \n>> the like) are also called Xeon. You probably mean the Pentium IV era \n>> Xeons.\n> \n> Well, the newer ones can address large amounts of memory directly, \n> without using PAE, but the original comment was correct--PAE is slow \n> regardless of what processor implements it.\n\nHere is the information in /proc/cpuinfo\n\nprocessor : 6\nvendor_id : GenuineIntel\ncpu family : 15\nmodel : 4\nmodel name : Intel(R) Xeon(TM) CPU 2.80GHz\nstepping : 8\ncpu MHz : 2793.091\ncache size : 2048 KB\nphysical id : 0\nsiblings : 4\ncore id : 1\ncpu cores : 2\nfpu : yes\nfpu_exception : yes\ncpuid level : 5\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge \nmca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall \nnx lm pni monitor ds_cpl est cid cx16 xtpr\nbogomips : 5586.08\nclflush size : 64\ncache_alignment : 128\naddress sizes : 36 bits physical, 48 bits virtual\n\n> \n> Mike Stone\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n> \n> http://www.postgresql.org/about/donate\n> \n> \n\n", "msg_date": "Sat, 31 Mar 2007 11:54:29 -0400", "msg_from": "Xiaoning Ding <[email protected]>", "msg_from_op": true, "msg_subject": "Re: scalablility problem" }, { "msg_contents": "\"Joshua D. Drake\" <[email protected]> writes:\n>> I use RHEL 4. I can not understand how the scalability related with \n>> shared memory?\n\n> It isn't RHEL4 and shared memory. It is PostgreSQL and shared memory. 
\n> Things have changed with PostgreSQL since 7.3 (7.3 is really god awful \n> old) that allow it to more effectively access shared memory and thus \n> provide better performance.\n\nSome specifics:\n\n* bufmgr algorithms redesigned to allow larger number of shared buffers\nto be used effectively\n\n* bufmgr redesigned to not have a single lock for management of all\nshared buffers; likewise for lockmgr\n\n* lots of marginal tweaks such as paying attention to cache line\nalignment of \"hot\" shared data structures\n\nI'm probably forgetting some things but I think the bufmgr and lockmgr\nchanges were the biggest improvements in this area.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 31 Mar 2007 12:31:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scalablility problem " }, { "msg_contents": "Tom Lane wrote:\n> \"Joshua D. Drake\" <[email protected]> writes:\n>>> I use RHEL 4. I can not understand how the scalability related with \n>>> shared memory?\n> \n>> It isn't RHEL4 and shared memory. It is PostgreSQL and shared memory. \n>> Things have changed with PostgreSQL since 7.3 (7.3 is really god awful \n>> old) that allow it to more effectively access shared memory and thus \n>> provide better performance.\n> \n> Some specifics:\n> \n> * bufmgr algorithms redesigned to allow larger number of shared buffers\n> to be used effectively\n> \n> * bufmgr redesigned to not have a single lock for management of all\n> shared buffers; likewise for lockmgr\n> \n> * lots of marginal tweaks such as paying attention to cache line\n> alignment of \"hot\" shared data structures\n> \n> I'm probably forgetting some things but I think the bufmgr and lockmgr\n> changes were the biggest improvements in this area.\n> \n> \t\t\tregards, tom lane\nThat is very helpful. Thanks!\n\nXiaoning\n> \n> \n\n", "msg_date": "Sat, 31 Mar 2007 13:08:25 -0400", "msg_from": "Xiaoning Ding <[email protected]>", "msg_from_op": true, "msg_subject": "Re: scalablility problem" }, { "msg_contents": "Xiaoning Ding wrote:\n> Postgresql is 7.3.18. [...]\n> 1 process takes 0.65 second to finish.\n\n> I update PG to 8.2.3. The results are [...] now.\n> 1 process : 0.94 second\n\nYou sure about your test environment? Anything else\nrunning at the same time, perhaps?\n\nI'm a bit surprised that 8.2.3 would be 40% slower than 7.3.18\neven in the single process case.\n", "msg_date": "Sun, 01 Apr 2007 03:22:08 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scalablility problem" }, { "msg_contents": "I repeated the test again. It took 0.92 second under 8.2.3.\nI checked system load using top and ps. There is no other\nactive processes.\n\nXiaoning\n\nRon Mayer wrote:\n> Xiaoning Ding wrote:\n>> Postgresql is 7.3.18. [...]\n>> 1 process takes 0.65 second to finish.\n> \n>> I update PG to 8.2.3. The results are [...] now.\n>> 1 process : 0.94 second\n> \n> You sure about your test environment? 
Anything else\n> running at the same time, perhaps?\n> \n> I'm a bit surprised that 8.2.3 would be 40% slower than 7.3.18\n> even in the single process case.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n> \n\n", "msg_date": "Sun, 01 Apr 2007 13:51:36 -0400", "msg_from": "Xiaoning Ding <[email protected]>", "msg_from_op": true, "msg_subject": "Re: scalablility problem" }, { "msg_contents": "I may have missed this but have you tuned your postgresql \nconfiguration ?\n\n8.2 tuning guidelines are significantly different than 7.3\n\nDave\nOn 1-Apr-07, at 1:51 PM, Xiaoning Ding wrote:\n\n> I repeated the test again. It took 0.92 second under 8.2.3.\n> I checked system load using top and ps. There is no other\n> active processes.\n>\n> Xiaoning\n>\n> Ron Mayer wrote:\n>> Xiaoning Ding wrote:\n>>> Postgresql is 7.3.18. [...]\n>>> 1 process takes 0.65 second to finish.\n>>> I update PG to 8.2.3. The results are [...] now.\n>>> 1 process : 0.94 second\n>> You sure about your test environment? Anything else\n>> running at the same time, perhaps?\n>> I'm a bit surprised that 8.2.3 would be 40% slower than 7.3.18\n>> even in the single process case.\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 2: Don't 'kill -9' the postmaster\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n", "msg_date": "Sun, 1 Apr 2007 20:26:35 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: scalablility problem" } ]
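Since the thread closes on configuration tuning, a purely illustrative postgresql.conf starting point for a machine like the one described (8 GB RAM, 8 cores, PostgreSQL 8.2) is sketched below. The figures are assumptions to benchmark against the tuning guides mentioned above, not values taken from this thread.

# postgresql.conf (8.2) -- illustrative values only
shared_buffers = 1GB           # 8.2 accepts memory units; changing this needs a restart
work_mem = 32MB                # per sort/hash step, so allow for concurrency
effective_cache_size = 6GB     # rough size of the OS page cache
checkpoint_segments = 32       # spreads out checkpoint I/O under write load

After restarting, re-running the concurrent TPC-H mix would show whether the 8-process case keeps scaling with these settings.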
[ { "msg_contents": "* Denis Lishtovny <[email protected]> [070402 09:20]:\n> Hello All.\n> \n> I have a lot of tables and indexes in database. I must to determine which\n> indexes are not using or using seldon in databese . I enabled all posible\n> statistics in config but a can`t uderstand how to do this.\n> Thanks.\n> \n> p.s for example i need this to reduce database size for increase backup\n> and restore speed.\nIndexes are not backuped, and you can increase restore speed by\ntemporarily dropping them. Current pg_dumps should be fine from this\naspect.\n\nDiscovering which tables are unused via the database suggests more of\na software eng. problem IMHO. And it is bound to be unprecise and\ndangerous, tables might get read from:\n\n*) triggers. That means some tables might be only consulted if user X\nis doing something. Or we have full moon. Or the Chi of the DBA barked\n3 times this day.\n\n*) during application startup only (easy to solve by forcing all clients\nto restart)\n\n*) during a cron job (daily, weekly, monthly, bi-monthly)\n\n*) only during human orginated processes.\n\nNot a good thing to decide to drop tables just because nothing has\naccessed them for half an hour. Or even a week.\n\nWorse, some tables might have relationsships that are missing in the\ndatabase (foreign constraint forgotten, or some relationships that are\nhard to express with SQL constraints).\n\nOTOH, if you just try to get a feel what parts of the database is\nactive, you can start by enabling SQL statement logging, and analyze\nsome of that output.\n\nAndreas\n\n", "msg_date": "Mon, 2 Apr 2007 09:45:21 +0200", "msg_from": "Andreas Kostyrka <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to determine which indexes are not using or using seldom in\n\tdatabase" }, { "msg_contents": "Hello All.\n\nI have a lot of tables and indexes in database. I must to determine which\nindexes are not using or using seldon in databese . I enabled all posible\nstatistics in config but a can`t uderstand how to do this.\nThanks.\n\np.s for example i need this to reduce database size for increase backup and\nrestore speed.\n\n\n\n\n\n\n\n\n\n\nHello All.\n\nI have a lot of tables and indexes in database. I must to determine which\nindexes are not using or using seldon in databese . I enabled all posible\nstatistics in config but a can`t uderstand how to do this.\nThanks.\n\np.s for example i need this to reduce database size for increase backup and\nrestore speed.", "msg_date": "Mon, 2 Apr 2007 12:12:52 +0400", "msg_from": "\"Denis Lishtovny\" <[email protected]>", "msg_from_op": false, "msg_subject": "How to determine which indexes are not using or using seldom in\n\tdatabase" }, { "msg_contents": "Hi,\n\nOn Monday 02 April 2007 10:12, Denis Lishtovny wrote:\n| I have a lot of tables and indexes in database. I must to determine which\n| indexes are not using or using seldon in databese . 
I enabled all posible\n| statistics in config but a can`t uderstand how to do this.\n| Thanks.\n\nTry \"select * from pg_stat_user_indexes;\" - that should give you a good start\nto look at.\n\n| p.s for example i need this to reduce database size for increase backup and\n| restore speed.\n\nDeleting indexes won't reduce backup size noticeably (but has impact on \nrestore speed), if you use pg_dump for backup.\n\nCiao,\nThomas\n\n-- \nThomas Pundt <[email protected]> ---- http://rp-online.de/ ----\n", "msg_date": "Mon, 2 Apr 2007 10:45:13 +0200", "msg_from": "Thomas Pundt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to determine which indexes are not using or using seldom in\n\tdatabase" }, { "msg_contents": "Hi All,\n\n Currently in one of the projects we want to restrict the unauthorized users to the Postgres DB. Here we are using Postgres version 8.2.0\n\n Can anybody tell me how can I provide the user based previleges to the Postgres DB so that, we can restrict the unauthorized users as well as porivde the access control to the users based on the set previleges by the administrator. \n\n\nThanks and Regards, \nRamachandra B.S.\n\n\n\nThe information contained in this electronic message and any attachments to this message are intended for the exclusive use of the addressee(s) and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you should not disseminate, distribute or copy this e-mail. Please notify the sender immediately and destroy all copies of this message and any attachments. \n\nWARNING: Computer viruses can be transmitted via email. The recipient should check this email and any attachments for the presence of viruses. The company accepts no liability for any damage caused by any virus transmitted by this email.\n \nwww.wipro.com\n\n\n\n\n[PERFORM] Providing user based previleges to Postgres DB\n\n\n\nHi All,\n\n      Currently in one of the projects we want to restrict the unauthorized users to the Postgres DB.    Here we are using Postgres version 8.2.0\n\n      Can anybody tell me how can I provide the user based previleges to the Postgres DB so that, we can restrict the unauthorized users as well as porivde the access control to the users based on the set previleges by the administrator.    \n\n\nThanks and Regards,\nRamachandra B.S.", "msg_date": "Mon, 2 Apr 2007 16:05:28 +0530", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "Providing user based previleges to Postgres DB" }, { "msg_contents": "[email protected] wrote:\n>\n> Hi All,\n>\n> Currently in one of the projects we want to restrict the \n> unauthorized users to the Postgres DB. Here we are using Postgres \n> version 8.2.0\n>\n> Can anybody tell me how can I provide the user based previleges \n> to the Postgres DB so that, we can restrict the unauthorized users as \n> well as porivde the access control to the users based on the set \n> previleges by the administrator. \n>\n\nThe pgsql-general list might be more appropriate for this type of\nquestion... Still:\n\nAre you talking restrictions based on database-users ? 
If so, look\nup grant and revoke in the PG documentation (under SQL commands).\n\nIf you're talking about restricting system-users to even attempt to use\npsql (which really, would not be much of a restriction), then perhaps\nyou would have to assign a group-owner to the file psql and grant\nexecute permission to the group only (well, and owner).\n\nHTH,\n\nCarlos\n--\n\n\n", "msg_date": "Mon, 02 Apr 2007 10:42:13 -0400", "msg_from": "Carlos Moreno <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Providing user based previleges to Postgres DB" } ]
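Two short sketches to tie the answers in this thread together. The first is the pg_stat_user_indexes query suggested for spotting rarely-used indexes; the counters only cover the period since statistics collection was enabled or last reset, and the caveats above about triggers and periodic jobs still apply before dropping anything. The second is the GRANT/REVOKE style of access control pointed to for the privileges question; every role, database and table name in it is a placeholder, and host-based restrictions would normally also involve pg_hba.conf.

-- Indexes ordered by how seldom they have been scanned:
SELECT schemaname, relname, indexrelname, idx_scan,
       pg_relation_size(indexrelid) AS index_bytes
  FROM pg_stat_user_indexes
 ORDER BY idx_scan ASC, pg_relation_size(indexrelid) DESC;

-- Role-based access control (CONNECT privilege requires 8.2; placeholder names):
REVOKE CONNECT ON DATABASE mydb FROM PUBLIC;      -- block logins by default
CREATE ROLE app_reader LOGIN PASSWORD 'secret';
GRANT CONNECT ON DATABASE mydb TO app_reader;
GRANT USAGE ON SCHEMA public TO app_reader;
GRANT SELECT ON customer_table TO app_reader;     -- per-table privileges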
[ { "msg_contents": "Sorry if anyone receives this twice; it didn't seem to go through the\nfirst time. I'll attach the query plans to another email in case they\nwere causing a size limit problem. Also here's the here's the table\ndescription:\n\n Table \"public.t1\"\n Column | Type | Modifiers\n-----------+------------------------+-----------\n num | character varying(30) | not null\n c1 | character varying(500) |\n c12 | character varying(50) |\n c2 | date |\n c3 | date |\n c11 | character varying(20) |\n c4 | integer |\n c5 | integer |\n c6 | character varying(300) |\n c7 | character varying(300) |\n c8 | date |\n c9 | character varying(100) |\n c10 | character varying(50) |\n c13 | integer |\nIndexes:\n \"t1_pkey\" primary key, btree (num)\nCheck constraints:\n \"t1_c13\" CHECK (c13 > 0 AND c13 < 6)\n\n---------------------------------------\n\nI had some problems a few weeks back when I tried to rebuild my\ndatabase on a SAN volume using postgres 8.1. The back story is as\nfollows:\n\nI had a large postgres 7.4 database (about 16 GB) that was originally\non an old sun box on scsi disks. I rebuild the database from scratch\non a new sun box on a SAN volume. The database performed poorly, and\nat the time I assumed it was due to the SAN. Well, after building a\nnew server with a fast scsi RAID array and rebuilding the DB, I've\ncome to find that it's about as only marginally faster than the SAN\nbased DB. The old 7.4 databse is still significantly faster than both\nnew DBs and I'm not sure why. The databases were created from scratch\nusing the same table structure on each server.\n\nHardware:\n\nOld server:\nSun v880 (4x1.2 Ghz CPUs, 8GB RAM, non-RAID scsi JBOD volume, postgres\n7.4, SQL_ASCII DB)\nSolaris 8\n~45/50MBps W/R\n\nNew server (with SAN storage): sun x4100 (4x opteron cores, 8GB ram,\nSAN volume, postgres 8.1, UNICODE DB)\ndebian etch\n~65/150MBps W/R\n\nNew server (with local scsi RAID): sun x4100 (4x opteron cores, 8GB\nram, RAID scsi volume, postgres 8.2 tried both UNICODE and SQL_ASCII\nDBs)\ndebian etch\n~160/185 MBps W/R\n\nMost of the queries we do are significantly slower on the new servers.\n Thinking that the UTF8 format might be slowing things down, I also\ntried SQL_ASCII, but the change in performance was negligible. I've\ntweaked just about every option in the config file, but nothing seems\nto make a difference. The only thing I can see that would make a\ndifference is the query plans. The old 7.4 server seems to use index\nscans for just about every query we throw at it. The new servers seem\nto prefer bitmap heap scans and sequential scans. I tried adjusting\nthe planner options, but no matter what I did, it seems to like the\nsequential and bitmap scans. I've analyzed and vacuumed.\n\nDoes anyone have any ideas what might be going wrong? I suppose the\nnext thing to try is 7.4 on the new servers, but I'd really like to\nstick to the 8.x series if possible.\n\nI've included some sample query plans below.\n\nThanks,\n\nAlex\n", "msg_date": "Mon, 2 Apr 2007 10:50:29 -0400", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "postgres 7.4 vs. 8.x redux" } ]
[ { "msg_contents": "and here are the query plans referenced in my last email (apologies if\nyou get these twice, they didn't seem to go through the first time,\nperhaps due to size?). I cut out the longer ones.\n\nThanks,\n\nAlex\n\npostgres 7.4\n\nEXPLAIN ANALYZE select num, c1, c2, c3, c4, c5, c6, c7, c8, c9, c10,\nc11 from t1 where num like 'RT2350533%' or num like 'GH0405545%' or\nnum like 'KL8403192%';\n\n\n QUERY PLAN\n----------------------------------------------------------------------\n Index Scan using t1_pkey, t1_pkey, t1_pkey on t1 (cost=0.00..17.93\nrows=1 width=164) (actual time=0.103..0.238 rows=3 loops=1)\n Index Cond: ((((num)::text >= 'RT2350533'::character varying) AND\n((num)::text < 'RT2350534'::character varying)) OR (((num)::text >=\n'GH0405545'::character varying) AND ((num)::text <\n'GH0405546'::character varying)) OR (((num)::text >=\n'KL8403192'::character varying) AND ((num)::text <\n'KL8403193'::character varying)))\n Filter: (((num)::text ~~ 'RT2350533%'::text) OR ((num)::text ~~\n'GH0405545%'::text) OR ((num)::text ~~ 'KL8403192%'::text))\n Total runtime: 0.427 ms\n(4 rows)\n\n\npostgres 8.2\n\nEXPLAIN ANALYZE select num, c1, c2, c3, c4, c5, c6, c7, c8, c9, c10,\nc11 from t1 where num like 'RT2350533%' or num like 'GH0405545%' or\nnum like 'KL8403192%';\n QUERY PLAN\n---------------------------------------------------------------------\n Seq Scan on t1 (cost=0.00..918295.05 rows=1 width=156) (actual\ntime=15.674..26225.919 rows=3 loops=1)\n Filter: (((num)::text ~~ 'RT2350533%'::text) OR ((num)::text ~~\n'GH0405545%'::text) OR ((num)::text ~~ 'KL8403192%'::text))\n Total runtime: 26225.975 ms\n(3 rows)\n\n\nposgres 7.4\n\nEXPLAIN ANALYZE select num, c1, c2, c3, c4, c5, c6, c7, c8, c9, c10,\nc11 from t1 where num in\n('AB6253262','AB6145031','AB6091431','AB6286083','AB5857086','AB5649157','AB7089381','AB5557744','AB6314478','AB6505260','AB6249847','AB5832304');\n\n\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using t1_pkey, t1_pkey, t1_pkey, t1_pkey, t1_pkey,\nt1_pkey, t1_pkey, t1_pkey, t1_pkey, t1_pkey, t1_pkey, t1_pkey on t1\n(cost=0.00..71.97 rows=12 width=164) (actual time=0.132..0.729 rows=12\nloops=1)\n Index Cond: (((num)::text = 'AB6253262'::text) OR ((num)::text =\n'AB6145031'::text) OR ((num)::text = 'AB6091431'::text) OR\n((num)::text = 'AB6286083'::text) OR ((num)::text = 'AB5857086'::text)\nOR ((num)::text = 'AB5649157'::text) OR ((num)::text =\n'AB7089381'::text) OR ((num)::text = 'AB5557744'::text) OR\n((num)::text = 'AB6314478'::text) OR ((num)::text = 'AB6505260'::text)\nOR ((num)::text = 'AB6249847'::text) OR ((num)::text =\n'AB5832304'::text))\n Total runtime: 1.019 ms\n(3 rows)\n\npostgres 8.2\n\nEXPLAIN ANALYZE select num, c1, c2, c3, c4, c5, c6, c7, c8, c9, c10,\nc11 from t1 where num in\n('AB6253262','AB6145031','AB6091431','AB6286083','AB5857086','AB5649157','AB7089381','AB5557744','AB6314478','AB6505260','AB6249847','AB5832304');\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on t1 (cost=28.98..53.25 rows=12 width=156) (actual\ntime=61.442..61.486 rows=12 loops=1)\n Recheck Cond: ((num)::text = ANY\n(('{AB6253262,AB6145031,AB6091431,AB6286083,AB5857086,AB5649157,AB7089381,AB5557744,AB6314478,AB6505260,AB6249847,AB5832304}'::character\nvarying[])::text[]))\n -> Bitmap 
Index Scan on t1_pkey (cost=0.00..28.98 rows=12\nwidth=0) (actual time=61.429..61.429 rows=12 loops=1)\n Index Cond: ((num)::text = ANY\n(('{AB6253262,AB6145031,AB6091431,AB6286083,AB5857086,AB5649157,AB7089381,AB5557744,AB6314478,AB6505260,AB6249847,AB5832304}'::character\nvarying[])::text[]))\n Total runtime: 61.544 ms\n(5 rows)\n", "msg_date": "Tue, 3 Apr 2007 01:09:49 -0400", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "postgres 7.4 vs 8.x redux: query plans" }, { "msg_contents": "\"Alex Deucher\" <[email protected]> writes:\n> and here are the query plans referenced in my last email (apologies if\n> you get these twice, they didn't seem to go through the first time,\n> perhaps due to size?). I cut out the longer ones.\n\nThe first case looks a whole lot like 8.2 does not think it can use an\nindex for LIKE, which suggests strongly that you've used the wrong\nlocale in the 8.2 installation (ie, not C).\n\nThe second pair of plans may look a lot different but in principle they\nought to perform pretty similarly. I think the performance differential\nmay at root be that string comparison is way more expensive in the 8.2\ninstallation, which again is possible if you went from C locale to some\nother locale.\n\nIn short: check out \"show lc_collate\" in both installations.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Apr 2007 01:28:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres 7.4 vs 8.x redux: query plans " }, { "msg_contents": "On 4/3/07, Tom Lane <[email protected]> wrote:\n> \"Alex Deucher\" <[email protected]> writes:\n> > and here are the query plans referenced in my last email (apologies if\n> > you get these twice, they didn't seem to go through the first time,\n> > perhaps due to size?). I cut out the longer ones.\n>\n> The first case looks a whole lot like 8.2 does not think it can use an\n> index for LIKE, which suggests strongly that you've used the wrong\n> locale in the 8.2 installation (ie, not C).\n>\n> The second pair of plans may look a lot different but in principle they\n> ought to perform pretty similarly. I think the performance differential\n> may at root be that string comparison is way more expensive in the 8.2\n> installation, which again is possible if you went from C locale to some\n> other locale.\n>\n> In short: check out \"show lc_collate\" in both installations.\n\nOK, cool, the old one was C and the new one as not. So I dumped the\nDB and re-inited the DB with the locale set to C, then reloaded the\ndump, but I'm still getting the same behavior. Any ideas?\n\nThanks,\n\nAlex\n\n>\n> regards, tom lane\n>\n", "msg_date": "Tue, 3 Apr 2007 12:37:49 -0400", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres 7.4 vs 8.x redux: query plans" }, { "msg_contents": "On 4/3/07, Alex Deucher <[email protected]> wrote:\n> On 4/3/07, Tom Lane <[email protected]> wrote:\n> > \"Alex Deucher\" <[email protected]> writes:\n> > > and here are the query plans referenced in my last email (apologies if\n> > > you get these twice, they didn't seem to go through the first time,\n> > > perhaps due to size?). 
I cut out the longer ones.\n> >\n> > The first case looks a whole lot like 8.2 does not think it can use an\n> > index for LIKE, which suggests strongly that you've used the wrong\n> > locale in the 8.2 installation (ie, not C).\n> >\n> > The second pair of plans may look a lot different but in principle they\n> > ought to perform pretty similarly. I think the performance differential\n> > may at root be that string comparison is way more expensive in the 8.2\n> > installation, which again is possible if you went from C locale to some\n> > other locale.\n> >\n> > In short: check out \"show lc_collate\" in both installations.\n>\n> OK, cool, the old one was C and the new one as not. So I dumped the\n> DB and re-inited the DB with the locale set to C, then reloaded the\n> dump, but I'm still getting the same behavior. Any ideas?\n>\n\nshow lc_collate;\n lc_collate\n------------\n C\n(1 row)\n\n\nHere are some updated query plans. The index is now used, but the\nperformance is still much slower on 8.2.\n\nThanks,\n\nAlex\n\npostgres 7.4\n\n\nEXPLAIN ANALYZE select num, c1, c2, c3, c4, c5, c6, c7, c8, c9, c10,\nc11 from t1 where num like 'RT2350533%' or num like 'GH0405545%' or\nnum like 'KL8403192%';\n\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan ABing t1_pkey, t1_pkey, t1_pkey on t1 (cost=0.00..17.93\nrows=1 width=164) (actual time=11.652..12.132 rows=3 loops=1)\n Index Cond: ((((num)::text >= 'RT2350533'::character varying) AND\n((num)::text < 'RT2350534'::character varying)) OR (((num)::text >=\n'GH0405545'::character varying) AND ((num)::text <\n'GH0405546'::character varying)) OR (((num)::text >=\n'KL8403192'::character varying) AND ((num)::text <\n'KL8403193'::character varying)))\n Filter: (((num)::text ~~ 'RT2350533%'::text) OR ((num)::text ~~\n'GH0405545%'::text) OR ((num)::text ~~ 'KL8403192%'::text))\n Total runtime: 12.320 ms\n(4 rows)\n\n\nPostgres 8.2\n\nEXPLAIN ANALYZE select num, c1, c2, c3, c4, c5, c6, c7, c8, c9, c10,\nc11 from t1 where num like 'RT2350533%' or num like 'GH0405545%' or\nnum like 'KL8403192%';\n QUERY\nPLAN\n-------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on t1 (cost=24.03..28.04 rows=1 width=157) (actual\ntime=165.681..274.872 rows=3 loops=1)\n Recheck Cond: (((num)::text ~~ 'RT2350533%'::text) OR ((num)::text\n~~ 'GH0405545%'::text) OR ((num)::text ~~ 'KL8403192%'::text))\n Filter: (((num)::text ~~ 'RT2350533%'::text) OR ((num)::text ~~\n'GH0405545%'::text) OR ((num)::text ~~ 'KL8403192%'::text))\n -> BitmapOr (cost=24.03..24.03 rows=1 width=0) (actual\ntime=126.080..126.080 rows=0 loops=1)\n -> Bitmap Index Scan on t1_pkey (cost=0.00..8.01 rows=1\nwidth=0) (actual time=61.805..61.805 rows=1 loops=1)\n Index Cond: (((num)::text >= 'RT2350533'::character\nvarying) AND ((num)::text < 'RT2350534'::character varying))\n -> Bitmap Index Scan on t1_pkey (cost=0.00..8.01 rows=1\nwidth=0) (actual time=37.388..37.388 rows=1 loops=1)\n Index Cond: (((num)::text >= 'GH0405545'::character\nvarying) AND ((num)::text < 'GH0405546'::character varying))\n -> Bitmap Index Scan on t1_pkey (cost=0.00..8.01 rows=1\nwidth=0) (actual time=26.876..26.876 rows=1 loops=1)\n 
Index Cond: (((num)::text >= 'KL8403192'::character\nvarying) AND ((num)::text < 'KL8403193'::character varying))\n Total runtime: 274.938 ms\n(11 rows)\n\n\nPostgres 7.4\n\nEXPLAIN ANALYZE select num, c1, c2, c3, c4, c5, c6, c7, c8, c9, c10,\nc11 from t1 where num in\n('AB6698130','AB7076908','AB6499382','AB6438888','AB6385893','AB6378237','AB7146973','AB7127138','AB7124531','AB7124513','AB7123427','AB7121183','AB7121036','AB7110101','AB7100321','AB7089845','AB7088750','AB7031384','AB7021188','AB7006144','AB6988331','AB6973865','AB6966775','AB6935066','AB6931779','AB6923412','AB6902405','AB6892488','AB6886288','AB6880467','AB6874269','AB6871439','AB6868615','AB6819495','AB6807740','AB6799138','AB6796038','AB6769347','AB6732987','AB6722076','AB6718130','AB6717543','AB6714564','AB6701821','AB6667761','AB6666630','AB6655069','AB6648287','AB6643969','AB6636412');\n\n\n\n\n\n\n\n\n\n\n\n\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan ABing t1_pkey, t1_pkey, t1_pkey, t1_pkey, t1_pkey,\nt1_pkey, t1_pkey, t1_pkey, t1_pkey, t1_pkey, t1_pkey, t1_pkey,\nt1_pkey, t1_pkey, t1_pkey, t1_pkey, t1_pkey, t1_pkey, t1_pkey,\nt1_pkey, t1_pkey, t1_pkey, t1_pkey, t1_pkey, t1_pkey, t1_pkey,\nt1_pkey, t1_pkey, t1_pkey, t1_pkey, t1_pkey, t1_pkey, t1_pkey,\nt1_pkey, t1_pkey, t1_pkey, t1_pkey, t1_pkey, t1_pkey, t1_pkey,\nt1_pkey, t1_pkey, t1_pkey, t1_pkey, t1_pkey, t1_pkey, t1_pkey,\nt1_pkey, t1_pkey, t1_pkey on t1 (cost=0.00..304.64 rows=50 width=164)\n(actual time=0.118..2.267 rows=50 loops=1)\n Index Cond: (((num)::text = 'AB6698130'::text) OR ((num)::text =\n'AB7076908'::text) OR ((num)::text = 'AB6499382'::text) OR\n((num)::text = 'AB6438888'::text) OR ((num)::text = 'AB6385893'::text)\nOR ((num)::text = 'AB6378237'::text) OR ((num)::text =\n'AB7146973'::text) OR ((num)::text = 'AB7127138'::text) OR\n((num)::text = 'AB7124531'::text) OR ((num)::text = 'AB7124513'::text)\nOR ((num)::text = 'AB7123427'::text) OR ((num)::text =\n'AB7121183'::text) OR ((num)::text = 'AB7121036'::text) OR\n((num)::text = 'AB7110101'::text) OR ((num)::text = 'AB7100321'::text)\nOR ((num)::text = 'AB7089845'::text) OR ((num)::text =\n'AB7088750'::text) OR ((num)::text = 'AB7031384'::text) OR\n((num)::text = 'AB7021188'::text) OR ((num)::text = 'AB7006144'::text)\nOR ((num)::text = 'AB6988331'::text) OR ((num)::text =\n'AB6973865'::text) OR ((num)::text = 'AB6966775'::text) OR\n((num)::text = 'AB6935066'::text) OR ((num)::text = 'AB6931779'::text)\nOR ((num)::text = 'AB6923412'::text) OR ((num)::text =\n'AB6902405'::text) OR ((num)::text = 'AB6892488'::text) OR\n((num)::text = 'AB6886288'::text) OR ((num)::text = 'AB6880467'::text)\nOR ((num)::text = 'AB6874269'::text) OR ((num)::text =\n'AB6871439'::text) OR ((num)::text = 'AB6868615'::text) OR\n((num)::text = 'AB6819495'::text) OR ((num)::text = 'AB6807740'::text)\nOR ((num)::text = 'AB6799138'::text) OR ((num)::text 
=\n'AB6796038'::text) OR ((num)::text = 'AB6769347'::text) OR\n((num)::text = 'AB6732987'::text) OR ((num)::text = 'AB6722076'::text)\nOR ((num)::text = 'AB6718130'::text) OR ((num)::text =\n'AB6717543'::text) OR ((num)::text = 'AB6714564'::text) OR\n((num)::text = 'AB6701821'::text) OR ((num)::text = 'AB6667761'::text)\nOR ((num)::text = 'AB6666630'::text) OR ((num)::text =\n'AB6655069'::text) OR ((num)::text = 'AB6648287'::text) OR\n((num)::text = 'AB6643969'::text) OR ((num)::text =\n'AB6636412'::text))\n Total runtime: 3.073 ms\n(3 rows)\n\n\n\nPostgres 8.2\n\n\nEXPLAIN ANALYZE select num, c1, c2, c3, c4, c5, c6, c7, c8, c9, c10,\nc11 from t1 where num in\n('AB6698130','AB7076908','AB6499382','AB6438888','AB6385893','AB6378237','AB7146973','AB7127138','AB7124531','AB7124513','AB7123427','AB7121183','AB7121036','AB7110101','AB7100321','AB7089845','AB7088750','AB7031384','AB7021188','AB7006144','AB6988331','AB6973865','AB6966775','AB6935066','AB6931779','AB6923412','AB6902405','AB6892488','AB6886288','AB6880467','AB6874269','AB6871439','AB6868615','AB6819495','AB6807740','AB6799138','AB6796038','AB6769347','AB6732987','AB6722076','AB6718130','AB6717543','AB6714564','AB6701821','AB6667761','AB6666630','AB6655069','AB6648287','AB6643969','AB6636412');\n\n\n\n\nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on t1 (cost=216.70..418.95 rows=50 width=157)\n(actual time=203.880..417.165 rows=50 loops=1)\n Recheck Cond: ((num)::text = ANY\n(('{AB6698130,AB7076908,AB6499382,AB6438888,AB6385893,AB6378237,AB7146973,AB7127138,AB7124531,AB7124513,AB7123427,AB7121183,AB7121036,AB7110101,AB7100321,AB7089845,AB7088750,AB7031384,AB7021188,AB7006144,AB6988331,AB6973865,AB6966775,AB6935066,AB6931779,AB6923412,AB6902405,AB6892488,AB6886288,AB6880467,AB6874269,AB6871439,AB6868615,AB6819495,AB6807740,AB6799138,AB6796038,AB6769347,AB6732987,AB6722076,AB6718130,AB6717543,AB6714564,AB6701821,AB6667761,AB6666630,AB6655069,AB6648287,AB6643969,AB6636412}'::character\nvarying[])::text[]))\n -> Bitmap Index Scan on t1_pkey (cost=0.00..216.69 rows=50\nwidth=0) (actual time=198.188..198.188 rows=50 loops=1)\n Index Cond: ((num)::text = ANY\n(('{AB6698130,AB7076908,AB6499382,AB6438888,AB6385893,AB6378237,AB7146973,AB7127138,AB7124531,AB7124513,AB7123427,AB7121183,AB7121036,AB7110101,AB7100321,AB7089845,AB7088750,AB7031384,AB7021188,AB7006144,AB6988331,AB6973865,AB6966775,AB6935066,AB6931779,AB6923412,AB6902405,AB6892488,AB6886288,AB6880467,AB6874269,AB6871439,AB6868615,AB6819495,AB6807740,AB6799138,AB6796038,AB6769347,AB6732987,AB6722076,AB6718130,AB6717543,AB6714564,AB6701821,AB6667761,AB6666630,AB6655069,AB6648287,AB6643969,AB6636412}'::character\nvarying[])::text[]))\n Total runtime: 417.288 ms\n(5 rows)\n", "msg_date": "Tue, 3 Apr 2007 15:30:50 -0400", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres 7.4 vs 8.x redux: query plans" }, { "msg_contents": "On 4/3/07, Alex Deucher <[email protected]> 
wrote:\n(('{AB6698130,AB7076908,AB6499382,AB6438888,AB6385893,AB6378237,AB7146973,AB7127138,AB7124531,AB7124513,AB7123427,AB7121183,AB7121036,AB7110101,AB7100321,AB7089845,AB7088750,AB7031384,AB7021188,AB7006144,AB6988331,AB6973865,AB6966775,AB6935066,AB6931779,AB6923412,AB6902405,AB6892488,AB6886288,AB6880467,AB6874269,AB6871439,AB6868615,AB6819495,AB6807740,AB6799138,AB6796038,AB6769347,AB6732987,AB6722076,AB6718130,AB6717543,AB6714564,AB6701821,AB6667761,AB6666630,AB6655069,AB6648287,AB6643969,AB6636412}'::character\n> varying[])::text[]))\n> -> Bitmap Index Scan on t1_pkey (cost=0.00..216.69 rows=50\n> width=0) (actual time=198.188..198.188 rows=50 loops=1)\n> Index Cond: ((num)::text = ANY\n\nbitmap scan:\n* did you run analyze?\n* is effective_cache_size set properly?\n* if nothing else works, try disable bitmap scan and running query.\n\nmerlin\n", "msg_date": "Tue, 3 Apr 2007 16:13:37 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres 7.4 vs 8.x redux: query plans" }, { "msg_contents": "On 4/3/07, Merlin Moncure <[email protected]> wrote:\n> On 4/3/07, Alex Deucher <[email protected]> wrote:\n> (('{AB6698130,AB7076908,AB6499382,AB6438888,AB6385893,AB6378237,AB7146973,AB7127138,AB7124531,AB7124513,AB7123427,AB7121183,AB7121036,AB7110101,AB7100321,AB7089845,AB7088750,AB7031384,AB7021188,AB7006144,AB6988331,AB6973865,AB6966775,AB6935066,AB6931779,AB6923412,AB6902405,AB6892488,AB6886288,AB6880467,AB6874269,AB6871439,AB6868615,AB6819495,AB6807740,AB6799138,AB6796038,AB6769347,AB6732987,AB6722076,AB6718130,AB6717543,AB6714564,AB6701821,AB6667761,AB6666630,AB6655069,AB6648287,AB6643969,AB6636412}'::character\n> > varying[])::text[]))\n> > -> Bitmap Index Scan on t1_pkey (cost=0.00..216.69 rows=50\n> > width=0) (actual time=198.188..198.188 rows=50 loops=1)\n> > Index Cond: ((num)::text = ANY\n>\n> bitmap scan:\n> * did you run analyze?\n\nyes.\n\n> * is effective_cache_size set properly?\n\nIt should be. I based it on the output of `free`. It's set to 988232.\n The system has 8 GB of ram.\n\n> * if nothing else works, try disable bitmap scan and running query.\n\nI'll give that a try and post the results.\n\nAlex\n\n>\n> merlin\n>\n", "msg_date": "Tue, 3 Apr 2007 16:18:34 -0400", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres 7.4 vs 8.x redux: query plans" }, { "msg_contents": "On 4/3/07, Alex Deucher <[email protected]> wrote:\n> On 4/3/07, Merlin Moncure <[email protected]> wrote:\n> > On 4/3/07, Alex Deucher <[email protected]> wrote:\n> > (('{AB6698130,AB7076908,AB6499382,AB6438888,AB6385893,AB6378237,AB7146973,AB7127138,AB7124531,AB7124513,AB7123427,AB7121183,AB7121036,AB7110101,AB7100321,AB7089845,AB7088750,AB7031384,AB7021188,AB7006144,AB6988331,AB6973865,AB6966775,AB6935066,AB6931779,AB6923412,AB6902405,AB6892488,AB6886288,AB6880467,AB6874269,AB6871439,AB6868615,AB6819495,AB6807740,AB6799138,AB6796038,AB6769347,AB6732987,AB6722076,AB6718130,AB6717543,AB6714564,AB6701821,AB6667761,AB6666630,AB6655069,AB6648287,AB6643969,AB6636412}'::character\n> > > varying[])::text[]))\n> > > -> Bitmap Index Scan on t1_pkey (cost=0.00..216.69 rows=50\n> > > width=0) (actual time=198.188..198.188 rows=50 loops=1)\n> > > Index Cond: ((num)::text = ANY\n> >\n> > bitmap scan:\n> > * did you run analyze?\n>\n> yes.\n>\n> > * is effective_cache_size set properly?\n>\n> It should be. I based it on the output of `free`. 
It's set to 988232.\n> The system has 8 GB of ram.\n>\n> > * if nothing else works, try disable bitmap scan and running query.\n>\n> I'll give that a try and post the results.\n>\n\nTurning off bitmapscan ends up doing a sequential scan. Turning off\nboth bitmapscan and seqscan results in a bitmap heap scan. It doesn't\nseem to want to use the index at all. Any ideas?\n\nAlex\n", "msg_date": "Tue, 3 Apr 2007 16:34:19 -0400", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres 7.4 vs 8.x redux: query plans" }, { "msg_contents": "On 4/3/07, Alex Deucher <[email protected]> wrote:\n> On 4/3/07, Alex Deucher <[email protected]> wrote:\n> > On 4/3/07, Merlin Moncure <[email protected]> wrote:\n> > > On 4/3/07, Alex Deucher <[email protected]> wrote:\n> > > (('{AB6698130,AB7076908,AB6499382,AB6438888,AB6385893,AB6378237,AB7146973,AB7127138,AB7124531,AB7124513,AB7123427,AB7121183,AB7121036,AB7110101,AB7100321,AB7089845,AB7088750,AB7031384,AB7021188,AB7006144,AB6988331,AB6973865,AB6966775,AB6935066,AB6931779,AB6923412,AB6902405,AB6892488,AB6886288,AB6880467,AB6874269,AB6871439,AB6868615,AB6819495,AB6807740,AB6799138,AB6796038,AB6769347,AB6732987,AB6722076,AB6718130,AB6717543,AB6714564,AB6701821,AB6667761,AB6666630,AB6655069,AB6648287,AB6643969,AB6636412}'::character\n> > > > varying[])::text[]))\n> > > > -> Bitmap Index Scan on t1_pkey (cost=0.00..216.69 rows=50\n> > > > width=0) (actual time=198.188..198.188 rows=50 loops=1)\n> > > > Index Cond: ((num)::text = ANY\n> > >\n> > > bitmap scan:\n> > > * did you run analyze?\n> >\n> > yes.\n> >\n> > > * is effective_cache_size set properly?\n> >\n> > It should be. I based it on the output of `free`. It's set to 988232.\n> > The system has 8 GB of ram.\n> >\n> > > * if nothing else works, try disable bitmap scan and running query.\n> >\n> > I'll give that a try and post the results.\n> >\n>\n> Turning off bitmapscan ends up doing a sequential scan. Turning off\n> both bitmapscan and seqscan results in a bitmap heap scan. It doesn't\n> seem to want to use the index at all. 
Any ideas?\n>\n\nHere are some new query plans:\n\ndb=# set enable_bitmapscan = 0;\nSET\ndb=# EXPLAIN ANALYZE select num, c1, c2, c3, c4, c5, c6, c7, c8, c9,\nc10, c11 from t1 where num in\n('AB7089845','AB7044044','AB6873406','AB6862832','AB6819495','AB6708597','AB6671991','AB6549872','AB6421947','AB6295753','AB6289624','AB6151788','AB5837918','AB5822713','AB5795628','AB5784823','AB5784821','AB5686690','AB5661775','AB5448834','AB5388364','AB5364097','AB5323555','AB5282594','AB5237773','AB5204489','AB5187317','AB5171933','AB4876942','AB4825258','AB4823674','AB4787291','AB4760770','AB4665795','AB4404890','AB4213700','AB4202246','AB4164081','AB4048489','AB4040744','AB4015258','AB4011789','AB3997762');\n\n\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on t1 (cost=0.00..2058004.69 rows=43 width=157) (actual\ntime=31227.514..39911.029 rows=43 loops=1)\n Filter: ((num)::text = ANY\n(('{AB7089845,AB7044044,AB6873406,AB6862832,AB6819495,AB6708597,AB6671991,AB6549872,AB6421947,AB6295753,AB6289624,AB6151788,AB5837918,AB5822713,AB5795628,AB5784823,AB5784821,AB5686690,AB5661775,AB5448834,AB5388364,AB5364097,AB5323555,AB5282594,AB5237773,AB5204489,AB5187317,AB5171933,AB4876942,AB4825258,AB4823674,AB4787291,AB4760770,AB4665795,AB4404890,AB4213700,AB4202246,AB4164081,AB4048489,AB4040744,AB4015258,AB4011789,AB3997762}'::character\nvarying[])::text[]))\n Total runtime: 39911.192 ms\n(3 rows)\n\ndb=# set enable_seqscan = 0;\nSET\ndb=# show enable_bitmapscan;\n enable_bitmapscan\n-------------------\n off\n(1 row)\n\ndb=# show enable_seqscan ;\n enable_seqscan\n----------------\n off\n(1 row)\n\ndb=# EXPLAIN ANALYZE select num, c1, c2, c3, c4, c5, c6, c7, c8, c9,\nc10, c11 from t1 where num in\n('AB6698130','AB7076908','AB6499382','AB6438888','AB6385893','AB6378237','AB7146973','AB7127138','AB7124531','AB7124513','AB7123427','AB7121183','AB7121036','AB7110101','AB7100321','AB7089845','AB7088750','AB7031384','AB7021188','AB7006144','AB6988331','AB6973865','AB6966775','AB6935066','AB6931779','AB6923412','AB6902405','AB6892488','AB6886288','AB6880467','AB6874269','AB6871439','AB6868615','AB6819495','AB6807740','AB6799138','AB6796038','AB6769347','AB6732987','AB6722076','AB6718130','AB6717543','AB6714564','AB6701821','AB6667761','AB6666630','AB6655069','AB6648287','AB6643969','AB6636412');\n\n\n\n\nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on t1 (cost=100000216.70..100000418.95 rows=50\nwidth=157) (actual time=236.200..236.999 rows=50 loops=1)\n Recheck Cond: ((num)::text = 
ANY\n(('{AB6698130,AB7076908,AB6499382,AB6438888,AB6385893,AB6378237,AB7146973,AB7127138,AB7124531,AB7124513,AB7123427,AB7121183,AB7121036,AB7110101,AB7100321,AB7089845,AB7088750,AB7031384,AB7021188,AB7006144,AB6988331,AB6973865,AB6966775,AB6935066,AB6931779,AB6923412,AB6902405,AB6892488,AB6886288,AB6880467,AB6874269,AB6871439,AB6868615,AB6819495,AB6807740,AB6799138,AB6796038,AB6769347,AB6732987,AB6722076,AB6718130,AB6717543,AB6714564,AB6701821,AB6667761,AB6666630,AB6655069,AB6648287,AB6643969,AB6636412}'::character\nvarying[])::text[]))\n -> Bitmap Index Scan on t1_pkey (cost=0.00..216.69 rows=50\nwidth=0) (actual time=236.163..236.163 rows=50 loops=1)\n Index Cond: ((num)::text = ANY\n(('{AB6698130,AB7076908,AB6499382,AB6438888,AB6385893,AB6378237,AB7146973,AB7127138,AB7124531,AB7124513,AB7123427,AB7121183,AB7121036,AB7110101,AB7100321,AB7089845,AB7088750,AB7031384,AB7021188,AB7006144,AB6988331,AB6973865,AB6966775,AB6935066,AB6931779,AB6923412,AB6902405,AB6892488,AB6886288,AB6880467,AB6874269,AB6871439,AB6868615,AB6819495,AB6807740,AB6799138,AB6796038,AB6769347,AB6732987,AB6722076,AB6718130,AB6717543,AB6714564,AB6701821,AB6667761,AB6666630,AB6655069,AB6648287,AB6643969,AB6636412}'::character\nvarying[])::text[]))\n Total runtime: 237.121 ms\n(5 rows)\n", "msg_date": "Tue, 3 Apr 2007 16:45:23 -0400", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres 7.4 vs 8.x redux: query plans" }, { "msg_contents": "\"Alex Deucher\" <[email protected]> writes:\n> Turning off bitmapscan ends up doing a sequential scan. Turning off\n> both bitmapscan and seqscan results in a bitmap heap scan. It doesn't\n> seem to want to use the index at all. Any ideas?\n\nThe \"ORed indexscans\" plan style that was in 7.4 isn't there anymore;\nwe use bitmap OR'ing instead. There actually are repeated indexscans\nhidden under the \"= ANY\" indexscan condition in 8.2, it's just that\nthe mechanism for detecting duplicate matches is different. AFAIK the\nindex access costs ought to be about the same either way, and the other\ncosts the same or better as what we did in 7.4. It's clear though that\n8.2 is taking some kind of big hit in the index access in your case.\nThere's something very strange going on here.\n\nYou do have both lc_collate and lc_ctype set to C, right? What about\ndatabase encoding?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Apr 2007 17:21:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres 7.4 vs 8.x redux: query plans " }, { "msg_contents": "On 4/3/07, Tom Lane <[email protected]> wrote:\n> \"Alex Deucher\" <[email protected]> writes:\n> > Turning off bitmapscan ends up doing a sequential scan. Turning off\n> > both bitmapscan and seqscan results in a bitmap heap scan. It doesn't\n> > seem to want to use the index at all. Any ideas?\n>\n> The \"ORed indexscans\" plan style that was in 7.4 isn't there anymore;\n> we use bitmap OR'ing instead. There actually are repeated indexscans\n> hidden under the \"= ANY\" indexscan condition in 8.2, it's just that\n> the mechanism for detecting duplicate matches is different. AFAIK the\n> index access costs ought to be about the same either way, and the other\n> costs the same or better as what we did in 7.4. It's clear though that\n> 8.2 is taking some kind of big hit in the index access in your case.\n> There's something very strange going on here.\n>\n> You do have both lc_collate and lc_ctype set to C, right? 
What about\n> database encoding?\n\nSHOW lc_collate ;\n lc_collate\n------------\n C\n(1 row)\n\n\nSHOW lc_ctype ;\n lc_ctype\n----------\n C\n(1 row)\n\nThe encoding is UTF8, however I also built a SQL_ASCII version of the\nDB to compare performance, but they both seem to perform about the\nsame.\n\nAlex\n\nSQL_ASCII:\n\nEXPLAIN ANALYZE select num, c1, c2, c3, c4, c5, c6, c7, c8, c9, c10,\nc11 from t1 where num in\n('AB5679927','AB4974075','AB5066236','AB4598969','AB5009616','AB6409547','AB5593311','AB4975084','AB6604964','AB5637015','AB5135405','AB4501459','AB5605469','AB5603634','AB6000955','AB5718599','AB5328380','AB4846727');\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on t1 (cost=80.41..152.72 rows=18 width=157)\n(actual time=157.210..283.140 rows=18 loops=1)\n Recheck Cond: ((num)::text = ANY\n(('{AB5679927,AB4974075,AB5066236,AB4598969,AB5009616,AB6409547,AB5593311,AB4975084,AB6604964,AB5637015,AB5135405,AB4501459,AB5605469,AB5603634,AB6000955,AB5718599,AB5328380,AB4846727}'::character\nvarying[])::text[]))\n -> Bitmap Index Scan on t1_pkey (cost=0.00..80.41 rows=18\nwidth=0) (actual time=140.419..140.419 rows=18 loops=1)\n Index Cond: ((num)::text = ANY\n(('{AB5679927,AB4974075,AB5066236,AB4598969,AB5009616,AB6409547,AB5593311,AB4975084,AB6604964,AB5637015,AB5135405,AB4501459,AB5605469,AB5603634,AB6000955,AB5718599,AB5328380,AB4846727}'::character\nvarying[])::text[]))\n Total runtime: 283.214 ms\n(5 rows)\n\n\nUTF8:\n\nEXPLAIN ANALYZE select num, c1, c2, c3, c4, c5, c6, c7, c8, c9, c10,\nc11 from t1 where num in\n('AB5679927','AB4974075','AB5066236','AB4598969','AB5009616','AB6409547','AB5593311','AB4975084','AB6604964','AB5637015','AB5135405','AB4501459','AB5605469','AB5603634','AB6000955','AB5718599','AB5328380','AB4846727');\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on t1 (cost=80.41..152.72 rows=18 width=159)\n(actual time=126.194..126.559 rows=18 loops=1)\n Recheck Cond: ((num)::text = ANY\n(('{AB5679927,AB4974075,AB5066236,AB4598969,AB5009616,AB6409547,AB5593311,AB4975084,AB6604964,AB5637015,AB5135405,AB4501459,AB5605469,AB5603634,AB6000955,AB5718599,AB5328380,AB4846727}'::character\nvarying[])::text[]))\n -> Bitmap Index Scan on t1_pkey (cost=0.00..80.41 rows=18\nwidth=0) (actual time=126.155..126.155 rows=18 loops=1)\n Index Cond: ((num)::text = ANY\n(('{AB5679927,AB4974075,AB5066236,AB4598969,AB5009616,AB6409547,AB5593311,AB4975084,AB6604964,AB5637015,AB5135405,AB4501459,AB5605469,AB5603634,AB6000955,AB5718599,AB5328380,AB4846727}'::character\nvarying[])::text[]))\n Total runtime: 126.661 ms\n(5 rows)\n", "msg_date": "Tue, 3 Apr 2007 17:38:34 -0400", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres 7.4 vs 8.x redux: query plans" }, { "msg_contents": "On 4/3/07, Tom Lane <[email protected]> wrote:\n> \"Alex Deucher\" <[email protected]> writes:\n> > Turning off bitmapscan ends up doing a sequential scan. Turning off\n> > both bitmapscan and seqscan results in a bitmap heap scan. 
It doesn't\n> > seem to want to use the index at all. Any ideas?\n>\n> The \"ORed indexscans\" plan style that was in 7.4 isn't there anymore;\n> we use bitmap OR'ing instead. There actually are repeated indexscans\n> hidden under the \"= ANY\" indexscan condition in 8.2, it's just that\n> the mechanism for detecting duplicate matches is different. AFAIK the\n> index access costs ought to be about the same either way, and the other\n> costs the same or better as what we did in 7.4. It's clear though that\n> 8.2 is taking some kind of big hit in the index access in your case.\n> There's something very strange going on here.\n>\n> You do have both lc_collate and lc_ctype set to C, right? What about\n> database encoding?\n>\n\nAlso for reference, the old 7.4 DB is C for lc_collate and lc_ctype\nand SQL_ASCII for encoding.\n\nAlex\n", "msg_date": "Tue, 3 Apr 2007 17:43:47 -0400", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres 7.4 vs 8.x redux: query plans" }, { "msg_contents": "On 4/3/07, Tom Lane <[email protected]> wrote:\n> \"Alex Deucher\" <[email protected]> writes:\n> > Turning off bitmapscan ends up doing a sequential scan. Turning off\n> > both bitmapscan and seqscan results in a bitmap heap scan. It doesn't\n> > seem to want to use the index at all. Any ideas?\n>\n> The \"ORed indexscans\" plan style that was in 7.4 isn't there anymore;\n> we use bitmap OR'ing instead. There actually are repeated indexscans\n> hidden under the \"= ANY\" indexscan condition in 8.2, it's just that\n> the mechanism for detecting duplicate matches is different. AFAIK the\n> index access costs ought to be about the same either way, and the other\n> costs the same or better as what we did in 7.4. It's clear though that\n> 8.2 is taking some kind of big hit in the index access in your case.\n> There's something very strange going on here.\n>\n> You do have both lc_collate and lc_ctype set to C, right? What about\n> database encoding?\n>\n\nhmmm... ok, this is weird. performance seems to have improved\nsignificantly after I reloaded postgres after adding some hew hosts to\nthe pg_hba.conf. I'll run some more tests and let you know what\nhappens.\n\nAlex\n", "msg_date": "Tue, 3 Apr 2007 18:17:20 -0400", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres 7.4 vs 8.x redux: query plans" }, { "msg_contents": "Ok, well, I dropped the DB and reloaded it and now all seems to be\nfine and performing well. I'm not sure what was going on before.\nThanks for everyone's help!\n\nAlex\n\nOn 4/3/07, Alex Deucher <[email protected]> wrote:\n> On 4/3/07, Tom Lane <[email protected]> wrote:\n> > \"Alex Deucher\" <[email protected]> writes:\n> > > Turning off bitmapscan ends up doing a sequential scan. Turning off\n> > > both bitmapscan and seqscan results in a bitmap heap scan. It doesn't\n> > > seem to want to use the index at all. Any ideas?\n> >\n> > The \"ORed indexscans\" plan style that was in 7.4 isn't there anymore;\n> > we use bitmap OR'ing instead. There actually are repeated indexscans\n> > hidden under the \"= ANY\" indexscan condition in 8.2, it's just that\n> > the mechanism for detecting duplicate matches is different. AFAIK the\n> > index access costs ought to be about the same either way, and the other\n> > costs the same or better as what we did in 7.4. 
It's clear though that\n> > 8.2 is taking some kind of big hit in the index access in your case.\n> > There's something very strange going on here.\n> >\n> > You do have both lc_collate and lc_ctype set to C, right? What about\n> > database encoding?\n> >\n>\n> hmmm... ok, this is weird. performance seems to have improved\n> significantly after I reloaded postgres after adding some hew hosts to\n> the pg_hba.conf. I'll run some more tests and let you know what\n> happens.\n>\n> Alex\n>\n", "msg_date": "Thu, 5 Apr 2007 11:34:57 -0400", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres 7.4 vs 8.x redux: query plans" } ]
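A footnote to the LIKE/locale discussion above: if a cluster cannot be
re-initialized with lc_collate = C, an index built with the pattern
operator class is the usual way to let anchored LIKE queries
('prefix%') use an index. The sketch below is only illustrative; the
table and column names (t1, num) are borrowed from the thread, and the
index name is invented.

-- confirm what the cluster was initialized with
SHOW lc_collate;
SHOW lc_ctype;

-- under a non-C locale a plain btree index cannot serve LIKE 'prefix%',
-- but an index using the pattern opclass can
CREATE INDEX t1_num_pattern_idx ON t1 (num varchar_pattern_ops);

EXPLAIN ANALYZE
SELECT num, c1, c2 FROM t1 WHERE num LIKE 'RT2350533%';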
[ { "msg_contents": "Hi\n\nIs there a way to get the cache hit ratio in PostGreSQL ?\n\nCheers\n\n-- \n-- Jean Arnaud\n-- Projet SARDES\n-- INRIA Rh�ne-Alpes / LSR-IMAG\n-- http://sardes.inrialpes.fr/~jarnaud\n\n\n", "msg_date": "Tue, 03 Apr 2007 16:12:20 +0200", "msg_from": "Jean Arnaud <[email protected]>", "msg_from_op": true, "msg_subject": "Cache hit ratio" }, { "msg_contents": "Jean Arnaud <Jean.Arnaud 'at' inrialpes.fr> writes:\n\n> Hi\n> \n> Is there a way to get the cache hit ratio in PostGreSQL ?\n\nWhen you activate:\n\n stats_block_level = true\n stats_row_level = true\n\nyou will get global statistics, per table and per index, about\nread disk blocks and saved reads thanks to buffers.\n\n\nThat said, I'd like to add that however, I am not sure what\nperformance gain we should expect by increasing the buffers to\nincrease the cache hit ratio.\n\nFor example, for a bunch of given heavy SQL queries, with -B 1000\n(pg 7.4) the difference is:\n\n select * from pg_statio_user_indexes where indexrelname = 'pk_themes';\n relid | indexrelid | schemaname | relname | indexrelname | idx_blks_read | idx_blks_hit \n ----------+------------+------------+---------+--------------+---------------+--------------\n 77852514 | 86437474 | public | themes | pk_themes | 220 | 0\n \n select * from pg_statio_user_indexes where indexrelname = 'pk_themes';\n relid | indexrelid | schemaname | relname | indexrelname | idx_blks_read | idx_blks_hit \n ----------+------------+------------+---------+--------------+---------------+--------------\n 77852514 | 86437474 | public | themes | pk_themes | 275 | 0\n\nwhich shows the index on primary keys is used, but is always read\nfrom disk.\n\nIf I then use -B 20000 (kernel reports the postmaster process\nenlarges from 22M to 173M of RSS), the difference is:\n\n select * from pg_statio_user_indexes where indexrelname = 'pk_themes';\n relid | indexrelid | schemaname | relname | indexrelname | idx_blks_read | idx_blks_hit \n ----------+------------+------------+---------+--------------+---------------+--------------\n 77852514 | 86437474 | public | themes | pk_themes | 55 | 110\n \n select * from pg_statio_user_indexes where indexrelname = 'pk_themes';\n relid | indexrelid | schemaname | relname | indexrelname | idx_blks_read | idx_blks_hit \n ----------+------------+------------+---------+--------------+---------------+--------------\n 77852514 | 86437474 | public | themes | pk_themes | 55 | 165\n\nwhich shows postmaster manages to keep the index in buffers.\n\nBut, the clock time used for the request is actually identical\nwhen using -B 1000 or -B 20000. I suppose the kernel is bringing\nthe performance difference thanks to filesystem caching.\n\nIn conclusion, I guess that using postmaster cache rather than\nkernel cache is probably better in the long run, because\npostmaster might be able to make better caching decisions than\nthe kernel because it has additional information, but I am not\nsure in which circumstances and the amount of better decisions it\ncan take.\n\n-- \nGuillaume Cottenceau, MNC Mobile News Channel SA, an Alcatel-Lucent Company\nAv. 
de la Gare 10, 1003 Lausanne, Switzerland - direct +41 21 317 50 36\n", "msg_date": "03 Apr 2007 16:53:39 +0200", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cache hit ratio" }, { "msg_contents": "Set log_executor_stats=true;\n\nThen look in the log after running statements (or tail -f logfile).\n\n- Luke\n\n\nOn 4/3/07 7:12 AM, \"Jean Arnaud\" <[email protected]> wrote:\n\n> Hi\n> \n> Is there a way to get the cache hit ratio in PostGreSQL ?\n> \n> Cheers\n\n\n", "msg_date": "Tue, 03 Apr 2007 09:17:47 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cache hit ratio" }, { "msg_contents": "Have you looked at the pg_stat_* views? You must enable stats collection \nto see any data in them, but that's probably what you're looking for.\n\nOn Tue, 3 Apr 2007, Jean Arnaud wrote:\n\n> Hi\n>\n> Is there a way to get the cache hit ratio in PostGreSQL ?\n>\n> Cheers\n>\n> -- \n> -- Jean Arnaud\n> -- Projet SARDES\n> -- INRIA Rhône-Alpes / LSR-IMAG\n> -- http://sardes.inrialpes.fr/~jarnaud\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n", "msg_date": "Tue, 3 Apr 2007 10:33:19 -0700 (PDT)", "msg_from": "Ben <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cache hit ratio" }, { "msg_contents": "Guillaume,\n\n\n> which shows the index on primary keys is used, but is always read\n> from disk.\n\nor, more likely, from the FS cache.\n\n> But, the clock time used for the request is actually identical\n> when using -B 1000 or -B 20000. I suppose the kernel is bringing\n> the performance difference thanks to filesystem caching.\n\nYes. The only way you'd see a differeence is on a mixed load of \nconcurrent read & write queries. Any single-query test is unlikely to \nshow a difference between using the FS cache and shared_buffers.\n\n--Josh\n", "msg_date": "Tue, 03 Apr 2007 11:29:56 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cache hit ratio" } ]
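Tying the replies in this thread together: once block-level statistics
are enabled (stats_block_level = true, as mentioned above), an
aggregate buffer-cache hit ratio can be computed straight from the
pg_statio views. A rough example follows; as noted above, blocks served
from the OS filesystem cache still count as "reads" here, so the figure
only reflects shared_buffers hits.

-- heap (table) blocks: hits vs. reads across all user tables
SELECT sum(heap_blks_hit) AS hits,
       sum(heap_blks_read) AS reads,
       round(sum(heap_blks_hit)::numeric
             / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0), 3)
         AS hit_ratio
  FROM pg_statio_user_tables;

-- the same idea for a single index, e.g. the pk_themes index shown above
SELECT idx_blks_hit, idx_blks_read
  FROM pg_statio_user_indexes
 WHERE indexrelname = 'pk_themes';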
[ { "msg_contents": "Hi all,\n\n In MySQL when you create a table you can define something like:\n\nCREATE TABLE `sneakers` (\n `sneaker_id` char(24) NOT NULL,\n `sneaker_time` int(10) unsigned NOT NULL default '0',\n `sneaker_user` int(10) unsigned NOT NULL default '0',\n UNIQUE KEY `sneaker_id` (`sneaker_id`)\n) ENGINE=MEMORY DEFAULT CHARSET=utf8 MAX_ROWS=1000;\n\nMySQL manual says:\n\n\"The MEMORY storage engine creates tables with contents that are stored\nin memory. As indicated by the name, MEMORY tables are stored in memory.\nThey use hash indexes by default, which makes them very fast, and very\nuseful for creating temporary tables. However, when the server shuts\ndown, all rows stored in MEMORY tables are lost. The tables themselves\ncontinue to exist because their definitions are stored in .frm files on\ndisk, but they are empty when the server restarts.\n\nMAX_ROWS can be used to determine the maximum and minimum numbers of rows\"\n\nIs there anything similar in PostgreSQL? The idea behind this is how I\ncan do in PostgreSQL to have tables where I can query on them very often\nsomething like every few seconds and get results very fast without\noverloading the postmaster.\n\nThank you very much\n-- \nArnau\n", "msg_date": "Tue, 03 Apr 2007 20:59:45 +0200", "msg_from": "Arnau <[email protected]>", "msg_from_op": true, "msg_subject": "Equivalents in PostgreSQL of MySQL's \"ENGINE=MEMORY\" \"MAX_ROWS=1000\"" }, { "msg_contents": "Arnau,\n\n> Is there anything similar in PostgreSQL? The idea behind this is how I\n> can do in PostgreSQL to have tables where I can query on them very often\n> something like every few seconds and get results very fast without\n> overloading the postmaster.\n\nIf you're only querying the tables every few seconds, then you don't \nreally need to worry about performance.\n\n--Josh Berkus\n\n", "msg_date": "Tue, 03 Apr 2007 12:05:31 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Equivalents in PostgreSQL of MySQL's \"ENGINE=MEMORY\"\n\t\"MAX_ROWS=1000\"" }, { "msg_contents": "Arnau <[email protected]> writes:\n> MySQL manual says:\n> \"The MEMORY storage engine creates tables with contents that are stored\n> in memory. As indicated by the name, MEMORY tables are stored in memory.\n\n> Is there anything similar in PostgreSQL?\n\nAs long as you have shared_buffers large enough (or temp_buffers if\nyou're dealing with temp tables), everything will stay in memory anyway.\nDon't sweat it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Apr 2007 15:08:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Equivalents in PostgreSQL of MySQL's \"ENGINE=MEMORY\"\n\t\"MAX_ROWS=1000\"" }, { "msg_contents": "Indeed... I looked through the official TODO list and was unable to \nfind an entry for global temporary tables- such a thing would be \nideal for any transient data such as web sessions or materialized \nviews. Is there any reason why global temp tables shouldn't be \nimplemented? (And, no, I'm not simply referring to \"in-memory\" \ntables- they can simply be handled with a ram disk.)\n\n-M\n", "msg_date": "Tue, 3 Apr 2007 15:16:07 -0400", "msg_from": "\"A.M.\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Equivalents in PostgreSQL of MySQL's \"ENGINE=MEMORY\"\n\t\"MAX_ROWS=1000\"" }, { "msg_contents": "A.M. wrote:\n> Indeed... 
I looked through the official TODO list and was unable to \n> find an entry for global temporary tables- such a thing would be ideal \n> for any transient data such as web sessions or materialized views. Is \n> there any reason why global temp tables shouldn't be implemented? \n> (And, no, I'm not simply referring to \"in-memory\" tables- they can \n> simply be handled with a ram disk.)\nNot exactly what you're looking for and a simple API, but the \nperformance is very nice and has a lot of potential.\n\nhttp://pgfoundry.org/projects/pgmemcache/\n\nImplementing a cleaner more transparent sql wrapper would be even nicer.\n\nhttp://tangent.org/index.pl?lastnode_id=478&node_id=506\n\nJust sharing/tossing some ideas around..\n\nC.\n", "msg_date": "Tue, 03 Apr 2007 22:39:51 +0300", "msg_from": "=?ISO-8859-1?Q?=22C=2E_Bergstr=F6m=22?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Equivalents in PostgreSQL of MySQL's \"ENGINE=MEMORY\"\n\t\"MAX_ROWS=1000\"" }, { "msg_contents": "\nOn Apr 3, 2007, at 15:39 , C. Bergstr�m wrote:\n\n> A.M. wrote:\n>> Indeed... I looked through the official TODO list and was unable \n>> to find an entry for global temporary tables- such a thing would \n>> be ideal for any transient data such as web sessions or \n>> materialized views. Is there any reason why global temp tables \n>> shouldn't be implemented? (And, no, I'm not simply referring to \n>> \"in-memory\" tables- they can simply be handled with a ram disk.)\n> Not exactly what you're looking for and a simple API, but the \n> performance is very nice and has a lot of potential.\n>\n> http://pgfoundry.org/projects/pgmemcache/\n\nI would like to use transactional semantics over tables that can \ndisappear whenever the server fails. memcached does not offer that.\n\nCheers,\nM", "msg_date": "Tue, 3 Apr 2007 15:47:41 -0400", "msg_from": "\"A.M.\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Equivalents in PostgreSQL of MySQL's \"ENGINE=MEMORY\"\n\t\"MAX_ROWS=1000\"" }, { "msg_contents": "On Tuesday 03 April 2007 12:47, \"A.M.\" <[email protected]> wrote:\n> On Apr 3, 2007, at 15:39 , C. Bergström wrote:\n> I would like to use transactional semantics over tables that can\n> disappear whenever the server fails. memcached does not offer that.\n\nHow would temporary tables?\n\n-- \nGinsberg's Theorem:\n 1) You can't win.\n 2) You can't break even.\n 3) You can't quit the game.\n\n", "msg_date": "Tue, 3 Apr 2007 13:00:45 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Equivalents in PostgreSQL of MySQL's \"ENGINE=MEMORY\"\n\t\"MAX_ROWS=1000\"" }, { "msg_contents": "\nOn Apr 3, 2007, at 16:00 , Alan Hodgson wrote:\n\n> On Tuesday 03 April 2007 12:47, \"A.M.\" \n> <[email protected]> wrote:\n>> On Apr 3, 2007, at 15:39 , C. Bergstr�m wrote:\n>> I would like to use transactional semantics over tables that can\n>> disappear whenever the server fails. memcached does not offer that.\n>\n> How would temporary tables?\n\nThe only difference between temporary tables and standard tables is \nthe WAL. Global temporary tables would be accessible by all sessions \nand would be truncated on postmaster start. 
For a further potential \nspeed boost, global temp tables could be put in a ramdisk tablespace.\n\nWell, that's at least how I envision them.\n\nCheers,\nM", "msg_date": "Tue, 3 Apr 2007 16:06:30 -0400", "msg_from": "\"A.M.\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Equivalents in PostgreSQL of MySQL's \"ENGINE=MEMORY\"\n\t\"MAX_ROWS=1000\"" }, { "msg_contents": "Hi Josh,\n\nJosh Berkus wrote:\n> Arnau,\n> \n>> Is there anything similar in PostgreSQL? The idea behind this is how I\n>> can do in PostgreSQL to have tables where I can query on them very often\n>> something like every few seconds and get results very fast without\n>> overloading the postmaster.\n> \n> If you're only querying the tables every few seconds, then you don't \n> really need to worry about performance.\n\nWell, the idea behind this is to have events tables, and a monitoring \nsystem polls that table every few seconds. I'd like to have a kind of \nFIFO stack. From \"the events producer\" point of view he'll be pushing \nrows into that table, when it's filled the oldest one will be removed to \nleave room to the newest one. From \"the consumer\" point of view he'll \nread all the contents of that table.\n\nSo I'll not only querying the tables, I'll need to also modify that tables.\n\n\n-- \nArnau\n", "msg_date": "Wed, 04 Apr 2007 10:50:48 +0200", "msg_from": "Arnau <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Equivalents in PostgreSQL of MySQL's \"ENGINE=MEMORY\"\n\t\"MAX_ROWS=1000\"" }, { "msg_contents": "On 2007-04-04 Arnau wrote:\n> Josh Berkus wrote:\n>>> Is there anything similar in PostgreSQL? The idea behind this is how\n>>> I can do in PostgreSQL to have tables where I can query on them very\n>>> often something like every few seconds and get results very fast\n>>> without overloading the postmaster.\n>> \n>> If you're only querying the tables every few seconds, then you don't\n>> really need to worry about performance.\n> \n> Well, the idea behind this is to have events tables, and a monitoring\n> system polls that table every few seconds. I'd like to have a kind of\n> FIFO stack. From \"the events producer\" point of view he'll be pushing\n> rows into that table, when it's filled the oldest one will be removed\n> to leave room to the newest one. From \"the consumer\" point of view\n> he'll read all the contents of that table.\n> \n> So I'll not only querying the tables, I'll need to also modify that\n> tables.\n\nUmmm... this may be a dumb question, but why are you trying to implement\nsomething like a FIFO with an RDBMS in the first place? Wouldn't it be\nmuch easier to implement something like that as a separate program or\nscript?\n\nRegards\nAnsgar Wiechers\n-- \n\"The Mac OS X kernel should never panic because, when it does, it\nseriously inconveniences the user.\"\n--http://developer.apple.com/technotes/tn2004/tn2118.html\n", "msg_date": "Wed, 4 Apr 2007 13:08:32 +0200", "msg_from": "Ansgar -59cobalt- Wiechers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Equivalents in PostgreSQL of MySQL's \"ENGINE=MEMORY\"\n\t\"MAX_ROWS=1000\"" }, { "msg_contents": "Hi Ansgar ,\n\n> On 2007-04-04 Arnau wrote:\n>> Josh Berkus wrote:\n>>>> Is there anything similar in PostgreSQL? 
The idea behind this is how\n>>>> I can do in PostgreSQL to have tables where I can query on them very\n>>>> often something like every few seconds and get results very fast\n>>>> without overloading the postmaster.\n>>> If you're only querying the tables every few seconds, then you don't\n>>> really need to worry about performance.\n>> Well, the idea behind this is to have events tables, and a monitoring\n>> system polls that table every few seconds. I'd like to have a kind of\n>> FIFO stack. From \"the events producer\" point of view he'll be pushing\n>> rows into that table, when it's filled the oldest one will be removed\n>> to leave room to the newest one. From \"the consumer\" point of view\n>> he'll read all the contents of that table.\n>>\n>> So I'll not only querying the tables, I'll need to also modify that\n>> tables.\n> \n> Ummm... this may be a dumb question, but why are you trying to implement\n> something like a FIFO with an RDBMS in the first place? Wouldn't it be\n> much easier to implement something like that as a separate program or\n> script?\n\nWell, the idea is have a table with a maximum number of rows. As the \nnumber of queries over this table will be very high, I'd like to keep it \nas small as possible and without indexes and so on that could make the \nupdate slower.\n\nMaybe it's the moment to change my question, is there any trick to get a \ntable that can be modified/queried very fast and with the minimum of \noverhead? This table will have several queries every second and I'd like \nto do this as fast as possible\n\nThanks\n-- \nArnau\n", "msg_date": "Wed, 04 Apr 2007 13:51:14 +0200", "msg_from": "Arnau <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Equivalents in PostgreSQL of MySQL's \"ENGINE=MEMORY\"\n\t\"MAX_ROWS=1000\"" }, { "msg_contents": "Probably another helpful solution may be to implement:\n\n ALTER TABLE LOGGING OFF/ON;\n\njust to disable/enable WAL?\n\nFirst it may help in all cases of intensive data load while you slow\ndown other sessions with increasing WAL activity.\nThen you have a way to implement MEMORY-like tables on RAM disk\ntablespace (well, you still need to take care to drop them\nauto-manually :))\n\nHowever, if we speak about performance of MEMORY table - it should be\nmuch better in Tom's solution with big temp buffers rather RAM disk...\nThe strong point in implementation of MEMORY table is it *knows* it\nsits in RAM! and it changes completely all I/O kind logic...\n\nBTW, before NDB was bough by MySQL we done a benchmark to rich a\nhighest possible TPS numbers with it. We got 1.500.000 TPS(!) (yes,\none million and half per second!) knowing all current TPC records are\nmeasured in thousands of transactions per minute - you see impact...\n\nAnd of course for my education I tried to do the same with other\ndatabase vendors running only SELECT queries and placing tablespaces\non RAM disk... After trying all possible combinations I was still\n*very* far :))\n\nMEMORY databases is something like a parallel world, very interesting,\nbut very different :))\n\nRgds,\n-Dimitri\n\nOn 4/3/07, A.M. <[email protected]> wrote:\n>\n> On Apr 3, 2007, at 16:00 , Alan Hodgson wrote:\n>\n> > On Tuesday 03 April 2007 12:47, \"A.M.\"\n> > <[email protected]> wrote:\n> >> On Apr 3, 2007, at 15:39 , C. Bergström wrote:\n> >> I would like to use transactional semantics over tables that can\n> >> disappear whenever the server fails. 
memcached does not offer that.\n> >\n> > How would temporary tables?\n>\n> The only difference between temporary tables and standard tables is\n> the WAL. Global temporary tables would be accessible by all sessions\n> and would be truncated on postmaster start. For a further potential\n> speed boost, global temp tables could be put in a ramdisk tablespace.\n>\n> Well, that's at least how I envision them.\n>\n> Cheers,\n> M\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n", "msg_date": "Wed, 4 Apr 2007 14:46:54 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": false, "msg_subject": "Equivalents in PostgreSQL of MySQL's \"ENGINE=MEMORY\" \"MAX_ROWS=1000\"" }, { "msg_contents": "Dimitri,\n\n> Probably another helpful solution may be to implement:\n>\n> ALTER TABLE LOGGING OFF/ON;\n>\n> just to disable/enable WAL?\n\nActually, a patch similar to this is currently in the queue for 8.3. \n\n-- \nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Thu, 5 Apr 2007 11:05:16 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Equivalents in PostgreSQL of MySQL's \"ENGINE=MEMORY\"\n\t\"MAX_ROWS=1000\"" }, { "msg_contents": "Wow, it's excellent! :))\n\nprobably the next step is:\n\n ALTER TABLE CACHE ON/OFF;\n\njust to force keeping any table in the cache. What do you think?...\n\nRgds,\n-Dimitri\n\nOn 4/5/07, Josh Berkus <[email protected]> wrote:\n> Dimitri,\n>\n> > Probably another helpful solution may be to implement:\n> >\n> > ALTER TABLE LOGGING OFF/ON;\n> >\n> > just to disable/enable WAL?\n>\n> Actually, a patch similar to this is currently in the queue for 8.3.\n>\n> --\n> Josh Berkus\n> PostgreSQL @ Sun\n> San Francisco\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n>\n", "msg_date": "Thu, 5 Apr 2007 21:00:19 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Equivalents in PostgreSQL of MySQL's \"ENGINE=MEMORY\"\n\t\"MAX_ROWS=1000\"" }, { "msg_contents": "On Wednesday 04 April 2007 07:51, Arnau wrote:\n> Hi Ansgar ,\n>\n> > On 2007-04-04 Arnau wrote:\n> >> Josh Berkus wrote:\n> >>>> Is there anything similar in PostgreSQL? The idea behind this is how\n> >>>> I can do in PostgreSQL to have tables where I can query on them very\n> >>>> often something like every few seconds and get results very fast\n> >>>> without overloading the postmaster.\n> >>>\n> >>> If you're only querying the tables every few seconds, then you don't\n> >>> really need to worry about performance.\n> >>\n> >> Well, the idea behind this is to have events tables, and a monitoring\n> >> system polls that table every few seconds. I'd like to have a kind of\n> >> FIFO stack. From \"the events producer\" point of view he'll be pushing\n> >> rows into that table, when it's filled the oldest one will be removed\n> >> to leave room to the newest one. From \"the consumer\" point of view\n> >> he'll read all the contents of that table.\n> >>\n> >> So I'll not only querying the tables, I'll need to also modify that\n> >> tables.\n> >\n> > Ummm... this may be a dumb question, but why are you trying to implement\n> > something like a FIFO with an RDBMS in the first place? 
Wouldn't it be\n> > much easier to implement something like that as a separate program or\n> > script?\n>\n> Well, the idea is have a table with a maximum number of rows. As the\n> number of queries over this table will be very high, I'd like to keep it\n> as small as possible and without indexes and so on that could make the\n> update slower.\n>\n> Maybe it's the moment to change my question, is there any trick to get a\n> table that can be modified/queried very fast and with the minimum of\n> overhead? This table will have several queries every second and I'd like\n> to do this as fast as possible\n>\n\nIf you're wedded to the FIFO idea, I'd suggest reading this:\nhttp://people.planetpostgresql.org/greg/index.php?/archives/89-Implementing-a-queue-in-SQL-Postgres-version.html\n\n-- \nRobert Treat\nBuild A Brighter LAMP :: Linux Apache {middleware} PostgreSQL\n", "msg_date": "Thu, 12 Apr 2007 13:54:01 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Equivalents in PostgreSQL of MySQL's \"ENGINE=MEMORY\"\n\t\"MAX_ROWS=1000\"" } ]
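As a concrete sketch of the capped events table discussed in this
thread (all object names are invented for illustration, and the
1000-row cap mirrors the MAX_ROWS example): a plain table plus a
statement-level trigger can approximate the FIFO behaviour. Note that
with a high insert rate the trimmed rows become dead tuples, so the
table needs frequent vacuuming to stay small; that is part of the
overhead weighed in the replies above.

-- assumes the plpgsql language is installed in the database
CREATE TABLE events (
    event_id   serial PRIMARY KEY,
    payload    text NOT NULL,
    created_at timestamptz NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION trim_events() RETURNS trigger AS $$
BEGIN
    -- discard everything older than the newest 1000 rows
    DELETE FROM events
     WHERE event_id <= (SELECT max(event_id) - 1000 FROM events);
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_trim
    AFTER INSERT ON events
    FOR EACH STATEMENT
    EXECUTE PROCEDURE trim_events();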
[ { "msg_contents": "We need to upgrade a postgres server. I'm not tied to these specific \nalternatives, but I'm curious to get feedback on their general \nqualities.\n\nSCSI\n dual xeon 5120, 8GB ECC\n 8*73GB SCSI 15k drives (PERC 5/i)\n (dell poweredge 2900)\n\nSATA\n dual opteron 275, 8GB ECC\n 24*320GB SATA II 7.2k drives (2*12way 3ware cards)\n (generic vendor)\n\nBoth boxes are about $8k running ubuntu. We're planning to setup with \nraid10. Our main requirement is highest TPS (focused on a lot of \nINSERTS).\n\nQuestion: will 8*15k SCSI drives outperform 24*7K SATA II drives?\n\n-jay\n", "msg_date": "Tue, 3 Apr 2007 15:13:15 -0700", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "SCSI vs SATA" }, { "msg_contents": "For random IO, the 3ware cards are better than PERC\n\n > Question: will 8*15k 73GB SCSI drives outperform 24*7K 320GB SATA II drives?\n\nNope. Not even if the 15K 73GB HDs were the brand new Savvio 15K screamers.\n\nExample assuming 3.5\" HDs and RAID 10 => 4 15K 73GB vs 12 7.2K 320GB\nThe 15K's are 2x faster rpm, but they are only ~23% the density => \nadvantage per HD to SATAs.\nThen there's the fact that there are 1.5x as many 7.2K spindles as \n15K spindles...\n\nUnless your transactions are very small and unbuffered / unscheduled \n(in which case you are in a =lot= of trouble), The SATA set-up rates \nto be ~2x - ~3x faster ITRW than the SCSI set-up.\n\nCheers,\nRon Peacetree\n\n\nAt 06:13 PM 4/3/2007, [email protected] wrote:\n>We need to upgrade a postgres server. I'm not tied to these specific\n>alternatives, but I'm curious to get feedback on their general\n>qualities.\n>\n>SCSI\n> dual xeon 5120, 8GB ECC\n> 8*73GB SCSI 15k drives (PERC 5/i)\n> (dell poweredge 2900)\n>\n>SATA\n> dual opteron 275, 8GB ECC\n> 24*320GB SATA II 7.2k drives (2*12way 3ware cards)\n> (generic vendor)\n>\n>Both boxes are about $8k running ubuntu. We're planning to setup with\n>raid10. Our main requirement is highest TPS (focused on a lot of\n>INSERTS).\n>\n>Question: will 8*15k SCSI drives outperform 24*7K SATA II drives?\n>\n>-jay\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n", "msg_date": "Tue, 03 Apr 2007 19:07:52 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "At 07:07 PM 4/3/2007, Ron wrote:\n>For random IO, the 3ware cards are better than PERC\n>\n> > Question: will 8*15k 73GB SCSI drives outperform 24*7K 320GB SATA \n> II drives?\n>\n>Nope. Not even if the 15K 73GB HDs were the brand new Savvio 15K screamers.\n>\n>Example assuming 3.5\" HDs and RAID 10 => 4 15K 73GB vs 12 7.2K 320GB\n>The 15K's are 2x faster rpm, but they are only ~23% the density => \n>advantage per HD to SATAs.\n>Then there's the fact that there are 1.5x as many 7.2K spindles as \n>15K spindles...\nOops make that =3x= as many 7.2K spindles as 15K spindles...\n\n\n>Unless your transactions are very small and unbuffered / unscheduled \n>(in which case you are in a =lot= of trouble), The SATA set-up rates \n>to be ~2x - ~3x faster ITRW than the SCSI set-up.\n...which makes this imply that the SATA set-up given will be ~4x - \n~6x faster ITRW than the SCSI set-up given.\n\n>Cheers,\n>Ron Peacetree\n>\n>\n>At 06:13 PM 4/3/2007, [email protected] wrote:\n>>We need to upgrade a postgres server. 
I'm not tied to these specific\n>>alternatives, but I'm curious to get feedback on their general\n>>qualities.\n>>\n>>SCSI\n>> dual xeon 5120, 8GB ECC\n>> 8*73GB SCSI 15k drives (PERC 5/i)\n>> (dell poweredge 2900)\n>>\n>>SATA\n>> dual opteron 275, 8GB ECC\n>> 24*320GB SATA II 7.2k drives (2*12way 3ware cards)\n>> (generic vendor)\n>>\n>>Both boxes are about $8k running ubuntu. We're planning to setup with\n>>raid10. Our main requirement is highest TPS (focused on a lot of\n>>INSERTS).\n>>\n>>Question: will 8*15k SCSI drives outperform 24*7K SATA II drives?\n>>\n>>-jay\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 4: Have you searched our list archives?\n>>\n>> http://archives.postgresql.org\n\n", "msg_date": "Tue, 03 Apr 2007 19:13:02 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "Ron wrote:\n> At 07:07 PM 4/3/2007, Ron wrote:\n>> For random IO, the 3ware cards are better than PERC\n>>\n>> > Question: will 8*15k 73GB SCSI drives outperform 24*7K 320GB SATA II \n>> drives?\n>>\n>> Nope. Not even if the 15K 73GB HDs were the brand new Savvio 15K \n>> screamers.\n>>\n>> Example assuming 3.5\" HDs and RAID 10 => 4 15K 73GB vs 12 7.2K 320GB\n>> The 15K's are 2x faster rpm, but they are only ~23% the density => \n>> advantage per HD to SATAs.\n>> Then there's the fact that there are 1.5x as many 7.2K spindles as 15K \n>> spindles...\n> Oops make that =3x= as many 7.2K spindles as 15K spindles...\n\nI don't think the density difference will be quite as high as you seem to \nthink: most 320GB SATA drives are going to be 3-4 platters, the most that a \n73GB SCSI is going to have is 2, and more likely 1, which would make the \nSCSIs more like 50% the density of the SATAs. Note that this only really \nmakes a difference to theoretical sequential speeds; if the seeks are \nrandom the SCSI drives could easily get there 50% faster (lower rotational \nlatency and they certainly will have better actuators for the heads). \nIndividual 15K SCSIs will trounce 7.2K SATAs in terms of i/os per second.\n\nWhat I always do when examining hard drive options is to see if they've \nbeen tested (or a similar model has) at http://www.storagereview.com/ - \nthey have a great database there with lots of low-level information \n(although it seems to be down at the time of writing).\n\nBut what's likely to make the largest difference in the OP's case (many \ninserts) is write caching, and a battery-backed cache would be needed for \nthis. This will help mask write latency differences between the two \noptions, and so benefit SATA more. 
Some 3ware cards offer it, some don't, \nso check the model.\n\nHow the drives are arranged is going to be important too - one big RAID 10 \nis going to be rather worse than having arrays dedicated to each of \npg_xlog, indices and tables, and on that front the SATA option is going to \ngrant more flexibility.\n\nIf you care about how often you'll have to replace a failed drive, then the \nSCSI option no question, although check the cases for hot-swapability.\n\nHTH,\nGeoff\n", "msg_date": "Tue, 03 Apr 2007 18:54:07 -0700", "msg_from": "Geoff Tolley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "You might also ask on:\n\[email protected]\n\nPeople are pretty candid there.\n\n~BAS\n\nOn Tue, 2007-04-03 at 15:13 -0700, [email protected] wrote:\n> Question: will 8*15k SCSI drives outperform 24*7K SATA II drives?\n-- \nBrian A. Seklecki <[email protected]>\nCollaborative Fusion, Inc.\n\n", "msg_date": "Tue, 03 Apr 2007 23:36:44 -0400", "msg_from": "\"Brian A. Seklecki\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "On Tue, 3 Apr 2007, Geoff Tolley wrote:\n\n> \n> Ron wrote:\n>> At 07:07 PM 4/3/2007, Ron wrote:\n>> > For random IO, the 3ware cards are better than PERC\n>> > \n>> > > Question: will 8*15k 73GB SCSI drives outperform 24*7K 320GB SATA II \n>> > drives?\n>> > \n>> > Nope. Not even if the 15K 73GB HDs were the brand new Savvio 15K \n>> > screamers.\n>> > \n>> > Example assuming 3.5\" HDs and RAID 10 => 4 15K 73GB vs 12 7.2K 320GB\n>> > The 15K's are 2x faster rpm, but they are only ~23% the density => \n>> > advantage per HD to SATAs.\n>> > Then there's the fact that there are 1.5x as many 7.2K spindles as 15K \n>> > spindles...\n>> Oops make that =3x= as many 7.2K spindles as 15K spindles...\n>\n> I don't think the density difference will be quite as high as you seem to \n> think: most 320GB SATA drives are going to be 3-4 platters, the most that a \n> 73GB SCSI is going to have is 2, and more likely 1, which would make the \n> SCSIs more like 50% the density of the SATAs. Note that this only really \n> makes a difference to theoretical sequential speeds; if the seeks are random \n> the SCSI drives could easily get there 50% faster (lower rotational latency \n> and they certainly will have better actuators for the heads). Individual 15K \n> SCSIs will trounce 7.2K SATAs in terms of i/os per second.\n\ntrue, but with 3x as many drives (and 4x the capacity per drive) the SATA \nsystem will have to do far less seeking\n\nfor that matter, with 20ish 320G drives, how large would a parition be \nthat only used the outer pysical track of each drive? 
(almost certinly \nmultiple logical tracks) if you took the time to set this up you could \neliminate seeking entirely (at the cost of not useing your capacity, but \nsince you are considering a 12x range in capacity, it's obviously not your \nprimary concern)\n\n> If you care about how often you'll have to replace a failed drive, then the \n> SCSI option no question, although check the cases for hot-swapability.\n\nnote that the CMU and Google studies both commented on being surprised at \nthe lack of difference between the reliability of SCSI and SATA drives.\n\nDavid Lang\n", "msg_date": "Tue, 3 Apr 2007 21:15:03 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "[email protected] wrote:\n> 8*73GB SCSI 15k ...(dell poweredge 2900)...\n> 24*320GB SATA II 7.2k ...(generic vendor)...\n> \n> raid10. Our main requirement is highest TPS (focused on a lot of INSERTS).\n> Question: will 8*15k SCSI drives outperform 24*7K SATA II drives?\n\nIt's worth asking the vendors in question if you can test the configurations\nbefore you buy. Of course with 'generic vendor' it's easiest if that\nvendor has local offices.\n\nIf Dell hesitates, mention that their competitors offer such programs;\nsome by loaning you the servers[1], others by having performance testing\ncenters where you can (for a fee?) come in and benchmark your applications[2].\n\n\n[1] http://www.sun.com/tryandbuy/\n[2] http://www.hp.com/products1/solutioncenters/services/index.html#solution\n", "msg_date": "Wed, 04 Apr 2007 01:36:53 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "This may be a silly question but: will not 3 times as many disk drives\nmean 3 times higher probability for disk failure? Also rumor has it\nthat SATA drives are more prone to fail than SCSI drivers. More\nfailures will result, in turn, in more administration costs.\n\nThanks\nPeter\n\nOn 4/4/07, [email protected] <[email protected]> wrote:\n> On Tue, 3 Apr 2007, Geoff Tolley wrote:\n>\n> >\n> > Ron wrote:\n> >> At 07:07 PM 4/3/2007, Ron wrote:\n> >> > For random IO, the 3ware cards are better than PERC\n> >> >\n> >> > > Question: will 8*15k 73GB SCSI drives outperform 24*7K 320GB SATA II\n> >> > drives?\n> >> >\n> >> > Nope. Not even if the 15K 73GB HDs were the brand new Savvio 15K\n> >> > screamers.\n> >> >\n> >> > Example assuming 3.5\" HDs and RAID 10 => 4 15K 73GB vs 12 7.2K 320GB\n> >> > The 15K's are 2x faster rpm, but they are only ~23% the density =>\n> >> > advantage per HD to SATAs.\n> >> > Then there's the fact that there are 1.5x as many 7.2K spindles as 15K\n> >> > spindles...\n> >> Oops make that =3x= as many 7.2K spindles as 15K spindles...\n> >\n> > I don't think the density difference will be quite as high as you seem to\n> > think: most 320GB SATA drives are going to be 3-4 platters, the most that a\n> > 73GB SCSI is going to have is 2, and more likely 1, which would make the\n> > SCSIs more like 50% the density of the SATAs. Note that this only really\n> > makes a difference to theoretical sequential speeds; if the seeks are random\n> > the SCSI drives could easily get there 50% faster (lower rotational latency\n> > and they certainly will have better actuators for the heads). 
Individual 15K\n> > SCSIs will trounce 7.2K SATAs in terms of i/os per second.\n>\n> true, but with 3x as many drives (and 4x the capacity per drive) the SATA\n> system will have to do far less seeking\n>\n> for that matter, with 20ish 320G drives, how large would a parition be\n> that only used the outer pysical track of each drive? (almost certinly\n> multiple logical tracks) if you took the time to set this up you could\n> eliminate seeking entirely (at the cost of not useing your capacity, but\n> since you are considering a 12x range in capacity, it's obviously not your\n> primary concern)\n>\n> > If you care about how often you'll have to replace a failed drive, then the\n> > SCSI option no question, although check the cases for hot-swapability.\n>\n> note that the CMU and Google studies both commented on being surprised at\n> the lack of difference between the reliability of SCSI and SATA drives.\n>\n> David Lang\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n", "msg_date": "Wed, 4 Apr 2007 13:16:13 +0200", "msg_from": "\"Peter Kovacs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "* Peter Kovacs <[email protected]> [070404 14:40]:\n> This may be a silly question but: will not 3 times as many disk drives\n> mean 3 times higher probability for disk failure? Also rumor has it\n> that SATA drives are more prone to fail than SCSI drivers. More\n> failures will result, in turn, in more administration costs.\nActually, the newest research papers show that all discs (be it\ndesktops, or highend SCSI) have basically the same failure statistics.\n\nBut yes, having 3 times the discs will increase the fault probability.\n\nAndreas\n> \n> Thanks\n> Peter\n> \n> On 4/4/07, [email protected] <[email protected]> wrote:\n> >On Tue, 3 Apr 2007, Geoff Tolley wrote:\n> >\n> >>\n> >> Ron wrote:\n> >>> At 07:07 PM 4/3/2007, Ron wrote:\n> >>> > For random IO, the 3ware cards are better than PERC\n> >>> >\n> >>> > > Question: will 8*15k 73GB SCSI drives outperform 24*7K 320GB SATA II\n> >>> > drives?\n> >>> >\n> >>> > Nope. Not even if the 15K 73GB HDs were the brand new Savvio 15K\n> >>> > screamers.\n> >>> >\n> >>> > Example assuming 3.5\" HDs and RAID 10 => 4 15K 73GB vs 12 7.2K 320GB\n> >>> > The 15K's are 2x faster rpm, but they are only ~23% the density =>\n> >>> > advantage per HD to SATAs.\n> >>> > Then there's the fact that there are 1.5x as many 7.2K spindles as 15K\n> >>> > spindles...\n> >>> Oops make that =3x= as many 7.2K spindles as 15K spindles...\n> >>\n> >> I don't think the density difference will be quite as high as you seem to\n> >> think: most 320GB SATA drives are going to be 3-4 platters, the most that a\n> >> 73GB SCSI is going to have is 2, and more likely 1, which would make the\n> >> SCSIs more like 50% the density of the SATAs. Note that this only really\n> >> makes a difference to theoretical sequential speeds; if the seeks are random\n> >> the SCSI drives could easily get there 50% faster (lower rotational latency\n> >> and they certainly will have better actuators for the heads). 
Individual 15K\n> >> SCSIs will trounce 7.2K SATAs in terms of i/os per second.\n> >\n> >true, but with 3x as many drives (and 4x the capacity per drive) the SATA\n> >system will have to do far less seeking\n> >\n> >for that matter, with 20ish 320G drives, how large would a parition be\n> >that only used the outer pysical track of each drive? (almost certinly\n> >multiple logical tracks) if you took the time to set this up you could\n> >eliminate seeking entirely (at the cost of not useing your capacity, but\n> >since you are considering a 12x range in capacity, it's obviously not your\n> >primary concern)\n> >\n> >> If you care about how often you'll have to replace a failed drive, then the\n> >> SCSI option no question, although check the cases for hot-swapability.\n> >\n> >note that the CMU and Google studies both commented on being surprised at\n> >the lack of difference between the reliability of SCSI and SATA drives.\n> >\n> >David Lang\n> >\n> >---------------------------(end of broadcast)---------------------------\n> >TIP 1: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to [email protected] so that your\n> > message can get through to the mailing list cleanly\n> >\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n", "msg_date": "Wed, 4 Apr 2007 14:46:43 +0200", "msg_from": "Andreas Kostyrka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "Andreas Kostyrka escribi�:\n> * Peter Kovacs <[email protected]> [070404 14:40]:\n> > This may be a silly question but: will not 3 times as many disk drives\n> > mean 3 times higher probability for disk failure? Also rumor has it\n> > that SATA drives are more prone to fail than SCSI drivers. More\n> > failures will result, in turn, in more administration costs.\n> Actually, the newest research papers show that all discs (be it\n> desktops, or highend SCSI) have basically the same failure statistics.\n> \n> But yes, having 3 times the discs will increase the fault probability.\n\n... of individual disks, which is quite different from failure of a disk\narray (in case there is one).\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Wed, 4 Apr 2007 09:19:20 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "But if an individual disk fails in a disk array, sooner than later you\nwould want to purchase a new fitting disk, walk/drive to the location\nof the disk array, replace the broken disk in the array and activate\nthe new disk. Is this correct?\n\nThanks\nPeter\n\nOn 4/4/07, Alvaro Herrera <[email protected]> wrote:\n> Andreas Kostyrka escribió:\n> > * Peter Kovacs <[email protected]> [070404 14:40]:\n> > > This may be a silly question but: will not 3 times as many disk drives\n> > > mean 3 times higher probability for disk failure? Also rumor has it\n> > > that SATA drives are more prone to fail than SCSI drivers. More\n> > > failures will result, in turn, in more administration costs.\n> > Actually, the newest research papers show that all discs (be it\n> > desktops, or highend SCSI) have basically the same failure statistics.\n> >\n> > But yes, having 3 times the discs will increase the fault probability.\n>\n> ... 
of individual disks, which is quite different from failure of a disk\n> array (in case there is one).\n>\n> --\n> Alvaro Herrera http://www.CommandPrompt.com/\n> The PostgreSQL Company - Command Prompt, Inc.\n>\n", "msg_date": "Wed, 4 Apr 2007 15:30:00 +0200", "msg_from": "\"Peter Kovacs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "Peter Kovacs escribi�:\n> But if an individual disk fails in a disk array, sooner than later you\n> would want to purchase a new fitting disk, walk/drive to the location\n> of the disk array, replace the broken disk in the array and activate\n> the new disk. Is this correct?\n\nIdeally you would have a spare disk to let the array controller replace\nthe broken one as soon as it breaks, but yeah, that would be more or\nless the procedure. There is a way to defer the walk/drive until a more\nconvenient opportunity presents.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Wed, 4 Apr 2007 09:36:20 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "\nOn 4-Apr-07, at 8:46 AM, Andreas Kostyrka wrote:\n\n> * Peter Kovacs <[email protected]> [070404 14:40]:\n>> This may be a silly question but: will not 3 times as many disk \n>> drives\n>> mean 3 times higher probability for disk failure? Also rumor has it\n>> that SATA drives are more prone to fail than SCSI drivers. More\n>> failures will result, in turn, in more administration costs.\n> Actually, the newest research papers show that all discs (be it\n> desktops, or highend SCSI) have basically the same failure statistics.\n>\n> But yes, having 3 times the discs will increase the fault probability.\n\nI highly recommend RAID6 to anyone with more than 6 standard SATA \ndrives in a single array. It's actually fairly probable that you will \nlose 2 drives in a 72 hour window (say over a long weekend) at some \npoint.\n\n> Andreas\n>>\n>> Thanks\n>> Peter\n>>\n>> On 4/4/07, [email protected] <[email protected]> wrote:\n>>> On Tue, 3 Apr 2007, Geoff Tolley wrote:\n>>>\n>>>>\n>>>> Ron wrote:\n>>>>> At 07:07 PM 4/3/2007, Ron wrote:\n>>>>>> For random IO, the 3ware cards are better than PERC\n>>>>>>\n>>>>>>> Question: will 8*15k 73GB SCSI drives outperform 24*7K 320GB \n>>>>>>> SATA II\n>>>>>> drives?\n>>>>>>\n>>>>>> Nope. Not even if the 15K 73GB HDs were the brand new Savvio \n>>>>>> 15K\n>>>>>> screamers.\n>>>>>>\n>>>>>> Example assuming 3.5\" HDs and RAID 10 => 4 15K 73GB vs 12 \n>>>>>> 7.2K 320GB\n>>>>>> The 15K's are 2x faster rpm, but they are only ~23% the \n>>>>>> density =>\n>>>>>> advantage per HD to SATAs.\n>>>>>> Then there's the fact that there are 1.5x as many 7.2K \n>>>>>> spindles as 15K\n>>>>>> spindles...\n>>>>> Oops make that =3x= as many 7.2K spindles as 15K spindles...\n>>>>\n>>>> I don't think the density difference will be quite as high as \n>>>> you seem to\n>>>> think: most 320GB SATA drives are going to be 3-4 platters, the \n>>>> most that a\n>>>> 73GB SCSI is going to have is 2, and more likely 1, which would \n>>>> make the\n>>>> SCSIs more like 50% the density of the SATAs. Note that this \n>>>> only really\n>>>> makes a difference to theoretical sequential speeds; if the \n>>>> seeks are random\n>>>> the SCSI drives could easily get there 50% faster (lower \n>>>> rotational latency\n>>>> and they certainly will have better actuators for the heads). 
\n>>>> Individual 15K\n>>>> SCSIs will trounce 7.2K SATAs in terms of i/os per second.\n>>>\n>>> true, but with 3x as many drives (and 4x the capacity per drive) \n>>> the SATA\n>>> system will have to do far less seeking\n>>>\n>>> for that matter, with 20ish 320G drives, how large would a \n>>> parition be\n>>> that only used the outer pysical track of each drive? (almost \n>>> certinly\n>>> multiple logical tracks) if you took the time to set this up you \n>>> could\n>>> eliminate seeking entirely (at the cost of not useing your \n>>> capacity, but\n>>> since you are considering a 12x range in capacity, it's obviously \n>>> not your\n>>> primary concern)\n>>>\n>>>> If you care about how often you'll have to replace a failed \n>>>> drive, then the\n>>>> SCSI option no question, although check the cases for hot- \n>>>> swapability.\n>>>\n>>> note that the CMU and Google studies both commented on being \n>>> surprised at\n>>> the lack of difference between the reliability of SCSI and SATA \n>>> drives.\n>>>\n>>> David Lang\n>>>\n>>> ---------------------------(end of \n>>> broadcast)---------------------------\n>>> TIP 1: if posting/reading through Usenet, please send an appropriate\n>>> subscribe-nomail command to [email protected] so \n>>> that your\n>>> message can get through to the mailing list cleanly\n>>>\n>>\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 3: Have you checked our extensive FAQ?\n>>\n>> http://www.postgresql.org/docs/faq\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n\n", "msg_date": "Wed, 4 Apr 2007 09:38:40 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "* Alvaro Herrera <[email protected]> [070404 15:42]:\n> Peter Kovacs escribi�:\n> > But if an individual disk fails in a disk array, sooner than later you\n> > would want to purchase a new fitting disk, walk/drive to the location\n> > of the disk array, replace the broken disk in the array and activate\n> > the new disk. Is this correct?\n> \n> Ideally you would have a spare disk to let the array controller replace\n> the broken one as soon as it breaks, but yeah, that would be more or\nWell, no matter what, you need to test this procedure. I'd expect in\nmany cases the disc io during the rebuild of the array to that much\nslower that the database server won't be able to cope with the load.\n\nAndreas\n", "msg_date": "Wed, 4 Apr 2007 15:48:40 +0200", "msg_from": "Andreas Kostyrka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "On Apr 3, 2007, at 6:54 PM, Geoff Tolley wrote:\n\n> I don't think the density difference will be quite as high as you \n> seem to think: most 320GB SATA drives are going to be 3-4 platters, \n> the most that a 73GB SCSI is going to have is 2, and more likely 1, \n> which would make the SCSIs more like 50% the density of the SATAs. \n> Note that this only really makes a difference to theoretical \n> sequential speeds; if the seeks are random the SCSI drives could \n> easily get there 50% faster (lower rotational latency and they \n> certainly will have better actuators for the heads). Individual 15K \n> SCSIs will trounce 7.2K SATAs in terms of i/os per second.\n\nGood point. On another note, I am wondering why nobody's brought up \nthe command-queuing perf benefits (yet). Is this because sata vs scsi \nare at par here? 
I'm finding conflicting information on this -- some \ncalling sata's ncq mostly crap, others stating the real-world results \nare negligible. I'm inclined to believe SCSI's pretty far ahead here \nbut am having trouble finding recent articles on this.\n\n> What I always do when examining hard drive options is to see if \n> they've been tested (or a similar model has) at http:// \n> www.storagereview.com/ - they have a great database there with lots \n> of low-level information (although it seems to be down at the time \n> of writing).\n\nStill down! They might want to get better drives... j/k.\n\n> But what's likely to make the largest difference in the OP's case \n> (many inserts) is write caching, and a battery-backed cache would \n> be needed for this. This will help mask write latency differences \n> between the two options, and so benefit SATA more. Some 3ware cards \n> offer it, some don't, so check the model.\n\nThe servers are hooked up to a reliable UPS. The battery-backed cache \nwon't hurt but might be overkill (?).\n\n> How the drives are arranged is going to be important too - one big \n> RAID 10 is going to be rather worse than having arrays dedicated to \n> each of pg_xlog, indices and tables, and on that front the SATA \n> option is going to grant more flexibility.\n\nI've read some recent contrary advice. Specifically advising the \nsharing of all files (pg_xlogs, indices, etc..) on a huge raid array \nand letting the drives load balance by brute force. I know the \npostgresql documentation claims up to 13% more perf for moving the \npg_xlog to its own device(s) -- but by sharing everything on a huge \narray you lose a small amount of perf (when compared to the \ntheoretically optimal solution) - vs being significantly off optimal \nperf if you partition your tables/files wrongly. 
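If the pg_xlog does end up on its own spindles, the usual mechanics are just a move plus a symlink while the postmaster is stopped. A minimal sketch, with placeholder paths (the data directory and target mount point are assumptions, and the new location must stay postgres-owned, mode 0700):

```python
# Minimal sketch of relocating pg_xlog onto its own volume via a symlink.
# Stop the postmaster first; paths below are placeholders.
import os
import shutil

PGDATA   = "/var/lib/postgresql/8.2/main"   # assumption: your data directory
XLOG_NEW = "/mnt/sata_raid/pg_xlog"         # assumption: target on the other array; must not already exist

old = os.path.join(PGDATA, "pg_xlog")
shutil.move(old, XLOG_NEW)                  # carries the existing WAL segments across
os.symlink(XLOG_NEW, old)                   # leave a symlink where PostgreSQL expects pg_xlog
# restart the postmaster afterwards; it follows the symlink transparently
```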
I'm willing to do \nreasonable benchmarking but time is money -- and reconfiguring huge \narrays in multiple configurations to get possibly get incremental \nperf might not be as cost efficient as just spending more on hardware.\n\nThanks for all the tips.\n", "msg_date": "Wed, 4 Apr 2007 07:12:20 -0700", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "At 07:16 AM 4/4/2007, Peter Kovacs wrote:\n>This may be a silly question but: will not 3 times as many disk drives\n>mean 3 times higher probability for disk failure?\n\nYes, all other factors being equal 3x more HDs (24 vs 8) means ~3x \nthe chance of any specific HD failing.\n\nOTOH, either of these numbers is probably smaller than you think.\nAssuming a HD with a 1M hour MTBF (which means that at 1M hours of \noperation you have a ~1/2 chance of that specific HD failing), the \ninstantaneous reliability of any given HD is\n\nx^(1M)= 1/2, (1M)lg(x)= lg(1/2), lg(x)= lg(1/2)/(1M), lg(x)= ~ \n-1/(1M), x= ~.999999307\n\nTo evaluate the instantaneous reliability of a set of \"n\" HDs, we \nraise x to the power of that number of HDs.\nWhether we evaluate x^8= .999994456 or x^24= .999983368, the result \nis still darn close to 1.\n\nMultiple studies have shown that ITRW modern components considered to \nbe critical like HDs, CPUs, RAM, etc fail far less often than say \nfans and PSUs.\n\nIn addition, those same studies show HDs are usually\na= set up to be redundant (RAID) and\nb= hot swap-able\nc= usually do not catastrophically fail with no warning (unlike fans and PSUs)\n\nFinally, catastrophic failures of HDs are infinitesimally rare \ncompared to things like fans.\n\nIf your system is in appropriate conditions and suffers a system \nstopping HW failure, the odds are it will not be a HD that failed.\nBuy HDs with 5+ year warranties + keep them in appropriate \nenvironments and the odds are very good that you will never have to \ncomplain about your HD subsystem.\n\n\n> Also rumor has it that SATA drives are more prone to fail than \n> SCSI drivers. More\n>failures will result, in turn, in more administration costs.\nHard data trumps rumors. The hard data is that you should only buy \nHDs with 5+ year warranties and then make sure to use them only in \nappropriate conditions and under appropriate loads.\n\nRespect those constraints and the numbers say the difference in \nreliability between SCSI, SATA, and SAS HDs is negligible.\n\nCheers,\nRon Peacetree \n\n", "msg_date": "Wed, 04 Apr 2007 10:57:31 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "\n> \n> Good point. On another note, I am wondering why nobody's brought up the \n> command-queuing perf benefits (yet). Is this because sata vs scsi are at \n\nSATAII has similar features.\n\n> par here? I'm finding conflicting information on this -- some calling \n> sata's ncq mostly crap, others stating the real-world results are \n> negligible. I'm inclined to believe SCSI's pretty far ahead here but am \n> having trouble finding recent articles on this.\n\nWhat I find is, a bunch of geeks sit in a room and squabble about a few \npercentages one way or the other. One side feels very l33t because their \nwhite paper looks like the latest swimsuit edition.\n\nReal world specs and real world performance shows that SATAII performs, \nvery, very well. It is kind of like X86. 
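A small sketch of the reliability arithmetic above, using Ron's reading of a 1M-hour MTBF (per-hour survival x with x^1,000,000 = 1/2) and treating drive failures as independent; it also puts a number on the 72-hour double-failure window Rod raised earlier in the thread. Note that real-world failures are often correlated (same manufacturing batch, shared heat, rebuild stress), which this independence assumption ignores:

```python
# Sketch of the MTBF arithmetic above: per-hour survival x defined by x**1_000_000 == 0.5,
# drive failures treated as independent.
MTBF_HOURS = 1_000_000
x = 0.5 ** (1 / MTBF_HOURS)               # ~0.999999307, as in the message above

for n in (8, 24):
    print(f"{n} drives all surviving one hour: {x**n:.9f}")
    # ~0.999994456 for 8 drives, ~0.999983368 for 24 drives

# Rod's 72-hour worry: chance that at least one of the *remaining* drives
# also fails during a 72-hour rebuild window after the first failure.
p_fail_72h = 1 - x ** 72                  # one drive failing within 72 hours
for n in (8, 24):
    survivors = n - 1
    p_second = 1 - (1 - p_fail_72h) ** survivors
    print(f"{n}-drive array: P(second failure within 72h) = {p_second:.6f}")
```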
No chip engineer that I know \nhas ever said, X86 is elegant but guess which chip design is conquering \nall others in the general and enterprise marketplace?\n\nSATAII brute forces itself through some of its performance, for example \n16MB write cache on each drive.\n\nSincerely,\n\nJoshua D. Drake\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Wed, 04 Apr 2007 08:16:26 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "Joshua D. Drake wrote:\n> \n>>\n>> Good point. On another note, I am wondering why nobody's brought up \n>> the command-queuing perf benefits (yet). Is this because sata vs scsi \n>> are at \n> \n> SATAII has similar features.\n> \n>> par here? I'm finding conflicting information on this -- some calling \n>> sata's ncq mostly crap, others stating the real-world results are \n>> negligible. I'm inclined to believe SCSI's pretty far ahead here but \n>> am having trouble finding recent articles on this.\n> \n> What I find is, a bunch of geeks sit in a room and squabble about a few \n> percentages one way or the other. One side feels very l33t because their \n> white paper looks like the latest swimsuit edition.\n> \n> Real world specs and real world performance shows that SATAII performs, \n> very, very well. It is kind of like X86. No chip engineer that I know \n> has ever said, X86 is elegant but guess which chip design is conquering \n> all others in the general and enterprise marketplace?\n> \n> SATAII brute forces itself through some of its performance, for example \n> 16MB write cache on each drive.\n\nsure but for any serious usage one either wants to disable that \ncache(and rely on tagged command queuing or how that is called in SATAII \nworld) or rely on the OS/raidcontroller implementing some sort of \nFUA/write barrier feature(which linux for example only does in pretty \nrecent kernels)\n\n\nStefan\n", "msg_date": "Wed, 04 Apr 2007 17:33:32 +0200", "msg_from": "Stefan Kaltenbrunner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "\n>> SATAII brute forces itself through some of its performance, for \n>> example 16MB write cache on each drive.\n> \n> sure but for any serious usage one either wants to disable that \n> cache(and rely on tagged command queuing or how that is called in SATAII \n\nWhy? Assuming we have a BBU, why would you turn off the cache?\n\n> world) or rely on the OS/raidcontroller implementing some sort of \n> FUA/write barrier feature(which linux for example only does in pretty \n> recent kernels)\n\nSincerely,\n\nJoshua D. Drake\n\n> \n> \n> Stefan\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. 
===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Wed, 04 Apr 2007 08:40:33 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "* Joshua D. Drake <[email protected]> [070404 17:40]:\n> \n> >Good point. On another note, I am wondering why nobody's brought up the command-queuing perf benefits (yet). Is this because sata vs scsi are at \n> \n> SATAII has similar features.\n> \n> >par here? I'm finding conflicting information on this -- some calling sata's ncq mostly crap, others stating the real-world results are negligible. I'm inclined to believe SCSI's \n> >pretty far ahead here but am having trouble finding recent articles on this.\n> \n> What I find is, a bunch of geeks sit in a room and squabble about a few percentages one way or the other. One side feels very l33t because their white paper looks like the latest \n> swimsuit edition.\n> \n> Real world specs and real world performance shows that SATAII performs, very, very well. It is kind of like X86. No chip engineer that I know has ever said, X86 is elegant but guess\n> which chip design is conquering all others in the general and enterprise marketplace?\n\nActually, to second that, we did have very similiar servers with\nSCSI/SATA drives, and I did not notice any relevant measurable\ndifference. OTOH, the SCSI discs were way less reliable than the SATA\ndiscs, that might have been bad luck.\n\nAndreas\n", "msg_date": "Wed, 4 Apr 2007 17:43:05 +0200", "msg_from": "Andreas Kostyrka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "\n> difference. OTOH, the SCSI discs were way less reliable than the SATA\n> discs, that might have been bad luck.\n\nProbably bad luck. I find that SCSI is very reliable, but I don't find \nit any more reliable than SATA. That is assuming correct ventilation etc...\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n> Andreas\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Wed, 04 Apr 2007 08:50:44 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "Joshua D. Drake wrote:\n> \n>>> SATAII brute forces itself through some of its performance, for \n>>> example 16MB write cache on each drive.\n>>\n>> sure but for any serious usage one either wants to disable that \n>> cache(and rely on tagged command queuing or how that is called in SATAII \n> \n> Why? 
Assuming we have a BBU, why would you turn off the cache?\n\nthe BBU is usually only protecting the memory of the (hardware) raid \ncontroller not the one in the drive ...\n\n\nStefan\n", "msg_date": "Wed, 04 Apr 2007 17:59:28 +0200", "msg_from": "Stefan Kaltenbrunner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "On Wed, 4 Apr 2007, Peter Kovacs wrote:\n\n> But if an individual disk fails in a disk array, sooner than later you\n> would want to purchase a new fitting disk, walk/drive to the location\n> of the disk array, replace the broken disk in the array and activate\n> the new disk. Is this correct?\n\n\ncorrect, but more drives also give you the chance to do multiple parity \narrays so that you can loose more drives before you loose data. see the \ntread titled 'Sunfire X4500 recommendations' for some stats on how likely \nyou are to loose your data in the face of multiple drive failures.\n\nyou can actually get much better reliability then RAID 10\n\nDavid Lang\n", "msg_date": "Wed, 4 Apr 2007 09:04:19 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "I had a 'scratch' database for testing, which I deleted, and then disk went out. No problem, no precious data. But now I can't drop the tablespace, or the user who had that as the default tablespace.\n\nI thought about removing the tablespace from pg_tablespaces, but it seems wrong to be monkeying with the system tables. I still can't drop the user, and can't drop the tablespace. What's the right way to clear out Postgres when a disk fails and there's no reason to repair the disk?\n\nThanks,\nCraig\n", "msg_date": "Wed, 04 Apr 2007 09:43:40 -0700", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Can't drop tablespace or user after disk gone" }, { "msg_contents": "\"Craig A. James\" <[email protected]> writes:\n> I had a 'scratch' database for testing, which I deleted, and then disk went out. No problem, no precious data. But now I can't drop the tablespace, or the user who had that as the default tablespace.\n> I thought about removing the tablespace from pg_tablespaces, but it seems wrong to be monkeying with the system tables. I still can't drop the user, and can't drop the tablespace. What's the right way to clear out Postgres when a disk fails and there's no reason to repair the disk?\n\nProbably best to make a dummy postgres-owned directory somewhere and\nrepoint the symlink at it, then DROP TABLESPACE.\n\nCVS HEAD has recently been tweaked to be more forgiving of such cases...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 04 Apr 2007 12:57:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can't drop tablespace or user after disk gone " }, { "msg_contents": "On Wed, Apr 04, 2007 at 08:50:44AM -0700, Joshua D. Drake wrote:\n> >difference. OTOH, the SCSI discs were way less reliable than the SATA\n> >discs, that might have been bad luck.\n> Probably bad luck. I find that SCSI is very reliable, but I don't find \n> it any more reliable than SATA. That is assuming correct ventilation etc...\n\nPerhaps a basic question - but why does the interface matter? 
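A sketch of the dummy-directory fix Tom Lane suggests above for the dead-tablespace case. The data directory path and tablespace OID are placeholders (the OID can be looked up with `SELECT oid, spcname FROM pg_tablespace;`), and this assumes the usual layout where tablespaces appear as symlinks under pg_tblspc:

```python
# Sketch of repointing a dead tablespace's symlink at a dummy directory so
# DROP TABLESPACE can succeed. Paths and OID are placeholders; run as the
# postgres OS user with the postmaster's layout in mind.
import os

PGDATA  = "/var/lib/postgresql/8.2/main"      # assumption: your data directory
SPC_OID = "16385"                             # assumption: OID of the dead tablespace
DUMMY   = "/var/lib/postgresql/dead_tblspc"   # empty, postgres-owned directory

os.makedirs(DUMMY, mode=0o700, exist_ok=True)
link = os.path.join(PGDATA, "pg_tblspc", SPC_OID)  # tablespaces live as symlinks here
if os.path.islink(link):
    os.remove(link)                           # drop the link to the failed disk
os.symlink(DUMMY, link)                       # point it at the dummy directory instead
# then in psql:  DROP TABLESPACE scratch;  and afterwards drop or re-home the user
```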
:-)\n\nI find the subject interesting to read about - but I am having trouble\nunderstanding why SATAII is technically superior or inferior to SCSI as\nan interface, in any place that counts.\n\nIs the opinion being expressed that manufacturers who have decided to\nmove to SATAII are not designing for the enterprise market yes? I find\nmyself doubting this...\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n", "msg_date": "Wed, 4 Apr 2007 12:58:03 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "[email protected] wrote:\n\n> Good point. On another note, I am wondering why nobody's brought up \n> the command-queuing perf benefits (yet). Is this because sata vs scsi \n> are at par here? I'm finding conflicting information on this -- some \n> calling sata's ncq mostly crap, others stating the real-world results \n> are negligible. I'm inclined to believe SCSI's pretty far ahead here \n> but am having trouble finding recent articles on this.\n\nMy personal thoughts are that the SATA NCQ opinion you've found is simply \nbecause the workloads SATAs tend to be given (single-user) don't really \nbenefit that much from it.\n\n> The servers are hooked up to a reliable UPS. The battery-backed cache \n> won't hurt but might be overkill (?).\n\nThe difference is that a BBU isn't going to be affected by OS/hardware \nhangs. There are even some SCSI RAID cards I've seen which can save your \ndata in case the card itself fails (the BBU in these cases is part of the \nsame module as the write cache memory, so you can remove them together and \nput them into a new card, after which the data can be written).\n\nI haven't checked into this recently, but IDE drives are notorious for \nlying about having their internal write cache disabled. Which means that in \ntheory a BBU controller can have a write acknowledged as having happened, \nconsequently purge the data from the write cache, then when the power fails \nthe data still isn't on any kind of permanent storage. It depends how \nparanoid you are as to whether you care about this edge case (and it'd make \nrather less difference if the pg_xlog is on a non-lying drive).\n\nHTH,\nGeoff\n", "msg_date": "Wed, 04 Apr 2007 10:36:36 -0700", "msg_from": "Geoff Tolley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "[email protected] wrote:\n\n> for that matter, with 20ish 320G drives, how large would a parition be \n> that only used the outer pysical track of each drive? 
(almost certinly \n> multiple logical tracks) if you took the time to set this up you could \n> eliminate seeking entirely (at the cost of not useing your capacity, but \n> since you are considering a 12x range in capacity, it's obviously not \n> your primary concern)\n\nGood point: if 8x73GB in a RAID10 is an option, the database can't be \nlarger than 292GB, or 1/12 the available space on the 320GB SATA version.\n\n> note that the CMU and Google studies both commented on being surprised \n> at the lack of difference between the reliability of SCSI and SATA drives.\n\nI'd read about the Google study's conclusions on the failure rate over time \nof drives; I hadn't gotten wind before of it comparing SCSI to SATA drives. \nI do wonder what their access patterns are like, and how that pertains to \nfailure rates. I'd like to think that with smaller seeks (like in the \nmany-big-SATAs-option) the life of the drives would be longer.\n\nOh, one big advantage of SATA over SCSI: simple cabling and no need for \ntermination. Although SAS levels that particular playing field.\n\nCheers,\nGeoff\n", "msg_date": "Wed, 04 Apr 2007 11:03:23 -0700", "msg_from": "Geoff Tolley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "[email protected] wrote:\n\n> Perhaps a basic question - but why does the interface matter? :-)\n\nThe interface itself matters not so much these days as the drives that \nhappen to use it. Most manufacturers make both SATA and SCSI lines, are \nkeen to keep the market segmented, and don't want to cannibalize their SCSI \nbusiness by coming out with any SATA drives that are too good. One notable \nexception is Western Digital, which is why they remain the only makers of \n10K SATAs more than three years after first coming out with them.\n\nCheers,\nGeoff\n", "msg_date": "Wed, 04 Apr 2007 11:40:02 -0700", "msg_from": "Geoff Tolley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": ">sure but for any serious usage one either wants to disable that\n>cache(and rely on tagged command queuing or how that is called in SATAII\n>world) or rely on the OS/raidcontroller implementing some sort of\n>FUA/write barrier feature(which linux for example only does in pretty\n>recent kernels)\n\nDoes anyone know which other hosts have write barrier implementations?\nSolaris? FreeBSD? Windows?\n\nThe buffers should help greatly in such a case, right? Particularly if\nyou have quite a wide stripe.\n\n--\nNo virus found in this outgoing message.\nChecked by AVG Free Edition.\nVersion: 7.5.446 / Virus Database: 268.18.25/745 - Release Date: 03/04/2007\n12:48\n\n", "msg_date": "Wed, 4 Apr 2007 19:45:01 +0100", "msg_from": "\"James Mansion\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "On 4-4-2007 0:13 [email protected] wrote:\n> We need to upgrade a postgres server. I'm not tied to these specific \n> alternatives, but I'm curious to get feedback on their general qualities.\n> \n> SCSI\n> dual xeon 5120, 8GB ECC\n> 8*73GB SCSI 15k drives (PERC 5/i)\n> (dell poweredge 2900)\n\nThis is a SAS set-up, not SCSI. So the cabling, if an issue at all, is \nin SAS' favour rather than SATA's. Normally you don't have to worry \nabout that in a hot-swap chassis anyway.\n\n> SATA\n> dual opteron 275, 8GB ECC\n> 24*320GB SATA II 7.2k drives (2*12way 3ware cards)\n> (generic vendor)\n> \n> Both boxes are about $8k running ubuntu. We're planning to setup with \n> raid10. 
Our main requirement is highest TPS (focused on a lot of INSERTS).\n> \n> Question: will 8*15k SCSI drives outperform 24*7K SATA II drives?\n\nI'm not sure this is an entirely fair question given the fact that the \nsystems aren't easily comparable. They are likely not the same build \nquality or have the same kind of support, they occupy different amounts \nof space (2U vs probably at least 4U or 5U) and there will probably a be \ndifference in energy consumption in favour of the first solution.\nIf you don't care about such things, it may actually be possible to \nbuild a similar set-up as your SATA-system with 12 or 16 15k rpm SAS \ndisks or 10k WD Raptor disks. For the sata-solution you can also \nconsider a 24-port Areca card.\n\n\nBest regards,\n\nArjen\n", "msg_date": "Wed, 04 Apr 2007 21:09:16 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "\nOn Apr 4, 2007, at 12:09 PM, Arjen van der Meijden wrote:\n\n> If you don't care about such things, it may actually be possible to \n> build a similar set-up as your SATA-system with 12 or 16 15k rpm \n> SAS disks or 10k WD Raptor disks. For the sata-solution you can \n> also consider a 24-port Areca card.\n\nfwiw, I've had horrible experiences with areca drivers on linux. I've \nfound them to be unreliable when used with dual AMD64 processors 4+ \nGB of ram. I've tried kernels 2.16 up to 2.19... intermittent yet \ninevitable ext3 corruptions. 3ware cards, on the other hand, have \nbeen rock solid.\n\n-jay\n\n\n", "msg_date": "Wed, 4 Apr 2007 12:17:43 -0700", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "On 4-4-2007 21:17 [email protected] wrote:\n> fwiw, I've had horrible experiences with areca drivers on linux. I've \n> found them to be unreliable when used with dual AMD64 processors 4+ GB \n> of ram. I've tried kernels 2.16 up to 2.19... intermittent yet \n> inevitable ext3 corruptions. 3ware cards, on the other hand, have been \n> rock solid.\n\nThat's the first time I hear such a thing. We have two systems (both are \nprevious generation 64bit Xeon systems with 6 and 8GB memory) which run \nperfectly stable with uptimes with a ARC-1130 and 8 WD-raptor disks.\n\nBest regards,\n\nArjen\n", "msg_date": "Wed, 04 Apr 2007 21:38:28 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "[email protected] wrote:\n> On Wed, Apr 04, 2007 at 08:50:44AM -0700, Joshua D. Drake wrote:\n> > >difference. OTOH, the SCSI discs were way less reliable than the SATA\n> > >discs, that might have been bad luck.\n> > Probably bad luck. I find that SCSI is very reliable, but I don't find \n> > it any more reliable than SATA. That is assuming correct ventilation etc...\n> \n> Perhaps a basic question - but why does the interface matter? :-)\n> \n> I find the subject interesting to read about - but I am having trouble\n> understanding why SATAII is technically superior or inferior to SCSI as\n> an interface, in any place that counts.\n\nYou should probably read this to learn the difference between desktop\nand enterprise-level drives:\n\n http://www.seagate.com/content/docs/pdf/whitepaper/D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. 
+\n", "msg_date": "Wed, 4 Apr 2007 16:27:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "Bruce Momjian wrote:\n> [email protected] wrote:\n>> On Wed, Apr 04, 2007 at 08:50:44AM -0700, Joshua D. Drake wrote:\n>>>> difference. OTOH, the SCSI discs were way less reliable than the SATA\n>>>> discs, that might have been bad luck.\n>>> Probably bad luck. I find that SCSI is very reliable, but I don't find \n>>> it any more reliable than SATA. That is assuming correct ventilation etc...\n>> Perhaps a basic question - but why does the interface matter? :-)\n>>\n>> I find the subject interesting to read about - but I am having trouble\n>> understanding why SATAII is technically superior or inferior to SCSI as\n>> an interface, in any place that counts.\n> \n> You should probably read this to learn the difference between desktop\n> and enterprise-level drives:\n> \n> http://www.seagate.com/content/docs/pdf/whitepaper/D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf\n\nProblem is :), you can purchase SATA Enterprise Drives.\n\nJoshua D. Drake\n\n\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Wed, 04 Apr 2007 13:32:09 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "Joshua D. Drake wrote:\n> Bruce Momjian wrote:\n> > [email protected] wrote:\n> >> On Wed, Apr 04, 2007 at 08:50:44AM -0700, Joshua D. Drake wrote:\n> >>>> difference. OTOH, the SCSI discs were way less reliable than the SATA\n> >>>> discs, that might have been bad luck.\n> >>> Probably bad luck. I find that SCSI is very reliable, but I don't find \n> >>> it any more reliable than SATA. That is assuming correct ventilation etc...\n> >> Perhaps a basic question - but why does the interface matter? :-)\n> >>\n> >> I find the subject interesting to read about - but I am having trouble\n> >> understanding why SATAII is technically superior or inferior to SCSI as\n> >> an interface, in any place that counts.\n> > \n> > You should probably read this to learn the difference between desktop\n> > and enterprise-level drives:\n> > \n> > http://www.seagate.com/content/docs/pdf/whitepaper/D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf\n> \n> Problem is :), you can purchase SATA Enterprise Drives.\n\nRight --- the point is not the interface, but whether the drive is built\nfor reliability or to hit a low price point.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Wed, 4 Apr 2007 16:36:49 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "\n> Problem is :), you can purchase SATA Enterprise Drives.\n\nProblem???? I would have thought that was a good thing!!! 
;-)\n\nCarlos\n--\n\n", "msg_date": "Wed, 04 Apr 2007 16:48:05 -0400", "msg_from": "Carlos Moreno <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "In a perhaps fitting compromise, I have decide to go with a hybrid \nsolution:\n\n8*73GB 15k SAS drives hooked up to Adaptec 4800SAS\nPLUS\n6*150GB SATA II drives hooked up to mobo (for now)\n\nAll wrapped in a 16bay 3U server. My reasoning is that the extra SATA \ndrives are practically free compared to the rest of the system (since \nthe mobo has 6 onboard connectors). I plan on putting the pg_xlog & \noperating system on the sata drives and the tables/indices on the SAS \ndrives, although I might not use the sata drives for the xlog if \nthey dont pan out perf-wise. I plan on getting the battery backed \nmodule for the adaptec (72 hours of charge time).\n\nThanks to everyone for the valuable input. I hope i can do you all \nproud with the setup and postgres.conf optimizations.\n\n-jay\n\n\nOn Apr 4, 2007, at 1:48 PM, Carlos Moreno wrote:\n\n>\n>> Problem is :), you can purchase SATA Enterprise Drives.\n>\n> Problem???? I would have thought that was a good thing!!! ;-)\n>\n> Carlos\n> --\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n\n", "msg_date": "Wed, 4 Apr 2007 16:42:21 -0700", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": ">Right --- the point is not the interface, but whether the drive is built\n>for reliability or to hit a low price point.\n\nPersonally I take the marketing mublings about the enterprise drives\nwith a pinch of salt. The low-price drives HAVE TO be reliable too,\nbecause a non-negligible failure rate will result in returns processing\ncosts that destroy a very thin margin.\n\nGranted, there was a move to very short warranties a while back,\nbut the trend has been for more realistic warranties again recently.\nYou can bet they don't do this unless the drives are generally pretty\ngood.\n\nJames\n\n--\nNo virus found in this outgoing message.\nChecked by AVG Free Edition.\nVersion: 7.5.446 / Virus Database: 268.18.25/745 - Release Date: 03/04/2007\n12:48\n\n", "msg_date": "Thu, 5 Apr 2007 06:25:05 +0100", "msg_from": "\"James Mansion\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "\"James Mansion\" <[email protected]> writes:\n>> Right --- the point is not the interface, but whether the drive is built\n>> for reliability or to hit a low price point.\n\n> Personally I take the marketing mublings about the enterprise drives\n> with a pinch of salt. The low-price drives HAVE TO be reliable too,\n> because a non-negligible failure rate will result in returns processing\n> costs that destroy a very thin margin.\n\nReliability is relative. Server-grade drives are built to be beat upon\n24x7x365 for the length of their warranty period. Consumer-grade drives\nare built to be beat upon a few hours a day, a few days a week, for the\nlength of their warranty period. 
Even if the warranties mention the\nsame number of years, there is a huge difference here.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 05 Apr 2007 01:32:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA " }, { "msg_contents": "If the 3U case has a SAS-expander in its backplane (which it probably \nhas?) you should be able to connect all drives to the Adaptec \ncontroller, depending on the casing's exact architecture etc. That's \nanother two advantages of SAS, you don't need a controller port for each \nharddisk (we have a Dell MD1000 with 15 drives connected to a 4-port \nexternal sas connection) and you can mix SAS and SATA drives on a \nSAS-controller.\n\nBest regards,\n\nArjen\n\nOn 5-4-2007 1:42 [email protected] wrote:\n> In a perhaps fitting compromise, I have decide to go with a hybrid \n> solution:\n> \n> 8*73GB 15k SAS drives hooked up to Adaptec 4800SAS\n> PLUS\n> 6*150GB SATA II drives hooked up to mobo (for now)\n> \n> All wrapped in a 16bay 3U server. My reasoning is that the extra SATA \n> drives are practically free compared to the rest of the system (since \n> the mobo has 6 onboard connectors). I plan on putting the pg_xlog & \n> operating system on the sata drives and the tables/indices on the SAS \n> drives, although I might not use the sata drives for the xlog if they \n> dont pan out perf-wise. I plan on getting the battery backed module for \n> the adaptec (72 hours of charge time).\n> \n> Thanks to everyone for the valuable input. I hope i can do you all proud \n> with the setup and postgres.conf optimizations.\n> \n> -jay\n> \n> \n> On Apr 4, 2007, at 1:48 PM, Carlos Moreno wrote:\n> \n>>\n>>> Problem is :), you can purchase SATA Enterprise Drives.\n>>\n>> Problem???? I would have thought that was a good thing!!! ;-)\n>>\n>> Carlos\n>> -- \n>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 7: You can help support the PostgreSQL project by donating at\n>>\n>> http://www.postgresql.org/about/donate\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n> \n> http://www.postgresql.org/about/donate\n> \n", "msg_date": "Thu, 05 Apr 2007 09:07:11 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "[email protected] wrote:\n> In a perhaps fitting compromise, I have decide to go with a hybrid \n> solution:\n> \n> 8*73GB 15k SAS drives hooked up to Adaptec 4800SAS\n> PLUS\n> 6*150GB SATA II drives hooked up to mobo (for now)\n> \n> All wrapped in a 16bay 3U server. My reasoning is that the extra SATA \n> drives are practically free compared to the rest of the system (since \n> the mobo has 6 onboard connectors). I plan on putting the pg_xlog & \n> operating system on the sata drives and the tables/indices on the SAS \n> drives, although I might not use the sata drives for the xlog if they \n> dont pan out perf-wise. I plan on getting the battery backed module for \n> the adaptec (72 hours of charge time).\n\nIf you have an OLTP kind of workload, you'll want to have the xlog on \nthe drives with the battery backup module. 
The xlog needs to be fsync'd \nevery time you commit, and the battery backup will effectively eliminate \nthe delay that causes.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 05 Apr 2007 10:28:28 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "BE VERY WARY OF USING AN ADAPTEC RAID CONTROLLER!\n\nIME, they are usually the worst of the commodity RAID controllers available.\nI've often seen SW RAID outperform them.\n\nIf you are going to use this config, Tyan's n3600M (AKA S2932) MB has \na variant that comes with 8 SAS + 6 SATA II connectors.\nThe SKU is S2932WG2NR.\nhttp://www.tyan.us/product_board_detail.aspx?pid=453\n\nBe very careful to get this =exact= SKU if you order this board and \nwant the one with SAS support.\nThe non-SAS variant's SKU is S2932G2NR. Note that the only \ndifference is the \"W\" in the middle.\n\nAnyway, the onboard RAID is based on a LSI PCI-E controller.\n\nI'm using this beast (Dual Socket F, AMD Barcelona ready, 16 DIMMS \nsupporting up to 64GB of ECC RAM, 2 PCI-Ex16 slots w/ PCI-Ex8 signalling, etc:\n~$450 US w/o SAS, ~$500 US w/ SAS) for my most recent pg 8.2.3 build \non top of XFS.\n\nIf the on board RAID is or becomes inadequate to your needs, I'd \nstrongly suggest either 3ware or Areca RAID controllers.\n\nSide Note:\nWhat kind of HDs are the 8*73GB ones? If they are the new 2.5\" \nSavvio 15Ks, be =VERY= careful about having proper power and cooling for them.\n14 HD's in one case are going to have a serious transient load on \nsystem start up and (especially with those SAS HDs) can generate a \ngreat deal of heat.\n\nWhat 16bay 3U server are you using?\n\nCheers,\nRon Peacetree\n\nPS to all: Tom's point about the difference between enterprise and \nand non-enterprise HDs is dead on accurate.\nEnterprise class HD's have case \"clam shells\" that are specifically \ndesigned for 5 years of 24x7 operation is gangs of RAID under typical \nconditions found in a reasonable NOC.\nConsumer HDs are designed to be used in consumer boxes in \"one's and \ntwo's\", and for far less time per day, and under far less punishment \nduring the time they are on.\nThere is a =big= difference between consumer class HDs and enterprise \nclass HDs even if they both have 5 year warranties.\nBuy the right thing for your typical use case or risk losing company data.\n\nGetting the wrong thing when it is your responsibility to get the \nright thing is a firing offense if Something Bad happens to the data \nbecause of it where I come from.\n\n\nAt 07:42 PM 4/4/2007, [email protected] wrote:\n>In a perhaps fitting compromise, I have decide to go with a hybrid\n>solution:\n>\n>8*73GB 15k SAS drives hooked up to Adaptec 4800SAS\n>PLUS\n>6*150GB SATA II drives hooked up to mobo (for now)\n>\n>All wrapped in a 16bay 3U server. My reasoning is that the extra SATA\n>drives are practically free compared to the rest of the system (since\n>the mobo has 6 onboard connectors). I plan on putting the pg_xlog &\n>operating system on the sata drives and the tables/indices on the SAS\n>drives, although I might not use the sata drives for the xlog if\n>they dont pan out perf-wise. I plan on getting the battery backed\n>module for the adaptec (72 hours of charge time).\n>\n>Thanks to everyone for the valuable input. 
I hope i can do you all\n>proud with the setup and postgres.conf optimizations.\n>\n>-jay\n>\n>\n>On Apr 4, 2007, at 1:48 PM, Carlos Moreno wrote:\n>\n>>\n>>>Problem is :), you can purchase SATA Enterprise Drives.\n>>\n>>Problem???? I would have thought that was a good thing!!! ;-)\n>>\n>>Carlos\n>>--\n>>\n>>\n>>---------------------------(end of\n>>broadcast)---------------------------\n>>TIP 7: You can help support the PostgreSQL project by donating at\n>>\n>> http://www.postgresql.org/about/donate\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n\n", "msg_date": "Thu, 05 Apr 2007 07:09:30 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "On Wed, 2007-04-04 at 09:12, [email protected] wrote:\n> On Apr 3, 2007, at 6:54 PM, Geoff Tolley wrote:\n> \n\n> > But what's likely to make the largest difference in the OP's case \n> > (many inserts) is write caching, and a battery-backed cache would \n> > be needed for this. This will help mask write latency differences \n> > between the two options, and so benefit SATA more. Some 3ware cards \n> > offer it, some don't, so check the model.\n> \n> The servers are hooked up to a reliable UPS. The battery-backed cache \n> won't hurt but might be overkill (?).\n\nJust had to mention that the point of battery backed cache on the RAID\ncontroller isn't the same as for a UPS on a system.\n\nWith drives that properly report fsync(), your system is limited to the\nrpm of the drive( subsystem) that the pg_xlog sits upon. With battery\nbacked cache, the controller immediately acknowledges an fsync() call\nand then commits it at its leisure. Should the power be lost, either\ndue to mains / UPS failure or internal power supply failure, the\ncontroller hangs onto those data for several days, and upon restart\nflushes them out to the drives they were heading for originally.\n\nbattery backed cache is the best way to get both good performance and\nreliability from a system without breaking the bank. I've seen 2 disk\nRAID-1 setups with BBU beat some pretty big arrays that didn't have a\nBBU on OLTP work. \n\n\n> > How the drives are arranged is going to be important too - one big \n> > RAID 10 is going to be rather worse than having arrays dedicated to \n> > each of pg_xlog, indices and tables, and on that front the SATA \n> > option is going to grant more flexibility.\n> \n> I've read some recent contrary advice. Specifically advising the \n> sharing of all files (pg_xlogs, indices, etc..) on a huge raid array \n> and letting the drives load balance by brute force.\n\nThe other, at first almost counter-intuitive result was that putting\npg_xlog on a different partition on the same array (i.e. one big\nphysical partition broken up into multiple logical ones) because the OS\noverhead of writing all the data to one file system caused performance\nissues. 
Can't remember who reported the performance increase off the top\nof my head.\n\nNote that a lot of the advantages to running on multiple arrays etc...\nare somewhat negated by having a good RAID controller with a BBU.\n", "msg_date": "Thu, 05 Apr 2007 10:03:45 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "On Thu, 5 Apr 2007, Scott Marlowe wrote:\n\n>> I've read some recent contrary advice. Specifically advising the\n>> sharing of all files (pg_xlogs, indices, etc..) on a huge raid array\n>> and letting the drives load balance by brute force.\n>\n> The other, at first almost counter-intuitive result was that putting\n> pg_xlog on a different partition on the same array (i.e. one big\n> physical partition broken up into multiple logical ones) because the OS\n> overhead of writing all the data to one file system caused performance\n> issues. Can't remember who reported the performance increase off the top\n> of my head.\n\nI noticed this behavior on the last Areca based 8 disk Raptor system I built. \nPutting pg_xlog on a separate partition on the same logical volume was faster \nthan putting it on the large volume. It was also faster to have 8xRAID10 for \nOS+data+pg_xlog vs 6xRAID10 for data and 2xRAID1 for pg_xlog+OS. Your \nworkload may vary, but it's definitely worth testing. 
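If you want to repeat that comparison yourself, relocating pg_xlog is just a \nstop/move/symlink done while the cluster is down -- a rough sketch, with \nexample paths only:\n\n   pg_ctl -D /var/lib/pgsql/data stop\n   mv /var/lib/pgsql/data/pg_xlog /xlogpart/pg_xlog\n   ln -s /xlogpart/pg_xlog /var/lib/pgsql/data/pg_xlog\n   pg_ctl -D /var/lib/pgsql/data start\n\nThen rerun whatever benchmark looks most like your workload with each layout. 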
The system in question \nhad 1GB BBU.\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n", "msg_date": "Thu, 5 Apr 2007 08:21:56 -0700 (PDT)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "\nOn Apr 5, 2007, at 8:21 AM, Jeff Frost wrote:\n\n> I noticed this behavior on the last Areca based 8 disk Raptor \n> system I built. Putting pg_xlog on a separate partition on the same \n> logical volume was faster than putting it on the large volume. It \n> was also faster to have 8xRAID10 for OS+data+pg_xlog vs 6xRAID10 \n> for data and 2xRAID1 for pg_xlog+OS. Your workload may vary, but \n> it's definitely worth testing. The system in question had 1GB BBU.\n\nThanks for sharing your findings - I'll definitely try that config out.\n\n-jay\n\n\n", "msg_date": "Thu, 5 Apr 2007 08:38:58 -0700", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "\nOn Apr 5, 2007, at 4:09 AM, Ron wrote:\n\n> BE VERY WARY OF USING AN ADAPTEC RAID CONTROLLER!\n\nThanks - I received similar private emails with the same advice. I \nwill change the controller to a LSI MegaRAID SAS 8408E -- any \nfeedback on this one?\n\n>\n> IME, they are usually the worst of the commodity RAID controllers \n> available.\n> I've often seen SW RAID outperform them.\n>\n> If you are going to use this config, Tyan's n3600M (AKA S2932) MB \n> has a variant that comes with 8 SAS + 6 SATA II connectors.\n> The SKU is S2932WG2NR.\n> http://www.tyan.us/product_board_detail.aspx?pid=453\n\nI plan on leveraging the battery backed module so onboard sas isn't a \npriority for me.\n\n> I'm using this beast (Dual Socket F, AMD Barcelona ready, 16 DIMMS \n> supporting up to 64GB of ECC RAM, 2 PCI-Ex16 slots w/ PCI-Ex8 \n> signalling, etc:\n> ~$450 US w/o SAS, ~$500 US w/ SAS) for my most recent pg 8.2.3 \n> build on top of XFS.\n\nI'm curious to know why you're on xfs (i've been too chicken to stray \nfrom ext3).\n\n> If the on board RAID is or becomes inadequate to your needs, I'd \n> strongly suggest either 3ware or Areca RAID controllers.\n\nI don't know why, but my last attempt at using an areca 1120 w/ linux \non amd64 (and > 4gb ram) was disastrous - i will never use them \nagain. 3ware's been rock solid for us.\n\n>\n> Side Note:\n> What kind of HDs are the 8*73GB ones? If they are the new 2.5\" \n> Savvio 15Ks, be =VERY= careful about having proper power and \n> cooling for them.\n> 14 HD's in one case are going to have a serious transient load on \n> system start up and (especially with those SAS HDs) can generate a \n> great deal of heat.\n\nI went w/ Fujitsu. Fortunately these servers are hosted in a very \nwell ventilated area so i am not that concerned with heat issues.\n\n>\n> What 16bay 3U server are you using?\n\nsupermicro sc836tq-r800\nhttp://www.supermicro.com/products/chassis/3U/836/SC836TQ-R800V.cfm\n\nThanks for all the help!\n\n", "msg_date": "Thu, 5 Apr 2007 08:58:54 -0700", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "[email protected] wrote:\n> \n> On Apr 5, 2007, at 4:09 AM, Ron wrote:\n> \n>> BE VERY WARY OF USING AN ADAPTEC RAID CONTROLLER!\n> \n> Thanks - I received similar private emails with the same advice. 
I will \n> change the controller to a LSI MegaRAID SAS 8408E -- any feedback on \n> this one?\n\nLSI makes a good controller and the driver for linux is very stable.\n\nJoshua D. Drake\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Thu, 05 Apr 2007 09:02:19 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "On 4/5/07, [email protected] <[email protected]> wrote:\n>\n> On Apr 5, 2007, at 4:09 AM, Ron wrote:\n>\n> > BE VERY WARY OF USING AN ADAPTEC RAID CONTROLLER!\n>\n> Thanks - I received similar private emails with the same advice. I\n> will change the controller to a LSI MegaRAID SAS 8408E -- any\n> feedback on this one?\n\nWe use the LSI SAS1064 SAS chips and they've been great.\n\n>\n> >\n> > IME, they are usually the worst of the commodity RAID controllers\n> > available.\n> > I've often seen SW RAID outperform them.\n> >\n> > If you are going to use this config, Tyan's n3600M (AKA S2932) MB\n> > has a variant that comes with 8 SAS + 6 SATA II connectors.\n> > The SKU is S2932WG2NR.\n> > http://www.tyan.us/product_board_detail.aspx?pid=453\n>\n> I plan on leveraging the battery backed module so onboard sas isn't a\n> priority for me.\n>\n> > I'm using this beast (Dual Socket F, AMD Barcelona ready, 16 DIMMS\n> > supporting up to 64GB of ECC RAM, 2 PCI-Ex16 slots w/ PCI-Ex8\n> > signalling, etc:\n> > ~$450 US w/o SAS, ~$500 US w/ SAS) for my most recent pg 8.2.3\n> > build on top of XFS.\n>\n> I'm curious to know why you're on xfs (i've been too chicken to stray\n> from ext3).\n\nI've had great performance with jfs, however there are some issues\nwith it on certain bigendian platforms.\n\n>\n> > If the on board RAID is or becomes inadequate to your needs, I'd\n> > strongly suggest either 3ware or Areca RAID controllers.\n>\n> I don't know why, but my last attempt at using an areca 1120 w/ linux\n> on amd64 (and > 4gb ram) was disastrous - i will never use them\n> again. 3ware's been rock solid for us.\n>\n> >\n> > Side Note:\n> > What kind of HDs are the 8*73GB ones? If they are the new 2.5\"\n> > Savvio 15Ks, be =VERY= careful about having proper power and\n> > cooling for them.\n> > 14 HD's in one case are going to have a serious transient load on\n> > system start up and (especially with those SAS HDs) can generate a\n> > great deal of heat.\n>\n> I went w/ Fujitsu. Fortunately these servers are hosted in a very\n> well ventilated area so i am not that concerned with heat issues.\n>\n\nWe have the 2.5\" drives (seagates and fujitsus) and they have been\nreliable and performed well.\n\nAlex\n", "msg_date": "Thu, 5 Apr 2007 12:10:00 -0400", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "At 11:19 AM 4/5/2007, Scott Marlowe wrote:\n>On Thu, 2007-04-05 at 00:32, Tom Lane wrote:\n> > \"James Mansion\" <[email protected]> writes:\n> > >> Right --- the point is not the interface, but whether the drive is built\n> > >> for reliability or to hit a low price point.\n> >\n> > > Personally I take the marketing mublings about the enterprise drives\n> > > with a pinch of salt. 
The low-price drives HAVE TO be reliable too,\n> > > because a non-negligible failure rate will result in returns processing\n> > > costs that destroy a very thin margin.\n> >\n> > Reliability is relative. Server-grade drives are built to be beat upon\n> > 24x7x365 for the length of their warranty period. Consumer-grade drives\n> > are built to be beat upon a few hours a day, a few days a week, for the\n> > length of their warranty period. Even if the warranties mention the\n> > same number of years, there is a huge difference here.\n>\n>Just a couple of points...\n>\n>Server drives are generally more tolerant of higher temperatures. I.e.\n>the failure rate for consumer and server class HDs may be about the same\n>at 40 degrees C, but by the time the internal case temps get up to 60-70\n>degrees C, the consumer grade drives will likely be failing at a much\n>higher rate, whether they're working hard or not.\n\nExactly correct.\n\n\n>Which brings up my next point:\n>\n>I'd rather have 36 consumer grade drives in a case that moves a LOT of\n>air and keeps the drive bays cool than 12 server class drives in a case\n>that has mediocre / poor air flow in it.\n\nAlso exactly correct. High temperatures or unclean power issues age \nHDs faster than any other factors.\n\nThis is why I dislike 1U's for the vast majority f applications.\n\n\n>I would, however, allocate 3 or 4 drives as spares in the 36 drive \n>array just to be sure.\n10% sparing is reasonable.\n\n\n>Last point:\n>\n>As has been mentioned in this thread already, not all server drives \n>are created equal. Anyone who lived through the HP Surestore 2000 \n>debacle or one like it can attest to that.\n\nYeah, that was very much !no! fun.\n\n\n> Until the drives have been burnt in and proven reliable, just \n> assume that they could all fail at any time and act accordingly.\nYep. Folks should google \"bath tub curve of statistical failure\" or \nsimilar. Basically, always burn in your drives for at least 1/2 a \nday before using them in a production or mission critical role.\n\n\nCheers,\nRon Peacetree \n\n", "msg_date": "Thu, 05 Apr 2007 12:25:38 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "On 5-4-2007 17:58 [email protected] wrote:\n> \n> On Apr 5, 2007, at 4:09 AM, Ron wrote:\n> \n>> BE VERY WARY OF USING AN ADAPTEC RAID CONTROLLER!\n> \n> Thanks - I received similar private emails with the same advice. I will \n> change the controller to a LSI MegaRAID SAS 8408E -- any feedback on \n> this one?\n\nWe have the dell-equivalent (PERC 5/e and PERC 5/i) in production and \nhave had no issues with it, it also performes very well (compared to a \nICP Vortex controller). The LSI has been benchmarked by my colleague and \nhe was pleased with the controller.\n\n> I went w/ Fujitsu. Fortunately these servers are hosted in a very well \n> ventilated area so i am not that concerned with heat issues.\n\nWe have 15 of the 36GB drives and they are doing great. According to \nthat same colleague, the Fujitsu drives are currently the best \nperforming drives. 
Although he hasn't had his hands on the new Savvio \n15k rpm drives yet.\n\n>> What 16bay 3U server are you using?\n> \n> supermicro sc836tq-r800\n> http://www.supermicro.com/products/chassis/3U/836/SC836TQ-R800V.cfm\n\nYou could also look at this version of that chassis:\nhttp://www.supermicro.com/products/chassis/3U/836/SC836E1-R800V.cfm\n\nAfaik it sports a 28-port expander, which should (please confirm with \nyour vendor) allow you to connect all 16 drives to the 8 ports of your \ncontroller, which in turn allows both sets of your disks to be used with \nyour BBU-backed controller.\n\nBest regards,\n\nArjen\n", "msg_date": "Thu, 05 Apr 2007 18:29:42 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": ">Server drives are generally more tolerant of higher temperatures. I.e.\n>the failure rate for consumer and server class HDs may be about the same\n>at 40 degrees C, but by the time the internal case temps get up to 60-70\n>degrees C, the consumer grade drives will likely be failing at a much\n>higher rate, whether they're working hard or not.\n\nCan you cite any statistical evidence for this?\n\nJames\n\n", "msg_date": "Thu, 5 Apr 2007 20:30:50 +0100", "msg_from": "\"James Mansion\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "On Thu, 5 Apr 2007, [email protected] wrote:\n\n>\n> I'm curious to know why you're on xfs (i've been too chicken to stray from \n> ext3).\n\nbetter support for large files (although postgres does tend to try and \nkeep the file size down by going with multiple files) and also for more \nfiles\n\nthe multiple levels of indirection that ext3 uses for accessing large \nfiles (or large directories) can really slow things down, just from the \noverhead of looking up the metadata (including finding where the actual \ndata blocks are on disk)\n\next4 is planning to address this and will probably be a _very_ good \nimprovement, but ext3 has very definite limits that it inherited from \next2.\n\nDavid Lang\n\n", "msg_date": "Thu, 5 Apr 2007 13:11:56 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "On Thu, 5 Apr 2007, Ron wrote:\n\n> At 11:19 AM 4/5/2007, Scott Marlowe wrote:\n>> On Thu, 2007-04-05 at 00:32, Tom Lane wrote:\n>> > \"James Mansion\" <[email protected]> writes:\n>> > > > Right --- the point is not the interface, but whether the drive is \n>> > > > built\n>> > > > for reliability or to hit a low price point.\n>> > \n>> > > Personally I take the marketing mublings about the enterprise drives\n>> > > with a pinch of salt. The low-price drives HAVE TO be reliable too,\n>> > > because a non-negligible failure rate will result in returns \n>> > > processing\n>> > > costs that destroy a very thin margin.\n>> > \n>> > Reliability is relative. Server-grade drives are built to be beat upon\n>> > 24x7x365 for the length of their warranty period. Consumer-grade drives\n>> > are built to be beat upon a few hours a day, a few days a week, for the\n>> > length of their warranty period. Even if the warranties mention the\n>> > same number of years, there is a huge difference here.\n>> \n>> Just a couple of points...\n>> \n>> Server drives are generally more tolerant of higher temperatures. 
I.e.\n>> the failure rate for consumer and server class HDs may be about the same\n>> at 40 degrees C, but by the time the internal case temps get up to 60-70\n>> degrees C, the consumer grade drives will likely be failing at a much\n>> higher rate, whether they're working hard or not.\n>\n> Exactly correct.\n>\n>\n>> Which brings up my next point:\n>> \n>> I'd rather have 36 consumer grade drives in a case that moves a LOT of\n>> air and keeps the drive bays cool than 12 server class drives in a case\n>> that has mediocre / poor air flow in it.\n>\n> Also exactly correct. High temperatures or unclean power issues age HDs \n> faster than any other factors.\n>\n\nthis I agree with, however I believe that this is _so_ much of a factor \nthat it swamps any difference that they may be between 'enterprise' and \n'consumer' drives.\n\n>\n>> Until the drives have been burnt in and proven reliable, just assume that\n>> they could all fail at any time and act accordingly.\n> Yep. Folks should google \"bath tub curve of statistical failure\" or similar. \n> Basically, always burn in your drives for at least 1/2 a day before using \n> them in a production or mission critical role.\n\nfor this and your first point, please go and look at the google and cmu \nstudies. unless the vendors did the burn-in before delivering the drives \nto the sites that installed them, there was no 'infant mortality' spike on \nthe drives (both studies commented on this, they expected to find one)\n\nDavid Lang\n", "msg_date": "Thu, 5 Apr 2007 13:15:58 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "On Thu, 2007-04-05 at 14:30, James Mansion wrote:\n> >Server drives are generally more tolerant of higher temperatures. I.e.\n> >the failure rate for consumer and server class HDs may be about the same\n> >at 40 degrees C, but by the time the internal case temps get up to 60-70\n> >degrees C, the consumer grade drives will likely be failing at a much\n> >higher rate, whether they're working hard or not.\n> \n> Can you cite any statistical evidence for this?\n\nLogic?\n\nMechanical devices have decreasing MTBF when run in hotter environments,\noften at non-linear rates.\n\nServer class drives are designed with a longer lifespan in mind.\n\nServer class hard drives are rated at higher temperatures than desktop\ndrives.\n\nGoogle can supply any numbers to fill those facts in, but I found a\ndozen or so data sheets for various enterprise versus desktop drives in\na matter of minutes.\n", "msg_date": "Thu, 05 Apr 2007 16:13:32 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "On Thu, 5 Apr 2007, Scott Marlowe wrote:\n\n> On Thu, 2007-04-05 at 14:30, James Mansion wrote:\n>>> Server drives are generally more tolerant of higher temperatures. 
I.e.\n>>> the failure rate for consumer and server class HDs may be about the same\n>>> at 40 degrees C, but by the time the internal case temps get up to 60-70\n>>> degrees C, the consumer grade drives will likely be failing at a much\n>>> higher rate, whether they're working hard or not.\n>>\n>> Can you cite any statistical evidence for this?\n>\n> Logic?\n>\n> Mechanical devices have decreasing MTBF when run in hotter environments,\n> often at non-linear rates.\n\nthis I will agree with.\n\n> Server class drives are designed with a longer lifespan in mind.\n>\n> Server class hard drives are rated at higher temperatures than desktop\n> drives.\n\nthese two I question.\n\nDavid Lang\n", "msg_date": "Thu, 5 Apr 2007 19:07:44 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "At 10:07 PM 4/5/2007, [email protected] wrote:\n>On Thu, 5 Apr 2007, Scott Marlowe wrote:\n>\n>>Server class drives are designed with a longer lifespan in mind.\n>>\n>>Server class hard drives are rated at higher temperatures than desktop\n>>drives.\n>\n>these two I question.\n>\n>David Lang\nBoth statements are the literal truth. Not that I would suggest \nabusing your server class HDs just because they are designed to live \nlonger and in more demanding environments.\n\nOverheating, nasty electrical phenomenon, and abusive physical shocks \nwill trash a server class HD almost as fast as it will a consumer grade one.\n\nThe big difference between the two is that a server class HD can sit \nin a rack with literally 100's of its brothers around it, cranking \naway on server class workloads 24x7 in a constant vibration \nenvironment (fans, other HDs, NOC cooling systems) and be quite happy \nwhile a consumer HD will suffer greatly shortened life and die a \nhorrible death in such a environment and under such use.\n\n\nRon \n\n", "msg_date": "Thu, 05 Apr 2007 23:19:04 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "On Thu, 5 Apr 2007, Ron wrote:\n\n> At 10:07 PM 4/5/2007, [email protected] wrote:\n>> On Thu, 5 Apr 2007, Scott Marlowe wrote:\n>> \n>> > Server class drives are designed with a longer lifespan in mind.\n>> > \n>> > Server class hard drives are rated at higher temperatures than desktop\n>> > drives.\n>> \n>> these two I question.\n>> \n>> David Lang\n> Both statements are the literal truth. 
Not that I would suggest abusing your \n> server class HDs just because they are designed to live longer and in more \n> demanding environments.\n>\n> Overheating, nasty electrical phenomenon, and abusive physical shocks will \n> trash a server class HD almost as fast as it will a consumer grade one.\n>\n> The big difference between the two is that a server class HD can sit in a \n> rack with literally 100's of its brothers around it, cranking away on server \n> class workloads 24x7 in a constant vibration environment (fans, other HDs, \n> NOC cooling systems) and be quite happy while a consumer HD will suffer \n> greatly shortened life and die a horrible death in such a environment and \n> under such use.\n\nRon,\n I know that the drive manufacturers have been claiming this, but I'll \nsay that my experiance doesn't show a difference and neither do the google \nand CMU studies (and they were all in large datacenters, some HPC labs, \nsome commercial companies).\n\nagain the studies showed _no_ noticable difference between the \n'enterprise' SCSI drives and the 'consumer' SATA drives.\n\nDavid Lang\n", "msg_date": "Thu, 5 Apr 2007 20:40:35 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "On Thu, 5 Apr 2007, Scott Marlowe wrote:\n\n> On Thu, 2007-04-05 at 14:30, James Mansion wrote:\n>> Can you cite any statistical evidence for this?\n> Logic?\n\nOK, everyone who hasn't already needs to read the Google and CMU papers. \nI'll even provide links for you:\n\nhttp://www.cs.cmu.edu/~bianca/fast07.pdf\nhttp://labs.google.com/papers/disk_failures.pdf\n\nThere are several things their data suggests that are completely at odds \nwith the lore suggested by traditional logic-based thinking in this area. \nSection 3.4 of Google's paper basically disproves that \"mechanical devices \nhave decreasing MTBF when run in hotter environments\" applies to hard \ndrives in the normal range they're operated in. Your comments about \nserver hard drives being rated to higher temperatures is helpful, but \nconclusions drawn from just thinking about something I don't trust when \nthey conflict with statistics to the contrary.\n\nI don't want to believe everything they suggest, but enough of it matches \nmy experience that I find it difficult to dismiss the rest. For example, \nI scan all my drives for reallocated sectors, and the minute there's a \nsingle one I get e-mailed about it and get all the data off that drive \npronto. This has saved me from a complete failure that happened within \nthe next day on multiple occasions.\n\nThe main thing I wish they'd published is breaking some of the statistics \ndown by drive manufacturer. For example, they suggest a significant \nnumber of drive failures were not predicted by SMART. I've seen plenty of \ndrives where the SMART reporting was spotty at best (yes, I'm talking \nabout you, Maxtor) and wouldn't be surprised that they were quiet right up \nto their bitter (and frequent) end. 
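\n\n(If anyone wants to copy the reallocated-sector check I mentioned above, a \nminimal sketch using smartmontools -- the device name here is just an \nexample -- is:\n\n   smartctl -A /dev/sda | grep -i reallocated\n\nrun that from cron and mail yourself the output whenever the count goes \nabove zero; smartd can also handle the alerting for you.)\n\n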
I'm not sure how that factor may have \nskewed this particular bit of data.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 6 Apr 2007 00:37:47 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "\nOn Thu, 5 Apr 2007 [email protected] wrote:\n> On Thu, 5 Apr 2007, Ron wrote:\n> > At 10:07 PM 4/5/2007, [email protected] wrote:\n> >> On Thu, 5 Apr 2007, Scott Marlowe wrote:\n> >>\n> >> > Server class drives are designed with a longer lifespan in mind.\n> >> >\n> >> > Server class hard drives are rated at higher temperatures than desktop\n> >> > drives.\n> >>\n> >> these two I question.\n> >>\n> >> David Lang\n> > Both statements are the literal truth. Not that I would suggest abusing your\n> > server class HDs just because they are designed to live longer and in more\n> > demanding environments.\n> >\n> > Overheating, nasty electrical phenomenon, and abusive physical shocks will\n> > trash a server class HD almost as fast as it will a consumer grade one.\n> >\n> > The big difference between the two is that a server class HD can sit in a\n> > rack with literally 100's of its brothers around it, cranking away on server\n> > class workloads 24x7 in a constant vibration environment (fans, other HDs,\n> > NOC cooling systems) and be quite happy while a consumer HD will suffer\n> > greatly shortened life and die a horrible death in such a environment and\n> > under such use.\n>\n> Ron,\n> I know that the drive manufacturers have been claiming this, but I'll\n> say that my experiance doesn't show a difference and neither do the google\n> and CMU studies (and they were all in large datacenters, some HPC labs,\n> some commercial companies).\n>\n> again the studies showed _no_ noticable difference between the\n> 'enterprise' SCSI drives and the 'consumer' SATA drives.\n>\n> David Lang\n\nHi David, Ron,\n\nI was just about to chime in to Ron's post when you did already, David. My\nexperience supports David's viewpoint. I'm a scientist and with that hat\non my head I must acknowledge that it wasn't my goal to do a study on the\nsubject so my data is more of the character of anecdote. However, I work\nwith some pretty large shops, such as UC's SDSC, NOAA's NCDC (probably the\nworld's largest non-classified data center), Langley, among many others,\nso my perceptions include insights from a lot of pretty sharp folks.\n\n...When you provide your disk drives with clean power, cool, dry air, and\navoid serious shocks, it seems to be everyone's perception that all modern\ndrives - say, of the last ten years or a bit more - are exceptionally\nreliable, and it's not at all rare to get 7 years and more out of a drive.\nWhat seems _most_ detrimental is power-cycles, without regard to which\ntype of drive you might have. This isn't to say the two types, \"server\nclass\" and \"PC\", are equal. PC drives are by comparison rather slow, and\nthat's their biggest downside, but they are also typically rather large.\n\nAgain, anecdotal evidence says that PC disks are typically cycled far more\noften and so they also fail more often. Put them in the same environ as a\nserver-class disk and they'll also live a long time. Science Tools set up\nour data center ten years ago this May, something more than a terabyte -\nlarge at the time (and it's several times that now), and we also adopted a\ngood handful of older equipment at that time, some twelve and fifteen\nyears old by now. 
We didn't have a single disk failure in our first seven\nyears, but then, we also never turn anything off unless it's being\nserviced. Our disk drives are decidedly mixed - SCSI, all forms of ATA\nand some SATA in the last couple of years, and plenty of both server and\nPC class. Yes, the older ones are dying now - we lost one on a server\njust now (so recently we haven't yet replaced it), but the death rate is\nstill remarkably low.\n\nI should point out that we've had far more controller failures than drive\nfailures, and these have come all through these ten years at seemingly\nrandom times. Unfortunately, I can't really comment on which brands are\nbetter or worse, but I do remember once when we had a 50% failure rate of\nsome new SATA cards a few years back. Perhaps it's also worth a keystroke\nor two to comment that we rotate new drives in on an annual basis, and the\nolder ones get moved to less critical, less stressful duties. Generally,\nour oldest drives are now serving our gateway / firewall systems (of which\nwe have several), while our newest are providing primary daily workhorse\nservice, and middle-aged are serving hot-backup duty. Perhaps you could\nargue that this putting out to pasture isn't comparable to heavy 24/7/365\ndemands, but then, that wouldn't be appropriate for a fifteen year old\ndrive, now would it? -smile-\n\nGood luck with your drives,\nRichard\n\n-- \nRichard Troy, Chief Scientist\nScience Tools Corporation\n510-924-1363 or 202-747-1263\[email protected], http://ScienceTools.com/\n\n", "msg_date": "Thu, 5 Apr 2007 21:40:22 -0700 (PDT)", "msg_from": "Richard Troy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "At 11:40 PM 4/5/2007, [email protected] wrote:\n>On Thu, 5 Apr 2007, Ron wrote:\n>\n>>At 10:07 PM 4/5/2007, [email protected] wrote:\n>>>On Thu, 5 Apr 2007, Scott Marlowe wrote:\n>>> > Server class drives are designed with a longer lifespan in mind.\n>>> > > Server class hard drives are rated at higher temperatures than desktop\n>>> > drives.\n>>>these two I question.\n>>>David Lang\n>>Both statements are the literal truth. 
Not that I would suggest \n>>abusing your server class HDs just because they are designed to \n>>live longer and in more demanding environments.\n>>\n>>Overheating, nasty electrical phenomenon, and abusive physical \n>>shocks will trash a server class HD almost as fast as it will a \n>>consumer grade one.\n>>\n>>The big difference between the two is that a server class HD can \n>>sit in a rack with literally 100's of its brothers around it, \n>>cranking away on server class workloads 24x7 in a constant \n>>vibration environment (fans, other HDs, NOC cooling systems) and be \n>>quite happy while a consumer HD will suffer greatly shortened life \n>>and die a horrible death in such a environment and under such use.\n>\n>Ron,\n> I know that the drive manufacturers have been claiming this, but \n> I'll say that my experiance doesn't show a difference and neither \n> do the google and CMU studies (and they were all in large \n> datacenters, some HPC labs, some commercial companies).\n>\n>again the studies showed _no_ noticable difference between the \n>'enterprise' SCSI drives and the 'consumer' SATA drives.\n>\n>David Lang\nBear in mind that Google was and is notorious for pushing their \nenvironmental factors to the limit while using the cheapest \"PoS\" HW \nthey can get their hands on.\nLet's just say I'm fairly sure every piece of HW they were using for \nthose studies was operating outside of manufacturer's suggested specifications.\n\nUnder such conditions the environmental factors are so deleterious \nthat they swamp any other effect.\n\nOTOH, I've spent my career being as careful as possible to as much as \npossible run HW within manufacturer's suggested specifications.\nI've been chided for it over the years... ...usually by folks who \n\"save\" money by buying commodity HDs for big RAID farms in NOCs or \npush their environmental envelope or push their usage envelope or ... \n...and then act surprised when they have so much more down time and \nHW replacements than I do.\n\nAll I can tell you is that I've gotten to eat my holiday dinner far \nmore often than than my counterparts who push it in that fashion.\n\nOTOH, there are crises like the Power Outage of 2003 in the NE USA \nwhere some places had such Bad Things happen that it simply doesn't \nmatter what you bought\n(power dies, generator cuts in, power comes on, but AC units crash, \ntemperatures shoot up so fast that by the time everything is \nre-shutdown it's in the 100F range in the NOC. Lot's 'O Stuff dies \non the spot + spend next 6 months having HW failures at \n+considerably+ higher rates than historical norms. Ick..)\n\n IME, it really does make a difference =if you pay attention to the \ndifference in the first place=.\nIf you treat everything equally poorly, then you should not be \nsurprised when everything acts equally poorly.\n\nBut hey, YMMV.\n\nCheers,\nRon Peacetree \n\n", "msg_date": "Fri, 06 Apr 2007 01:27:24 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "On Fri, 6 Apr 2007, Ron wrote:\n\n> Bear in mind that Google was and is notorious for pushing their environmental \n> factors to the limit while using the cheapest \"PoS\" HW they can get their \n> hands on.\n> Let's just say I'm fairly sure every piece of HW they were using for those \n> studies was operating outside of manufacturer's suggested specifications.\n\nRon, please go read both the studies. 
unless you want to say that every \norginization the CMU picked to study also abused their hardware as \nwell....\n\n> Under such conditions the environmental factors are so deleterious that they \n> swamp any other effect.\n>\n> OTOH, I've spent my career being as careful as possible to as much as \n> possible run HW within manufacturer's suggested specifications.\n> I've been chided for it over the years... ...usually by folks who \"save\" \n> money by buying commodity HDs for big RAID farms in NOCs or push their \n> environmental envelope or push their usage envelope or ... ...and then act \n> surprised when they have so much more down time and HW replacements than I \n> do.\n>\n> All I can tell you is that I've gotten to eat my holiday dinner far more \n> often than than my counterparts who push it in that fashion.\n>\n> OTOH, there are crises like the Power Outage of 2003 in the NE USA where some \n> places had such Bad Things happen that it simply doesn't matter what you \n> bought\n> (power dies, generator cuts in, power comes on, but AC units crash, \n> temperatures shoot up so fast that by the time everything is re-shutdown it's \n> in the 100F range in the NOC. Lot's 'O Stuff dies on the spot + spend next 6 \n> months having HW failures at +considerably+ higher rates than historical \n> norms. Ick..)\n>\n> IME, it really does make a difference =if you pay attention to the \n> difference in the first place=.\n> If you treat everything equally poorly, then you should not be surprised when \n> everything acts equally poorly.\n>\n> But hey, YMMV.\n>\n> Cheers,\n> Ron Peacetree \n>\n>\n", "msg_date": "Thu, 5 Apr 2007 22:53:46 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "[email protected] writes:\n> On Thu, 5 Apr 2007, Ron wrote:\n>> Yep. Folks should google \"bath tub curve of statistical failure\" or similar. \n>> Basically, always burn in your drives for at least 1/2 a day before using \n>> them in a production or mission critical role.\n\n> for this and your first point, please go and look at the google and cmu \n> studies. unless the vendors did the burn-in before delivering the drives \n> to the sites that installed them, there was no 'infant mortality' spike on \n> the drives (both studies commented on this, they expected to find one)\n\nIt seems hard to believe that the vendors themselves wouldn't burn in\nthe drives for half a day, if that's all it takes to eliminate a large\nfraction of infant mortality. The savings in return processing and\ncustomer goodwill would surely justify the electricity they'd use.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Apr 2007 02:00:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA " }, { "msg_contents": "On Fri, 6 Apr 2007, Tom Lane wrote:\n\n> It seems hard to believe that the vendors themselves wouldn't burn in\n> the drives for half a day, if that's all it takes to eliminate a large\n> fraction of infant mortality.\n\nI've read that much of the damage that causes hard drive infant mortality \nis related to shipping. The drive is fine when it leaves the factory, \ngets shaken up and otherwise brutalized by environmental changes in \ntransit (it's a long trip from Singapore to here), and therefore is a bit \nwhacked by the time it is installed. 
A quick post-installation burn-in \nhelps ferret out when this happens.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 6 Apr 2007 02:10:11 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA " }, { "msg_contents": "Greg Smith <[email protected]> writes:\n> On Fri, 6 Apr 2007, Tom Lane wrote:\n>> It seems hard to believe that the vendors themselves wouldn't burn in\n>> the drives for half a day, if that's all it takes to eliminate a large\n>> fraction of infant mortality.\n\n> I've read that much of the damage that causes hard drive infant mortality \n> is related to shipping.\n\nDoh, of course. Maybe I'd better go to bed now...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Apr 2007 02:38:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA " }, { "msg_contents": "On Thu, Apr 05, 2007 at 11:19:04PM -0400, Ron wrote:\n>Both statements are the literal truth.\n\nRepeating something over and over again doesn't make it truth. The OP \nasked for statistical evidence (presumably real-world field evidence) to \nsupport that assertion. Thus far, all the publicly available evidence \ndoes not show a significant difference between SATA and SCSI reliability \nin the field.\n\nMike Stone\n", "msg_date": "Fri, 06 Apr 2007 07:38:17 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "On Fri, Apr 06, 2007 at 02:00:15AM -0400, Tom Lane wrote:\n>It seems hard to believe that the vendors themselves wouldn't burn in\n>the drives for half a day, if that's all it takes to eliminate a large\n>fraction of infant mortality. The savings in return processing and\n>customer goodwill would surely justify the electricity they'd use.\n\nWouldn't help if the reason for the infant mortality is bad handling \nbetween the factory and the rack. One thing that I did question in the \nCMU study was the lack of infant mortality--I've definately observed it, \nbut it might just be that my UPS guy is clumsier than theirs.\n\nMike Stone\n", "msg_date": "Fri, 06 Apr 2007 07:43:38 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "I read them as soon as they were available. Then I shrugged and \nnoted YMMV to myself.\n\n\n1= Those studies are valid for =those= users under =those= users' \ncircumstances in =those= users' environments.\n How well do those circumstances and environments mimic anyone else's?\nI don't know since the studies did not document said in enough detail \n(and it would be nigh unto impossible to do so) for me to compare \nmine to theirs. I =do= know that neither Google's nor a university's \nnor an ISP's nor a HPC supercomputing facility's NOC are particularly \nsimilar to say a financial institution's or a health care organization's NOC.\n...and they better not be. Ditto the personnel's behavior working them.\n\nYou yourself have said the environmental factors make a big \ndifference. I agree. I submit that therefore differences in the \nenvironmental factors are just as significant.\n\n\n2= I'll bet all the money in your pockets vs all the money in my \npockets that people are going to leap at the chance to use these \nstudies as yet another excuse to pinch IT spending further. 
In the \nprocess they are consciously or unconsciously going to imitate some \nor all of the environments that were used in those studies.\nWhich IMHO is exactly wrong for most mission critical functions in \nmost non-university organizations.\n\nWhile we can't all pamper our HDs to the extent that Richard Troy's \norganization can, frankly that is much closer to the way things \nshould be done for most organizations. Ditto Greg Smith's =very= good habit:\n\"I scan all my drives for reallocated sectors, and the minute there's \na single one I get e-mailed about it and get all the data off that \ndrive pronto. This has saved me from a complete failure that \nhappened within the next day on multiple occasions.\"\nAmen.\n\nI'll make the additional bet that no matter what they say neither \nGoogle nor the CMU places had to deal with setting up and running \nenvironments where the consequences of data loss or data corruption \nare as serious as they are for most mission critical business \napplications. =Especially= DBMSs in such organizations.\nIf anyone tried to convince me to run a mission critical or \nproduction DBMS in a business the way Google runs their HW, I'd be \napplying the clue-by-four liberally in \"boot to the head\" fashion \nuntil either they got just how wrong they were or they convinced me \nthey were too stupid to learn.\nA which point they are never touching my machines.\n\n\n3= From the CMU paper:\n \"We also find evidence, based on records of disk replacements in \nthe field, that failure rate is not constant with age, and that, \nrather than a significant infant mortality effect, we see a \nsignificant early onset of wear-out degradation. That is, replacement \nrates in our data grew constantly with age, an effect often assumed \nnot to set in until after a nominal lifetime of 5 years.\"\n\"In our data sets, the replacement rates of SATA disks are not worse \nthan the replacement rates of SCSI or FC disks.\n=This may indicate that disk independent factors, such as operating \nconditions, usage and environmental factors, affect replacement=.\" \n(emphasis mine)\n\nIf you look at the organizations in these two studies, you will note \nthat one thing they all have in common is that they are organizations \nthat tend to push the environmental and usage envelopes. Especially \nwith regards to anything involving spending money. (Google is an \nextreme even in that group).\nWhat these studies say clearly to me is that it is possible to be \npenny-wise and pound-foolish with regards to IT spending... ...and \nthat these organizations have a tendency to be so.\nNot a surprise to anyone who's worked in those environments I'm sure.\nThe last thing the IT industry needs is for everyone to copy these \norganization's IT behavior!\n\n\n4= Tom Lane is of course correct that vendors burn in their HDs \nenough before selling them to get past most infant mortality. Then \nany time any HD is shipped between organizations, it is usually \nburned in again to detect and possibly deal with issues caused by \nshipping. That's enough to see to it that the end operating \nenvironment is not going to see a bath tub curve failure rate.\nThen environmental, usage, and maintenance factors further distort \nboth the shape and size of the statistical failure curve.\n\n\n5= The major conclusion of the CMU paper is !NOT! 
that we should buy \nthe cheapest HDs we can because HD quality doesn't make a difference.\nThe important conclusion is that a very large segment of the industry \noperates their equipment significantly enough outside manufacturer's \nspecifications that we need a new error rate model for end use. I agree.\nRegardless of what Seagate et al can do in their QA labs, we need \nreliability numbers that are actually valid ITRW of HD usage.\n\nThe other take-away is that organizational policy and procedure with \nregards to HD maintenance and use in most organizations could use improving.\nI strongly agree with that as well.\n\n\nCheers,\nRon Peacetree\n\n\n\nAt 01:53 AM 4/6/2007, [email protected] wrote:\n>On Fri, 6 Apr 2007, Ron wrote:\n>\n>>Bear in mind that Google was and is notorious for pushing their \n>>environmental factors to the limit while using the cheapest \"PoS\" \n>>HW they can get their hands on.\n>>Let's just say I'm fairly sure every piece of HW they were using \n>>for those studies was operating outside of manufacturer's suggested \n>>specifications.\n>\n>Ron, please go read both the studies. unless you want to say that \n>every orginization the CMU picked to study also abused their \n>hardware as well....\n>\n>>Under such conditions the environmental factors are so deleterious \n>>that they swamp any other effect.\n>>\n>>OTOH, I've spent my career being as careful as possible to as much \n>>as possible run HW within manufacturer's suggested specifications.\n>>I've been chided for it over the years... ...usually by folks who \n>>\"save\" money by buying commodity HDs for big RAID farms in NOCs or \n>>push their environmental envelope or push their usage envelope or \n>>... ...and then act surprised when they have so much more down time \n>>and HW replacements than I do.\n>>\n>>All I can tell you is that I've gotten to eat my holiday dinner far \n>>more often than than my counterparts who push it in that fashion.\n>>\n>>OTOH, there are crises like the Power Outage of 2003 in the NE USA \n>>where some places had such Bad Things happen that it simply doesn't \n>>matter what you bought\n>>(power dies, generator cuts in, power comes on, but AC units crash, \n>>temperatures shoot up so fast that by the time everything is \n>>re-shutdown it's in the 100F range in the NOC. Lot's 'O Stuff dies \n>>on the spot + spend next 6 months having HW failures at \n>>+considerably+ higher rates than historical norms. Ick..)\n>>\n>>IME, it really does make a difference =if you pay attention to the \n>>difference in the first place=.\n>>If you treat everything equally poorly, then you should not be \n>>surprised when everything acts equally poorly.\n>>\n>>But hey, YMMV.\n>>\n>>Cheers,\n>>Ron Peacetree\n\n", "msg_date": "Fri, 06 Apr 2007 08:31:51 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "Tom Lane wrote:\n> Greg Smith <[email protected]> writes:\n>> On Fri, 6 Apr 2007, Tom Lane wrote:\n>>> It seems hard to believe that the vendors themselves wouldn't burn in\n>>> the drives for half a day, if that's all it takes to eliminate a large\n>>> fraction of infant mortality.\n> \n>> I've read that much of the damage that causes hard drive infant mortality \n>> is related to shipping.\n> \n> Doh, of course. 
Maybe I'd better go to bed now...\n> \n> \t\t\tregards, tom lane\n\nYou actually sleep?\n\n-- \nUntil later, Geoffrey\n\nThose who would give up essential Liberty, to purchase a little\ntemporary Safety, deserve neither Liberty nor Safety.\n - Benjamin Franklin\n", "msg_date": "Fri, 06 Apr 2007 08:43:36 -0400", "msg_from": "Geoffrey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "Michael Stone wrote:\n> On Fri, Apr 06, 2007 at 02:00:15AM -0400, Tom Lane wrote:\n>> It seems hard to believe that the vendors themselves wouldn't burn in\n>> the drives for half a day, if that's all it takes to eliminate a large\n>> fraction of infant mortality. The savings in return processing and\n>> customer goodwill would surely justify the electricity they'd use.\n> \n> Wouldn't help if the reason for the infant mortality is bad handling \n> between the factory and the rack. One thing that I did question in the \n> CMU study was the lack of infant mortality--I've definately observed it, \n> but it might just be that my UPS guy is clumsier than theirs.\n\nGood point. Folks must realize that carriers handle computer hardware \nthe same way they handle a box of marshmallows or ball bearings.. A box \nis a box is a box.\n\n-- \nUntil later, Geoffrey\n\nThose who would give up essential Liberty, to purchase a little\ntemporary Safety, deserve neither Liberty nor Safety.\n - Benjamin Franklin\n", "msg_date": "Fri, 06 Apr 2007 08:46:11 -0400", "msg_from": "Geoffrey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "At 07:38 AM 4/6/2007, Michael Stone wrote:\n>On Thu, Apr 05, 2007 at 11:19:04PM -0400, Ron wrote:\n>>Both statements are the literal truth.\n>\n>Repeating something over and over again doesn't make it truth. The \n>OP asked for statistical evidence (presumably real-world field \n>evidence) to support that assertion. Thus far, all the publicly \n>available evidence does not show a significant difference between \n>SATA and SCSI reliability in the field.\nNot quite. Each of our professional experiences is +also+ \nstatistical evidence. Even if it is a personally skewed sample.\n\nFor instance, Your experience suggests that infant mortality is more \nreal than the studies stated. Does that invalidate your \nexperience? Of course not.\nDoes that invalidate the studies? Equally clearly not.\n\nMy experience supports the hypothesis that spending slightly more for \nquality and treating HDs better is worth it.\nDoes that mean one of us is right and the other wrong? Nope. Just \nthat =in my experience= it does make a difference.\n\nThe OP asked for real world evidence. We're providing it; and \nacross a wider range of use cases than the studies used.\n\nCheers,\nRon \n\n", "msg_date": "Fri, 06 Apr 2007 08:49:08 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "On Fri, Apr 06, 2007 at 08:49:08AM -0400, Ron wrote:\n>Not quite. Each of our professional experiences is +also+ \n>statistical evidence. Even if it is a personally skewed sample.\n\nI'm not sure that word means what you think it means. I think the one \nyou're looking for is \"anecdotal\".\n\n>My experience supports the hypothesis that spending slightly more for \n>quality and treating HDs better is worth it.\n>Does that mean one of us is right and the other wrong? Nope. 
Just \n>that =in my experience= it does make a difference.\n\nWell, without real numbers to back it up, it doesn't mean much in the \nface of studies that include real numbers. Humans are, in general, \nexceptionally lousy at assessing probabilities. There's a very real \ntendency to exaggerate evidence that supports our preconceptions and \ndiscount evidence that contradicts them. Maybe you're immune to that. \nPersonally, I tend to simply assume that anecdotal evidence isn't very\nuseful. This is why having some large scale independent studies is \nvaluable. The manufacturer's studies are obviously biased, and it's good \nto have some basis for decision making other than \"logic\" (the classic \n\"proof by 'it stands to reason'\"), marketing, or \"I paid more for it\" (\"so \nit's better whether it's better or not\").\n\nMike Stone\n", "msg_date": "Fri, 06 Apr 2007 09:23:48 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "On Thu, 2007-04-05 at 23:37, Greg Smith wrote:\n> On Thu, 5 Apr 2007, Scott Marlowe wrote:\n> \n> > On Thu, 2007-04-05 at 14:30, James Mansion wrote:\n> >> Can you cite any statistical evidence for this?\n> > Logic?\n> \n> OK, everyone who hasn't already needs to read the Google and CMU papers. \n> I'll even provide links for you:\n> \n> http://www.cs.cmu.edu/~bianca/fast07.pdf\n> http://labs.google.com/papers/disk_failures.pdf\n> \n> There are several things their data suggests that are completely at odds \n> with the lore suggested by traditional logic-based thinking in this area. \n> Section 3.4 of Google's paper basically disproves that \"mechanical devices \n> have decreasing MTBF when run in hotter environments\" applies to hard \n> drives in the normal range they're operated in.\n\nOn the google:\n\nThe google study ONLY looked at consumer grade drives. It did not\ncompare them to server class drives.\n\nThis is only true when the temperature is fairly low. Note that the\ndrive temperatures in the google study are <=55C. If the drive temp is\nbelow 55C, then the environment, by extension, must be lower than that\nby some fair bit, likely 10-15C, since the drive is a heat source, and\nthe environment the heat sink. So, the environment here is likely in\nthe 35C range.\n\nMost server drives are rated for 55-60C environmental temperature\noperation, which means the drive would be even hotter.\n\nAs for the CMU study:\n\nIt didn't expressly compare server to consumer grade hard drives. \nRemember, there are server class SATA drives, and there were (once upon\na time) consumer class SCSI drives. If they had separated out the\ndrives by server / consumer grade I think the study would have been more\ninteresting. But we just don't know from that study.\n\nPersonal Experience:\n\nIn my last job we had three very large storage arrays (big black\nrefrigerator looking boxes, you know the kind.) Each one had somewhere\nin the range of 150 or so drives in it. The first two we purchased were\nbased on 9Gig server class SCSI drives. The third, and newer one, was\nbased on commodity IDE drives. I'm not sure of the size, but I believe\nthey were somewhere around 20Gigs or so. So, this was 5 or so years\nago, not recently.\n\nWe had a cooling failure in our hosting center, and the internal\ntemperature of the data center rose to about 110F to 120F (43C to 48C). 
\nWe ran at that temperature for about 12 hours, before we got a\nrefrigerator on a flatbed brought in (btw, I highly recommend Aggreko if\nyou need large scale portable air conditioners or generators) to cool\nthings down.\n\nIn the months that followed the drives in the IDE based storage array\nfailed by the dozens. We eventually replaced ALL the drives in that\nstorage array because of the failure rate. The SCSI based arrays had a\nfew extra drives fail than usual, but nothing too shocking.\n\nNow, maybe now Seagate et. al. are making their consumer grade drives\nfrom yesterday's server grade technology, but 5 or 6 years ago that was\nnot the case from what I saw.\n\n> Your comments about \n> server hard drives being rated to higher temperatures is helpful, but \n> conclusions drawn from just thinking about something I don't trust when \n> they conflict with statistics to the contrary.\n\nActually, as I looked up some more data on this, I found it interesting\nthat 5 to 10 years ago, consumer grade drives were rated for 35C\nenvironments, while today consumer grade drives seem to be rated to 55C\nor 60C. Same as server drives were 5 to 10 years ago. I do think that\nserver grade drive tech has been migrating into the consumer realm over\ntime. I can imagine that today's high performance game / home systems\nwith their heat generating video cards and tendency towards RAID1 /\nRAID0 drive setups are pushing the drive manufacturers to improve\nreliability of consumer disk drives.\n\n> The main thing I wish they'd published is breaking some of the statistics \n> down by drive manufacturer. For example, they suggest a significant \n> number of drive failures were not predicted by SMART. I've seen plenty of \n> drives where the SMART reporting was spotty at best (yes, I'm talking \n> about you, Maxtor) and wouldn't be surprised that they were quiet right up \n> to their bitter (and frequent) end. I'm not sure how that factor may have \n> skewed this particular bit of data.\n\nI too have pretty much given up on Maxtor drives and things like SMART\nor sleep mode, or just plain working properly.\n\nIn recent months, we had an AC unit fail here at work, and we have two\ndrive manufacturers for our servers. Manufacturer F and S. The drives\nfrom F failed at a much higher rate, and developed lots and lots of bad\nsectors, the drives from manufacturer S, OTOH, have not had an increased\nfailure rate. While both manufacturers claim that their drives can\nsurvive in an environment of 55/60C, I'm pretty sure one of them was\nlying. We are silently replacing the failed drives with drives from\nmanufacturer S.\n\nBased on experience I think that on average server drives are more\nreliable than consumer grade drives, and can take more punishment. But,\nthe variables of manufacturer, model, and the batch often make even more\ndifference than grade.\n", "msg_date": "Fri, 06 Apr 2007 10:20:47 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "At 09:23 AM 4/6/2007, Michael Stone wrote:\n>On Fri, Apr 06, 2007 at 08:49:08AM -0400, Ron wrote:\n>>Not quite. Each of our professional \n>>experiences is +also+ statistical \n>>evidence. Even if it is a personally skewed sample.\n>\n>I'm not sure that word means what you think it \n>means. I think the one you're looking for is \"anecdotal\".\nOK, let's kill this one as well. 
Personal \nexperience as related by non-professionals \nis often based on casual observation \nand is of questionable quality or veracity.\nIt therefore is deservedly called \"anecdotal\".\n\nProfessionals giving evidence in their \nprofessional capacity within their field of \nexpertise are under an obligation to tell the \ntruth, the whole truth, and nothing but the truth \nto the best of their knowledge and \nability. Whether you are in court and sworn in or not.\nEven if it's \"just\" to a mailing list ;-)\n\nFrom dictionary.com:\nan·ec·dot·al:\n1. pertaining to, resembling, or containing \nanecdotes: an anecdotal history of jazz.\n2. (of the treatment of subject matter in \nrepresentational art) pertaining to the \nrelationship of figures or to the arrangement of \nelements in a scene so as to emphasize the story \ncontent of a subject. Compare narrative (def. 6).\n3. based on personal observation, case study \nreports, or random investigations rather than \nsystematic scientific evaluation: anecdotal evidence.\n\n+also an·ec·dot·ic or an·ec·dot·i·cal: Of, \ncharacterized by, or full of anecdotes.\n+Based on casual observations or indications \nrather than rigorous or scientific analysis: \n\"There are anecdotal reports of children poisoned \nby hot dogs roasted over a fire of the [oleander] stems\" (C. Claiborne Ray).\n\nWhile evidence given by professionals can't be as \nrigorous as that of a double blind and controlled \nstudy, there darn well better be nothing casual \nor ill-considered about it. And it had better \n!not! be anything \"distorted or emphasized\" just \nfor the sake of making the story better.\n(Good journalists deal with this one all the time.)\n\nIn short, professional advice and opinions are \nsupposed to be considerably more rigorous and \nanalytical than anything \"anecdotal\". The alternative is \"malpractice\".\n\n\n>>My experience supports the hypothesis that \n>>spending slightly more for quality and treating HDs better is worth it.\n>>Does that mean one of us is right and the other \n>>wrong? Nope. Just that =in my experience= it does make a difference.\n>\n>Well, without real numbers to back it up, it \n>doesn't mean much in the face of studies that \n>include real numbers. Humans are, in general, \n>exceptionally lousy at assessing probabilities. \n>There's a very real tendency to exaggerate \n>evidence that supports our preconceptions and \n>discount evidence that contradicts them. Maybe you're immune to that.\n\nHalf agree. Half disagree.\n\nPart of the definition of \"professional\" vs \n\"amateur\" is an obligation to think and act \noutside our personal \"stuff\" when acting in our professional capacity.\nWhether numbers are explicitly involved or not.\n\nI'm certainly not immune to personal bias. No \none is. But I have a professional obligation of \nthe highest order to do everything I can to make \nsure I never think or act based on personal bias \nwhen operating in my professional capacity. All professionals do.\n\nMaybe you've found it harder to avoid personal \nbias without sticking strictly to controlled \nstudies. I respect that. 
Unfortunately the RW \nis too fast moving and too messy to wait for a \nlaboratory style study to be completed before we \nare called on to make professional decisions on \nmost issues we face within our work\nIME I have to serve my customers in a timely \nfashion that for the most part prohibits me from \nwaiting for the perfect experiment's outcome.\n\n\n>Personally, I tend to simply assume that \n>anecdotal evidence isn't very useful.\n\nAgreed. OTOH, there's not supposed to be \nanything casual, ill-considered, or low quality \nabout professionals giving professional opinions within their\nfields of expertise. Whether numbers are explicitly involved or not.\n\n\n>This is why having some large scale independent \n>studies is valuable. The manufacturer's studies \n>are obviously biased, and it's good to have some \n>basis for decision making other than \"logic\" \n>(the classic \"proof by 'it stands to reason'\"), \n>marketing, or \"I paid more for it\" (\"so it's \n>better whether it's better or not\").\nNo argument here. However, note that there is \noften other bias present even in studies that strive to be objective.\nI described the bias in the sample set of the CMU study in a previous post.\n\n\nCheers,\nRon Peacetree \n\n", "msg_date": "Fri, 06 Apr 2007 12:41:25 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "On Fri, Apr 06, 2007 at 12:41:25PM -0400, Ron wrote:\n>3.based on personal observation, case study \n>reports, or random investigations rather than \n>systematic scientific evaluation: anecdotal evidence.\n\nHere you even quote the appropriate definition before ignoring it. \n\n>In short, professional advice and opinions are \n>supposed to be considerably more rigorous and \n>analytical than anything \"anecdotal\". The alternative is \"malpractice\".\n\nIn any profession where malpractice is applicable, the profession \nopinion had better be backed up by research rather than anecdote. I'm \nnot aware of any profession held to a \"malpractice\" standard which is \nbased on personal observation and random investigation rather than \nformal methods.\n\n>studies. I respect that. Unfortunately the RW \n>is too fast moving and too messy to wait for a \n>laboratory style study to be completed before we \n>are called on to make professional decisions on \n>most issues we face within our work\n>IME I have to serve my customers in a timely \n>fashion that for the most part prohibits me from \n>waiting for the perfect experiment's outcome.\n\nWhich is what distinguishes your field from a field such as engineering \nor medicine, and which is why waving the term \"malpractice\" around is \njust plain silly. And claiming to have to wait for perfection is a red \nherring. Did you record the numbers of disks involved (failed & \nnonfailed), the models, the environmental conditions, the poweron hours, \netc.? That's what would distinguish anecdote from systematic study. \n\n>Agreed. OTOH, there's not supposed to be \n>anything casual, ill-considered, or low quality \n>about professionals giving professional opinions within their\n>fields of expertise. Whether numbers are explicitly involved or not.\n\nIf I go to an engineer and ask him how to build a strong bridge and he \nresponds with something like \"Well, I always use steel bridges. I've \ndriven by concrete bridges that were cracked and needed repairs, and I \nwould never use a concrete bridge for a professional purpose.\" he'd lose \nhis license. 
You'd expect the engineer to use, you know, numbers and \nstuff, not anecdotal observations of bridges. The professional opinion \nhas to do with how to apply the numbers, not fundamentals like 100 year \nloads, material strength, etc. \n\nWhat you're arguing is that your personal observations are a perfectly \ngood substitute for more rigorous study, and that's frankly ridiculous. \nIn an immature field personal observations may be the best data \navailable, but that's a weakness of the field rather than a desirable \nstate. 200 years ago doctors operated the same way--I'm glad they \nabandoned that for a more rigorous approach. The interesting thing is, \nthere was quite a disruption as quite a few of the more established \ndoctors were really offended by the idea that their professional \nopinions would be replaced by standards of care based on large scale \nstudies. \n\nMike Stone\n", "msg_date": "Fri, 06 Apr 2007 14:19:15 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "At 02:19 PM 4/6/2007, Michael Stone wrote:\n>On Fri, Apr 06, 2007 at 12:41:25PM -0400, Ron wrote:\n>>3.based on personal observation, case study reports, or random \n>>investigations rather than systematic scientific evaluation: \n>>anecdotal evidence.\n>\n>Here you even quote the appropriate definition before ignoring it.\n>>In short, professional advice and opinions are supposed to be \n>>considerably more rigorous and analytical than anything \n>>\"anecdotal\". The alternative is \"malpractice\".\n>\n>In any profession where malpractice is applicable, the profession \n>opinion had better be backed up by research rather than anecdote. \n>I'm not aware of any profession held to a \"malpractice\" standard \n>which is based on personal observation and random investigation \n>rather than formal methods.\nTalk to every Professional Engineer who's passed both rounds of the \nProfessional Engineering Exams. While there's a significant \nimprovement in quality when comparing a formal study to professional \nadvice, there should be an equally large improvement when comparing \nprofessional advice to random anecdotal evidence.\n\nIf there isn't, the professional isn't worth paying for. ...and you \n=can= be successfully sued for giving bad professional advice.\n\n\n>>studies. I respect that. Unfortunately the RW is too fast moving \n>>and too messy to wait for a laboratory style study to be completed \n>>before we are called on to make professional decisions on most \n>>issues we face within our work\n>>IME I have to serve my customers in a timely fashion that for the \n>>most part prohibits me from waiting for the perfect experiment's outcome.\n>\n>Which is what distinguishes your field from a field such as \n>engineering or medicine, and which is why waving the term \n>\"malpractice\" around is just plain silly.\n\nOk, since you know I am an engineer that crossed a professional line \nin terms of insult. That finishes this conversation.\n\n...and you know very well that the use of the term \"malpractice\" was \nnot in the legal sense but in the strict dictionary sense: \"mal, \nmeaning bad\" \"practice, meaning \"professional practice.\" ...and \nunless you've been an academic your entire career you know the time \npressures of the RW of business.\n\n\n> And claiming to have to wait for perfection is a red herring. 
Did \n> you record the numbers of disks involved (failed & nonfailed), the \n> models, the environmental conditions, the power on hours, etc.? \n> That's what would distinguish anecdote from systematic study.\n\nYes, as a matter of fact I =do= keep such maintenance records for \noperations centers I've been responsible for. Unfortunately, that is \nnot nearly enough to qualify for being \"objective\". Especially since \nit is not often possible to keep accurate track of every one might \nwant to. Even your incomplete list.\nLooks like you might not have ever =done= some of the studies you tout so much.\n\n\n>>Agreed. OTOH, there's not supposed to be anything casual, \n>>ill-considered, or low quality about professionals giving \n>>professional opinions within their\n>>fields of expertise. Whether numbers are explicitly involved or not.\n>\n>If I go to an engineer and ask him how to build a strong bridge and \n>he responds with something like \"Well, I always use steel bridges. \n>I've driven by concrete bridges that were cracked and needed \n>repairs, and I would never use a concrete bridge for a professional \n>purpose.\" he'd lose his license. You'd expect the engineer to use, \n>you know, numbers and stuff, not anecdotal observations of bridges. \n>The professional opinion has to do with how to apply the numbers, \n>not fundamentals like 100 year loads, material strength, etc.\n..and I referenced this as the knowledge base a professional uses to \nrender opinions and give advice. That's far better than anecdote, \nbut far worse than specific study. The history of bridge building is \nin fact a perfect example for this phenomenon. There are a number of \ngood books on this topic both specific to bridges and for other \nengineering projects that failed due to mistakes in extrapolation.\n\n\n>What you're arguing is that your personal observations are a \n>perfectly good substitute for more rigorous study,\n\nOf course I'm not! and IMHO you know I'm not. Insult number \ntwo. Go settle down.\n\n\n>and that's frankly ridiculous.\n\nOf course it would be. The =point=, which you seem to just refuse to \nconsider, is that there is a valid degree of evidence between \n\"anecdote\" and \"data from proper objective study\". There has to be \nfor all sorts of reasons.\n\nAs I'm sure you know, the world is not binary.\n\n\n>In an immature field personal observations may be the best data \n>available, but that's a weakness of the field rather than a \n>desirable state. 200 years ago doctors operated the same way--I'm \n>glad they abandoned that for a more rigorous approach. The \n>interesting thing is, there was quite a disruption as quite a few of \n>the more established doctors were really offended by the idea that \n>their professional opinions would be replaced by standards of care \n>based on large scale studies.\n..and this is just silly. Personal observations of trained \nobservers are known and proven to be better than that of random observers.\nIt's also a hard skill to learn, let alone master.\n\n=That's= one of the things we technical professionals are paid for: \nbeing trained objective observers.\n\n...and in the specific case of medicine there are known problems with \nusing large scale studies to base health care standards on.\nThe statistically normal human does not exist in the medical sense.\nFor instance, a given woman is actually very =unlikely= to have a \npregnancy exactly 9 months long. 
Especially if her genetic family \nhistory is biased towards bearing earlier or later than exactly 9 months.\nDrug dosing is another good example, etc etc.\n\nThe problem with the doctors you mention is that they were \n=supposedly= objective, but turned out not to be.\nSimilar example from Anthropology can be found on Stephen Jay Gould's \n_The Mis-measure of Man_\n\n\nHave a good day.\nRon Peacetree \n\n", "msg_date": "Fri, 06 Apr 2007 15:37:08 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "On Fri, 6 Apr 2007, Scott Marlowe wrote:\n\n> Most server drives are rated for 55-60C environmental temperature\n> operation, which means the drive would be even hotter.\n\nI chuckled when I dug into the details for the drives in my cheap PC; the \nconsumer drives from Seagate:\nhttp://www.seagate.com/docs/pdf/datasheet/disc/ds_barracuda_7200_10.pdf\n\nare rated to a higher operating temperature than their enterprise drives:\nhttp://www.seagate.com/docs/pdf/datasheet/disc/ds_barracuda_es.pdf\n\nThey actually have an interesting white paper on this subject. The factor \nthey talk about that isn't addressed in the studies we've been discussing \nis the I/O workload of the drive:\nhttp://www.seagate.com/content/pdf/whitepaper/TP555_BarracudaES_Jun06.pdf\n\nWhat kind of sticks out when I compare all their data is that the chart in \nthe white paper puts the failure rate (AFR) of their consumer drives at \nalmost 0.6%, yet the specs on the consumer drive quote 0.34%.\n\nGoing back to the original question here, though, the rates are all \nsimilar and small enough that I'd take many more drives over a small \nnumber of slightly more reliable ones any day. As long as you have a \ncontroller that can support multiple hot-spares you should be way ahead. \nI get more concerned about battery backup cache issues than this nowadays \n(been through too many extended power outages in the last few years).\n\n> I do think that server grade drive tech has been migrating into the \n> consumer realm over time. I can imagine that today's high performance \n> game / home systems with their heat generating video cards and tendency \n> towards RAID1 / RAID0 drive setups are pushing the drive manufacturers \n> to improve reliability of consumer disk drives.\n\nThe introduction of fluid dynamic motor bearings into the hard drive \nmarket over the last few years (ramping up around 2003) has very much \ntransformed the nature of that very temperature sensitive mechanism. \nThat's the cause of why a lot of rules of thumb from before that era don't \napply as strongly to modern drives. Certainly that fact that today's \nconsumer processors produce massively more heat than those of even a few \nyears ago has contributed to drive manufacturers moving their specs \nupwards as well.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 6 Apr 2007 16:44:19 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "On Fri, Apr 06, 2007 at 03:37:08PM -0400, Ron wrote:\n>>>studies. I respect that. 
Unfortunately the RW is too fast moving \n>>>and too messy to wait for a laboratory style study to be completed \n>>>before we are called on to make professional decisions on most \n>>>issues we face within our work\n>>>IME I have to serve my customers in a timely fashion that for the \n>>>most part prohibits me from waiting for the perfect experiment's outcome.\n>>\n>>Which is what distinguishes your field from a field such as \n>>engineering or medicine, and which is why waving the term \n>>\"malpractice\" around is just plain silly.\n>\n>Ok, since you know I am an engineer that crossed a professional line \n>in terms of insult. That finishes this conversation.\n\nActually, I don't know what you are. I obviously should have been more \nspecific that the field I was referring to is computer systems \nintegration, which isn't a licensed engineering profession in any \njurisdiction that I'm aware of. \n\n>...and you know very well that the use of the term \"malpractice\" was \n>not in the legal sense but in the strict dictionary sense: \"mal, \n>meaning bad\" \"practice, meaning \"professional practice.\"\n\nThat's the literal definition or etymology; the dictionary definition \nwill generally include terms like \"negligence\", \"established rules\", \netc., implying that there is an established, objective standard. I just \ndon't think that hard disk choice (or anything else about designing a \nhardware & software system) can be argued to have an established \nstandard best practice. Heck, you probably can't even say \"I did that \nsuccessfully last year, we can just implement the same solution\" because \nin this industry you probably couldn't buy the same parts (exaggerating \nonly somewhat).\n\n>> And claiming to have to wait for perfection is a red herring. Did \n>>you record the numbers of disks involved (failed & nonfailed), the \n>>models, the environmental conditions, the power on hours, etc.? \n>>That's what would distinguish anecdote from systematic study.\n>\n>Yes, as a matter of fact I =do= keep such maintenance records for \n>operations centers I've been responsible for.\n\nGreat! If you presented those numbers along with some context the data \ncould be assessed to form some kind of rational conclusion. But to \nremind you of what you'd offered up to the time I suggested that you \nwere offering anecdotal evidence in response to a request for \nstatistical evidence:\n\n>OTOH, I've spent my career being as careful as possible to as much as \n>possible run HW within manufacturer's suggested specifications. I've \n>been chided for it over the years... ...usually by folks who \"save\" \n>money by buying commodity HDs for big RAID farms in NOCs or push their \n>environmental envelope or push their usage envelope or ... ...and then \n>act surprised when they have so much more down time and HW replacements \n>than I do.\n>\n>All I can tell you is that I've gotten to eat my holiday dinner far more \n>often than my counterparts who push it in that fashion.\n\nI don't know how to describe that other than as anecdotal. You seem to \nbe interpreting the term \"anecdotal\" as pejorative rather than \ndescriptive. It's not anecdotal because I question your ability or any \nother such personal factor, it's anecdotal because if your answer to the \nquestion is \"in my professional opinion, A\" and someone else says \"in my \nprofessional opinion, !A\", we really haven't gotten any hard data to \nsynthesize a rational opinion. 
\n\nMike Stone\n", "msg_date": "Fri, 06 Apr 2007 17:02:29 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "On Fri, 6 Apr 2007, Scott Marlowe wrote:\n\n> Based on experience I think that on average server drives are more\n> reliable than consumer grade drives, and can take more punishment.\n\nthis I am not sure about\n\n> But,\n> the variables of manufacturer, model, and the batch often make even more\n> difference than grade.\n\nthis I will agree with fully.\n\nDavid Lang\n", "msg_date": "Fri, 6 Apr 2007 15:04:08 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "On Fri, 6 Apr 2007, [email protected] wrote:\n\n> On Fri, 6 Apr 2007, Scott Marlowe wrote:\n>\n>> Based on experience I think that on average server drives are more\n>> reliable than consumer grade drives, and can take more punishment.\n>\n> this I am not sure about\n\nI think they should survey Tivo owners next time.\n\nPerfect stress-testing environment. Mine runs at over 50C most of the \ntime, and it's writing 2 video streams 24/7. What more could you do to \npunish a drive? :)\n\nCharles\n\n\n>\n> David Lang\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n", "msg_date": "Fri, 6 Apr 2007 18:40:33 -0400 (EDT)", "msg_from": "Charles Sprickman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "On Fri, 6 Apr 2007, Charles Sprickman wrote:\n\n> On Fri, 6 Apr 2007, [email protected] wrote:\n>\n>> On Fri, 6 Apr 2007, Scott Marlowe wrote:\n>> \n>> > Based on experience I think that on average server drives are more\n>> > reliable than consumer grade drives, and can take more punishment.\n>>\n>> this I am not sure about\n>\n> I think they should survey Tivo owners next time.\n>\n> Perfect stress-testing environment. Mine runs at over 50C most of the time, \n> and it's writing 2 video streams 24/7. What more could you do to punish a \n> drive? :)\n\nand the drives that are in them are consumer IDE drives.\n\nI will admit that I've removed to cover from my tivo to allow it to run \ncooler, and I'm still on the origional drive + 100G drive I purchased way \nback when (7+ years ago) before I removed the cover I did have times when \nthe tivo would die from the heat (Los Angeles area in the summer with no \nA/C)\n\nDavid Lang\n", "msg_date": "Fri, 6 Apr 2007 16:00:06 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "* Charles Sprickman <[email protected]> [070407 00:49]:\n> On Fri, 6 Apr 2007, [email protected] wrote:\n> \n> >On Fri, 6 Apr 2007, Scott Marlowe wrote:\n> >\n> >>Based on experience I think that on average server drives are more\n> >>reliable than consumer grade drives, and can take more punishment.\n> >\n> >this I am not sure about\n> \n> I think they should survey Tivo owners next time.\n> \n> Perfect stress-testing environment. Mine runs at over 50C most of the time, and it's writing 2 video streams 24/7. What more could you do to punish a drive? :)\n\nWell, there is one thing, actually what my dreambox does ;)\n\n-) read/write 2 streams at the same time. (which means quite a bit of\nseeking under pressure)\n-) and even worse, standby and sleep states. 
And powering up the drive\nwhen needed.\n\nAndreas\n", "msg_date": "Sat, 7 Apr 2007 01:17:55 +0200", "msg_from": "Andreas Kostyrka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "Ron wrote:\n> I read them as soon as they were available. Then I shrugged and noted \n> YMMV to myself.\n> \n> \n> 1= Those studies are valid for =those= users under =those= users' \n> circumstances in =those= users' environments.\n> How well do those circumstances and environments mimic anyone else's?\n\nExactly, understanding whether the studies are applicable to you is the \ncritical step - before acting on their conclusions! Thanks Ron, for the \nthoughtful analysis on this topic!\n\nCheers\n\nMark\n", "msg_date": "Sat, 07 Apr 2007 11:24:37 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "\nIn summary, it seems one of these is true:\n\n\t1. Drive manufacturers don't design server drives to be more\nreliable than consumer drive\n\n\t2. Drive manufacturers _do_ design server drives to be more\nreliable than consumer drive, but the design doesn't yield significantly\nbetter reliability.\n\n\t3. Server drives are significantly more reliable than consumer\ndrives.\n \n\n---------------------------------------------------------------------------\n\nScott Marlowe wrote:\n> On Thu, 2007-04-05 at 23:37, Greg Smith wrote:\n> > On Thu, 5 Apr 2007, Scott Marlowe wrote:\n> > \n> > > On Thu, 2007-04-05 at 14:30, James Mansion wrote:\n> > >> Can you cite any statistical evidence for this?\n> > > Logic?\n> > \n> > OK, everyone who hasn't already needs to read the Google and CMU papers. \n> > I'll even provide links for you:\n> > \n> > http://www.cs.cmu.edu/~bianca/fast07.pdf\n> > http://labs.google.com/papers/disk_failures.pdf\n> > \n> > There are several things their data suggests that are completely at odds \n> > with the lore suggested by traditional logic-based thinking in this area. \n> > Section 3.4 of Google's paper basically disproves that \"mechanical devices \n> > have decreasing MTBF when run in hotter environments\" applies to hard \n> > drives in the normal range they're operated in.\n> \n> On the google:\n> \n> The google study ONLY looked at consumer grade drives. It did not\n> compare them to server class drives.\n> \n> This is only true when the temperature is fairly low. Note that the\n> drive temperatures in the google study are <=55C. If the drive temp is\n> below 55C, then the environment, by extension, must be lower than that\n> by some fair bit, likely 10-15C, since the drive is a heat source, and\n> the environment the heat sink. So, the environment here is likely in\n> the 35C range.\n> \n> Most server drives are rated for 55-60C environmental temperature\n> operation, which means the drive would be even hotter.\n> \n> As for the CMU study:\n> \n> It didn't expressly compare server to consumer grade hard drives. \n> Remember, there are server class SATA drives, and there were (once upon\n> a time) consumer class SCSI drives. If they had separated out the\n> drives by server / consumer grade I think the study would have been more\n> interesting. But we just don't know from that study.\n> \n> Personal Experience:\n> \n> In my last job we had three very large storage arrays (big black\n> refrigerator looking boxes, you know the kind.) Each one had somewhere\n> in the range of 150 or so drives in it. 
The first two we purchased were\n> based on 9Gig server class SCSI drives. The third, and newer one, was\n> based on commodity IDE drives. I'm not sure of the size, but I believe\n> they were somewhere around 20Gigs or so. So, this was 5 or so years\n> ago, not recently.\n> \n> We had a cooling failure in our hosting center, and the internal\n> temperature of the data center rose to about 110F to 120F (43C to 48C). \n> We ran at that temperature for about 12 hours, before we got a\n> refrigerator on a flatbed brought in (btw, I highly recommend Aggreko if\n> you need large scale portable air conditioners or generators) to cool\n> things down.\n> \n> In the months that followed the drives in the IDE based storage array\n> failed by the dozens. We eventually replaced ALL the drives in that\n> storage array because of the failure rate. The SCSI based arrays had a\n> few extra drives fail than usual, but nothing too shocking.\n> \n> Now, maybe now Seagate et. al. are making their consumer grade drives\n> from yesterday's server grade technology, but 5 or 6 years ago that was\n> not the case from what I saw.\n> \n> > Your comments about \n> > server hard drives being rated to higher temperatures is helpful, but \n> > conclusions drawn from just thinking about something I don't trust when \n> > they conflict with statistics to the contrary.\n> \n> Actually, as I looked up some more data on this, I found it interesting\n> that 5 to 10 years ago, consumer grade drives were rated for 35C\n> environments, while today consumer grade drives seem to be rated to 55C\n> or 60C. Same as server drives were 5 to 10 years ago. I do think that\n> server grade drive tech has been migrating into the consumer realm over\n> time. I can imagine that today's high performance game / home systems\n> with their heat generating video cards and tendency towards RAID1 /\n> RAID0 drive setups are pushing the drive manufacturers to improve\n> reliability of consumer disk drives.\n> \n> > The main thing I wish they'd published is breaking some of the statistics \n> > down by drive manufacturer. For example, they suggest a significant \n> > number of drive failures were not predicted by SMART. I've seen plenty of \n> > drives where the SMART reporting was spotty at best (yes, I'm talking \n> > about you, Maxtor) and wouldn't be surprised that they were quiet right up \n> > to their bitter (and frequent) end. I'm not sure how that factor may have \n> > skewed this particular bit of data.\n> \n> I too have pretty much given up on Maxtor drives and things like SMART\n> or sleep mode, or just plain working properly.\n> \n> In recent months, we had an AC unit fail here at work, and we have two\n> drive manufacturers for our servers. Manufacturer F and S. The drives\n> from F failed at a much higher rate, and developed lots and lots of bad\n> sectors, the drives from manufacturer S, OTOH, have not had an increased\n> failure rate. While both manufacturers claim that their drives can\n> survive in an environment of 55/60C, I'm pretty sure one of them was\n> lying. We are silently replacing the failed drives with drives from\n> manufacturer S.\n> \n> Based on experience I think that on average server drives are more\n> reliable than consumer grade drives, and can take more punishment. 
But,\n> the variables of manufacturer, model, and the batch often make even more\n> difference than grade.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Fri, 6 Apr 2007 22:35:44 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "Given all the data I have personally + all that I have from NOC \npersonnel, Sys Admins, Network Engineers, Operations Managers, etc my \nexperience (I do systems architecture consulting that requires me to \ninterface with many of these on a regular basis) supports a variation \nof hypothesis 2. Let's call it 2a:\n\n2a= Drive manufacturers _do_ design server drives to be more reliable \nthan consumer drives\nThis is easily provable by opening the clam shells of a Seagate \nconsumer HD and a Seagate enterprise HD of the same generation and \ncomparing them.\nIn addition to non-visible quality differences in the actual media \n(which result in warranty differences), there are notable differences \nin the design and materials of the clam shells.\nHOWEVER, there are at least 2 complicating factors in actually being \nable to obtain the increased benefits from the better design:\n\n *HDs are often used in environments and use cases so far outside \ntheir manufacturer's suggested norms that the beating they take \noverwhelms the initial quality difference. For instance, dirty power \nevents or 100+F room temperatures will age HDs so fast that even if \nthe enterprise HDs survive better, it's only going to be a bit better \nin the worst cases.\n\n*The pace of innovation in this business is so brisk that HDs from 4 \nyears ago, of all types, are of considerably less quality than those made now.\nSomeone mentioned FDB and the difference they made. Very much \nso. If you compare HDs from 4 years ago to ones made 8 years ago you \nget a similar quality difference. Ditto 8 vs 12 years ago. Etc.\n\nThe reality is that all modern HDs are so good that it's actually \nquite rare for someone to suffer a data loss event. The consequences \nof such are so severe that the event stands out more than just the \nstatistics would imply. 
For those using small numbers of HDs, HDs just work.\n\nOTOH, for those of us doing work that involves DBMSs and relatively \nlarge numbers of HDs per system, both the math and the RW conditions \nof service require us to pay more attention to quality details.\nLike many things, one can decide on one of multiple ways to \"pay the piper\".\n\na= The choice made by many, for instance in the studies mentioned, is \nto minimize initial acquisition cost and operating overhead and \nsimply accept having to replace HDs more often.\n\nb= For those in fields were this is not a reasonable option \n(financial services, health care, etc), or for those literally using \n100's of HD per system (where statistical failure rates are so likely \nthat TLC is required), policies and procedures like those mentioned \nin this thread (paying close attention to environment and use \nfactors, sector remap detecting, rotating HDs into and out of roles \nbased on age, etc) are necessary.\n\nAnyone who does some close variation of \"b\" directly above =will= see \nthe benefits of using better HDs.\n\nAt least in my supposedly unqualified anecdotal 25 years of \nprofessional experience.\nRon Peacetree\n\n\n\nAt 10:35 PM 4/6/2007, Bruce Momjian wrote:\n\n>In summary, it seems one of these is true:\n>\n> 1. Drive manufacturers don't design server drives to be more\n>reliable than consumer drive\n>\n> 2. Drive manufacturers _do_ design server drives to be more\n>reliable than consumer drive, but the design doesn't yield significantly\n>better reliability.\n>\n> 3. Server drives are significantly more reliable than consumer\n>drives.\n>\n\n", "msg_date": "Sat, 07 Apr 2007 09:03:59 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "On Sat, 7 Apr 2007, Ron wrote:\n\n> The reality is that all modern HDs are so good that it's actually quite rare \n> for someone to suffer a data loss event. The consequences of such are so \n> severe that the event stands out more than just the statistics would imply. \n> For those using small numbers of HDs, HDs just work.\n>\n> OTOH, for those of us doing work that involves DBMSs and relatively large \n> numbers of HDs per system, both the math and the RW conditions of service \n> require us to pay more attention to quality details.\n> Like many things, one can decide on one of multiple ways to \"pay the piper\".\n>\n> a= The choice made by many, for instance in the studies mentioned, is to \n> minimize initial acquisition cost and operating overhead and simply accept \n> having to replace HDs more often.\n>\n> b= For those in fields were this is not a reasonable option (financial \n> services, health care, etc), or for those literally using 100's of HD per \n> system (where statistical failure rates are so likely that TLC is required), \n> policies and procedures like those mentioned in this thread (paying close \n> attention to environment and use factors, sector remap detecting, rotating \n> HDs into and out of roles based on age, etc) are necessary.\n>\n> Anyone who does some close variation of \"b\" directly above =will= see the \n> benefits of using better HDs.\n>\n> At least in my supposedly unqualified anecdotal 25 years of professional \n> experience.\n\nRon, why is it that you assume that anyone who disagrees with you doesn't \nwork in an environment where they care about the datacenter environment, \nand aren't in fields like financial services? 
and why do you think that we \nare just trying to save a few pennies? (the costs do factor in, but it's \nnot a matter of pennies, it's a matter of tens of thousands of dollars)\n\nI actually work in the financial services field, I do have a good \ndatacenter environment that's well cared for.\n\nwhile I don't personally maintain machines with hundreds of drives each, I \ndo maintain hundreds of machines with a small number of drives in each, \nand a handful of machines with a few dozens of drives. (the database \nmachines are maintained by others, I do see their failed drives however)\n\nit's also true that my experience is only over the last 10 years, so I've \nonly been working with a few generations of drives, but my experience is \ndifferent from yours.\n\nmy experience is that until the drives get to be 5+ years old the failure \nrate seems to be about the same for the 'cheap' drives as for the 'good' \ndrives. I won't say that they are exactly the same, but they are close \nenough that I don't believe that there is a significant difference.\n\nin other words, these studies do seem to match my experience.\n\nthis is why, when I recently had to create some large capacity arrays, I'm \nonly ending up with machines with a few dozen drives in them instead of \nhundreds. I've got two machines with 6TB of disk, one with 8TB, one with \n10TB, and one with 20TB. I'm building these systems for ~$1K/TB for the \ndisk arrays. other departments who choose $bigname 'enterprise' disk \narrays are routinely paying 50x that price\n\nI am very sure that they are not getting 50x the reliability, I'm sure \nthat they aren't getting 2x the reliability.\n\nI believe that the biggest cause for data loss from people using the \n'cheap' drives is due to the fact that one 'cheap' drive holds the \ncapacity of 5 or so 'expensive' drives, and since people don't realize \nthis they don't realize that the time to rebuild the failed drive onto a \nhot-spare is correspondingly longer.\n\nin the thread 'Sunfire X4500 recommendations' we recently had a discussion \non this topic starting from a guy who was asking the best way to configure \nthe drives in his sun x4500 (48 drive) system for safety. in that \ndiscussion I took some numbers from the CMU study and as a working figure \nI said a 10% chance for a drive to fail in a year (the study said 5-7% in \nmost cases, but some third year drives were around 10%). combining this \nwith the time needed to write 750G using ~10% of the system's capacity \nresults in a rebuild time of about 5 days. it turns out that there is \nalmost a 5% chance of a second drive failing in a 48 drive array in this \ntime. If I were to build a single array with 142G 'enterprise' drives \ninstead of with 750G 'cheap' drives the rebuild time would be only 1 day \ninstead of 5, but you would have ~250 drives instead of 48 and so your \nchance of a problem would be the same (I acknowledge that it's unlikely to \nuse 250 drives in a single array, and yes that does help, however if you \nhad 5 arrays of 50 drives each you would still have a 1% chance of a \nsecond failure)\n\nwhen I look at these numbers, my reaction isn't that it's wrong to go with \nthe 'cheap' drives, my reaction is that single redundancy isn't good \nenough. 
depending on how valuable the data is, you need to either replicate \nthe data to another system, or go with dual-parity redundancy (or both)\n\nwhile drives probably won't be this bad in real life (this is after all, \nslightly worse than the studies show for their 3rd year drives, and \n'enterprise' drives may be slightly better), I have to assume that they \nwill be for my reliability planning.\n\nalso, if you read through the CMU study, drive failures were only a small \npercentage of system outages (16-25% depending on the site). you have to \nmake sure that you aren't so fixated on drive reliability that you fail to \naccount for other types of problems (down to and including the chance of \nsomeone accidentally powering down the rack that you are plugged into, be \nit from hitting a power switch to overloading a weak circuit breaker)\n\nIn looking at these problems overall I find that in most cases I need to \nhave redundant systems with the data replicated anyway (with logs sent \nelsewhere), so I can get away with building failover pairs instead of \nhaving each machine with redundant drives. I've found that I can \nfrequently get a pair of machines for less money than other departments \nspend on buying a single 'enterprise' machine with the same specs \n(although the prices are dropping enough on the top-tier manufacturers \nthat this is less true today than it was a couple of years ago), and I \nfind that the failure rate is about the same on a per-machine basis, so I \nend up with a much better uptime record due to having the redundancy of \nthe second full system (never mind things like it being easier to do \nupgrades as I can work on the inactive machine and then failover to work \non the other, now, inactive machine). while I could ask for the budget to \nbe doubled to provide the same redundancy with the top-tier manufacturers \nI don't do so for several reasons, the top two being that these \nmanufacturers frequently won't configure a machine the way I want them to \n(just try to get a box with writeable media built in, either a floppy or a \nCDR/DVDR, they want you to use something external), and doing so also \nexposes me to people second guessing me on where redundancy is needed \n('that's only development, we don't need redundancy there', until a system \ngoes down for a day and the entire department is unable to work)\n\nit's not that the people who disagree with you don't care about their \ndata, it's that they have different experiences than you do (experiences \nthat come close to matching the studies where they tracked hundreds of \nthousands of drives of different types), and as a result believe that the \ndifference (if any) between the different types of drives isn't \nsignificant in the overall failure rate (especially when you take the \ndifference of drive capacity into account)\n\nDavid Lang\n\nP.S. 
here is a chart from that thread showing the chances of losing data \nwith different array configurations.\n\nif you say that there is a 10% chance of a disk failing each year \n(significantly higher than the studies listed above, but close enough) \nthen this works out to ~0.001% chance of a drive failing per hour (a \nreasonably round number to work with)\n\nto write 750G at ~45MB/sec takes 5 hours of 100% system throughput, or ~50 \nhours at 10% of the system throughput (background rebuilding)\n\nif we cut this in half to account for inefficiencies in retrieving data \nfrom other disks to calculate parity it can take 100 hours (just over \nfour days) to do a background rebuild, or about a 0.1% chance of each \nremaining disk failing before the rebuild completes. with 48 drives this \nis ~5% chance of losing everything with single-parity, however the odds of \nlosing two disks during this time are 0.25% so double-parity is _well_ \nworth it.\n\nchance of losing data before the hotspare is finished rebuilding (assumes \none hotspare per group, you may be able to share a hotspare between \nmultiple groups to get slightly higher capacity)\n\n> RAID 60 or Z2 -- Double-parity must lose 3 disks from the same group to lose data:\n> disks_per_group  num_groups  total_disks  usable_disks  risk_of_data_loss\n>        2             24          48          n/a        n/a\n>        3             16          48          n/a        (0.0001% with manual replacement of drive)\n>        4             12          48          12         0.0009%\n>        6              8          48          24         0.003%\n>        8              6          48          30         0.006%\n>       12              4          48          36         0.02%\n>       16              3          48          39         0.03%\n>       24              2          48          42         0.06%\n>       48              1          48          45         0.25%\n\n> RAID 10 or 50 -- Mirroring or single-parity must lose 2 disks from the same group to lose data:\n> disks_per_group  num_groups  total_disks  usable_disks  risk_of_data_loss\n>        2             24          48          n/a        (~0.1% with manual replacement of drive)\n>        3             16          48          16         0.2%\n>        4             12          48          24         0.3%\n>        6              8          48          32         0.5%\n>        8              6          48          36         0.8%\n>       12              4          48          40         1.3%\n>       16              3          48          42         1.7%\n>       24              2          48          44         2.5%\n>       48              1          48          46         5%\n\nso if I've done the math correctly the odds of losing data with the \nworst-case double-parity (one large array including hotspare) are about \nthe same as the best-case single parity (mirror + hotspare), but with \nalmost triple the capacity.\n\n\n", "msg_date": "Sat, 7 Apr 2007 14:42:47 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" },
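As a rough cross-check of the arithmetic behind the chart above, here is a short
Python sketch (not part of the original thread) that recomputes the same kind of
figures from the post's stated working assumptions -- roughly a 10% chance of a
drive failing per year, a ~100 hour background rebuild window, and independent
failures. The exact percentages will not match the table exactly, since the
post's rounding and intermediate steps aren't fully spelled out, but the orders
of magnitude come out the same.

# Requires Python 3.8+ (for math.comb).
from math import comb

P_FAIL_PER_YEAR = 0.10                           # working figure from the post
P_FAIL_PER_HOUR = P_FAIL_PER_YEAR / (365 * 24)   # roughly 0.001% per hour
REBUILD_HOURS = 100                              # background rebuild window

# chance that any one surviving drive fails at some point during the rebuild
p = 1 - (1 - P_FAIL_PER_HOUR) ** REBUILD_HOURS   # roughly 0.11%

def risk(group_size, extra_failures_tolerated):
    """Chance that, while one failed drive in a group rebuilds onto a hot
    spare, more than `extra_failures_tolerated` of the group's remaining
    drives also fail -- i.e. the chance of actually losing data."""
    n = group_size - 1                           # the other drives in the group
    p_ok = sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(extra_failures_tolerated + 1))
    return 1 - p_ok

print(f"per-drive failure chance during one rebuild: {p:.3%}")
for g in (3, 4, 6, 8, 12, 16, 24, 48):
    print(f"group of {g:2d} disks: "
          f"single parity {risk(g, 0):.2%}, double parity {risk(g, 1):.4%}")

For the 48-disk case this prints roughly 5% for single parity and on the order
of a tenth of a percent for double parity, in line with the bottom rows of the
two tables above.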
{ "msg_contents": "At 05:42 PM 4/7/2007, [email protected] wrote:\n>On Sat, 7 Apr 2007, Ron wrote:\n>\n>>The reality is that all modern HDs are so good that it's actually \n>>quite rare for someone to suffer a data loss event. The \n>>consequences of such are so severe that the event stands out more \n>>than just the statistics would imply. For those using small numbers \n>>of HDs, HDs just work.\n>>\n>>OTOH, for those of us doing work that involves DBMSs and relatively \n>>large numbers of HDs per system, both the math and the RW \n>>conditions of service require us to pay more attention to quality details.\n>>Like many things, one can decide on one of multiple ways to \"pay the piper\".\n>>\n>>a= The choice made by many, for instance in the studies mentioned, \n>>is to minimize initial acquisition cost and operating overhead and \n>>simply accept having to replace HDs more often.\n>>\n>>b= For those in fields were this is not a reasonable option \n>>(financial services, health care, etc), or for those literally \n>>using 100's of HD per system (where statistical failure rates are \n>>so likely that TLC is required), policies and procedures like those \n>>mentioned in this thread (paying close attention to environment and \n>>use factors, sector remap detecting, rotating HDs into and out of \n>>roles based on age, etc) are necessary.\n>>\n>>Anyone who does some close variation of \"b\" directly above =will= \n>>see the benefits of using better HDs.\n>>\n>>At least in my supposedly unqualified anecdotal 25 years of \n>>professional experience.\n>\n>Ron, why is it that you assume that anyone who disagrees with you \n>doesn't work in an environment where they care about the datacenter \n>environment, and aren't in fields like financial services? and why \n>do you think that we are just trying to save a few pennies? (the \n>costs do factor in, but it's not a matter of pennies, it's a matter \n>of tens of thousands of dollars)\nI don't assume that. I didn't make any assumptions. I (rightfully \nIMHO) criticized everyone jumping on the \"See, cheap =is= good!\" \nbandwagon that the Google and CMU studies seem to have ignited w/o \nthinking critically about them.\nI've never mentioned or discussed specific financial amounts, so \nyou're making an (erroneous) assumption when you think my concern is \nover people \"trying to save a few pennies\".\n\nIn fact, \"saving pennies\" is at the =bottom= of my priority list for \nthe class of applications I've been discussing. I'm all for \neconomical, but to paraphrase Einstein \"Things should be as cheap as \npossible; but no cheaper.\"\n\nMy biggest concern is that something I've seen over and over again in \nmy career will happen again:\nPeople tend to jump at the _slightest_ excuse to believe a story that \nwill save them short term money and resist even _strong_ reasons to \npay up front for quality. Even if paying more up front would lower \ntheir lifetime TCO.\n\nThe Google and CMU studies are =not= based on data drawn from \nbusinesses where the lesser consequences of an outage are losing \n$10Ks or $100K per minute... ...and where the greater consequences \ninclude the chance of loss of human life.\nNor are they based on businesses that must rely exclusively on highly \nskilled and therefore expensive labor.\n\nIn the case of the CMU study, people are even extrapolating an \neconomic conclusion the original author did not even make or intend!\nIs it any wonder I'm expressing concern regarding inappropriate \nextrapolation of those studies?\n\n\n>I actually work in the financial services field, I do have a good \n>datacenter environment that's well cared for.\n>\n>while I don't personally maintain machines with hundreds of drives \n>each, I do maintain hundreds of machines with a small number of \n>drives in each, and a handful of machines with a few dozens of \n>drives. 
(the database machines are maintained by others, I do see \n>their failed drives however)\n>\n>it's also true that my expericance is only over the last 10 years, \n>so I've only been working with a few generations of drives, but my \n>experiance is different from yours.\n>\n>my experiance is that until the drives get to be 5+ years old the \n>failure rate seems to be about the same for the 'cheap' drives as \n>for the 'good' drives. I won't say that they are exactly the same, \n>but they are close enough that I don't believe that there is a \n>significant difference.\n>\n>in other words, these studies do seem to match my experiance.\nFine. Let's pretend =You= get to build Citibank's or Humana's next \nmission critical production DBMS using exclusively HDs with 1 year warranties.\n(never would be allowed ITRW)\n\nEven if you RAID 6 them, I'll bet you anything that a system with 32+ \nHDs on it is likely enough to spend a high enough percentage of its \ntime operating in degraded mode that you are likely to be looking for \na job as a consequence of such a decision.\n...and if you actually suffer data loss or, worse, data corruption, \nthat's a Career Killing Move.\n(and it should be given the likely consequences to the public of such a F* up).\n\n\n>this is why, when I recently had to create some large capacity \n>arrays, I'm only ending up with machines with a few dozen drives in \n>them instead of hundreds. I've got two machines with 6TB of disk, \n>one with 8TB, one with 10TB, and one with 20TB. I'm building these \n>sytems for ~$1K/TB for the disk arrays. other departments sho shoose \n>$bigname 'enterprise' disk arrays are routinely paying 50x that price\n>\n>I am very sure that they are not getting 50x the reliability, I'm \n>sure that they aren't getting 2x the reliability.\n...and I'm very sure they are being gouged mercilessly by vendors who \nare padding their profit margins exorbitantly at the customer's expense.\nHDs or memory from the likes of EMC, HP, IBM, or Sun has been \noverpriced for decades.\nUnfortunately, for every one of me who shop around for good vendors \nthere are 20+ corporate buyers who keep on letting themselves get gouged.\nGouging is not going stop until the gouge prices are unacceptable to \nenough buyers.\n\nNow if the issue of price difference is based on =I/O interface= (SAS \nvs SATA vs FC vs SCSI), that's a different, and orthogonal, issue.\nThe simple fact is that optical interconnects are far more expensive \nthan anything else and that SCSI electronics cost significantly more \nthan anything except FC.\nThere's gouging here as well, but far more of the pricing is justified.\n\n\n\n>I believe that the biggest cause for data loss from people useing \n>the 'cheap' drives is due to the fact that one 'cheap' drive holds \n>the capacity of 5 or so 'expensive' drives, and since people don't \n>realize this they don't realize that the time to rebuild the failed \n>drive onto a hot-spare is correspondingly longer.\nCommodity HDs get 1 year warranties for the same reason enterprise \nHDs get 5+ year warranties: the vendor's confidence that they are not \ngoing to lose money honoring the warranty in question.\n\nAFAIK, there is no correlation between capacity of HDs and failure \nrates or warranties on them.\n\n\nYour point regarding using 2 cheaper systems in parallel instead of 1 \ngold plated system is in fact an expression of a basic Axiom of \nSystems Theory with regards to Single Points of Failure. 
Once \ncomponents become cheap enough, it is almost always better to have \nredundancy rather than all one's eggs in 1 heavily protected basket.\n\n\nFrankly, the only thing that made me feel combative is when someone \nclaimed there's no difference between anecdotal evidence and a \nprofessional opinion or advice.\nThat's just so utterly unrealistic as to defy belief.\nNo one would ever get anything done if every business decision had to \nwait on properly designed and executed lab studies.\n\nIt's also insulting to everyone who puts in the time and effort to be \na professional within a field rather than a lay person.\n\nWhether there's a name for it or not, there's definitely an important \ndistinction between each of anecdote, professional opinion, and study result.\n\n\nCheers,\nRon Peacetree\n\n\n\n\n\n", "msg_date": "Sat, 07 Apr 2007 20:46:33 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "On Sat, 7 Apr 2007, Ron wrote:\n\n>> Ron, why is it that you assume that anyone who disagrees with you doesn't \n>> work in an environment where they care about the datacenter environment, \n>> and aren't in fields like financial services? and why do you think that we \n>> are just trying to save a few pennies? (the costs do factor in, but it's \n>> not a matter of pennies, it's a matter of tens of thousands of dollars)\n> I don't assume that. I didn't make any assumptions. I (rightfully IMHO) \n> criticized everyone jumping on the \"See, cheap =is= good!\" bandwagon that the \n> Google and CMU studies seem to have ignited w/o thinking critically about \n> them.\n\nRon, I think that many people aren't saying cheap==good, what we are \ndoing is arguing against the idea that expensive==good (and its \ncorollary cheap==bad)\n\n> I've never mentioned or discussed specific financial amounts, so you're \n> making an (erroneous) assumption when you think my concern is over people \n> \"trying to save a few pennies\".\n>\n> In fact, \"saving pennies\" is at the =bottom= of my priority list for the \n> class of applications I've been discussing. I'm all for economical, but to \n> paraphrase Einstein \"Things should be as cheap as possible; but no cheaper.\"\n\nthis I fully agree with, I have no problem spending money if I believe \nthat there's a corresponding benefit.\n\n> My biggest concern is that something I've seen over and over again in my \n> career will happen again:\n> People tend to jump at the _slightest_ excuse to believe a story that will \n> save them short term money and resist even _strong_ reasons to pay up front \n> for quality. Even if paying more up front would lower their lifetime TCO.\n\non the other hand, it's easy for people to blow $bigbucks with this \nargument with no significant reduction in their maintenance costs.\n\n> The Google and CMU studies are =not= based on data drawn from businesses \n> where the lesser consequences of an outage are losing $10Ks or $100K per \n> minute... 
...and where the greater consequences include the chance of loss of \n> human life.\n> Nor are they based on businesses that must rely exclusively on highly skilled \n> and therefore expensive labor.\n\nhmm, I didn't see the CMU study document what businesses it used.\n\n> In the case of the CMU study, people are even extrapolating an economic \n> conclusion the original author did not even make or intend!\n> Is it any wonder I'm expressing concern regarding inappropriate extrapolation \n> of those studies?\n\nI missed the posts where people were extrapolating economic conclusions, \nwhat I saw was people stateing that 'you better buy the SCSI drives as \nthey are more reliable', and other people pointing out that recent studies \nindicate that there's not a significant difference in drive reliability \nbetween the two types of drives\n\n>> I actually work in the financial services field, I do have a good \n>> datacenter environment that's well cared for.\n>> \n>> while I don't personally maintain machines with hundreds of drives each, I \n>> do maintain hundreds of machines with a small number of drives in each, and \n>> a handful of machines with a few dozens of drives. (the database machines \n>> are maintained by others, I do see their failed drives however)\n>> \n>> it's also true that my expericance is only over the last 10 years, so I've \n>> only been working with a few generations of drives, but my experiance is \n>> different from yours.\n>> \n>> my experiance is that until the drives get to be 5+ years old the failure \n>> rate seems to be about the same for the 'cheap' drives as for the 'good' \n>> drives. I won't say that they are exactly the same, but they are close \n>> enough that I don't believe that there is a significant difference.\n>> \n>> in other words, these studies do seem to match my experiance.\n> Fine. Let's pretend =You= get to build Citibank's or Humana's next mission \n> critical production DBMS using exclusively HDs with 1 year warranties.\n> (never would be allowed ITRW)\n\nwho is arguing that you should use drives with 1 year warranties? in case \nyou blinked consumer drive warranties are backup to 5 years.\n\n> Even if you RAID 6 them, I'll bet you anything that a system with 32+ HDs on \n> it is likely enough to spend a high enough percentage of its time operating \n> in degraded mode that you are likely to be looking for a job as a consequence \n> of such a decision.\n> ...and if you actually suffer data loss or, worse, data corruption, that's a \n> Career Killing Move.\n> (and it should be given the likely consequences to the public of such a F* \n> up).\n\nso now it's \"nobody got fired for buying SCSI?\"\n\n>> this is why, when I recently had to create some large capacity arrays, I'm \n>> only ending up with machines with a few dozen drives in them instead of \n>> hundreds. I've got two machines with 6TB of disk, one with 8TB, one with \n>> 10TB, and one with 20TB. I'm building these sytems for ~$1K/TB for the disk \n>> arrays. 
other departments sho shoose $bigname 'enterprise' disk arrays are \n>> routinely paying 50x that price\n>> \n>> I am very sure that they are not getting 50x the reliability, I'm sure that \n>> they aren't getting 2x the reliability.\n> ...and I'm very sure they are being gouged mercilessly by vendors who are \n> padding their profit margins exorbitantly at the customer's expense.\n> HDs or memory from the likes of EMC, HP, IBM, or Sun has been overpriced for \n> decades.\n> Unfortunately, for every one of me who shop around for good vendors there are \n> 20+ corporate buyers who keep on letting themselves get gouged.\n> Gouging is not going stop until the gouge prices are unacceptable to enough \n> buyers.\n\nit's also not going to be stopped until people actually look at the \nreliability of what they are getting, rather than assuming that becouse \nit's labled 'enterprise' and costs more that it must be more reliable.\n\nfrankly, I think that a lot of the cost comes from the simple fact that \nthey use smaller SCSI drives (most of them haven't starting useing 300G \ndrives yet), and so they end up needing ~5x more drive bays, power, \ncooling, cableing, ports on the controllers, etc. if you need 5x the \nnumber of drives and they each cost 3x as much, you are already up to 15x \nprice multiplier, going from there to 50x is only adding another 3x \nmultiplier (which with the extra complexity of everything is easy to see, \nand almost seems reasonable)\n\n> Now if the issue of price difference is based on =I/O interface= (SAS vs SATA \n> vs FC vs SCSI), that's a different, and orthogonal, issue.\n> The simple fact is that optical interconnects are far more expensive than \n> anything else and that SCSI electronics cost significantly more than anything \n> except FC.\n> There's gouging here as well, but far more of the pricing is justified.\n\ngoing back to the post that started this thread. the OP was looking at two \nequivalently priced systems, one with 8x73G SCSI and the other with \n24x300G SATA. I don't buy the argument that the SCSI electronics are \n_that_ expensive (especially since SATA and SAS are designed to be \ncompatable enough to plug togeather). yes the SCSI drives spin faster, and \nthat does contribute to the cost, but it still should't make one drive \ncost 3x the other.\n\n>> I believe that the biggest cause for data loss from people useing the \n>> 'cheap' drives is due to the fact that one 'cheap' drive holds the capacity \n>> of 5 or so 'expensive' drives, and since people don't realize this they \n>> don't realize that the time to rebuild the failed drive onto a hot-spare is \n>> correspondingly longer.\n> Commodity HDs get 1 year warranties for the same reason enterprise HDs get 5+ \n> year warranties: the vendor's confidence that they are not going to lose \n> money honoring the warranty in question.\n\nat least seagate gives 5 year warranties on their consumer drives.\n\n> AFAIK, there is no correlation between capacity of HDs and failure rates or \n> warranties on them.\n\ncorrect, but the larger drive will take longer to rebuild, so your window \nof vunerability is larger.\n\n> Your point regarding using 2 cheaper systems in parallel instead of 1 gold \n> plated system is in fact an expression of a basic Axiom of Systems Theory \n> with regards to Single Points of Failure. 
Once components become cheap \n> enough, it is almost always better to have redundancy rather than all one's \n> eggs in 1 heavily protected basket.\n\nalso correct, the question is 'have hard drives reached this point'\n\nthe thought that there isn't a big difference in the reliability of the \ndrives doesn't mean that the enterprise drives are getting worse, it means \nthat the consumer drives are getting better, so much so that they they are \na valid option.\n\nif I had the money to waste, I would love to see someone open the \n'consumer grade' seagate Barracuda 7200.10 750G drive along with a \n'enterprise grade' seagate Barracuda ES 750G drive (both of which have 5 \nyear warranties) to see if there is still the same 'dramatic difference' \nbetween consumer and enterprise drives that there used to be.\n\nit would also be interesting to compare the high-end scsi drives with the \nrecent SATA/IDE drives. I'll have to look and see if I can catch some \ndead drives before they get destroyed and open them up.\n\n> Frankly, the only thing that made me feel combative is when someone claimed \n> there's no difference between anecdotal evidence and a professional opinion \n> or advice.\n> That's just so utterly unrealistic as to defy belief.\n> No one would ever get anything done if every business decision had to wait on \n> properly designed and executed lab studies.\n\nI think the assumption on lists like this is that anything anyone says is \na professional opinion, until proven otherwise. but a professional \nopinion (no matter who it's from) isn't as good as a formal study\n\n> It's also insulting to everyone who puts in the time and effort to be a \n> professional within a field rather than a lay person.\n\nit's also insulting to assume (or appear to assume) that everyone who \ndisagrees with your is a lay person. you may not have meant it (this is \ne-mail after all, with all the problems that come from that), but this is \nwhat you seem to have been implying, if not outright saying.\n\n> Whether there's a name for it or not, there's definitely an important \n> distinction between each of anecdote, professional opinion, and study result.\n\nthe line between an anecdote and a professional opinion is pretty blury, \nand hard to see without wasting a lot of time getting everyone to give \ntheir credentials, etc. if a professional doesn't spend enough time \nthinking about some of the details (i.e. 
how many drive failures of each \ntype have I seen in the last 5 years as opposed to in the 5 year timeframe \nfrom 1980-1985) they can end up giving an opinion that's in the range of \nreliability and relavance that anecdotes are.\n\ndon't assume malice so quickly.\n\nDavid Lang\n", "msg_date": "Sat, 7 Apr 2007 20:13:51 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "\n>>> I believe that the biggest cause for data loss from people useing the \n>>> 'cheap' drives is due to the fact that one 'cheap' drive holds the \n>>> capacity of 5 or so 'expensive' drives, and since people don't \n>>> realize this they don't realize that the time to rebuild the failed \n>>> drive onto a hot-spare is correspondingly longer.\n>> Commodity HDs get 1 year warranties for the same reason enterprise HDs \n>> get 5+ year warranties: the vendor's confidence that they are not \n>> going to lose money honoring the warranty in question.\n> \n> at least seagate gives 5 year warranties on their consumer drives.\n\nHitachi 3 years\nMaxtor 3 years\nSamsung 1-3 years depending on drive (but who buys samsung drives)\nSeagate 5 years (300 Gig, 7200 RPM perpendicular recording... 89 bucks)\nWestern Digital 3-5 years depending on drive\n\nJoshua D. Drake\n\n\n\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Sat, 07 Apr 2007 20:20:28 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "On Sat, Apr 07, 2007 at 08:46:33PM -0400, Ron wrote:\n> The Google and CMU studies are =not= based on data drawn from \n> businesses where the lesser consequences of an outage are losing \n> $10Ks or $100K per minute... ...and where the greater consequences \n> include the chance of loss of human life.\n> Nor are they based on businesses that must rely exclusively on highly \n> skilled and therefore expensive labor.\n\nGoogle up time seems to be quite good. Reliability can be had from trusting\nmore reputable (and usually more expensive) manufacturers and product lines,\nor it can be had through redundancy. The \"I\" in RAID.\n\nI recall reading the Google study before, and believe I recall it\nlacking in terms of how much it costs to pay the employees to maintain\nthe system. It would be interesting to know whether the inexpensive\ndrives require more staff time to be spent on it. Staff time can\neasily become more expensive than the drives themselves.\n\nI believe there are factors that exist that are not easy to calculate.\nSomebody else mentioned how Intel was not the cleanest architecture,\nand yet, how Intel architecture makes up the world's fastest machines,\nand the cheapest machines per work to complete. There is a game of\nnumbers being played. A manufacturer that sells 10+ million units has\nthe resources, the profit margin, and the motivation, to ensure that\ntheir drives are better than a manufacturer that sells 100 thousand\nunits. 
Even if the manufacturer of the 100 K units spends double in\ndevelopment per unit, they would only be spending 1/50 as much as the\nmanufacturer who makes 10+ million units.\n\nAs for your experience - no disrespect - but if your experience is over\nthe last 25 years, then you should agree that most of those years are\nno longer relevant in terms of experience. SATA has only existed for\n5 years or less, and is only now stabilizing in terms of having the\ndifferent layers of a solution supporting the features like command\nqueuing. The drives of today have already broken all sorts of rules that\npeople assumed were not possible to break 5 years ago, 10 years ago, or\n20 years ago. The playing field is changing. Even if your experience is\ncorrect or valid today - it may not be true tomorrow.\n\nThe drives of today, I consider to be incredible in terms of quality,\nreliability, speed, and density. All of the major brands, for desktops\nor servers, IDE, SATA, or SCSI, are amazing compared to only 10 years\nago. To say that they don't meet a standard - which standard?\n\nEverything has a cost. Having a drive never break, will have a very\nlarge cost. It will cost more to turn 99.9% to 99.99%. Given that the\nproducts will never be perfect, perhaps it is valid to invest in a\nlow-cost fast-to-implement recovery solution, that will assumes that\nsome number of drives will fail in 6 months, 1 year, 2 years, and 5\nyears. Assume they will fail, because regardless of what you buy -\ntheir is a good chance that they *will* fail. Paying double price for\nhardware, with a hope that they will not fail, may not be a good\nstrategy.\n\nI don't have a conclusion here - only things to consider.\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n", "msg_date": "Sun, 8 Apr 2007 03:49:54 -0400", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": "At 11:13 PM 4/7/2007, [email protected] wrote:\n>On Sat, 7 Apr 2007, Ron wrote:\n>\n>Ron, I think that many people aren't saying cheap==good, what we are \n>doing is arguing against the idea that expesnsive==good (and it's \n>coorelary cheap==bad)\nSince the buying decision is binary, you either buy high quality HDs \nor you don't, the distinction between the two statements makes no \ndifference ITRW and therefore is meaningless. \"The difference that \nmakes no difference =is= no difference.\"\n\nThe bottom line here is that no matter how it is \"spun\", people are \nusing the Google and the CMU studies to consider justifying reducing \nthe quality of the HDs they buy in order to reduce costs.\n\nFrankly, they would be better advised to directly attack price \ngouging by certain large vendors instead; but that is perceived as a \nharder problem. 
So instead they are considering what is essentially \nan example of Programming by Side Effect.\nEvery SW professional on this list has been taught how bad a strategy \nthat usually is.\n\n\n>>My biggest concern is that something I've seen over and over again \n>>in my career will happen again:\n>>People tend to jump at the _slightest_ excuse to believe a story \n>>that will save them short term money and resist even _strong_ \n>>reasons to pay up front for quality. Even if paying more up front \n>>would lower their lifetime TCO.\n>\n>on the other hand, it's easy for people to blow $bigbucks with this \n>argument with no significant reduction in their maintinance costs.\nNo argument there. My comments on people putting up with price \ngouging should make clear my position on overspending.\n\n\n>>The Google and CMU studies are =not= based on data drawn from \n>>businesses where the lesser consequences of an outage are losing \n>>$10Ks or $100K per minute... ...and where the greater consequences \n>>include the chance of loss of human life.\n>>Nor are they based on businesses that must rely exclusively on \n>>highly skilled and therefore expensive labor.\n>\n>hmm, I didn't see the CMU study document what businesses it used.\nSection 2.3: Data Sources, p3-4.\n3 HPC clusters, each described as \"The applications running on this \nsystem are typically large-scale scientific simulations or \nvisualization applications. +\n3 ISPs, 1 HW failure log, 1 warranty service log of hardware \nfailures, and 1 exclusively FC HD set based on 4 different kinds of FC HDs.\n\n\n>>In the case of the CMU study, people are even extrapolating an \n>>economic conclusion the original author did not even make or intend!\n>>Is it any wonder I'm expressing concern regarding inappropriate \n>>extrapolation of those studies?\n>\n>I missed the posts where people were extrapolating economic \n>conclusions, what I saw was people stateing that 'you better buy the \n>SCSI drives as they are more reliable', and other people pointing \n>out that recent studies indicate that there's not a significant \n>difference in drive reliability between the two types of drives\nThe original poster asked a simple question regarding 8 SCSI HDs vs \n24 SATA HDs. 
That question was answered definitively some posts ago\n(use 24 SATA HDs).\n\nOnce this thread started talking about the Google and CMU studies, it \nexpanded beyond the OPs original SCSI vs SATA question.\n(else why are we including FC and other issues in our considerations \nas in the CMU study?)\n\nWe seem to have evolved to\n\"Does paying more for enterprise class HDs vs consumer class HDs \nresult in enough of a quality difference to be worth it?\"\n\nTo analyze that question, the only two HD metrics that should be \nconsidered are\n1= whether the vendor rates the HD as \"enterprise\" or not, and\n2= the length of the warranty on the HD in question.\nOtherwise, one risks clouding the analysis due to the costs of the \ninterface used.\n(there are plenty of non HD metrics that need to be considered to \nexamine the issue properly.)\n\nThe CMU study was not examining any economic issue, and therefore to \ndraw an economic conclusion from it is questionable.\nThe CMU study was about whether the industry standard failure model \nmatched empirical historical evidence.\nUsing the CMU study for any other purpose risks misjudgment.\n\n\n>>Let's pretend =You= get to build Citibank's or Humana's next \n>>mission critical production DBMS using exclusively HDs with 1 year warranties.\n>>(never would be allowed ITRW)\n>\n>who is arguing that you should use drives with 1 year warranties? in \n>case you blinked consumer drive warranties are backup to 5 years.\nAs Josh Drake has since posted, they are not (although TBF most seem \nto be greater than 1 year at this point).\n\nSo can I safely assume that we have agreement that you would not \nadvise using HDs with less than 5 year warranties for any DBMS?\nIf so, the only debate point left is whether there is a meaningful \ndistinction between HDs rated as \"enterprise class\" vs others by the \nsame vendor within the same generation.\n\n\n>>Even if you RAID 6 them, I'll bet you anything that a system with \n>>32+ HDs on it is likely enough to spend a high enough percentage of \n>>its time operating in degraded mode that you are likely to be \n>>looking for a job as a consequence of such a decision.\n>>...and if you actually suffer data loss or, worse, data corruption, \n>>that's a Career Killing Move.\n>>(and it should be given the likely consequences to the public of \n>>such a F* up).\n>\n>so now it's \"nobody got fired for buying SCSI?\"\n|\nAgain, we are way past simply SCSI vs SATA interfaces issues and well \ninto more fundamental issues of HD quality and price.\n\nLet's bear in mind that SCSI is =a legacy technology=. Seagate will \ncease making all SCSI HDs in 2007. The SCSI standard has been \nstagnant and obsolescent for years. Frankly, the failure of the FC \nvendors to come out with 10Gb FC in a timely fashion has probably \nkilled that interface as well.\n\nThe future is most likely SATA vs SAS. =Those= are most likely the \nrelevant long-term technologies in this discussion.\n\n\n>frankly, I think that a lot of the cost comes from the simple fact \n>that they use smaller SCSI drives (most of them haven't starting \n>useing 300G drives yet), and so they end up needing ~5x more drive \n>bays, power, cooling, cableing, ports on the controllers, etc. 
if \n>you need 5x the number of drives and they each cost 3x as much, you \n>are already up to 15x price multiplier, going from there to 50x is \n>only adding another 3x multiplier (which with the extra complexity \n>of everything is easy to see, and almost seems reasonable)\n|\nWell, be prepared to re-examine this issue when you have to consider \nusing 2.5\" 73GB SAS HDs vs using 3.5\" >= 500GB SATA HDs.\n\nFor OLTP-like workloads, there is a high likelihood that solutions \ninvolving more spindles are going to be better than those involving \nfewer spindles.\n\nReliability isn't the only metric of consideration here. If \norganizations have to go certain routes to meet their business goals, \ntheir choices are legitimately constrained.\n(I recall being asked for a 10TB OLAP system 7 years ago and telling \nthe client for that point in time the only DBMS products that could \nbe trusted with that task were DB2 and Oracle: an answer the M$ \nfavoring CEO of the client did !not! like.)\n\n\n>if I had the money to waste, I would love to see someone open the \n>'consumer grade' seagate Barracuda 7200.10 750G drive along with a \n>'enterprise grade' seagate Barracuda ES 750G drive (both of which \n>have 5 year warranties) to see if there is still the same 'dramatic \n>difference' between consumer and enterprise drives that there used to be.\n>\n>it would also be interesting to compare the high-end scsi drives \n>with the recent SATA/IDE drives. I'll have to look and see if I can \n>catch some dead drives before they get destroyed and open them up.\nI have to admit I haven't done this experiment in a few years \neither. When I did, there always was a notable difference (in \nkeeping with the vendor's claims as such)\n\n\nThis thread is not about whether there is a difference worthy of note \nbetween anecdotal opinion, professional advice, and the results of studies.\nI've made my POV clear on that topic and if there is to be a more \nthorough analysis or discussion of it, it properly belongs in another thread.\n\n\nCheers,\nRon Peacetree \n\n", "msg_date": "Sun, 08 Apr 2007 11:03:43 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" }, { "msg_contents": ">Logic?\n\nFoul! That's NOT evidence.\n\n>\n>Mechanical devices have decreasing MTBF when run in hotter environments,\n>often at non-linear rates.\n\nI agree that this seems intuitive. But I think taking it as a cast-iron\ntruth is dangerous.\n\n>Server class drives are designed with a longer lifespan in mind.\n\nEvidence?\n\n>Server class hard drives are rated at higher temperatures than desktop\n>drives.\n>\n>Google can supply any numbers to fill those facts in, but I found a\n>dozen or so data sheets for various enterprise versus desktop drives in\n>a matter of minutes.\n\nI know what the marketing info says, that's not the point. Bear in mind\nthat these are somewhat designed to justify very much higher prices.\n\nI'm looking for statistical evidence that the difference is there, not\nmarketing colateral. They may be designed to be more reliable. And\nthe design targets *that the manufacturer admits to* may be more\nstringent, but I'm interested to know what the actual measured difference\nis.\n\n From the sound of it, you DON'T have such evidence. 
Which is not a\nsurprise, because I don't have it either, and I do try to keep my eyes\nopen for it.\n\nJames\n", "msg_date": "Sun, 8 Apr 2007 19:46:43 +0100", "msg_from": "\"James Mansion\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SCSI vs SATA" } ]
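A quick back-of-envelope model makes the rebuild-window arithmetic in the thread above concrete: a bigger drive takes longer to resync onto a hot spare, so a degraded array is exposed to a second failure for longer. The sketch below is an illustration only; the 24-drive array, the annual failure rates and the 30 MB/s rebuild rate are assumptions, not figures taken from the posts or from the Google/CMU studies.

```python
# Back-of-envelope model for the rebuild-window argument: while a failed
# drive is rebuilt onto a hot spare, the array runs degraded and a further
# failure is what actually costs data. All inputs below are assumed values.
import math

def p_second_failure(n_drives, afr, capacity_gb, rebuild_mb_per_s):
    """P(at least one more surviving drive fails during the rebuild),
    using a simple constant-hazard (exponential) failure model per drive."""
    rebuild_hours = capacity_gb * 1024.0 / rebuild_mb_per_s / 3600.0
    lam = -math.log(1.0 - afr) / (365.0 * 24.0)   # per-drive hazard per hour
    return 1.0 - math.exp(-lam * (n_drives - 1) * rebuild_hours)

for capacity_gb in (73, 300, 750):                # typical 2007-era sizes
    for afr in (0.02, 0.04):                      # assumed annual failure rates
        p = p_second_failure(24, afr, capacity_gb, rebuild_mb_per_s=30.0)
        print("%4d GB drives, AFR %2.0f%%: P(2nd failure during rebuild) = %.3f%%"
              % (capacity_gb, afr * 100, p * 100))
```

Under this crude model the exposure grows roughly linearly with capacity, which is the substance of the "window of vulnerability" point; it says nothing about whether enterprise or consumer drives fail more often.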
[ { "msg_contents": "Hi Thor,\n\nThor-Michael St�re wrote:\n> On 2007-04-04 Arnau wrote:\n>> Josh Berkus wrote:\n>>> Arnau,\n>>> \n>>>> Is there anything similar in PostgreSQL? The idea behind this\n>>>> is how I can do in PostgreSQL to have tables where I can query\n>>>> on them very often something like every few seconds and get\n>>>> results very fast without overloading the postmaster.\n>>> If you're only querying the tables every few seconds, then you\n>>> don't really need to worry about performance.\n> \n>> Well, the idea behind this is to have events tables, and a\n>> monitoring system polls that table every few seconds. I'd like to\n>> have a kind of FIFO stack. From \"the events producer\" point of view\n>> he'll be pushing rows into that table, when it's filled the oldest\n>> one will be removed to leave room to the newest one. From \"the\n>> consumer\" point of view he'll read all the contents of that table.\n> \n>> So I'll not only querying the tables, I'll need to also modify that\n>> tables.\n> \n> Please try to refrain from doing this. This is the \"Database as an\n> IPC\" antipattern (Antipatterns are \"commonly-reinvented bad solutions\n> to problems\", I.E. you can be sure someone has tried this very thing\n> before and found it to be a bad solution)\n> \n> http://en.wikipedia.org/wiki/Database_as_an_IPC\n> \n> Best solution is (like Ansgar hinted at) to use a real IPC system.\n> \n> Ofcourse, I've done it myself (not on PostgreSQL though) when working\n> at a large corporation where corporate politics prevented me from\n> introducing any new interdependency between systems (like having two\n> start talking with eachother when they previously didn't), the only\n> \"common ground\" for systems that needed to communicate was a\n> database, and one of the systems was only able to run simple SQL\n> statements and not stored procedures.\n\n\n First of all, thanks for your interested but let me explain what I \nneed to do.\n\n We have a web application where customers want to monitor how it's \nperforming, but not performing in terms of speed but how many customers \nare now browsing in the application, how many have payed browsing \nsessions, how many payments have been done, ... More or less is to have \na control panel. The difference is that they want that the information \ndisplayed on a web browser must be \"real-time\" that is a query every \n1-10 seconds.\n\n Then, I haven't read yet the article but I'll do it, how you'd do \nwhat I need to do?\n\nThanks\n-- \nArnau\n", "msg_date": "Wed, 04 Apr 2007 14:35:56 +0200", "msg_from": "Arnau <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Equivalents in PostgreSQL of MySQL's \"ENGINE=MEMORY\"\n\t\"MAX_ROWS=1000\"" }, { "msg_contents": "Arnau wrote:\n> Hi Thor,\n> \n> Thor-Michael St�re wrote:\n>> On 2007-04-04 Arnau wrote:\n>>> Josh Berkus wrote:\n>>>> Arnau,\n>>>>\n>>>>> Is there anything similar in PostgreSQL? The idea behind this\n>>>>> is how I can do in PostgreSQL to have tables where I can query\n>>>>> on them very often something like every few seconds and get\n>>>>> results very fast without overloading the postmaster.\n>>>> If you're only querying the tables every few seconds, then you\n>>>> don't really need to worry about performance.\n>>\n>>> Well, the idea behind this is to have events tables, and a\n>>> monitoring system polls that table every few seconds. I'd like to\n>>> have a kind of FIFO stack. 
From \"the events producer\" point of view\n>>> he'll be pushing rows into that table, when it's filled the oldest\n>>> one will be removed to leave room to the newest one. From \"the\n>>> consumer\" point of view he'll read all the contents of that table.\n>>\n>>> So I'll not only querying the tables, I'll need to also modify that\n>>> tables.\n>>\n>> Please try to refrain from doing this. This is the \"Database as an\n>> IPC\" antipattern (Antipatterns are \"commonly-reinvented bad solutions\n>> to problems\", I.E. you can be sure someone has tried this very thing\n>> before and found it to be a bad solution)\n>>\n>> http://en.wikipedia.org/wiki/Database_as_an_IPC\n>>\n>> Best solution is (like Ansgar hinted at) to use a real IPC system.\n>>\n>> Ofcourse, I've done it myself (not on PostgreSQL though) when working\n>> at a large corporation where corporate politics prevented me from\n>> introducing any new interdependency between systems (like having two\n>> start talking with eachother when they previously didn't), the only\n>> \"common ground\" for systems that needed to communicate was a\n>> database, and one of the systems was only able to run simple SQL\n>> statements and not stored procedures.\n> \n> \n> First of all, thanks for your interested but let me explain what I \n> need to do.\n> \n> We have a web application where customers want to monitor how it's \n> performing, but not performing in terms of speed but how many customers \n> are now browsing in the application, how many have payed browsing \n> sessions, how many payments have been done, ... More or less is to have \n> a control panel. The difference is that they want that the information \n> displayed on a web browser must be \"real-time\" that is a query every \n> 1-10 seconds.\n\n\nThough that has been suggested earlier, but why not use pgmemcache and \npush each event as a new key? As memcached is FIFO by design that is \nexacly what you ask for. Besides that memcached is so fast that your OS \nis more busy with handling all that TCP connections than running memcached.\n\nAnd in case you'd like to display statistical data and not tailing \nevents, let PG push that to memcached keys as well. See memcached as a \nmaterialized view in that case.\n\nAs middleware I'd recommend lighttpd with mod_magnet.\n\nYou should be able to delivery that admin page way more than 5000 times \n/ sec with some outdated desktop hardware. If that's not enough read up \non things like \nhttp://blog.lighttpd.net/articles/2006/11/27/comet-meets-mod_mailbox\n\n\n-- \nBest regards,\nHannes Dorbath\n", "msg_date": "Wed, 18 Apr 2007 22:04:56 +0200", "msg_from": "Hannes Dorbath <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Equivalents in PostgreSQL of MySQL's \"ENGINE=MEMORY\"\n\t\"MAX_ROWS=1000\"" } ]
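For the monitoring panel described above, the pgmemcache suggestion amounts to treating memcached as the event buffer that the browser-facing poller reads. memcached is an LRU cache rather than a real FIFO, so the sketch below emulates the "drop the oldest" behaviour with a counter plus per-event keys and TTL expiry. The python-memcached client, the key names and the JSON payloads are assumptions made for illustration; with pgmemcache the producer side could be a trigger inside PostgreSQL instead of application code.

```python
# Sketch of the "push events into memcached" idea: a counter key plus one key
# per event emulates a bounded FIFO on top of what is really an LRU cache.
# The python-memcached client, key names and payload format are assumptions.
import json
import time
import memcache                      # pip install python-memcached

mc = memcache.Client(['127.0.0.1:11211'])
mc.add('events:head', 0)             # no-op if the counter already exists

def push_event(payload, ttl=300):
    """Producer side: store one monitoring event and advance the counter."""
    seq = mc.incr('events:head')
    mc.set('events:%d' % seq, json.dumps(payload), time=ttl)
    return seq

def read_recent(limit=100):
    """Consumer side: the control panel polls this every few seconds."""
    head = int(mc.get('events:head') or 0)
    keys = ['events:%d' % i for i in range(max(1, head - limit + 1), head + 1)]
    found = mc.get_multi(keys)
    return [json.loads(found[k]) for k in keys if k in found]

push_event({'type': 'payment', 'amount': 12.5, 'ts': time.time()})
print(read_recent(10))
```

The web tier can then serve the output of read_recent() every 1-10 seconds without adding polling load on the main database.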
[ { "msg_contents": "Hello All,\n\nI've been searching the archives for something similar, without success..\n\nWe have an application subjected do sign documents and store them\nsomewhere. The files size may vary from Kb to Mb. Delelopers are\narguing about the reasons to store files direcly on operating system\nfile system or on the database, as large objects. My boss is\nconsidering file system storing, because he is concerned about\nintegrity, backup/restore corruptions. I'd like to know some reasons\nto convince them to store these files on PosgtreSQL, including\nintegrity, and of course, performance. I would like to know the file\nsystem storing disadvantages as well.\n\nThanks in advace.\nAlex\n", "msg_date": "Wed, 4 Apr 2007 11:03:49 -0300", "msg_from": "\"Alexandre Vasconcelos\" <[email protected]>", "msg_from_op": true, "msg_subject": "Large objetcs performance" }, { "msg_contents": "On 04.04.2007, at 08:03, Alexandre Vasconcelos wrote:\n\n> We have an application subjected do sign documents and store them\n> somewhere. The files size may vary from Kb to Mb. Delelopers are\n> arguing about the reasons to store files direcly on operating system\n> file system or on the database, as large objects. My boss is\n> considering file system storing, because he is concerned about\n> integrity, backup/restore corruptions. I'd like to know some reasons\n> to convince them to store these files on PosgtreSQL, including\n> integrity, and of course, performance. I would like to know the file\n> system storing disadvantages as well.\n\nIt is not directly PostgreSQL related, but this might give you \nsomething to think about:\n\nhttp://en.wikibooks.org/wiki/Programming:WebObjects/Web_Applications/ \nDevelopment/Database_vs_Filesystem\n\ncug\n", "msg_date": "Wed, 4 Apr 2007 08:11:27 -0600", "msg_from": "Guido Neitzer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large objetcs performance" }, { "msg_contents": "On 4/4/07, Alexandre Vasconcelos <[email protected]> wrote:\n> We have an application subjected do sign documents and store them\n> somewhere. The files size may vary from Kb to Mb. Delelopers are\n> arguing about the reasons to store files direcly on operating system\n> file system or on the database, as large objects. My boss is\n> considering file system storing, because he is concerned about\n> integrity, backup/restore corruptions. I'd like to know some reasons\n> to convince them to store these files on PosgtreSQL, including\n> integrity, and of course, performance. I would like to know the file\n> system storing disadvantages as well.\n\nThis topic actually gets debated about once a month on the lists :-).\nCheck the archives, but here is a quick summary:\n\nStoring objects on the file system:\n* usually indexed on the database for searching\n* faster than database (usually)\n* more typical usage pattern\n* requires extra engineering if you want to store huge numbers of objects\n* requires extra engineering to keep your database in sync. on\npostgresql irc someone suggested a clever solution with inotify\n* backup can be a pain (even rsync has its limits) -- for really big\nsystems, look at clustering solutions (drbd for example)\n* lots of people will tell you this 'feels' right or wrong -- ignore them :-)\n* well traveled path. it can be made to work.\n\nStoring objects on the database:\n* slower, but getting faster -- its mostly cpu bound currently\n* get very recent cpu. 
core2 xeons appear to be particularly good at this.\n* use bytea, not large objects\n* will punish you if your client interface does not communicate with\ndatabase in binary\n* less engineering in the sense you are not maintaining two separate systems\n* forget backing up with pg_dump...go right to pitr (maybe slony?)\n* 1gb limit. be aware of high memory requirements\n* you get to work with all your data with single interface and\nadministrate one system -- thats the big payoff.\n* less well traveled path. put your r&d cap on and be optimistic but\nskeptical. do some tests.\n\nmerlin\n", "msg_date": "Thu, 12 Apr 2007 09:42:03 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large objetcs performance" }, { "msg_contents": "Hello Alexandre,\n\n<We have an application subjected do sign documents and store them \nsomewhere.>\n\nI developed a relative simple \"file archive\" with PostgreSQL (web \napplication with JSF for user interface). The major structure is one \ntable with some \"key word fields\", and 3 blob-fields (because exactly 3 \nfiles belong to one record). I have do deal with millions of files (95% \nabout 2-5KB, 5% are greater than 1MB).\nThe great advantage is that I don't have to \"communicate\" with the file \nsystem (try to open a directory with 300T files on a windows system... \nit's horrible, even on the command line).\n\nThe database now is 12Gb, but searching with the web interface has a \nmaximum of 5 seconds (most searches are faster). The one disadvantage is \nthe backup (I use pg_dump once a week which needs about 10 hours). But \nfor now, this is acceptable for me. But I want to look at slony or port \neverything to a linux machine.\n\nUlrich\n", "msg_date": "Sat, 21 Apr 2007 09:27:03 +0200", "msg_from": "Ulrich Cech <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large objetcs performance" } ]
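A minimal client-side sketch of the "store the files in a bytea column" option weighed in this thread, using psycopg2. The schema, connection string and function names are assumptions; the points carried over from the posts are that bytea is used rather than large objects and that the document bytes travel as bound parameters instead of being spliced into the SQL text. The 1 GB per-value limit and the memory cost mentioned above still apply, which is fine for the KB-to-MB documents the original poster describes.

```python
# Sketch of the bytea storage option with psycopg2. Assumed schema:
#   CREATE TABLE documents (id serial PRIMARY KEY, name text, content bytea);
# The DSN and names below are placeholders, not anything from the thread.
import psycopg2

conn = psycopg2.connect("dbname=docarchive")

def store_document(name, path):
    """Read a signed document from disk and insert it as a bytea value."""
    with open(path, 'rb') as f:
        payload = f.read()                     # fine for KB..MB sized files
    with conn.cursor() as cur:
        cur.execute(
            "INSERT INTO documents (name, content) VALUES (%s, %s) RETURNING id",
            (name, psycopg2.Binary(payload)))  # bound parameter, binary-safe
        doc_id = cur.fetchone()[0]
    conn.commit()
    return doc_id

def fetch_document(doc_id, path):
    """Write a stored document back out to a file."""
    with conn.cursor() as cur:
        cur.execute("SELECT content FROM documents WHERE id = %s", (doc_id,))
        content = cur.fetchone()[0]            # buffer/memoryview in psycopg2
    with open(path, 'wb') as f:
        f.write(bytes(content))
    conn.commit()
```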
[ { "msg_contents": "Hi,\n\nA page may be double buffered in PG's buffer pool and in OS's buffer cache.\nOther DBMS like DB2 and Oracle has provided Direct I/O option to eliminate\ndouble buffering. I noticed there were discusses on the list. But\nI can not find similar option in PG. Does PG support direct I/O now?\n\nThe tuning guide of PG usually recommends a small shared buffer pool\n(compared\nto the size of physical memory). I think it is to avoid swapping. If\nthere were\nswapping, OS kernel may swap out some pages in PG's buffer pool even PG\nwant to keep them in memory. i.e. PG would loose full control over\nbuffer pool.\nA large buffer pool is not good because it may\n1. cause more pages double buffered, and thus decrease the efficiency of\nbuffer\ncache and buffer pool.\n2. may cause swapping.\nAm I right?\n\nIf PG's buffer pool is small compared with physical memory, can I say\nthat the\nhit ratio of PG's buffer pool is not so meaningful because most misses\ncan be\nsatisfied by OS Kernel's buffer cache?\n\nThanks!\n\n\nXiaoning\n\n", "msg_date": "Thu, 05 Apr 2007 13:09:49 -0400", "msg_from": "Xiaoning Ding <[email protected]>", "msg_from_op": true, "msg_subject": "a question about Direct I/O and double buffering" }, { "msg_contents": "On Apr 5, 2007, at 12:09 PM, Xiaoning Ding wrote:\n\n> Hi,\n>\n> A page may be double buffered in PG's buffer pool and in OS's \n> buffer cache.\n> Other DBMS like DB2 and Oracle has provided Direct I/O option to \n> eliminate\n> double buffering. I noticed there were discusses on the list. But\n> I can not find similar option in PG. Does PG support direct I/O now?\n>\n> The tuning guide of PG usually recommends a small shared buffer pool\n> (compared\n> to the size of physical memory). I think it is to avoid swapping. If\n> there were\n> swapping, OS kernel may swap out some pages in PG's buffer pool \n> even PG\n> want to keep them in memory. i.e. PG would loose full control over\n> buffer pool.\n> A large buffer pool is not good because it may\n> 1. cause more pages double buffered, and thus decrease the \n> efficiency of\n> buffer\n> cache and buffer pool.\n> 2. may cause swapping.\n> Am I right?\n>\n> If PG's buffer pool is small compared with physical memory, can I say\n> that the\n> hit ratio of PG's buffer pool is not so meaningful because most misses\n> can be\n> satisfied by OS Kernel's buffer cache?\n>\n> Thanks!\n\nTo the best of my knowledge, Postgres itself does not have a direct \nIO option (although it would be a good addition). So, in order to \nuse direct IO with postgres you'll need to consult your filesystem \ndocs for how to set the forcedirectio mount option. I believe it can \nbe set dynamically, but if you want it to be permanent you'll to add \nit to your fstab/vfstab file.\n\nerik jones <[email protected]>\nsoftware developer\n615-296-0838\nemma(r)\n\n\n\n\nOn Apr 5, 2007, at 12:09 PM, Xiaoning Ding wrote:Hi,A page may be double buffered in PG's buffer pool and in OS's buffer cache.Other DBMS like DB2 and Oracle has provided Direct I/O option to eliminatedouble buffering. I noticed there were discusses on the list. ButI can not find similar option in PG. Does PG support direct I/O now?The tuning guide of PG usually recommends a small shared buffer pool(comparedto the size of physical memory).  I think it is to avoid swapping. Ifthere wereswapping, OS kernel may swap out some pages in PG's buffer pool even PGwant to keep them in memory. i.e. 
PG would loose full control overbuffer pool.A large buffer pool is not good because it may1. cause more pages double buffered, and thus decrease the efficiency ofbuffercache and buffer pool.2. may cause swapping.Am I right?If PG's buffer pool is small compared with physical memory, can I saythat thehit ratio of PG's buffer pool is not so meaningful because most missescan besatisfied by OS Kernel's buffer cache?Thanks!To the best of my knowledge, Postgres itself does not have a direct IO option (although it would be a good addition).  So, in order to use direct IO with postgres you'll need to consult your filesystem docs for how to set the forcedirectio mount option.  I believe it can be set dynamically, but if you want it to be permanent you'll to add it to your fstab/vfstab file. erik jones <[email protected]>software developer615-296-0838emma(r)", "msg_date": "Thu, 5 Apr 2007 13:09:39 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: a question about Direct I/O and double buffering" }, { "msg_contents": "Erik Jones wrote:\n> On Apr 5, 2007, at 12:09 PM, Xiaoning Ding wrote:\n> \n>> Hi,\n>>\n>> A page may be double buffered in PG's buffer pool and in OS's buffer \n>> cache.\n>> Other DBMS like DB2 and Oracle has provided Direct I/O option to eliminate\n>> double buffering. I noticed there were discusses on the list. But\n>> I can not find similar option in PG. Does PG support direct I/O now?\n>>\n>> The tuning guide of PG usually recommends a small shared buffer pool\n>> (compared\n>> to the size of physical memory). I think it is to avoid swapping. If\n>> there were\n>> swapping, OS kernel may swap out some pages in PG's buffer pool even PG\n>> want to keep them in memory. i.e. PG would loose full control over\n>> buffer pool.\n>> A large buffer pool is not good because it may\n>> 1. cause more pages double buffered, and thus decrease the efficiency of\n>> buffer\n>> cache and buffer pool.\n>> 2. may cause swapping.\n>> Am I right?\n>>\n>> If PG's buffer pool is small compared with physical memory, can I say\n>> that the\n>> hit ratio of PG's buffer pool is not so meaningful because most misses\n>> can be\n>> satisfied by OS Kernel's buffer cache?\n>>\n>> Thanks!\n> \n> To the best of my knowledge, Postgres itself does not have a direct IO \n> option (although it would be a good addition). So, in order to use \n> direct IO with postgres you'll need to consult your filesystem docs for \n> how to set the forcedirectio mount option. I believe it can be set \n> dynamically, but if you want it to be permanent you'll to add it to your \n> fstab/vfstab file.\n\nI use Linux. It supports direct I/O on a per-file basis only. To \nbypass OS buffer cache,\nfiles should be opened with O_DIRECT option. 
I afraid that I have to \nmodify PG.\n\nXiaoning\n> \n> erik jones <[email protected] <mailto:[email protected]>>\n> software developer\n> 615-296-0838\n> emma(r)\n> \n> \n> \n\n", "msg_date": "Thu, 05 Apr 2007 14:22:53 -0400", "msg_from": "Xiaoning Ding <[email protected]>", "msg_from_op": true, "msg_subject": "Re: a question about Direct I/O and double buffering" }, { "msg_contents": "Not to hijack this thread, but has anybody here tested the behavior of\nPG on a file system with OS-level caching disabled via forcedirectio or\nby using an inherently non-caching file system such as ocfs2?\n\nI've been thinking about trying this setup to avoid double-caching now\nthat the 8.x series scales shared buffers better, but I figured I'd ask\nfirst if anybody here had experience with similar configurations.\n\n-- Mark\n\nOn Thu, 2007-04-05 at 13:09 -0500, Erik Jones wrote:\n> On Apr 5, 2007, at 12:09 PM, Xiaoning Ding wrote:\n> \n> > Hi,\n> > \n> > \n> > A page may be double buffered in PG's buffer pool and in OS's buffer\n> > cache.\n> > Other DBMS like DB2 and Oracle has provided Direct I/O option to\n> > eliminate\n> > double buffering. I noticed there were discusses on the list. But\n> > I can not find similar option in PG. Does PG support direct I/O now?\n> > \n> > \n> > The tuning guide of PG usually recommends a small shared buffer pool\n> > (compared\n> > to the size of physical memory). I think it is to avoid swapping.\n> > If\n> > there were\n> > swapping, OS kernel may swap out some pages in PG's buffer pool even\n> > PG\n> > want to keep them in memory. i.e. PG would loose full control over\n> > buffer pool.\n> > A large buffer pool is not good because it may\n> > 1. cause more pages double buffered, and thus decrease the\n> > efficiency of\n> > buffer\n> > cache and buffer pool.\n> > 2. may cause swapping.\n> > Am I right?\n> > \n> > \n> > If PG's buffer pool is small compared with physical memory, can I\n> > say\n> > that the\n> > hit ratio of PG's buffer pool is not so meaningful because most\n> > misses\n> > can be\n> > satisfied by OS Kernel's buffer cache?\n> > \n> > \n> > Thanks!\n> \n> \n> To the best of my knowledge, Postgres itself does not have a direct IO\n> option (although it would be a good addition). So, in order to use\n> direct IO with postgres you'll need to consult your filesystem docs\n> for how to set the forcedirectio mount option. I believe it can be\n> set dynamically, but if you want it to be permanent you'll to add it\n> to your fstab/vfstab file.\n> \n> \n> erik jones <[email protected]>\n> software developer\n> 615-296-0838\n> emma(r)\n> \n> \n> \n> \n> \n", "msg_date": "Thu, 05 Apr 2007 11:27:09 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: a question about Direct I/O and double buffering" }, { "msg_contents": "On Apr 5, 2007, at 1:22 PM, Xiaoning Ding wrote:\n\n> Erik Jones wrote:\n>> On Apr 5, 2007, at 12:09 PM, Xiaoning Ding wrote:\n>>> Hi,\n>>>\n>>> A page may be double buffered in PG's buffer pool and in OS's \n>>> buffer cache.\n>>> Other DBMS like DB2 and Oracle has provided Direct I/O option to \n>>> eliminate\n>>> double buffering. I noticed there were discusses on the list. But\n>>> I can not find similar option in PG. Does PG support direct I/O now?\n>>>\n>>> The tuning guide of PG usually recommends a small shared buffer pool\n>>> (compared\n>>> to the size of physical memory). I think it is to avoid \n>>> swapping. 
If\n>>> there were\n>>> swapping, OS kernel may swap out some pages in PG's buffer pool \n>>> even PG\n>>> want to keep them in memory. i.e. PG would loose full control over\n>>> buffer pool.\n>>> A large buffer pool is not good because it may\n>>> 1. cause more pages double buffered, and thus decrease the \n>>> efficiency of\n>>> buffer\n>>> cache and buffer pool.\n>>> 2. may cause swapping.\n>>> Am I right?\n>>>\n>>> If PG's buffer pool is small compared with physical memory, can I \n>>> say\n>>> that the\n>>> hit ratio of PG's buffer pool is not so meaningful because most \n>>> misses\n>>> can be\n>>> satisfied by OS Kernel's buffer cache?\n>>>\n>>> Thanks!\n>> To the best of my knowledge, Postgres itself does not have a \n>> direct IO option (although it would be a good addition). So, in \n>> order to use direct IO with postgres you'll need to consult your \n>> filesystem docs for how to set the forcedirectio mount option. I \n>> believe it can be set dynamically, but if you want it to be \n>> permanent you'll to add it to your fstab/vfstab file.\n>\n> I use Linux. It supports direct I/O on a per-file basis only. To \n> bypass OS buffer cache,\n> files should be opened with O_DIRECT option. I afraid that I have \n> to modify PG.\n>\n> Xiaoning\n\nLooks like it. I just did a cursory search of the archives and it \nseems that others have looked at this before so you'll probably want \nto start there if your up to it.\n\nerik jones <[email protected]>\nsoftware developer\n615-296-0838\nemma(r)\n\n\n\n\nOn Apr 5, 2007, at 1:22 PM, Xiaoning Ding wrote:Erik Jones wrote: On Apr 5, 2007, at 12:09 PM, Xiaoning Ding wrote: Hi,A page may be double buffered in PG's buffer pool and in OS's buffer cache.Other DBMS like DB2 and Oracle has provided Direct I/O option to eliminatedouble buffering. I noticed there were discusses on the list. ButI can not find similar option in PG. Does PG support direct I/O now?The tuning guide of PG usually recommends a small shared buffer pool(comparedto the size of physical memory).  I think it is to avoid swapping. Ifthere wereswapping, OS kernel may swap out some pages in PG's buffer pool even PGwant to keep them in memory. i.e. PG would loose full control overbuffer pool.A large buffer pool is not good because it may1. cause more pages double buffered, and thus decrease the efficiency ofbuffercache and buffer pool.2. may cause swapping.Am I right?If PG's buffer pool is small compared with physical memory, can I saythat thehit ratio of PG's buffer pool is not so meaningful because most missescan besatisfied by OS Kernel's buffer cache?Thanks! To the best of my knowledge, Postgres itself does not have a direct IO option (although it would be a good addition).  So, in order to use direct IO with postgres you'll need to consult your filesystem docs for how to set the forcedirectio mount option.  I believe it can be set dynamically, but if you want it to be permanent you'll to add it to your fstab/vfstab file. I use Linux.  It supports direct I/O on a per-file basis only.  To bypass OS buffer cache,files should be opened with O_DIRECT option.  I afraid that I have to modify PG.XiaoningLooks like it.  I just did a cursory search of the archives and it seems that others have looked at this before so you'll probably want to start there if your up to it. 
erik jones <[email protected]>software developer615-296-0838emma(r)", "msg_date": "Thu, 5 Apr 2007 13:47:40 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: a question about Direct I/O and double buffering" }, { "msg_contents": "On 4/5/07, Erik Jones <[email protected]> wrote:\n>\n> On Apr 5, 2007, at 1:22 PM, Xiaoning Ding wrote:\n>\n> Erik Jones wrote:\n> On Apr 5, 2007, at 12:09 PM, Xiaoning Ding wrote:\n> Hi,\n>\n> A page may be double buffered in PG's buffer pool and in OS's buffer cache.\n> Other DBMS like DB2 and Oracle has provided Direct I/O option to eliminate\n> double buffering. I noticed there were discusses on the list. But\n> I can not find similar option in PG. Does PG support direct I/O now?\n>\n> The tuning guide of PG usually recommends a small shared buffer pool\n> (compared\n> to the size of physical memory). I think it is to avoid swapping. If\n> there were\n> swapping, OS kernel may swap out some pages in PG's buffer pool even PG\n> want to keep them in memory. i.e. PG would loose full control over\n> buffer pool.\n> A large buffer pool is not good because it may\n> 1. cause more pages double buffered, and thus decrease the efficiency of\n> buffer\n> cache and buffer pool.\n> 2. may cause swapping.\n> Am I right?\n>\n> If PG's buffer pool is small compared with physical memory, can I say\n> that the\n> hit ratio of PG's buffer pool is not so meaningful because most misses\n> can be\n> satisfied by OS Kernel's buffer cache?\n>\n> Thanks!\n> To the best of my knowledge, Postgres itself does not have a direct IO\n> option (although it would be a good addition). So, in order to use direct\n> IO with postgres you'll need to consult your filesystem docs for how to set\n> the forcedirectio mount option. I believe it can be set dynamically, but if\n> you want it to be permanent you'll to add it to your fstab/vfstab file.\n>\n> I use Linux. It supports direct I/O on a per-file basis only. To bypass OS\n> buffer cache,\n> files should be opened with O_DIRECT option. I afraid that I have to modify\n> PG.\n>\n> Xiaoning\n> Looks like it. I just did a cursory search of the archives and it seems\n> that others have looked at this before so you'll probably want to start\n> there if your up to it.\n>\n\nLinux used to have (still does?) a RAW interface which might also be\nuseful. I think the original code was contributed by oracle so they\ncould support direct IO.\n\nAlex\n", "msg_date": "Thu, 5 Apr 2007 14:56:13 -0400", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: a question about Direct I/O and double buffering" }, { "msg_contents": "On Apr 5, 2007, at 1:27 PM, Mark Lewis wrote:\n> On Thu, 2007-04-05 at 13:09 -0500, Erik Jones wrote:\n>> On Apr 5, 2007, at 12:09 PM, Xiaoning Ding wrote:\n>>\n>>> Hi,\n>>>\n>>>\n>>> A page may be double buffered in PG's buffer pool and in OS's buffer\n>>> cache.\n>>> Other DBMS like DB2 and Oracle has provided Direct I/O option to\n>>> eliminate\n>>> double buffering. I noticed there were discusses on the list. But\n>>> I can not find similar option in PG. Does PG support direct I/O now?\n>>>\n>>>\n>>> The tuning guide of PG usually recommends a small shared buffer pool\n>>> (compared\n>>> to the size of physical memory). I think it is to avoid swapping.\n>>> If\n>>> there were\n>>> swapping, OS kernel may swap out some pages in PG's buffer pool even\n>>> PG\n>>> want to keep them in memory. i.e. 
PG would loose full control over\n>>> buffer pool.\n>>> A large buffer pool is not good because it may\n>>> 1. cause more pages double buffered, and thus decrease the\n>>> efficiency of\n>>> buffer\n>>> cache and buffer pool.\n>>> 2. may cause swapping.\n>>> Am I right?\n>>>\n>>>\n>>> If PG's buffer pool is small compared with physical memory, can I\n>>> say\n>>> that the\n>>> hit ratio of PG's buffer pool is not so meaningful because most\n>>> misses\n>>> can be\n>>> satisfied by OS Kernel's buffer cache?\n>>>\n>>>\n>>> Thanks!\n>>\n>>\n>> To the best of my knowledge, Postgres itself does not have a \n>> direct IO\n>> option (although it would be a good addition). So, in order to use\n>> direct IO with postgres you'll need to consult your filesystem docs\n>> for how to set the forcedirectio mount option. I believe it can be\n>> set dynamically, but if you want it to be permanent you'll to add it\n>> to your fstab/vfstab file.\n\n> Not to hijack this thread, but has anybody here tested the behavior of\n> PG on a file system with OS-level caching disabled via \n> forcedirectio or\n> by using an inherently non-caching file system such as ocfs2?\n>\n> I've been thinking about trying this setup to avoid double-caching now\n> that the 8.x series scales shared buffers better, but I figured I'd \n> ask\n> first if anybody here had experience with similar configurations.\n>\n> -- Mark\n\nRather than repeat everything that was said just last week, I'll \npoint out that we just had a pretty decent discusson on this last \nweek that I started, so check the archives. In summary though, if \nyou have a high io transaction load with a db where the average size \nof your \"working set\" of data doesn't fit in memory with room to \nspare, then direct io can be a huge plus, otherwise you probably \nwon't see much of a difference. I have yet to hear of anybody \nactually seeing any degradation in the db performance from it. In \naddition, while it doesn't bother me, I'd watch the top posting as \nsome people get pretty religious about (I moved your comments down).\n\nerik jones <[email protected]>\nsoftware developer\n615-296-0838\nemma(r)\n\n\n\n\nOn Apr 5, 2007, at 1:27 PM, Mark Lewis wrote:On Thu, 2007-04-05 at 13:09 -0500, Erik Jones wrote: On Apr 5, 2007, at 12:09 PM, Xiaoning Ding wrote: Hi,A page may be double buffered in PG's buffer pool and in OS's buffercache.Other DBMS like DB2 and Oracle has provided Direct I/O option toeliminatedouble buffering. I noticed there were discusses on the list. ButI can not find similar option in PG. Does PG support direct I/O now?The tuning guide of PG usually recommends a small shared buffer pool(comparedto the size of physical memory).  I think it is to avoid swapping.Ifthere wereswapping, OS kernel may swap out some pages in PG's buffer pool evenPGwant to keep them in memory. i.e. PG would loose full control overbuffer pool.A large buffer pool is not good because it may1. cause more pages double buffered, and thus decrease theefficiency ofbuffercache and buffer pool.2. may cause swapping.Am I right?If PG's buffer pool is small compared with physical memory, can Isaythat thehit ratio of PG's buffer pool is not so meaningful because mostmissescan besatisfied by OS Kernel's buffer cache?Thanks! To the best of my knowledge, Postgres itself does not have a direct IOoption (although it would be a good addition).  So, in order to usedirect IO with postgres you'll need to consult your filesystem docsfor how to set the forcedirectio mount option.  
I believe it can beset dynamically, but if you want it to be permanent you'll to add itto your fstab/vfstab file.Not to hijack this thread, but has anybody here tested the behavior ofPG on a file system with OS-level caching disabled via forcedirectio orby using an inherently non-caching file system such as ocfs2?I've been thinking about trying this setup to avoid double-caching nowthat the 8.x series scales shared buffers better, but I figured I'd askfirst if anybody here had experience with similar configurations.-- MarkRather than repeat everything that was said just last week, I'll point out that we just had a pretty decent discusson on this last week that I started, so check the archives.  In summary though, if you have a high io transaction load with a db where the average size of your \"working set\" of data doesn't fit in memory with room to spare, then direct io can be a huge plus, otherwise you probably won't see much of a difference.  I have yet to hear of anybody actually seeing any degradation in the db performance from it.  In addition, while it doesn't bother me, I'd watch the top posting as some people get pretty religious about (I moved your comments down). erik jones <[email protected]>software developer615-296-0838emma(r)", "msg_date": "Thu, 5 Apr 2007 13:58:31 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: a question about Direct I/O and double buffering" }, { "msg_contents": "...\n[snipped for brevity]\n...\n> \n> > Not to hijack this thread, but has anybody here tested the behavior\n> > of\n> > PG on a file system with OS-level caching disabled via forcedirectio\n> > or\n> > by using an inherently non-caching file system such as ocfs2?\n> > \n> > \n> > I've been thinking about trying this setup to avoid double-caching\n> > now\n> > that the 8.x series scales shared buffers better, but I figured I'd\n> > ask\n> > first if anybody here had experience with similar configurations.\n> > \n> > \n> > -- Mark\n> \n> \n> Rather than repeat everything that was said just last week, I'll point\n> out that we just had a pretty decent discusson on this last week that\n> I started, so check the archives. In summary though, if you have a\n> high io transaction load with a db where the average size of your\n> \"working set\" of data doesn't fit in memory with room to spare, then\n> direct io can be a huge plus, otherwise you probably won't see much of\n> a difference. I have yet to hear of anybody actually seeing any\n> degradation in the db performance from it. In addition, while it\n> doesn't bother me, I'd watch the top posting as some people get pretty\n> religious about (I moved your comments down).\n\nI saw the thread, but my understanding from reading through it was that\nyou never fully tracked down the cause of the factor of 10 write volume\nmismatch, so I pretty much wrote it off as a data point for\nforcedirectio because of the unknowns. 
Did you ever figure out the\ncause of that?\n\n-- Mark Lewis\n", "msg_date": "Thu, 05 Apr 2007 12:56:12 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: a question about Direct I/O and double buffering" }, { "msg_contents": "On Apr 5, 2007, at 2:56 PM, Mark Lewis wrote:\n\n> ...\n> [snipped for brevity]\n> ...\n>>\n>>> Not to hijack this thread, but has anybody here tested the behavior\n>>> of\n>>> PG on a file system with OS-level caching disabled via forcedirectio\n>>> or\n>>> by using an inherently non-caching file system such as ocfs2?\n>>>\n>>>\n>>> I've been thinking about trying this setup to avoid double-caching\n>>> now\n>>> that the 8.x series scales shared buffers better, but I figured I'd\n>>> ask\n>>> first if anybody here had experience with similar configurations.\n>>>\n>>>\n>>> -- Mark\n>>\n>>\n>> Rather than repeat everything that was said just last week, I'll \n>> point\n>> out that we just had a pretty decent discusson on this last week that\n>> I started, so check the archives. In summary though, if you have a\n>> high io transaction load with a db where the average size of your\n>> \"working set\" of data doesn't fit in memory with room to spare, then\n>> direct io can be a huge plus, otherwise you probably won't see \n>> much of\n>> a difference. I have yet to hear of anybody actually seeing any\n>> degradation in the db performance from it. In addition, while it\n>> doesn't bother me, I'd watch the top posting as some people get \n>> pretty\n>> religious about (I moved your comments down).\n>\n> I saw the thread, but my understanding from reading through it was \n> that\n> you never fully tracked down the cause of the factor of 10 write \n> volume\n> mismatch, so I pretty much wrote it off as a data point for\n> forcedirectio because of the unknowns. Did you ever figure out the\n> cause of that?\n>\n> -- Mark Lewis\n\nNope. What we never tracked down was the factor of 10 drop in \ndatabase transactions, not disk transactions. The write volume was \nmost definitely due to the direct io setting -- writes are now being \ndone in terms of the system's block size where as before they were \nbeing done in terms of the the filesystem's cache page size (as it's \nin virtual memory). Basically, we do so many write transactions that \nthe fs cache was constantly paging.\n\nerik jones <[email protected]>\nsoftware developer\n615-296-0838\nemma(r)\n\n\n\n\nOn Apr 5, 2007, at 2:56 PM, Mark Lewis wrote:...[snipped for brevity]... Not to hijack this thread, but has anybody here tested the behaviorofPG on a file system with OS-level caching disabled via forcedirectioorby using an inherently non-caching file system such as ocfs2?I've been thinking about trying this setup to avoid double-cachingnowthat the 8.x series scales shared buffers better, but I figured I'daskfirst if anybody here had experience with similar configurations.-- Mark Rather than repeat everything that was said just last week, I'll pointout that we just had a pretty decent discusson on this last week thatI started, so check the archives.  In summary though, if you have ahigh io transaction load with a db where the average size of your\"working set\" of data doesn't fit in memory with room to spare, thendirect io can be a huge plus, otherwise you probably won't see much ofa difference.  I have yet to hear of anybody actually seeing anydegradation in the db performance from it.  
In addition, while itdoesn't bother me, I'd watch the top posting as some people get prettyreligious about (I moved your comments down). I saw the thread, but my understanding from reading through it was thatyou never fully tracked down the cause of the factor of 10 write volumemismatch, so I pretty much wrote it off as a data point forforcedirectio because of the unknowns.  Did you ever figure out thecause of that?-- Mark Lewis Nope.  What we never tracked down was the factor of 10 drop in database transactions, not disk transactions.  The write volume was most definitely due to the direct io setting -- writes are now being done in terms of the system's block size where as before they were being done in terms of the the filesystem's cache page size (as it's in virtual memory).  Basically, we do so many write transactions that the fs cache was constantly paging. erik jones <[email protected]>software developer615-296-0838emma(r)", "msg_date": "Thu, 5 Apr 2007 15:10:43 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: a question about Direct I/O and double buffering" }, { "msg_contents": "On Thu, 5 Apr 2007, Xiaoning Ding wrote:\n\n>>\n>> To the best of my knowledge, Postgres itself does not have a direct IO\n>> option (although it would be a good addition). So, in order to use direct\n>> IO with postgres you'll need to consult your filesystem docs for how to\n>> set the forcedirectio mount option. I believe it can be set dynamically,\n>> but if you want it to be permanent you'll to add it to your fstab/vfstab\n>> file.\n>\n> I use Linux. It supports direct I/O on a per-file basis only. To bypass OS \n> buffer cache,\n> files should be opened with O_DIRECT option. I afraid that I have to modify \n> PG.\n\nas someone who has been reading the linux-kernel mailing list for 10 \nyears, let me comment on this a bit.\n\nlinux does have a direct i/o option, but it has significant limits on when \nand how you cna use it (buffers must be 512byte aligned and multiples of \n512 bytes, things like that). Also, in many cases testing has shon that \nthere is a fairly significant performance hit for this, not a perfomance \ngain.\n\nwhat I think that postgres really needs is to add support for write \nbarriers (telling the OS to make shure that everything before the barrier \nis written to disk before anything after the barrier) I beleive that these \nare avaiable on SCSI drives, and on some SATA drives. this sort of \nsupport, along with appropriate async I/O support (which is probably going \nto end up being the 'syslets' or 'threadlets' stuff that's in the early \nexperimental stage, rather then the current aio API) has the potential to \nbe a noticable improvement.\n\nif you haven't followed the syslets discussion on the kernel list, \nthreadlets are an approach that basicly lets you turn any syscall into a \nasync interface (if the call doesn't block on anything you get the answer \nback immediatly, if it does block it gets turned into a async call by the \nkernel)\n\nsyslets are a way to combine multiple syscalls into a single call, \navoiding the user->system->user calling overhead for the additional calls. 
\n(it's also viewed as a way to do prototyping of possible new calls, if a \nsequence of syscalls end up being common enough the kernel devs will look \nat makeing a new, combined, syscall (for example lock, write, unlock could \nbe made into one if it's common enough and there's enough of a performance \ngain)\n\nDavid Lang\n", "msg_date": "Thu, 5 Apr 2007 13:33:49 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: a question about Direct I/O and double buffering" }, { "msg_contents": "Alex Deucher wrote:\n> On 4/5/07, Erik Jones <[email protected]> wrote:\n>>\n>> On Apr 5, 2007, at 1:22 PM, Xiaoning Ding wrote:\n>>\n>> Erik Jones wrote:\n>> On Apr 5, 2007, at 12:09 PM, Xiaoning Ding wrote:\n>> Hi,\n>>\n>> A page may be double buffered in PG's buffer pool and in OS's buffer \n>> cache.\n>> Other DBMS like DB2 and Oracle has provided Direct I/O option to \n>> eliminate\n>> double buffering. I noticed there were discusses on the list. But\n>> I can not find similar option in PG. Does PG support direct I/O now?\n>>\n>> The tuning guide of PG usually recommends a small shared buffer pool\n>> (compared\n>> to the size of physical memory). I think it is to avoid swapping. If\n>> there were\n>> swapping, OS kernel may swap out some pages in PG's buffer pool even PG\n>> want to keep them in memory. i.e. PG would loose full control over\n>> buffer pool.\n>> A large buffer pool is not good because it may\n>> 1. cause more pages double buffered, and thus decrease the efficiency of\n>> buffer\n>> cache and buffer pool.\n>> 2. may cause swapping.\n>> Am I right?\n>>\n>> If PG's buffer pool is small compared with physical memory, can I say\n>> that the\n>> hit ratio of PG's buffer pool is not so meaningful because most misses\n>> can be\n>> satisfied by OS Kernel's buffer cache?\n>>\n>> Thanks!\n>> To the best of my knowledge, Postgres itself does not have a direct IO\n>> option (although it would be a good addition). So, in order to use \n>> direct\n>> IO with postgres you'll need to consult your filesystem docs for how \n>> to set\n>> the forcedirectio mount option. I believe it can be set dynamically, \n>> but if\n>> you want it to be permanent you'll to add it to your fstab/vfstab file.\n>>\n>> I use Linux. It supports direct I/O on a per-file basis only. To \n>> bypass OS\n>> buffer cache,\n>> files should be opened with O_DIRECT option. I afraid that I have to \n>> modify\n>> PG.\n>>\n>> Xiaoning\n>> Looks like it. I just did a cursory search of the archives and it seems\n>> that others have looked at this before so you'll probably want to start\n>> there if your up to it.\n>>\n> \n> Linux used to have (still does?) a RAW interface which might also be\n> useful. I think the original code was contributed by oracle so they\n> could support direct IO.\n> \n> Alex\nI am more concerned with reads , and how to do direct I/O under Linux here.\nReading raw devices in linux bypasses OS buffer cache. 
But how can you\nmount a raw device( it is a character device) as a file system?\n\n Xiaoning\n", "msg_date": "Thu, 05 Apr 2007 16:58:29 -0400", "msg_from": "Xiaoning Ding <[email protected]>", "msg_from_op": true, "msg_subject": "Re: a question about Direct I/O and double buffering" }, { "msg_contents": "On 4/5/07, Xiaoning Ding <[email protected]> wrote:\n> Alex Deucher wrote:\n> > On 4/5/07, Erik Jones <[email protected]> wrote:\n> >>\n> >> On Apr 5, 2007, at 1:22 PM, Xiaoning Ding wrote:\n> >>\n> >> Erik Jones wrote:\n> >> On Apr 5, 2007, at 12:09 PM, Xiaoning Ding wrote:\n> >> Hi,\n> >>\n> >> A page may be double buffered in PG's buffer pool and in OS's buffer\n> >> cache.\n> >> Other DBMS like DB2 and Oracle has provided Direct I/O option to\n> >> eliminate\n> >> double buffering. I noticed there were discusses on the list. But\n> >> I can not find similar option in PG. Does PG support direct I/O now?\n> >>\n> >> The tuning guide of PG usually recommends a small shared buffer pool\n> >> (compared\n> >> to the size of physical memory). I think it is to avoid swapping. If\n> >> there were\n> >> swapping, OS kernel may swap out some pages in PG's buffer pool even PG\n> >> want to keep them in memory. i.e. PG would loose full control over\n> >> buffer pool.\n> >> A large buffer pool is not good because it may\n> >> 1. cause more pages double buffered, and thus decrease the efficiency of\n> >> buffer\n> >> cache and buffer pool.\n> >> 2. may cause swapping.\n> >> Am I right?\n> >>\n> >> If PG's buffer pool is small compared with physical memory, can I say\n> >> that the\n> >> hit ratio of PG's buffer pool is not so meaningful because most misses\n> >> can be\n> >> satisfied by OS Kernel's buffer cache?\n> >>\n> >> Thanks!\n> >> To the best of my knowledge, Postgres itself does not have a direct IO\n> >> option (although it would be a good addition). So, in order to use\n> >> direct\n> >> IO with postgres you'll need to consult your filesystem docs for how\n> >> to set\n> >> the forcedirectio mount option. I believe it can be set dynamically,\n> >> but if\n> >> you want it to be permanent you'll to add it to your fstab/vfstab file.\n> >>\n> >> I use Linux. It supports direct I/O on a per-file basis only. To\n> >> bypass OS\n> >> buffer cache,\n> >> files should be opened with O_DIRECT option. I afraid that I have to\n> >> modify\n> >> PG.\n> >>\n> >> Xiaoning\n> >> Looks like it. I just did a cursory search of the archives and it seems\n> >> that others have looked at this before so you'll probably want to start\n> >> there if your up to it.\n> >>\n> >\n> > Linux used to have (still does?) a RAW interface which might also be\n> > useful. I think the original code was contributed by oracle so they\n> > could support direct IO.\n> >\n> > Alex\n> I am more concerned with reads , and how to do direct I/O under Linux here.\n> Reading raw devices in linux bypasses OS buffer cache. But how can you\n> mount a raw device( it is a character device) as a file system?\n>\n\nIn this case, I guess you'd probably have to do it within pg itself.\n\nAlex\n", "msg_date": "Thu, 5 Apr 2007 17:06:09 -0400", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: a question about Direct I/O and double buffering" }, { "msg_contents": "On Apr 5, 2007, at 3:33 PM, [email protected] wrote:\n\n> On Thu, 5 Apr 2007, Xiaoning Ding wrote:\n>\n>>>\n>>> To the best of my knowledge, Postgres itself does not have a \n>>> direct IO\n>>> option (although it would be a good addition). 
So, in order to \n>>> use direct\n>>> IO with postgres you'll need to consult your filesystem docs for \n>>> how to\n>>> set the forcedirectio mount option. I believe it can be set \n>>> dynamically,\n>>> but if you want it to be permanent you'll to add it to your \n>>> fstab/vfstab\n>>> file.\n>>\n>> I use Linux. It supports direct I/O on a per-file basis only. To \n>> bypass OS buffer cache,\n>> files should be opened with O_DIRECT option. I afraid that I have \n>> to modify PG.\n>\n> as someone who has been reading the linux-kernel mailing list for \n> 10 years, let me comment on this a bit.\n>\n> linux does have a direct i/o option,\n\nYes, I know applications can request direct i/o with the O_DIRECT \nflag to open(), but can this be set to be forced for all applications \nor for individual applications from \"outside\" the application (not \nthat I've ever heard of something like the second)?\n\n> but it has significant limits on when and how you cna use it \n> (buffers must be 512byte aligned and multiples of 512 bytes, things \n> like that).\n\nThat's a standard limit imposed by the sector size of hard drives, \nand is present in all direct i/o implementations, not just Linux.\n\n> Also, in many cases testing has shon that there is a fairly \n> significant performance hit for this, not a perfomance gain.\n\nThose performance hits have been noticed for high i/o transaction \ndatabases? The idea here is that these kinds of database manage \ntheir own caches and having a separate filesystem cache in virtual \nmemory that works with system memory page sizes is an unneeded level \nof indirection. Yes, you should expect other \"normal\" utilities will \nsuffer a performance hit as if you are trying to cp a 500 byte file \nyou'll still have to work with 8K writes and reads whereas with the \nfilesystem cache you can just write/read part of a page in memory and \nlet the cache decide when it needs to write and read from disk. If \nthere are other caveats to direct i/o on Linux I'd love to hear them.\n\nerik jones <[email protected]>\nsoftware developer\n615-296-0838\nemma(r)\n\n\n\n\nOn Apr 5, 2007, at 3:33 PM, [email protected] wrote:On Thu, 5 Apr 2007, Xiaoning Ding wrote:  To the best of my knowledge, Postgres itself does not have a direct IO option (although it would be a good addition).  So, in order to use direct IO with postgres you'll need to consult your filesystem docs for how to set the forcedirectio mount option.  I believe it can be set dynamically, but if you want it to be permanent you'll to add it to your fstab/vfstab file. I use Linux.  It supports direct I/O on a per-file basis only.  To bypass OS buffer cache,files should be opened with O_DIRECT option.  I afraid that I have to modify PG. as someone who has been reading the linux-kernel mailing list for 10 years, let me comment on this a bit.linux does have a direct i/o option, Yes, I know applications can request direct i/o with the O_DIRECT flag to open(), but can this be set to be forced for all applications or for individual applications from \"outside\" the application (not that I've ever heard of something like the second)?but it has significant limits on when and how you cna use it (buffers must be 512byte aligned and multiples of 512 bytes, things like that). 
That's a standard limit imposed by the sector size of hard drives, and is present in all direct i/o implementations, not just Linux.Also, in many cases testing has shon that there is a fairly significant performance hit for this, not a perfomance gain.Those performance hits have been noticed for high i/o transaction databases?  The idea here is that these kinds of database manage their own caches and having a separate filesystem cache in virtual memory that works with system memory page sizes is an unneeded level of indirection.  Yes, you should expect other \"normal\" utilities will suffer a performance hit as if you are trying to cp a 500 byte file you'll still have to work with 8K writes and reads whereas with the filesystem cache you can just write/read part of a page in memory and let the cache decide when it needs to write and read from disk.  If there are other caveats to direct i/o on Linux I'd love to hear them. erik jones <[email protected]>software developer615-296-0838emma(r)", "msg_date": "Thu, 5 Apr 2007 16:39:12 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: a question about Direct I/O and double buffering" }, { "msg_contents": "On Thu, 5 Apr 2007, Xiaoning Ding wrote:\n\n>> > Xiaoning\n>> > Looks like it. I just did a cursory search of the archives and it seems\n>> > that others have looked at this before so you'll probably want to start\n>> > there if your up to it.\n>> >\n>>\n>> Linux used to have (still does?) a RAW interface which might also be\n>> useful. I think the original code was contributed by oracle so they\n>> could support direct IO.\n>>\n>> Alex\n> I am more concerned with reads , and how to do direct I/O under Linux here.\n> Reading raw devices in linux bypasses OS buffer cache.\n\nit also bypassed OS readahead, not nessasarily a win\n\n> But how can you\n> mount a raw device( it is a character device) as a file system?\n\nyou can do a makefs on /dev/hda just like you do on /dev/hda2 and then \nmount the result as a filesystem.\n\nPostgres wants the OS layer to provide the filesystem, Oracle implements \nit's own filesystem, so you would just point it at the drive/partition and \nit would do it's own 'formatting'\n\nthis is something that may be reasonable for postgres to consider doing \nsomeday, since postgres allocates things into 1m files and then keeps \ntrack of what filename is used for what, it could instead allocate things \nin 1m (or whatever size) chunks on the disk, and just keep track of what \naddresses are used for what instead of filenames. this would definantly \nallow you to work around problems like the ext2/3 indirect lookup \nproblems. now that the ability for partitioned table spaces it would be an \ninteresting experiment to be able to define a tablespace that used a raw \ndevice instead of a filesystem to see if there are any noticable \nperformance gains\n\nDavid Lang\n", "msg_date": "Thu, 5 Apr 2007 18:47:12 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: a question about Direct I/O and double buffering" }, { "msg_contents": "On Thu, 5 Apr 2007, Erik Jones wrote:\n\n> On Apr 5, 2007, at 3:33 PM, [email protected] wrote:\n>\n>> On Thu, 5 Apr 2007, Xiaoning Ding wrote:\n>> \n>> > > \n>> > > To the best of my knowledge, Postgres itself does not have a direct IO\n>> > > option (although it would be a good addition). 
So, in order to use \n>> > > direct\n>> > > IO with postgres you'll need to consult your filesystem docs for how \n>> > > to\n>> > > set the forcedirectio mount option. I believe it can be set \n>> > > dynamically,\n>> > > but if you want it to be permanent you'll to add it to your \n>> > > fstab/vfstab\n>> > > file.\n>> > \n>> > I use Linux. It supports direct I/O on a per-file basis only. To bypass \n>> > OS buffer cache,\n>> > files should be opened with O_DIRECT option. I afraid that I have to \n>> > modify PG.\n>> \n>> as someone who has been reading the linux-kernel mailing list for 10 years, \n>> let me comment on this a bit.\n>> \n>> linux does have a direct i/o option,\n>\n> Yes, I know applications can request direct i/o with the O_DIRECT flag to \n> open(), but can this be set to be forced for all applications or for \n> individual applications from \"outside\" the application (not that I've ever \n> heard of something like the second)?\n\nno it can't, due to the fact that direct i/o has additional requirements \nfor what you can user for buffers that don't apply to normal i/o\n\n>> but it has significant limits on when and how you cna use it (buffers must \n>> be 512byte aligned and multiples of 512 bytes, things like that).\n>\n> That's a standard limit imposed by the sector size of hard drives, and is \n> present in all direct i/o implementations, not just Linux.\n\nright, but you don't have those limits for normal i/o\n\n>> Also, in many cases testing has shon that there is a fairly significant \n>> performance hit for this, not a perfomance gain.\n>\n> Those performance hits have been noticed for high i/o transaction databases? \n> The idea here is that these kinds of database manage their own caches and \n> having a separate filesystem cache in virtual memory that works with system \n> memory page sizes is an unneeded level of indirection.\n\nahh, you're proposing a re-think of how postgres interacts with the O/S, \nnot just an optimization to be applied to the current architecture.\n\nunlike Oracle, Postgres doesn't try to be an OS itself, it tries very hard \nto rely on the OS to properly implement things rather then doing it's own \nimplementation.\n\n> Yes, you should \n> expect other \"normal\" utilities will suffer a performance hit as if you are \n> trying to cp a 500 byte file you'll still have to work with 8K writes and \n> reads whereas with the filesystem cache you can just write/read part of a \n> page in memory and let the cache decide when it needs to write and read from \n> disk. If there are other caveats to direct i/o on Linux I'd love to hear \n> them.\n\nother then bad interactions with \"normal\" utilities not compiled for \ndriect i/o I don't remember them offhand.\n\nDavid Lang\n", "msg_date": "Thu, 5 Apr 2007 21:33:58 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: a question about Direct I/O and double buffering" }, { "msg_contents": "On Thu, Apr 05, 2007 at 03:10:43PM -0500, Erik Jones wrote:\n> Nope. What we never tracked down was the factor of 10 drop in \n> database transactions, not disk transactions. The write volume was \n> most definitely due to the direct io setting -- writes are now being \n> done in terms of the system's block size where as before they were \n> being done in terms of the the filesystem's cache page size (as it's \n> in virtual memory). Basically, we do so many write transactions that \n> the fs cache was constantly paging.\n\nDid you try decreasing the size of the cache pages? 
I didn't realize\nthat Solaris used a different size for cache pages and filesystem\nblocks. Perhaps the OS was also being too aggressive with read-aheads?\n\nMy concern is that you're essentially leaving a lot of your memory\nunused this way, since shared_buffers is only set to 1.6G.\n\nBTW, did you ever increase the parameter that controls how much memory\nSolaris will use for filesystem caching?\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n", "msg_date": "Wed, 18 Apr 2007 13:07:59 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: a question about Direct I/O and double buffering" } ]
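A footnote on the buffer-pool hit-ratio question raised at the top of this thread: with a deliberately small shared_buffers, a miss in Postgres's own pool is often still a hit in the kernel's page cache, so the pool's hit ratio alone understates how well cached the data really is. The block-level counters are still worth watching, though, especially before and after switching to direct I/O. A minimal sketch using the standard statistics views (on 7.4/8.2 the *_blks_* counters are only populated when stats_block_level is on):

-- Database-wide ratio of buffer-pool hits to reads handed to the OS/disk.
SELECT datname, blks_hit, blks_read,
       round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 3) AS pool_hit_ratio
FROM pg_stat_database;

-- Per-table view of which relations generate the most reads that fall
-- through to the kernel cache or disk.
SELECT relname, heap_blks_hit, heap_blks_read
FROM pg_statio_user_tables
ORDER BY heap_blks_read DESC
LIMIT 10;

Comparing blks_read here against what iostat reports for the underlying device gives a rough idea of how much of that "miss" traffic the OS cache is actually absorbing, which is exactly the portion that disappears under forcedirectio/O_DIRECT.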
[ { "msg_contents": "Hi,\n\nI've looked through the README in the contrib/adminpack dir but it doesn't\nreally say what these functions do and how to use them....\n\n\nAny help?\n\nThanks\n\nmiguel\n", "msg_date": "Thu, 5 Apr 2007 14:01:54 -0400", "msg_from": "\"Michael Dengler\" <[email protected]>", "msg_from_op": true, "msg_subject": "What do the adminpack functions do? (8.2.3)" }, { "msg_contents": "Michael Dengler wrote:\n> Hi,\n> \n> I've looked through the README in the contrib/adminpack dir but it \n> doesn't really say what these functions do and how to use them....\nIt is for use with pgAdmin.\n\nJ\n\n\n> \n> \n> Any help?\n> \n> Thanks\n> \n> miguel\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n", "msg_date": "Thu, 05 Apr 2007 11:07:01 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What do the adminpack functions do? (8.2.3)" } ]
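To expand slightly on the answer above: the adminpack functions are server-side helpers that pgAdmin's administration GUI calls for things like editing configuration files and reading or rotating log files, rather than something applications normally call directly. The list of what actually got installed can be pulled from the catalogs, and the example calls below assume the usual 8.2 contrib signatures; the authoritative reference is the adminpack.sql script that creates them, so treat the signatures here as recollection rather than gospel:

-- Safe on any version: simply lists whichever of these names the module installed.
SELECT proname, pronargs
FROM pg_catalog.pg_proc
WHERE proname IN ('pg_file_write', 'pg_file_rename', 'pg_file_unlink',
                  'pg_file_read', 'pg_file_length',
                  'pg_logdir_ls', 'pg_logfile_rotate');

-- Hedged example calls (superuser only; paths are generally confined to the
-- cluster's data/log directories):
SELECT pg_catalog.pg_file_write('adminpack_test.txt', 'written via adminpack', false);
SELECT pg_catalog.pg_logfile_rotate();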
[ { "msg_contents": "I have a query hitting a view that takes 0.15s on 8.1 but 6s on 8.2.3:\n\n select party_id from clan_members_v cm, clans c\n where cm.clan_id = c.id\n and c.type = 'standard'\n\nThe problem seems to be that clan_members_v contains a call to an\nexpensive function:\n\ncreate or replace view clan_members_v as\nselect cm.clan_id, cm.user_id, cp.party_id, cm.date_accepted,\n p.name as party_name, p_tp_total(p.id)::int as tp_total\nfrom clan_members cm, clan_participants cp, parties p\nwhere cm.user_id = p.user_id\n and p.id = cp.party_id\n;\n\np_tp_total takes around 50ms per row.\n\nIf I create clan_members_v without the function call, the original\nquery's speed goes to the 150ms range on 8.2 as well.\n\nIs this a regression, or a \"feature\" of 8.2?\n", "msg_date": "Thu, 5 Apr 2007 12:46:08 -0600", "msg_from": "\"Jonathan Ellis\" <[email protected]>", "msg_from_op": true, "msg_subject": "Premature view materialization in 8.2?" }, { "msg_contents": "\"Jonathan Ellis\" <[email protected]> writes:\n> I have a query hitting a view that takes 0.15s on 8.1 but 6s on 8.2.3:\n> ...\n> Is this a regression, or a \"feature\" of 8.2?\n\nHard to say without EXPLAIN ANALYZE output to compare.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Apr 2007 01:43:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Premature view materialization in 8.2? " }, { "msg_contents": "On 4/5/07, Tom Lane <[email protected]> wrote:\n> \"Jonathan Ellis\" <[email protected]> writes:\n> > I have a query hitting a view that takes 0.15s on 8.1 but 6s on 8.2.3:\n> > ...\n> > Is this a regression, or a \"feature\" of 8.2?\n>\n> Hard to say without EXPLAIN ANALYZE output to compare.\n\nTo my eye they are identical other than the speed but perhaps I am\nmissing something.\n\n8.2:\n\n Hash Join (cost=91.94..560.71 rows=259 width=4) (actual\ntime=22.120..6388.754 rows=958 loops=1)\n Hash Cond: (cm.clan_id = c.id)\n -> Hash Join (cost=75.34..536.90 rows=336 width=36) (actual\ntime=19.542..6375.827 rows=1298 loops=1)\n Hash Cond: (p.user_id = cm.user_id)\n -> Hash Join (cost=36.32..487.94 rows=1303 width=24)\n(actual time=9.019..95.583 rows=1299 loops=1)\n Hash Cond: (p.id = cp.party_id)\n -> Seq Scan on parties p (cost=0.00..397.52\nrows=10952 width=20) (actual time=0.013..40.558 rows=10952 loops=1)\n -> Hash (cost=20.03..20.03 rows=1303 width=4) (actual\ntime=8.545..8.545 rows=1299 loops=1)\n -> Seq Scan on clan_participants cp\n(cost=0.00..20.03 rows=1303 width=4) (actual time=0.013..4.063\nrows=1299 loops=1)\n -> Hash (cost=22.90..22.90 rows=1290 width=16) (actual\ntime=8.748..8.748 rows=1294 loops=1)\n -> Seq Scan on clan_members cm (cost=0.00..22.90\nrows=1290 width=16) (actual time=0.013..4.307 rows=1294 loops=1)\n -> Hash (cost=11.99..11.99 rows=369 width=4) (actual\ntime=2.550..2.550 rows=368 loops=1)\n -> Seq Scan on clans c (cost=0.00..11.99 rows=369 width=4)\n(actual time=0.025..1.341 rows=368 loops=1)\n Filter: ((\"type\")::text = 'standard'::text)\n Total runtime: 6391.999 ms\n\n\n8.1:\n\n Hash Join (cost=62.37..681.10 rows=254 width=4) (actual\ntime=25.316..138.613 rows=967 loops=1)\n Hash Cond: (\"outer\".clan_id = \"inner\".id)\n -> Hash Join (cost=49.46..664.00 rows=331 width=8) (actual\ntime=21.331..126.194 rows=1305 loops=1)\n Hash Cond: (\"outer\".user_id = \"inner\".user_id)\n -> Hash Join (cost=23.32..628.02 rows=1306 width=8) (actual\ntime=10.674..105.352 rows=1306 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".party_id)\n -> Seq Scan on 
parties p (cost=0.00..537.09\nrows=10909 width=8) (actual time=0.018..49.754 rows=10855 loops=1)\n -> Hash (cost=20.06..20.06 rows=1306 width=4) (actual\ntime=10.334..10.334 rows=1306 loops=1)\n -> Seq Scan on clan_participants cp\n(cost=0.00..20.06 rows=1306 width=4) (actual time=0.020..5.172\nrows=1306 loops=1)\n -> Hash (cost=22.91..22.91 rows=1291 width=8) (actual\ntime=10.621..10.621 rows=1291 loops=1)\n -> Seq Scan on clan_members cm (cost=0.00..22.91\nrows=1291 width=8) (actual time=0.019..5.381 rows=1291 loops=1)\n -> Hash (cost=11.99..11.99 rows=368 width=4) (actual\ntime=3.834..3.834 rows=368 loops=1)\n -> Seq Scan on clans c (cost=0.00..11.99 rows=368 width=4)\n(actual time=0.043..2.373 rows=368 loops=1)\n Filter: ((\"type\")::text = 'standard'::text)\n Total runtime: 142.209 ms\n", "msg_date": "Thu, 5 Apr 2007 23:51:29 -0600", "msg_from": "\"Jonathan Ellis\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Premature view materialization in 8.2?" }, { "msg_contents": "\"Jonathan Ellis\" <[email protected]> writes:\n> On 4/5/07, Tom Lane <[email protected]> wrote:\n>>> Is this a regression, or a \"feature\" of 8.2?\n>> \n>> Hard to say without EXPLAIN ANALYZE output to compare.\n\n> To my eye they are identical other than the speed but perhaps I am\n> missing something.\n\nYeah, it sure is the same plan, and 8.2 seems to be a tad faster right\nup to the hash join on user_id. Is user_id a textual datatype? I'm\nwondering if the 8.2 installation is using a different locale --- the\nspeed of simple string comparisons can be horrifically worse in some\nlocales compared to others.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Apr 2007 02:23:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Premature view materialization in 8.2? " }, { "msg_contents": "On 4/6/07, Tom Lane <[email protected]> wrote:\n> \"Jonathan Ellis\" <[email protected]> writes:\n> > On 4/5/07, Tom Lane <[email protected]> wrote:\n> >>> Is this a regression, or a \"feature\" of 8.2?\n> >>\n> >> Hard to say without EXPLAIN ANALYZE output to compare.\n>\n> > To my eye they are identical other than the speed but perhaps I am\n> > missing something.\n>\n> Yeah, it sure is the same plan, and 8.2 seems to be a tad faster right\n> up to the hash join on user_id. Is user_id a textual datatype? I'm\n> wondering if the 8.2 installation is using a different locale --- the\n> speed of simple string comparisons can be horrifically worse in some\n> locales compared to others.\n\nuser_id is an int; they are both C locale.\n", "msg_date": "Fri, 6 Apr 2007 08:24:58 -0600", "msg_from": "\"Jonathan Ellis\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Premature view materialization in 8.2?" }, { "msg_contents": "\"Jonathan Ellis\" <[email protected]> writes:\n> On 4/6/07, Tom Lane <[email protected]> wrote:\n>> Yeah, it sure is the same plan, and 8.2 seems to be a tad faster right\n>> up to the hash join on user_id. Is user_id a textual datatype?\n\n> user_id is an int; they are both C locale.\n\nReally!? So much for that theory.\n\nIs work_mem set similarly on both installations?\n\nThe only other thing I can think is that you've exposed some unfortunate\ncorner case in the hash join logic. Would you be willing to send me\n(off-list) the lists of user_ids being joined? 
That would be the\nclan_members.user_id column and the user_id column from the join of\nparties and clan_participants.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Apr 2007 13:40:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Premature view materialization in 8.2? " }, { "msg_contents": "On 4/6/07, Tom Lane <[email protected]> wrote:\n> \"Jonathan Ellis\" <[email protected]> writes:\n> > On 4/6/07, Tom Lane <[email protected]> wrote:\n> >> Yeah, it sure is the same plan, and 8.2 seems to be a tad faster right\n> >> up to the hash join on user_id. Is user_id a textual datatype?\n>\n> > user_id is an int; they are both C locale.\n>\n> Really!? So much for that theory.\n\nYeah, this db goes back to 7.0 so I've been careful to keep the locale\nset to C to avoid surprises.\n\n> Is work_mem set similarly on both installations?\n\nwork_mem is 8MB on 8.2; work_mem is 1MB and sort_mem is 8MB on 8.1.\n(there's no disk io going on with the 8.2 installation either, so it's\nnot swapping or anything like that.)\n\n> The only other thing I can think is that you've exposed some unfortunate\n> corner case in the hash join logic. Would you be willing to send me\n> (off-list) the lists of user_ids being joined? That would be the\n> clan_members.user_id column and the user_id column from the join of\n> parties and clan_participants.\n\nI can do that... you don't think the fact I mentioned, that\nredefining the view to leave out the expensive function fixes the\nproblem, is relevant?\n", "msg_date": "Fri, 6 Apr 2007 11:57:27 -0600", "msg_from": "\"Jonathan Ellis\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Premature view materialization in 8.2?" }, { "msg_contents": "\"Jonathan Ellis\" <[email protected]> writes:\n> I can do that... you don't think the fact I mentioned, that\n> redefining the view to leave out the expensive function fixes the\n> problem, is relevant?\n\nHm, I'd not have thought that an expensive function would get evaluated\npartway up the join tree, but maybe that's wrong. You never did show\nus the actual view definition ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Apr 2007 14:00:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Premature view materialization in 8.2? " }, { "msg_contents": "On 4/6/07, Tom Lane <[email protected]> wrote:\n> \"Jonathan Ellis\" <[email protected]> writes:\n> > I can do that... you don't think the fact I mentioned, that\n> > redefining the view to leave out the expensive function fixes the\n> > problem, is relevant?\n>\n> Hm, I'd not have thought that an expensive function would get evaluated\n> partway up the join tree, but maybe that's wrong. 
You never did show\n> us the actual view definition ...\n\nIt was in my original post unless it got clipped:\n\nThe problem seems to be that clan_members_v contains a call to an\nexpensive function:\n\ncreate or replace view clan_members_v as\nselect cm.clan_id, cm.user_id, cp.party_id, cm.date_accepted,\n p.name as party_name, p_tp_total(p.id)::int as tp_total\nfrom clan_members cm, clan_participants cp, parties p\nwhere cm.user_id = p.user_id\nand p.id = cp.party_id\n;\n\np_tp_total takes around 50ms per row.\n\nIf I create clan_members_v without the function call, the original\nquery's speed goes to the 150ms range on 8.2 as well.\n", "msg_date": "Fri, 6 Apr 2007 12:03:50 -0600", "msg_from": "\"Jonathan Ellis\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Premature view materialization in 8.2?" }, { "msg_contents": "\"Jonathan Ellis\" <[email protected]> writes:\n> It was in my original post unless it got clipped:\n\nSorry, I had forgotten.\n\n> The problem seems to be that clan_members_v contains a call to an\n> expensive function:\n\nI'll bet that the function is marked VOLATILE. 8.2 is more conservative\nabout optimizing away volatile functions than previous releases. If\nit has no side effects, mark it STABLE (or can it even be IMMUTABLE?).\n\nIn some quick testing, I verified that 8.2 does evaluate the function at\nthe join level corresponding to the view's join (and I think this is\npreventing it from considering other join orders, too). If you change\nthe function's marking to be nonvolatile then the function disappears\nfrom the plan entirely, and also it seems to prefer joining \"clans\" sooner.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Apr 2007 14:34:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Premature view materialization in 8.2? " }, { "msg_contents": "On 4/6/07, Tom Lane <[email protected]> wrote:\n> > The problem seems to be that clan_members_v contains a call to an\n> > expensive function:\n>\n> I'll bet that the function is marked VOLATILE. 8.2 is more conservative\n> about optimizing away volatile functions than previous releases. If\n> it has no side effects, mark it STABLE (or can it even be IMMUTABLE?).\n\nThat's exactly right, it should have been STABLE.\n\nThanks a lot for figuring that out for me!\n", "msg_date": "Fri, 6 Apr 2007 14:26:49 -0600", "msg_from": "\"Jonathan Ellis\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Premature view materialization in 8.2?" } ]
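For anyone who hits the same symptom: the resolution above comes down to declaring the function's volatility correctly, so the 8.2 planner is again free to drop the unreferenced call and to consider other join orders. A minimal sketch, with a stand-in body since the real p_tp_total isn't shown in this thread:

-- party_scores and the exact signature are placeholders; only the STABLE
-- marking at the end is the point.
CREATE OR REPLACE FUNCTION p_tp_total(p_id integer) RETURNS integer AS $$
    SELECT coalesce(sum(tp), 0)::integer FROM party_scores WHERE party_id = $1;
$$ LANGUAGE sql STABLE;

-- If the body is already right, newer releases (8.1 and later, if memory
-- serves) can change just the volatility in place:
-- ALTER FUNCTION p_tp_total(integer) STABLE;

STABLE is the right marking for a function that only reads the database within a single statement; IMMUTABLE would additionally require the result to depend on nothing but the arguments.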
[ { "msg_contents": "Hello All\t\t\n\n\tI sent this message to the admin list and it never got through so I\nam trying the performance list.\n\tWe moved our application to a new machine last night. It is a Dell\nPowerEdge 6950 2X Dual Core. AMD Opteron 8214 2.2Ghz. 8GB Memory. The\nmachine is running Redhat AS 4 Upd 4 and Redhat Cluster Suite. The SAN is an\nEMC SAS connected via fibre. We are using Postgres 7.4.16. We have recently\nhad some major hardware issues and replaced the hardware with brand new Dell\nequipment. We expected a major performance increase over the previous being\nthe old equipment was nearly three years old\n \tI will try and explain how things are configured. We have 10\nseparate postmasters running 5 on each node. Each of the postmasters is a\nsingle instance of each database. Each database is separated by division and\nalso we have them separate so we can restart an postmaster with needing to\nrestart all databases My largest database is about 7 GB. And the others run\nanywhere from 100MB - 1.8GB. \n\tThe other configuration was RHEL3 and Postgres 7.4.13 and Redhat\nCluster Suite. The application seemed to run much faster on the older\nequipment. \n\tMy thoughts on the issues are that I could be something with the OS\ntuning. Here is what my kernel.shmmax, kernel.shmall = 1073741824. Is there\nsomething else that I could tune in the OS. My max_connections=35 and shared\nbuffers=8192 for my largest database.\n\nThanks\n\n\n\n", "msg_date": "Thu, 5 Apr 2007 15:06:14 -0400", "msg_from": "\"John Allgood\" <[email protected]>", "msg_from_op": true, "msg_subject": "High Load on Postgres 7.4.16 Server" }, { "msg_contents": "On Thu, 5 Apr 2007, John Allgood wrote:\n\n> Hello All\n>\n> \tI sent this message to the admin list and it never got through so I\n> am trying the performance list.\n> \tWe moved our application to a new machine last night. It is a Dell\n> PowerEdge 6950 2X Dual Core. AMD Opteron 8214 2.2Ghz. 8GB Memory. The\n> machine is running Redhat AS 4 Upd 4 and Redhat Cluster Suite. The SAN is an\n> EMC SAS connected via fibre. We are using Postgres 7.4.16. We have recently\n> had some major hardware issues and replaced the hardware with brand new Dell\n> equipment. We expected a major performance increase over the previous being\n> the old equipment was nearly three years old\n> \tI will try and explain how things are configured. We have 10\n> separate postmasters running 5 on each node. Each of the postmasters is a\n> single instance of each database. Each database is separated by division and\n> also we have them separate so we can restart an postmaster with needing to\n> restart all databases My largest database is about 7 GB. And the others run\n> anywhere from 100MB - 1.8GB.\n> \tThe other configuration was RHEL3 and Postgres 7.4.13 and Redhat\n> Cluster Suite. The application seemed to run much faster on the older\n> equipment.\n> \tMy thoughts on the issues are that I could be something with the OS\n> tuning. Here is what my kernel.shmmax, kernel.shmall = 1073741824. Is there\n> something else that I could tune in the OS. My max_connections=35 and shared\n> buffers=8192 for my largest database.\n\nJohn,\n\nWas the SAN connected to the previous machine or is it also a new addition \nwith the Dell hardware? We had a fairly recent post regarding a similar \nupgrade in which the SAN ended up being the problem, so the first thing I \nwould do is test the SAN with bonnie-++ and/or move your application to use a \nlocal disk and test again. 
With 8GB of RAM, I'd probably set the \nshared_buffers to at least 50000...If I remember correctly, this was the most \nyou could set it to on 7.4.x and continue benefitting from it. I'd strongly \nencourage you to upgrade to at least 8.1.8 (and possibly 8.2.3) if you can, as \nit has much better shared memory management. You might also want to double \ncheck your effective_cache_size and random_page_cost to see if they are set to \nreasonable values. Did you just copy the old postgresql.conf over?\n\nThis is the beginning of the thread I mentioned above:\n\nhttp://archives.postgresql.org/pgsql-performance/2007-03/msg00104.php\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n", "msg_date": "Thu, 5 Apr 2007 12:23:33 -0700 (PDT)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High Load on Postgres 7.4.16 Server" }, { "msg_contents": "John Allgood wrote:\n> Hello All\t\t\n>\n> \tI sent this message to the admin list and it never got through so I\n> am trying the performance list.\n> \tWe moved our application to a new machine last night. It is a Dell\n> PowerEdge 6950 2X Dual Core. AMD Opteron 8214 2.2Ghz. 8GB Memory. The\n> machine is running Redhat AS 4 Upd 4 and Redhat Cluster Suite. The SAN is an\n> EMC SAS connected via fibre. We are using Postgres 7.4.16. We have recently\n> had some major hardware issues and replaced the hardware with brand new Dell\n> equipment. We expected a major performance increase over the previous being\n> the old equipment was nearly three years old\n> \tI will try and explain how things are configured. We have 10\n> separate postmasters running 5 on each node. Each of the postmasters is a\n> single instance of each database. Each database is separated by division and\n> also we have them separate so we can restart an postmaster with needing to\n> restart all databases My largest database is about 7 GB. And the others run\n> anywhere from 100MB - 1.8GB. \n> \tThe other configuration was RHEL3 and Postgres 7.4.13 and Redhat\n> Cluster Suite. The application seemed to run much faster on the older\n> equipment. \n> \tMy thoughts on the issues are that I could be something with the OS\n> tuning. Here is what my kernel.shmmax, kernel.shmall = 1073741824. Is there\n> something else that I could tune in the OS. My max_connections=35 and shared\n> buffers=8192 for my largest database.\n> \nUpdate to 8.x.x at least\n\nC.\n", "msg_date": "Thu, 05 Apr 2007 22:28:23 +0300", "msg_from": "=?ISO-8859-1?Q?=22C=2E_Bergstr=F6m=22?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High Load on Postgres 7.4.16 Server" }, { "msg_contents": "The hard thing about running multiple postmasters is that you have to tune\neach one separate. Most of the databases I have limited the max-connections\nto 30-50 depending on the database. What would reasonable values for \neffective_cache_size and random_page_cost. 
I think I have these default.\nAlso what about kernel buffers on RHEL4.\n\nThanks\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Jeff Frost\nSent: Thursday, April 05, 2007 3:24 PM\nTo: John Allgood\nCc: [email protected]\nSubject: Re: [PERFORM] High Load on Postgres 7.4.16 Server\n\nOn Thu, 5 Apr 2007, John Allgood wrote:\n\n> Hello All\n>\n> \tI sent this message to the admin list and it never got through so I\n> am trying the performance list.\n> \tWe moved our application to a new machine last night. It is a Dell\n> PowerEdge 6950 2X Dual Core. AMD Opteron 8214 2.2Ghz. 8GB Memory. The\n> machine is running Redhat AS 4 Upd 4 and Redhat Cluster Suite. The SAN is\nan\n> EMC SAS connected via fibre. We are using Postgres 7.4.16. We have\nrecently\n> had some major hardware issues and replaced the hardware with brand new\nDell\n> equipment. We expected a major performance increase over the previous\nbeing\n> the old equipment was nearly three years old\n> \tI will try and explain how things are configured. We have 10\n> separate postmasters running 5 on each node. Each of the postmasters is a\n> single instance of each database. Each database is separated by division\nand\n> also we have them separate so we can restart an postmaster with needing to\n> restart all databases My largest database is about 7 GB. And the others\nrun\n> anywhere from 100MB - 1.8GB.\n> \tThe other configuration was RHEL3 and Postgres 7.4.13 and Redhat\n> Cluster Suite. The application seemed to run much faster on the older\n> equipment.\n> \tMy thoughts on the issues are that I could be something with the OS\n> tuning. Here is what my kernel.shmmax, kernel.shmall = 1073741824. Is\nthere\n> something else that I could tune in the OS. My max_connections=35 and\nshared\n> buffers=8192 for my largest database.\n\nJohn,\n\nWas the SAN connected to the previous machine or is it also a new addition \nwith the Dell hardware? We had a fairly recent post regarding a similar \nupgrade in which the SAN ended up being the problem, so the first thing I \nwould do is test the SAN with bonnie-++ and/or move your application to use\na \nlocal disk and test again. With 8GB of RAM, I'd probably set the \nshared_buffers to at least 50000...If I remember correctly, this was the\nmost \nyou could set it to on 7.4.x and continue benefitting from it. I'd strongly\n\nencourage you to upgrade to at least 8.1.8 (and possibly 8.2.3) if you can,\nas \nit has much better shared memory management. You might also want to double \ncheck your effective_cache_size and random_page_cost to see if they are set\nto \nreasonable values. 
Did you just copy the old postgresql.conf over?\n\nThis is the beginning of the thread I mentioned above:\n\nhttp://archives.postgresql.org/pgsql-performance/2007-03/msg00104.php\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings\n\n\n-- \nNo virus found in this incoming message.\nChecked by AVG Free Edition.\nVersion: 7.5.446 / Virus Database: 268.18.26/746 - Release Date: 4/4/2007\n1:09 PM\n\n\n", "msg_date": "Thu, 5 Apr 2007 15:33:27 -0400", "msg_from": "\"John Allgood\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High Load on Postgres 7.4.16 Server" }, { "msg_contents": "On Thu, 5 Apr 2007, John Allgood wrote:\n\n> The hard thing about running multiple postmasters is that you have to tune\n> each one separate. Most of the databases I have limited the max-connections\n> to 30-50 depending on the database. What would reasonable values for\n> effective_cache_size and random_page_cost. I think I have these default.\n> Also what about kernel buffers on RHEL4.\n\nNormally, you would look at the output of 'free' and set it to the amount of \ncache/8. For example:\n\n total used free shared buffers cached\nMem: 2055120 2025632 29488 0 505168 368132\n-/+ buffers/cache: 1152332 902788\nSwap: 2048184 2380 2045804\n\nSo, you could take 902788/8 = 112848. This machine is a bad example as it's \njust a workstation, but you get the idea.\n\nThat tells the planner it can expect the OS cache to have that much of the DB \ncached. It's kind of an order of magnitude knob, so it doesn't have to be \nthat precise.\n\nSince you're running multiple postmasters on the same machine (5 per machine \nright?), then setting the shared_buffers up to 50000 (400MB) on each \npostmaster is probably desirable, though if you have smaller DBs on some of \nthem, it might only be worth it for the largest one. I suspect that having \nthe effective_cache_size set to the output of free on each postmaster is \ndesirable, but your case likely requires some benchmarking to find the optimal \nconfig.\n\nIf you look through the archives, there is a formula for calculating what you \nneed to set the kernel shared memory parameters. Otherwise, you can just \nstart postgres and look at the log as it'll tell you what it tried to \nallocate.\n\nHopefully there's someone with experience running multiple postmasters on the \nsame machine that can speak to the postgresql.conf knobs more specifically.\n\nI'd still suggest you upgrade to at least 8.1.8.\n\n\n\n\n>\n> Thanks\n>\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Jeff Frost\n> Sent: Thursday, April 05, 2007 3:24 PM\n> To: John Allgood\n> Cc: [email protected]\n> Subject: Re: [PERFORM] High Load on Postgres 7.4.16 Server\n>\n> On Thu, 5 Apr 2007, John Allgood wrote:\n>\n>> Hello All\n>>\n>> \tI sent this message to the admin list and it never got through so I\n>> am trying the performance list.\n>> \tWe moved our application to a new machine last night. It is a Dell\n>> PowerEdge 6950 2X Dual Core. AMD Opteron 8214 2.2Ghz. 8GB Memory. The\n>> machine is running Redhat AS 4 Upd 4 and Redhat Cluster Suite. The SAN is\n> an\n>> EMC SAS connected via fibre. We are using Postgres 7.4.16. 
We have\n> recently\n>> had some major hardware issues and replaced the hardware with brand new\n> Dell\n>> equipment. We expected a major performance increase over the previous\n> being\n>> the old equipment was nearly three years old\n>> \tI will try and explain how things are configured. We have 10\n>> separate postmasters running 5 on each node. Each of the postmasters is a\n>> single instance of each database. Each database is separated by division\n> and\n>> also we have them separate so we can restart an postmaster with needing to\n>> restart all databases My largest database is about 7 GB. And the others\n> run\n>> anywhere from 100MB - 1.8GB.\n>> \tThe other configuration was RHEL3 and Postgres 7.4.13 and Redhat\n>> Cluster Suite. The application seemed to run much faster on the older\n>> equipment.\n>> \tMy thoughts on the issues are that I could be something with the OS\n>> tuning. Here is what my kernel.shmmax, kernel.shmall = 1073741824. Is\n> there\n>> something else that I could tune in the OS. My max_connections=35 and\n> shared\n>> buffers=8192 for my largest database.\n>\n> John,\n>\n> Was the SAN connected to the previous machine or is it also a new addition\n> with the Dell hardware? We had a fairly recent post regarding a similar\n> upgrade in which the SAN ended up being the problem, so the first thing I\n> would do is test the SAN with bonnie-++ and/or move your application to use\n> a\n> local disk and test again. With 8GB of RAM, I'd probably set the\n> shared_buffers to at least 50000...If I remember correctly, this was the\n> most\n> you could set it to on 7.4.x and continue benefitting from it. I'd strongly\n>\n> encourage you to upgrade to at least 8.1.8 (and possibly 8.2.3) if you can,\n> as\n> it has much better shared memory management. You might also want to double\n> check your effective_cache_size and random_page_cost to see if they are set\n> to\n> reasonable values. Did you just copy the old postgresql.conf over?\n>\n> This is the beginning of the thread I mentioned above:\n>\n> http://archives.postgresql.org/pgsql-performance/2007-03/msg00104.php\n>\n>\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n", "msg_date": "Thu, 5 Apr 2007 12:47:36 -0700 (PDT)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High Load on Postgres 7.4.16 Server" }, { "msg_contents": "\nOn 5-Apr-07, at 3:33 PM, John Allgood wrote:\n\n> The hard thing about running multiple postmasters is that you have \n> to tune\n> each one separate. Most of the databases I have limited the max- \n> connections\n> to 30-50 depending on the database. What would reasonable values for\n> effective_cache_size and random_page_cost. I think I have these \n> default.\n> Also what about kernel buffers on RHEL4.\n>\nrandom_page_cost should be left alone\n\nWhy do you run multiple postmasters ? 
I don't think this is not the \nmost efficient way to utilize your hardware.\n\nDave\n\n> Thanks\n>\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Jeff \n> Frost\n> Sent: Thursday, April 05, 2007 3:24 PM\n> To: John Allgood\n> Cc: [email protected]\n> Subject: Re: [PERFORM] High Load on Postgres 7.4.16 Server\n>\n> On Thu, 5 Apr 2007, John Allgood wrote:\n>\n>> Hello All\n>>\n>> \tI sent this message to the admin list and it never got through so I\n>> am trying the performance list.\n>> \tWe moved our application to a new machine last night. It is a Dell\n>> PowerEdge 6950 2X Dual Core. AMD Opteron 8214 2.2Ghz. 8GB Memory. The\n>> machine is running Redhat AS 4 Upd 4 and Redhat Cluster Suite. The \n>> SAN is\n> an\n>> EMC SAS connected via fibre. We are using Postgres 7.4.16. We have\n> recently\n>> had some major hardware issues and replaced the hardware with \n>> brand new\n> Dell\n>> equipment. We expected a major performance increase over the previous\n> being\n>> the old equipment was nearly three years old\n>> \tI will try and explain how things are configured. We have 10\n>> separate postmasters running 5 on each node. Each of the \n>> postmasters is a\n>> single instance of each database. Each database is separated by \n>> division\n> and\n>> also we have them separate so we can restart an postmaster with \n>> needing to\n>> restart all databases My largest database is about 7 GB. And the \n>> others\n> run\n>> anywhere from 100MB - 1.8GB.\n>> \tThe other configuration was RHEL3 and Postgres 7.4.13 and Redhat\n>> Cluster Suite. The application seemed to run much faster on the older\n>> equipment.\n>> \tMy thoughts on the issues are that I could be something with the OS\n>> tuning. Here is what my kernel.shmmax, kernel.shmall = 1073741824. Is\n> there\n>> something else that I could tune in the OS. My max_connections=35 and\n> shared\n>> buffers=8192 for my largest database.\n>\n> John,\n>\n> Was the SAN connected to the previous machine or is it also a new \n> addition\n> with the Dell hardware? We had a fairly recent post regarding a \n> similar\n> upgrade in which the SAN ended up being the problem, so the first \n> thing I\n> would do is test the SAN with bonnie-++ and/or move your \n> application to use\n> a\n> local disk and test again. With 8GB of RAM, I'd probably set the\n> shared_buffers to at least 50000...If I remember correctly, this \n> was the\n> most\n> you could set it to on 7.4.x and continue benefitting from it. I'd \n> strongly\n>\n> encourage you to upgrade to at least 8.1.8 (and possibly 8.2.3) if \n> you can,\n> as\n> it has much better shared memory management. You might also want \n> to double\n> check your effective_cache_size and random_page_cost to see if they \n> are set\n> to\n> reasonable values. 
Did you just copy the old postgresql.conf over?\n>\n> This is the beginning of the thread I mentioned above:\n>\n> http://archives.postgresql.org/pgsql-performance/2007-03/msg00104.php\n>\n> -- \n> Jeff Frost, Owner \t<[email protected]>\n> Frost Consulting, LLC \thttp://www.frostconsultingllc.com/\n> Phone: 650-780-7908\tFAX: 650-649-1954\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n>\n> -- \n> No virus found in this incoming message.\n> Checked by AVG Free Edition.\n> Version: 7.5.446 / Virus Database: 268.18.26/746 - Release Date: \n> 4/4/2007\n> 1:09 PM\n>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n", "msg_date": "Thu, 5 Apr 2007 16:00:39 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High Load on Postgres 7.4.16 Server" }, { "msg_contents": "We run multiple postmasters because we can shutdown one postmaster/database\nwithout affecting the other postmasters/databases. Each database is a\ndivision in our company. If we had everything under one postmaster if\nsomething happened to the one the whole company would be down.\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Dave Cramer\nSent: Thursday, April 05, 2007 4:01 PM\nTo: John Allgood\nCc: 'Jeff Frost'; [email protected]\nSubject: Re: [PERFORM] High Load on Postgres 7.4.16 Server\n\n\nOn 5-Apr-07, at 3:33 PM, John Allgood wrote:\n\n> The hard thing about running multiple postmasters is that you have \n> to tune\n> each one separate. Most of the databases I have limited the max- \n> connections\n> to 30-50 depending on the database. What would reasonable values for\n> effective_cache_size and random_page_cost. I think I have these \n> default.\n> Also what about kernel buffers on RHEL4.\n>\nrandom_page_cost should be left alone\n\nWhy do you run multiple postmasters ? I don't think this is not the \nmost efficient way to utilize your hardware.\n\nDave\n\n> Thanks\n>\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Jeff \n> Frost\n> Sent: Thursday, April 05, 2007 3:24 PM\n> To: John Allgood\n> Cc: [email protected]\n> Subject: Re: [PERFORM] High Load on Postgres 7.4.16 Server\n>\n> On Thu, 5 Apr 2007, John Allgood wrote:\n>\n>> Hello All\n>>\n>> \tI sent this message to the admin list and it never got through so I\n>> am trying the performance list.\n>> \tWe moved our application to a new machine last night. It is a Dell\n>> PowerEdge 6950 2X Dual Core. AMD Opteron 8214 2.2Ghz. 8GB Memory. The\n>> machine is running Redhat AS 4 Upd 4 and Redhat Cluster Suite. The \n>> SAN is\n> an\n>> EMC SAS connected via fibre. We are using Postgres 7.4.16. We have\n> recently\n>> had some major hardware issues and replaced the hardware with \n>> brand new\n> Dell\n>> equipment. We expected a major performance increase over the previous\n> being\n>> the old equipment was nearly three years old\n>> \tI will try and explain how things are configured. We have 10\n>> separate postmasters running 5 on each node. Each of the \n>> postmasters is a\n>> single instance of each database. Each database is separated by \n>> division\n> and\n>> also we have them separate so we can restart an postmaster with \n>> needing to\n>> restart all databases My largest database is about 7 GB. 
And the \n>> others\n> run\n>> anywhere from 100MB - 1.8GB.\n>> \tThe other configuration was RHEL3 and Postgres 7.4.13 and Redhat\n>> Cluster Suite. The application seemed to run much faster on the older\n>> equipment.\n>> \tMy thoughts on the issues are that I could be something with the OS\n>> tuning. Here is what my kernel.shmmax, kernel.shmall = 1073741824. Is\n> there\n>> something else that I could tune in the OS. My max_connections=35 and\n> shared\n>> buffers=8192 for my largest database.\n>\n> John,\n>\n> Was the SAN connected to the previous machine or is it also a new \n> addition\n> with the Dell hardware? We had a fairly recent post regarding a \n> similar\n> upgrade in which the SAN ended up being the problem, so the first \n> thing I\n> would do is test the SAN with bonnie-++ and/or move your \n> application to use\n> a\n> local disk and test again. With 8GB of RAM, I'd probably set the\n> shared_buffers to at least 50000...If I remember correctly, this \n> was the\n> most\n> you could set it to on 7.4.x and continue benefitting from it. I'd \n> strongly\n>\n> encourage you to upgrade to at least 8.1.8 (and possibly 8.2.3) if \n> you can,\n> as\n> it has much better shared memory management. You might also want \n> to double\n> check your effective_cache_size and random_page_cost to see if they \n> are set\n> to\n> reasonable values. Did you just copy the old postgresql.conf over?\n>\n> This is the beginning of the thread I mentioned above:\n>\n> http://archives.postgresql.org/pgsql-performance/2007-03/msg00104.php\n>\n> -- \n> Jeff Frost, Owner \t<[email protected]>\n> Frost Consulting, LLC \thttp://www.frostconsultingllc.com/\n> Phone: 650-780-7908\tFAX: 650-649-1954\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n>\n> -- \n> No virus found in this incoming message.\n> Checked by AVG Free Edition.\n> Version: 7.5.446 / Virus Database: 268.18.26/746 - Release Date: \n> 4/4/2007\n> 1:09 PM\n>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n\n\n-- \nNo virus found in this incoming message.\nChecked by AVG Free Edition.\nVersion: 7.5.446 / Virus Database: 268.18.26/746 - Release Date: 4/4/2007\n1:09 PM\n\n\n", "msg_date": "Thu, 5 Apr 2007 16:13:15 -0400", "msg_from": "\"John Allgood\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High Load on Postgres 7.4.16 Server" }, { "msg_contents": "The problem with this is that it doesn't leverage shared buffers and \nkernel buffers well.\n\nAnyways, my bet is that your SAN isn't performing as you expect on \nthe new hardware.\n\nDave\nOn 5-Apr-07, at 4:13 PM, John Allgood wrote:\n\n> We run multiple postmasters because we can shutdown one postmaster/ \n> database\n> without affecting the other postmasters/databases. Each database is a\n> division in our company. 
If we had everything under one postmaster if\n> something happened to the one the whole company would be down.\n>\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Dave \n> Cramer\n> Sent: Thursday, April 05, 2007 4:01 PM\n> To: John Allgood\n> Cc: 'Jeff Frost'; [email protected]\n> Subject: Re: [PERFORM] High Load on Postgres 7.4.16 Server\n>\n>\n> On 5-Apr-07, at 3:33 PM, John Allgood wrote:\n>\n>> The hard thing about running multiple postmasters is that you have\n>> to tune\n>> each one separate. Most of the databases I have limited the max-\n>> connections\n>> to 30-50 depending on the database. What would reasonable values for\n>> effective_cache_size and random_page_cost. I think I have these\n>> default.\n>> Also what about kernel buffers on RHEL4.\n>>\n> random_page_cost should be left alone\n>\n> Why do you run multiple postmasters ? I don't think this is not the\n> most efficient way to utilize your hardware.\n>\n> Dave\n>\n>> Thanks\n>>\n>> -----Original Message-----\n>> From: [email protected]\n>> [mailto:[email protected]] On Behalf Of Jeff\n>> Frost\n>> Sent: Thursday, April 05, 2007 3:24 PM\n>> To: John Allgood\n>> Cc: [email protected]\n>> Subject: Re: [PERFORM] High Load on Postgres 7.4.16 Server\n>>\n>> On Thu, 5 Apr 2007, John Allgood wrote:\n>>\n>>> Hello All\n>>>\n>>> \tI sent this message to the admin list and it never got through so I\n>>> am trying the performance list.\n>>> \tWe moved our application to a new machine last night. It is a Dell\n>>> PowerEdge 6950 2X Dual Core. AMD Opteron 8214 2.2Ghz. 8GB Memory. \n>>> The\n>>> machine is running Redhat AS 4 Upd 4 and Redhat Cluster Suite. The\n>>> SAN is\n>> an\n>>> EMC SAS connected via fibre. We are using Postgres 7.4.16. We have\n>> recently\n>>> had some major hardware issues and replaced the hardware with\n>>> brand new\n>> Dell\n>>> equipment. We expected a major performance increase over the \n>>> previous\n>> being\n>>> the old equipment was nearly three years old\n>>> \tI will try and explain how things are configured. We have 10\n>>> separate postmasters running 5 on each node. Each of the\n>>> postmasters is a\n>>> single instance of each database. Each database is separated by\n>>> division\n>> and\n>>> also we have them separate so we can restart an postmaster with\n>>> needing to\n>>> restart all databases My largest database is about 7 GB. And the\n>>> others\n>> run\n>>> anywhere from 100MB - 1.8GB.\n>>> \tThe other configuration was RHEL3 and Postgres 7.4.13 and Redhat\n>>> Cluster Suite. The application seemed to run much faster on the \n>>> older\n>>> equipment.\n>>> \tMy thoughts on the issues are that I could be something with the OS\n>>> tuning. Here is what my kernel.shmmax, kernel.shmall = \n>>> 1073741824. Is\n>> there\n>>> something else that I could tune in the OS. My max_connections=35 \n>>> and\n>> shared\n>>> buffers=8192 for my largest database.\n>>\n>> John,\n>>\n>> Was the SAN connected to the previous machine or is it also a new\n>> addition\n>> with the Dell hardware? We had a fairly recent post regarding a\n>> similar\n>> upgrade in which the SAN ended up being the problem, so the first\n>> thing I\n>> would do is test the SAN with bonnie-++ and/or move your\n>> application to use\n>> a\n>> local disk and test again. With 8GB of RAM, I'd probably set the\n>> shared_buffers to at least 50000...If I remember correctly, this\n>> was the\n>> most\n>> you could set it to on 7.4.x and continue benefitting from it. 
I'd\n>> strongly\n>>\n>> encourage you to upgrade to at least 8.1.8 (and possibly 8.2.3) if\n>> you can,\n>> as\n>> it has much better shared memory management. You might also want\n>> to double\n>> check your effective_cache_size and random_page_cost to see if they\n>> are set\n>> to\n>> reasonable values. Did you just copy the old postgresql.conf over?\n>>\n>> This is the beginning of the thread I mentioned above:\n>>\n>> http://archives.postgresql.org/pgsql-performance/2007-03/msg00104.php\n>>\n>> -- \n>> Jeff Frost, Owner \t<[email protected]>\n>> Frost Consulting, LLC \thttp://www.frostconsultingllc.com/\n>> Phone: 650-780-7908\tFAX: 650-649-1954\n>>\n>> ---------------------------(end of\n>> broadcast)---------------------------\n>> TIP 5: don't forget to increase your free space map settings\n>>\n>>\n>> -- \n>> No virus found in this incoming message.\n>> Checked by AVG Free Edition.\n>> Version: 7.5.446 / Virus Database: 268.18.26/746 - Release Date:\n>> 4/4/2007\n>> 1:09 PM\n>>\n>>\n>>\n>> ---------------------------(end of\n>> broadcast)---------------------------\n>> TIP 4: Have you searched our list archives?\n>>\n>> http://archives.postgresql.org\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n>\n> -- \n> No virus found in this incoming message.\n> Checked by AVG Free Edition.\n> Version: 7.5.446 / Virus Database: 268.18.26/746 - Release Date: \n> 4/4/2007\n> 1:09 PM\n>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n", "msg_date": "Thu, 5 Apr 2007 16:27:06 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High Load on Postgres 7.4.16 Server" }, { "msg_contents": "\"John Allgood\" <[email protected]> writes:\n> ... The other configuration was RHEL3 and Postgres 7.4.13 and Redhat\n> Cluster Suite. The application seemed to run much faster on the older\n> equipment. \n\nWhile I agree with the other comments that you should think about moving\nto something newer than 7.4.x, there really shouldn't be any meaningful\nperformance difference between 7.4.13 and 7.4.16. I'm guessing some\nsort of pedestrian pilot error, like not using similar postgresql.conf\nsettings or forgetting to ANALYZE the database after reloading it into\nthe new installation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Apr 2007 01:50:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High Load on Postgres 7.4.16 Server " }, { "msg_contents": "Hey Guys thanks for the input. I do have some more questions. I am looking a\ndoing some additional tuning on the system. The first think I am looking at\nis the OS tuning. What kernel parameters would I look at setting on a RHEL 4\nbox? I have set the SHMMAX and SHMALL to 1GB. What other tuning options\nshould I look into setting? Am I correct to assume to whatever I set the\nshared memory too that I can't set the total of all postgres buffers to\nlarger than the shared memory. I am still trying to learn about properly\ntuning an OS and PostgreSQL system correctly. 
I would be interested in\nhearing about what other people on the list have there kernel tuned too.\n\nBest Regards\nJohn Allgood\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Dave Cramer\nSent: Thursday, April 05, 2007 4:27 PM\nTo: John Allgood\nCc: 'Jeff Frost'; [email protected]\nSubject: Re: [PERFORM] High Load on Postgres 7.4.16 Server\n\nThe problem with this is that it doesn't leverage shared buffers and \nkernel buffers well.\n\nAnyways, my bet is that your SAN isn't performing as you expect on \nthe new hardware.\n\nDave\nOn 5-Apr-07, at 4:13 PM, John Allgood wrote:\n\n> We run multiple postmasters because we can shutdown one postmaster/ \n> database\n> without affecting the other postmasters/databases. Each database is a\n> division in our company. If we had everything under one postmaster if\n> something happened to the one the whole company would be down.\n>\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Dave \n> Cramer\n> Sent: Thursday, April 05, 2007 4:01 PM\n> To: John Allgood\n> Cc: 'Jeff Frost'; [email protected]\n> Subject: Re: [PERFORM] High Load on Postgres 7.4.16 Server\n>\n>\n> On 5-Apr-07, at 3:33 PM, John Allgood wrote:\n>\n>> The hard thing about running multiple postmasters is that you have\n>> to tune\n>> each one separate. Most of the databases I have limited the max-\n>> connections\n>> to 30-50 depending on the database. What would reasonable values for\n>> effective_cache_size and random_page_cost. I think I have these\n>> default.\n>> Also what about kernel buffers on RHEL4.\n>>\n> random_page_cost should be left alone\n>\n> Why do you run multiple postmasters ? I don't think this is not the\n> most efficient way to utilize your hardware.\n>\n> Dave\n>\n>> Thanks\n>>\n>> -----Original Message-----\n>> From: [email protected]\n>> [mailto:[email protected]] On Behalf Of Jeff\n>> Frost\n>> Sent: Thursday, April 05, 2007 3:24 PM\n>> To: John Allgood\n>> Cc: [email protected]\n>> Subject: Re: [PERFORM] High Load on Postgres 7.4.16 Server\n>>\n>> On Thu, 5 Apr 2007, John Allgood wrote:\n>>\n>>> Hello All\n>>>\n>>> \tI sent this message to the admin list and it never got through so I\n>>> am trying the performance list.\n>>> \tWe moved our application to a new machine last night. It is a Dell\n>>> PowerEdge 6950 2X Dual Core. AMD Opteron 8214 2.2Ghz. 8GB Memory. \n>>> The\n>>> machine is running Redhat AS 4 Upd 4 and Redhat Cluster Suite. The\n>>> SAN is\n>> an\n>>> EMC SAS connected via fibre. We are using Postgres 7.4.16. We have\n>> recently\n>>> had some major hardware issues and replaced the hardware with\n>>> brand new\n>> Dell\n>>> equipment. We expected a major performance increase over the \n>>> previous\n>> being\n>>> the old equipment was nearly three years old\n>>> \tI will try and explain how things are configured. We have 10\n>>> separate postmasters running 5 on each node. Each of the\n>>> postmasters is a\n>>> single instance of each database. Each database is separated by\n>>> division\n>> and\n>>> also we have them separate so we can restart an postmaster with\n>>> needing to\n>>> restart all databases My largest database is about 7 GB. And the\n>>> others\n>> run\n>>> anywhere from 100MB - 1.8GB.\n>>> \tThe other configuration was RHEL3 and Postgres 7.4.13 and Redhat\n>>> Cluster Suite. 
The application seemed to run much faster on the \n>>> older\n>>> equipment.\n>>> \tMy thoughts on the issues are that I could be something with the OS\n>>> tuning. Here is what my kernel.shmmax, kernel.shmall = \n>>> 1073741824. Is\n>> there\n>>> something else that I could tune in the OS. My max_connections=35 \n>>> and\n>> shared\n>>> buffers=8192 for my largest database.\n>>\n>> John,\n>>\n>> Was the SAN connected to the previous machine or is it also a new\n>> addition\n>> with the Dell hardware? We had a fairly recent post regarding a\n>> similar\n>> upgrade in which the SAN ended up being the problem, so the first\n>> thing I\n>> would do is test the SAN with bonnie-++ and/or move your\n>> application to use\n>> a\n>> local disk and test again. With 8GB of RAM, I'd probably set the\n>> shared_buffers to at least 50000...If I remember correctly, this\n>> was the\n>> most\n>> you could set it to on 7.4.x and continue benefitting from it. I'd\n>> strongly\n>>\n>> encourage you to upgrade to at least 8.1.8 (and possibly 8.2.3) if\n>> you can,\n>> as\n>> it has much better shared memory management. You might also want\n>> to double\n>> check your effective_cache_size and random_page_cost to see if they\n>> are set\n>> to\n>> reasonable values. Did you just copy the old postgresql.conf over?\n>>\n>> This is the beginning of the thread I mentioned above:\n>>\n>> http://archives.postgresql.org/pgsql-performance/2007-03/msg00104.php\n>>\n>> -- \n>> Jeff Frost, Owner \t<[email protected]>\n>> Frost Consulting, LLC \thttp://www.frostconsultingllc.com/\n>> Phone: 650-780-7908\tFAX: 650-649-1954\n>>\n>> ---------------------------(end of\n>> broadcast)---------------------------\n>> TIP 5: don't forget to increase your free space map settings\n>>\n>>\n>> -- \n>> No virus found in this incoming message.\n>> Checked by AVG Free Edition.\n>> Version: 7.5.446 / Virus Database: 268.18.26/746 - Release Date:\n>> 4/4/2007\n>> 1:09 PM\n>>\n>>\n>>\n>> ---------------------------(end of\n>> broadcast)---------------------------\n>> TIP 4: Have you searched our list archives?\n>>\n>> http://archives.postgresql.org\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n>\n> -- \n> No virus found in this incoming message.\n> Checked by AVG Free Edition.\n> Version: 7.5.446 / Virus Database: 268.18.26/746 - Release Date: \n> 4/4/2007\n> 1:09 PM\n>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n\n-- \nNo virus found in this incoming message.\nChecked by AVG Free Edition.\nVersion: 7.5.446 / Virus Database: 268.18.26/746 - Release Date: 4/4/2007\n1:09 PM\n\n\n", "msg_date": "Fri, 6 Apr 2007 11:21:18 -0400", "msg_from": "\"John Allgood\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High Load on Postgres 7.4.16 Server" } ]
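To put the tuning advice from this thread into runnable form, the sketch below shows how the settings discussed could be inspected from psql and how a freshly reloaded 7.4 database can be re-analyzed, per Tom Lane's suggestion. The shared_buffers value of 50000 is Jeff Frost's figure and 8192 is the original setting; the effective_cache_size number is only an illustrative assumption for a machine with 8GB of RAM (7.4 expresses it in 8kB pages), and kernel.shmmax must remain large enough to cover the shared memory of all postmasters on the box.

db=# SHOW shared_buffers;         -- 8192 in the original configuration
db=# SHOW effective_cache_size;   -- often still at the default after an upgrade
db=# SHOW random_page_cost;       -- best left at the default, per the thread
db=# ANALYZE;                     -- rebuild planner statistics after reloading a dump

-- postgresql.conf changes (each postmaster has its own file; restart required):
--   shared_buffers = 50000          # roughly 400MB of 8kB buffers, per Jeff Frost's suggestion
--   effective_cache_size = 524288   # assumes ~4GB of OS cache, expressed in 8kB pages

Benchmarking the SAN itself (bonnie++ against a local-disk baseline), as suggested earlier in the thread, is still the first thing to rule out before attributing the slowdown to configuration.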
[ { "msg_contents": "We are trying to attack this problem from multiple avenues, thus I'm \nstarting a separate thread. This is with regard to the problem posted \nvia thread:\n\nhttp://archives.postgresql.org/pgsql-performance/2007-04/msg00120.php\n\nOne thing we are seeing with this move to the new hardware (and rhas 4) \nis database connection processes that are left over by users who have \nexited the application. I've attached to these processes via gdb and \nfind they all have the same backtrace. Any insights into what might be \ncausing this issue would be appreciated. Understand, we did not have \nthis problem on the previous hardware running on rhes 3. Here is the \nbacktrace:\n\n#0 0x00ba47a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2\n#1 0x0019f1de in __lll_mutex_lock_wait () from /lib/tls/libpthread.so.0\n#2 0x0019ca7a in _L_mutex_lock_23 () from /lib/tls/libpthread.so.0\n#3 0xbfed9438 in ?? ()\n#4 0x00c96a4e in pthread_cond_destroy@@GLIBC_2.3.2 () from \n/lib/tls/libc.so.6\n#5 0x00c96a4e in pthread_cond_destroy@@GLIBC_2.3.2 () from \n/lib/tls/libc.so.6\n#6 0x0015243f in critSec::~critSec () from /usr/local/pcm170/libdalkutil.so\n#7 0x003a48b8 in Comp_ZipFiles () from /usr/local/pcm170/libcompress.so\n#8 0x00bec527 in exit () from /lib/tls/libc.so.6\n#9 0x0816a52f in proc_exit ()\n\n\n-- \nUntil later, Geoffrey\n\nThose who would give up essential Liberty, to purchase a little\ntemporary Safety, deserve neither Liberty nor Safety.\n - Benjamin Franklin\n", "msg_date": "Fri, 06 Apr 2007 11:41:01 -0400", "msg_from": "Geoffrey <[email protected]>", "msg_from_op": true, "msg_subject": "more on high load on postgres 7.4.16" }, { "msg_contents": "Geoffrey wrote:\n> We are trying to attack this problem from multiple avenues, thus I'm\n> starting a separate thread. This is with regard to the problem posted\n> via thread:\n> \n> http://archives.postgresql.org/pgsql-performance/2007-04/msg00120.php\n> \n> One thing we are seeing with this move to the new hardware (and rhas 4)\n> is database connection processes that are left over by users who have\n> exited the application. I've attached to these processes via gdb and\n> find they all have the same backtrace. Any insights into what might be\n> causing this issue would be appreciated. Understand, we did not have\n> this problem on the previous hardware running on rhes 3. Here is the\n> backtrace:\n> \n> #0 0x00ba47a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2\n> #1 0x0019f1de in __lll_mutex_lock_wait () from /lib/tls/libpthread.so.0\n> #2 0x0019ca7a in _L_mutex_lock_23 () from /lib/tls/libpthread.so.0\n> #3 0xbfed9438 in ?? ()\n> #4 0x00c96a4e in pthread_cond_destroy@@GLIBC_2.3.2 () from\n> /lib/tls/libc.so.6\n> #5 0x00c96a4e in pthread_cond_destroy@@GLIBC_2.3.2 () from\n> /lib/tls/libc.so.6\n> #6 0x0015243f in critSec::~critSec () from\n> /usr/local/pcm170/libdalkutil.so\n> #7 0x003a48b8 in Comp_ZipFiles () from /usr/local/pcm170/libcompress.so\n\n/usr/local on RHEL should only contain software installed directly from\nsource - what exactly is pcm170/libdalkutil ?\nbeside that - is pg actually compiled with debugging symbols on that\nplatform ?\n\n\nStefan\n", "msg_date": "Fri, 06 Apr 2007 17:58:41 +0200", "msg_from": "Stefan Kaltenbrunner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more on high load on postgres 7.4.16" }, { "msg_contents": "Stefan Kaltenbrunner wrote:\n> Geoffrey wrote:\n>> We are trying to attack this problem from multiple avenues, thus I'm\n>> starting a separate thread. 
This is with regard to the problem posted\n>> via thread:\n>>\n>> http://archives.postgresql.org/pgsql-performance/2007-04/msg00120.php\n>>\n>> One thing we are seeing with this move to the new hardware (and rhas 4)\n>> is database connection processes that are left over by users who have\n>> exited the application. I've attached to these processes via gdb and\n>> find they all have the same backtrace. Any insights into what might be\n>> causing this issue would be appreciated. Understand, we did not have\n>> this problem on the previous hardware running on rhes 3. Here is the\n>> backtrace:\n>>\n>> #0 0x00ba47a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2\n>> #1 0x0019f1de in __lll_mutex_lock_wait () from /lib/tls/libpthread.so.0\n>> #2 0x0019ca7a in _L_mutex_lock_23 () from /lib/tls/libpthread.so.0\n>> #3 0xbfed9438 in ?? ()\n>> #4 0x00c96a4e in pthread_cond_destroy@@GLIBC_2.3.2 () from\n>> /lib/tls/libc.so.6\n>> #5 0x00c96a4e in pthread_cond_destroy@@GLIBC_2.3.2 () from\n>> /lib/tls/libc.so.6\n>> #6 0x0015243f in critSec::~critSec () from\n>> /usr/local/pcm170/libdalkutil.so\n>> #7 0x003a48b8 in Comp_ZipFiles () from /usr/local/pcm170/libcompress.so\n> \n> /usr/local on RHEL should only contain software installed directly from\n> source - what exactly is pcm170/libdalkutil ?\n\nIt is a third party package that we have build into the backend. \npcmiler. We do not have source to it though.\n\n> beside that - is pg actually compiled with debugging symbols on that\n> platform ?\n\nNot yet, I'm building it now, but I was hoping that the limited info \nabove might get us some insights. I plan to try and recreate the \nproblem and reproduce a more useful backtrace after rebuilding \npostgresql with debugging symbols.\n\n-- \nUntil later, Geoffrey\n\nThose who would give up essential Liberty, to purchase a little\ntemporary Safety, deserve neither Liberty nor Safety.\n - Benjamin Franklin\n", "msg_date": "Fri, 06 Apr 2007 13:04:53 -0400", "msg_from": "Geoffrey <[email protected]>", "msg_from_op": true, "msg_subject": "Re: more on high load on postgres 7.4.16" } ]
[ { "msg_contents": "One more anomaly between 7.4 and 8.2. DB dumped from 7.4 and loaded\nonto 8.2, both have locale set to C. 8.2 seems to prefer Seq Scans\nfor the first query while the ordering in the second query seems to\nperform worse on 8.2. I ran analyze. I've tried with the encoding\nset to UTF-8 and SQL_ASCII; same numbers and plans. Any ideas how to\nimprove this?\n\nThanks,\n\nAlex\n\npostgres 7.4\n\nEXPLAIN ANALYZE select pnum, event_pid, code_name, code_description,\ncode_mcam, event_date, effective_date, ref_country,\nref_country_legal_code, corresponding_pnum, withdrawal_date,\npayment_date, extension_date, fee_payment_year, requester, free_form\nfrom code inner join event on code_pid = code_pid_fk where pnum\n='AB5819188';\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..60.87 rows=19 width=231) (actual\ntime=0.065..0.065 rows=0 loops=1)\n -> Index Scan using pnum_idx on event (cost=0.00..3.37 rows=19\nwidth=172) (actual time=0.063..0.063 rows=0 loops=1)\n Index Cond: ((pnum)::text = 'AB5819188'::text)\n -> Index Scan using code_pkey on code (cost=0.00..3.01 rows=1\nwidth=67) (never executed)\n Index Cond: (code.code_pid = \"outer\".code_pid_fk)\n Total runtime: 0.242 ms\n(6 rows)\n\n\npostgres 8.2\n\nEXPLAIN ANALYZE select pnum, event_pid, code_name, code_description,\ncode_mcam, event_date, effective_date, ref_country,\nref_country_legal_code, corresponding_pnum, withdrawal_date,\npayment_date, extension_date, fee_payment_year, requester, free_form\nfrom code inner join event on code_pid = code_pid_fk where pnum\n='AB5819188';\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=106.91..3283.46 rows=1779 width=230) (actual\ntime=10.383..10.390 rows=1 loops=1)\n Hash Cond: (event.code_pid_fk = code.code_pid)\n -> Index Scan using pnum_idx on event (cost=0.00..3147.63\nrows=1779 width=171) (actual time=0.030..0.033 rows=1 loops=1)\n Index Cond: ((pnum)::text = 'AB5819188'::text)\n -> Hash (cost=70.85..70.85 rows=2885 width=67) (actual\ntime=10.329..10.329 rows=2885 loops=1)\n -> Seq Scan on code (cost=0.00..70.85 rows=2885 width=67)\n(actual time=0.013..4.805 rows=2885 loops=1)\n Total runtime: 10.490 ms\n(7 rows)\n\n\npostgres 7.4\n\nEXPLAIN ANALYZE select e.pnum, c.code_description, c.code_mcam,\ne.event_pid from event e, code c where c.code_name =\ne.ref_country_legal_code and c.code_country = e.ref_country and e.pnum\n= 'AB5819188';\n\nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=3.47..106.28 rows=1 width=73) (actual\ntime=7.795..7.795 rows=0 loops=1)\n Hash Cond: (((\"outer\".code_name)::text =\n(\"inner\".ref_country_legal_code)::text) AND\n((\"outer\".code_country)::text = (\"inner\".ref_country)::text))\n -> Seq Scan on code c (cost=0.00..63.92 rows=2592 width=69)\n(actual time=0.010..3.881 rows=2592 loops=1)\n -> Hash (cost=3.37..3.37 rows=19 width=30) (actual\ntime=0.064..0.064 rows=0 loops=1)\n -> Index Scan using pnum_idx on event e (cost=0.00..3.37\nrows=19 width=30) (actual time=0.062..0.062 rows=0 loops=1)\n Index Cond: ((pnum)::text = 'AB5819188'::text)\n Total runtime: 7.947 ms\n(7 rows)\n\n\npostgres 8.2\n\nEXPLAIN ANALYZE select e.pnum, c.code_description, c.code_mcam,\ne.event_pid from event e, code c 
where c.code_name =\ne.ref_country_legal_code and c.code_country = e.ref_country and e.pnum\n= 'AB5819188';\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=114.12..3368.51 rows=1 width=73) (actual\ntime=10.900..10.900 rows=0 loops=1)\n Hash Cond: (((e.ref_country_legal_code)::text =\n(c.code_name)::text) AND ((e.ref_country)::text =\n(c.code_country)::text))\n -> Index Scan using pnum_idx on event e (cost=0.00..3147.63\nrows=1779 width=30) (actual time=0.027..0.031 rows=1 loops=1)\n Index Cond: ((pnum)::text = 'AB5819188'::text)\n -> Hash (cost=70.85..70.85 rows=2885 width=69) (actual\ntime=10.838..10.838 rows=2885 loops=1)\n -> Seq Scan on code c (cost=0.00..70.85 rows=2885 width=69)\n(actual time=0.011..4.863 rows=2885 loops=1)\n Total runtime: 11.018 ms\n(7 rows)\n", "msg_date": "Fri, 6 Apr 2007 16:38:33 -0400", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "postgres 8.2 seems to prefer Seq Scan" }, { "msg_contents": "On Fri, Apr 06, 2007 at 04:38:33PM -0400, Alex Deucher wrote:\n> One more anomaly between 7.4 and 8.2. DB dumped from 7.4 and loaded\n> onto 8.2, both have locale set to C. 8.2 seems to prefer Seq Scans\n> for the first query while the ordering in the second query seems to\n> perform worse on 8.2. I ran analyze. I've tried with the encoding\n> set to UTF-8 and SQL_ASCII; same numbers and plans. Any ideas how to\n> improve this?\n\nAre you sure the data sets are identical? The 7.4 query returned\n0 rows; the 8.2 query returned 1 row. If you're running the same\nquery against the same data in both versions then at least one of\nthem appears to be returning the wrong result. Exactly which\nversions of 7.4 and 8.2 are you running?\n\nHave you analyzed all tables in both versions? The row count\nestimate in 7.4 is much closer to reality than in 8.2:\n\n7.4\n> -> Index Scan using pnum_idx on event (cost=0.00..3.37 rows=19\n> width=172) (actual time=0.063..0.063 rows=0 loops=1)\n> Index Cond: ((pnum)::text = 'AB5819188'::text)\n\n8.2\n> -> Index Scan using pnum_idx on event (cost=0.00..3147.63\n> rows=1779 width=171) (actual time=0.030..0.033 rows=1 loops=1)\n> Index Cond: ((pnum)::text = 'AB5819188'::text)\n\nIf analyzing the event table doesn't improve the row count estimate\nthen try increasing the statistics target for event.pnum and analyzing\nagain. Example:\n\nALTER TABLE event ALTER pnum SET STATISTICS 100;\nANALYZE event;\n\nYou can set the statistics target as high as 1000 to get more\naccurate results at the cost of longer ANALYZE times.\n\n-- \nMichael Fuhr\n", "msg_date": "Fri, 6 Apr 2007 15:31:47 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres 8.2 seems to prefer Seq Scan" }, { "msg_contents": "On 4/6/07, Michael Fuhr <[email protected]> wrote:\n> On Fri, Apr 06, 2007 at 04:38:33PM -0400, Alex Deucher wrote:\n> > One more anomaly between 7.4 and 8.2. DB dumped from 7.4 and loaded\n> > onto 8.2, both have locale set to C. 8.2 seems to prefer Seq Scans\n> > for the first query while the ordering in the second query seems to\n> > perform worse on 8.2. I ran analyze. I've tried with the encoding\n> > set to UTF-8 and SQL_ASCII; same numbers and plans. Any ideas how to\n> > improve this?\n>\n> Are you sure the data sets are identical? The 7.4 query returned\n> 0 rows; the 8.2 query returned 1 row. 
If you're running the same\n> query against the same data in both versions then at least one of\n> them appears to be returning the wrong result. Exactly which\n> versions of 7.4 and 8.2 are you running?\n\nThey should be although it's possible one of my co-workers updated one\nof the DB's since I last dumped it, but should be a negligible amount\nof data. Not sure of the exact version of 7.4; psql just says:\npsql --version\npsql (PostgreSQL) 7.4\ncontains support for command-line editing\n\n8.2 is 8.2.3\n\n>\n> Have you analyzed all tables in both versions? The row count\n> estimate in 7.4 is much closer to reality than in 8.2:\n>\n\nYes.\n\n> 7.4\n> > -> Index Scan using pnum_idx on event (cost=0.00..3.37 rows=19\n> > width=172) (actual time=0.063..0.063 rows=0 loops=1)\n> > Index Cond: ((pnum)::text = 'AB5819188'::text)\n>\n> 8.2\n> > -> Index Scan using pnum_idx on event (cost=0.00..3147.63\n> > rows=1779 width=171) (actual time=0.030..0.033 rows=1 loops=1)\n> > Index Cond: ((pnum)::text = 'AB5819188'::text)\n>\n> If analyzing the event table doesn't improve the row count estimate\n> then try increasing the statistics target for event.pnum and analyzing\n> again. Example:\n>\n> ALTER TABLE event ALTER pnum SET STATISTICS 100;\n> ANALYZE event;\n>\n> You can set the statistics target as high as 1000 to get more\n> accurate results at the cost of longer ANALYZE times.\n>\n\nThanks! I'll give that a try and report back.\n\nAlex\n", "msg_date": "Fri, 6 Apr 2007 17:48:28 -0400", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres 8.2 seems to prefer Seq Scan" }, { "msg_contents": "On 4/6/07, Michael Fuhr <[email protected]> wrote:\n> On Fri, Apr 06, 2007 at 04:38:33PM -0400, Alex Deucher wrote:\n> > One more anomaly between 7.4 and 8.2. DB dumped from 7.4 and loaded\n> > onto 8.2, both have locale set to C. 8.2 seems to prefer Seq Scans\n> > for the first query while the ordering in the second query seems to\n> > perform worse on 8.2. I ran analyze. I've tried with the encoding\n> > set to UTF-8 and SQL_ASCII; same numbers and plans. Any ideas how to\n> > improve this?\n>\n> Are you sure the data sets are identical? The 7.4 query returned\n> 0 rows; the 8.2 query returned 1 row. If you're running the same\n> query against the same data in both versions then at least one of\n> them appears to be returning the wrong result. Exactly which\n> versions of 7.4 and 8.2 are you running?\n>\n> Have you analyzed all tables in both versions? The row count\n> estimate in 7.4 is much closer to reality than in 8.2:\n>\n> 7.4\n> > -> Index Scan using pnum_idx on event (cost=0.00..3.37 rows=19\n> > width=172) (actual time=0.063..0.063 rows=0 loops=1)\n> > Index Cond: ((pnum)::text = 'AB5819188'::text)\n>\n> 8.2\n> > -> Index Scan using pnum_idx on event (cost=0.00..3147.63\n> > rows=1779 width=171) (actual time=0.030..0.033 rows=1 loops=1)\n> > Index Cond: ((pnum)::text = 'AB5819188'::text)\n>\n> If analyzing the event table doesn't improve the row count estimate\n> then try increasing the statistics target for event.pnum and analyzing\n> again. Example:\n>\n> ALTER TABLE event ALTER pnum SET STATISTICS 100;\n> ANALYZE event;\n>\n> You can set the statistics target as high as 1000 to get more\n> accurate results at the cost of longer ANALYZE times.\n>\n\nSetting statistics to 400 seems to be the sweet spot. Values above\nthat seem to only marginally improve performance. However, I have to\ndisable seqscan in order for the query to be fast. 
Why does the query\nplanner insist on doing a seq scan? Is there anyway to make it prefer\nthe index scan?\n\nThanks,\n\nAlex\n\npostgres 8.2\n\ndb=# EXPLAIN ANALYZE select pnum, event_pid, code_name,\ncode_description, code_mcam, event_date, effective_date, ref_country,\nref_country_legal_code, corresponding_pnum, withdrawal_date,\npayment_date, extension_date, fee_payment_year, requester, free_form\nfrom code inner join event on code_pid = code_pid_fk where pnum\n='US5819188';\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=106.91..210.37 rows=54 width=229) (actual\ntime=11.245..11.253 rows=1 loops=1)\n Hash Cond: (event.code_pid_fk = code.code_pid)\n -> Index Scan using pnum_idx on event (cost=0.00..102.58 rows=54\nwidth=170) (actual time=0.108..0.112 rows=1 loops=1)\n Index Cond: ((pnum)::text = 'US5819188'::text)\n -> Hash (cost=70.85..70.85 rows=2885 width=67) (actual\ntime=11.006..11.006 rows=2885 loops=1)\n -> Seq Scan on code (cost=0.00..70.85 rows=2885 width=67)\n(actual time=0.025..5.392 rows=2885 loops=1)\n Total runtime: 11.429 ms\n(7 rows)\n\ndb=# set enable_seqscan=0;\nSET\ndb=# EXPLAIN ANALYZE select pnum, event_pid, code_name,\ncode_description, code_mcam, event_date, effective_date, ref_country,\nref_country_legal_code, corresponding_pnum, withdrawal_date,\npayment_date, extension_date, fee_payment_year, requester, free_form\nfrom code inner join event on code_pid = code_pid_fk where pnum\n='US5819188';\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..289.72 rows=54 width=229) (actual\ntime=0.068..0.076 rows=1 loops=1)\n -> Index Scan using pnum_idx on event (cost=0.00..102.58 rows=54\nwidth=170) (actual time=0.019..0.020 rows=1 loops=1)\n Index Cond: ((pnum)::text = 'US5819188'::text)\n -> Index Scan using code_pkey on code (cost=0.00..3.45 rows=1\nwidth=67) (actual time=0.041..0.043 rows=1 loops=1)\n Index Cond: (code.code_pid = event.code_pid_fk)\n Total runtime: 0.126 ms\n(6 rows)\n", "msg_date": "Mon, 9 Apr 2007 14:43:55 -0400", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres 8.2 seems to prefer Seq Scan" }, { "msg_contents": "On 4/9/07, Alex Deucher <[email protected]> wrote:\n> On 4/6/07, Michael Fuhr <[email protected]> wrote:\n> > On Fri, Apr 06, 2007 at 04:38:33PM -0400, Alex Deucher wrote:\n> > > One more anomaly between 7.4 and 8.2. DB dumped from 7.4 and loaded\n> > > onto 8.2, both have locale set to C. 8.2 seems to prefer Seq Scans\n> > > for the first query while the ordering in the second query seems to\n> > > perform worse on 8.2. I ran analyze. I've tried with the encoding\n> > > set to UTF-8 and SQL_ASCII; same numbers and plans. Any ideas how to\n> > > improve this?\n> >\n> > Are you sure the data sets are identical? The 7.4 query returned\n> > 0 rows; the 8.2 query returned 1 row. If you're running the same\n> > query against the same data in both versions then at least one of\n> > them appears to be returning the wrong result. Exactly which\n> > versions of 7.4 and 8.2 are you running?\n> >\n> > Have you analyzed all tables in both versions? 
The row count\n> > estimate in 7.4 is much closer to reality than in 8.2:\n> >\n> > 7.4\n> > > -> Index Scan using pnum_idx on event (cost=0.00..3.37 rows=19\n> > > width=172) (actual time=0.063..0.063 rows=0 loops=1)\n> > > Index Cond: ((pnum)::text = 'AB5819188'::text)\n> >\n> > 8.2\n> > > -> Index Scan using pnum_idx on event (cost=0.00..3147.63\n> > > rows=1779 width=171) (actual time=0.030..0.033 rows=1 loops=1)\n> > > Index Cond: ((pnum)::text = 'AB5819188'::text)\n> >\n> > If analyzing the event table doesn't improve the row count estimate\n> > then try increasing the statistics target for event.pnum and analyzing\n> > again. Example:\n> >\n> > ALTER TABLE event ALTER pnum SET STATISTICS 100;\n> > ANALYZE event;\n> >\n> > You can set the statistics target as high as 1000 to get more\n> > accurate results at the cost of longer ANALYZE times.\n> >\n>\n> Setting statistics to 400 seems to be the sweet spot. Values above\n> that seem to only marginally improve performance. However, I have to\n> disable seqscan in order for the query to be fast. Why does the query\n> planner insist on doing a seq scan? Is there anyway to make it prefer\n> the index scan?\n>\n\nFWIW, disabling seqscan also makes the second query much faster:\n\nEXPLAIN ANALYZE select e.pnum, c.code_description, c.code_mcam,\ne.event_pid from event e, code c where c.code_name =\ne.ref_country_legal_code and c.code_country = e.ref_country and e.pnum\n= 'US5819188';\n QUERY\nPLAN\n-------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=104.13..257.65 rows=1 width=73) (actual\ntime=0.038..0.038 rows=0 loops=1)\n Merge Cond: ((c.code_country)::text = \"inner\".\"?column5?\")\n Join Filter: ((c.code_name)::text = (e.ref_country_legal_code)::text)\n -> Index Scan using code_country_idx on code c (cost=0.00..134.00\nrows=2885 width=69) (actual time=0.012..0.012 rows=1 loops=1)\n -> Sort (cost=104.13..104.27 rows=54 width=30) (actual\ntime=0.019..0.021 rows=1 loops=1)\n Sort Key: (e.ref_country)::text\n -> Index Scan using pnum_idx on event e (cost=0.00..102.58\nrows=54 width=30) (actual time=0.010..0.012 rows=1 loops=1)\n Index Cond: ((pnum)::text = 'US5819188'::text)\n Total runtime: 0.072 ms\n(9 rows)\n", "msg_date": "Mon, 9 Apr 2007 15:14:48 -0400", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres 8.2 seems to prefer Seq Scan" }, { "msg_contents": "On 4/9/07, Alex Deucher <[email protected]> wrote:\n> On 4/9/07, Alex Deucher <[email protected]> wrote:\n> > On 4/6/07, Michael Fuhr <[email protected]> wrote:\n> > > On Fri, Apr 06, 2007 at 04:38:33PM -0400, Alex Deucher wrote:\n> > > > One more anomaly between 7.4 and 8.2. DB dumped from 7.4 and loaded\n> > > > onto 8.2, both have locale set to C. 8.2 seems to prefer Seq Scans\n> > > > for the first query while the ordering in the second query seems to\n> > > > perform worse on 8.2. I ran analyze. I've tried with the encoding\n> > > > set to UTF-8 and SQL_ASCII; same numbers and plans. Any ideas how to\n> > > > improve this?\n> > >\n> > > Are you sure the data sets are identical? The 7.4 query returned\n> > > 0 rows; the 8.2 query returned 1 row. If you're running the same\n> > > query against the same data in both versions then at least one of\n> > > them appears to be returning the wrong result. Exactly which\n> > > versions of 7.4 and 8.2 are you running?\n> > >\n> > > Have you analyzed all tables in both versions? 
The row count\n> > > estimate in 7.4 is much closer to reality than in 8.2:\n> > >\n> > > 7.4\n> > > > -> Index Scan using pnum_idx on event (cost=0.00..3.37 rows=19\n> > > > width=172) (actual time=0.063..0.063 rows=0 loops=1)\n> > > > Index Cond: ((pnum)::text = 'AB5819188'::text)\n> > >\n> > > 8.2\n> > > > -> Index Scan using pnum_idx on event (cost=0.00..3147.63\n> > > > rows=1779 width=171) (actual time=0.030..0.033 rows=1 loops=1)\n> > > > Index Cond: ((pnum)::text = 'AB5819188'::text)\n> > >\n> > > If analyzing the event table doesn't improve the row count estimate\n> > > then try increasing the statistics target for event.pnum and analyzing\n> > > again. Example:\n> > >\n> > > ALTER TABLE event ALTER pnum SET STATISTICS 100;\n> > > ANALYZE event;\n> > >\n> > > You can set the statistics target as high as 1000 to get more\n> > > accurate results at the cost of longer ANALYZE times.\n> > >\n> >\n> > Setting statistics to 400 seems to be the sweet spot. Values above\n> > that seem to only marginally improve performance. However, I have to\n> > disable seqscan in order for the query to be fast. Why does the query\n> > planner insist on doing a seq scan? Is there anyway to make it prefer\n> > the index scan?\n> >\n\nOk, it looks like bumping up the the stats to 400 did the trick. It\nseems my test sets were not a good representation of the queries. The\nsets I was using were more of an exception to the rule since they were\nhitting comparatively fewer rows that most others. Thanks to everyone\non the list and IRC for their help.\n\nAlex\n", "msg_date": "Mon, 9 Apr 2007 16:37:33 -0400", "msg_from": "\"Alex Deucher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres 8.2 seems to prefer Seq Scan" } ]
[ { "msg_contents": "Hello,\n\nI am trying to build a application to search CDs and their tracks and I\nam experiencing some performance difficulties.\n\nThe database is very simple at the moment, two tables \"cd\" and \"tracks\"\ncontain the CD-information and their respective tracks. A column\n\"cd_id\" in public.tracks is the foreign key to the cd table.\n\n#v+\n Table \"public.cd\"\n Column | Type | Modifiers\n-------------+-------------------+----------------------------------------------------\n revision | integer | not null default 0\n disc_length | integer |\n via | character varying |\n cd_id | integer | not null default nextval('cd_cd_id_seq'::regclass)\n discid | integer | not null\n title | character varying | not null\n artist | character varying | not null\n year | smallint |\n genre | character varying |\n ext | character varying |\n tstitle | tsvector |\n tsartist | tsvector |\nIndexes:\n \"cd_id_key\" PRIMARY KEY, btree (cd_id)\n \"discid_key\" UNIQUE, btree (discid)\n \"tsartist_cd_idx\" gist (tsartist)\n \"tstitle_cd_idx\" gist (tstitle)\nCheck constraints:\n \"year_check\" CHECK (\"year\" IS NULL OR \"year\" >= 0 AND \"year\" <= 10000)\nTablespace: \"d_separate\"\n\n Table \"public.tracks\"\n Column | Type | Modifiers \n----------+-------------------+-----------------------------------------------------------\n track_id | integer | not null default nextval('tracks_track_id_seq'::regclass)\n cd_id | integer | not null\n title | character varying | \n artist | character varying | \n ext | character varying | \n length | integer | \n number | smallint | not null default 0\n tstitle | tsvector | \n tsartist | tsvector | \nIndexes:\n \"tracks_pkey\" PRIMARY KEY, btree (track_id)\n \"cdid_tracks_idx\" btree (cd_id)\n \"tsartist_tracks_idx\" gist (tsartist)\n \"tstitle_tracks_idx\" gin (tstitle)\nForeign-key constraints:\n \"tracks_cd_id_fkey\" FOREIGN KEY (cd_id) REFERENCES cd(cd_id) ON UPDATE RESTRICT ON DELETE RESTRICT\nTablespace: \"d_separate\"\n\n#v-\n\nI am using tsearch2 to be able to search very fast for CD and track\nartists and titles.\n\nThe database is created only once and I expect SELECTS to happen very\noften, therefore the indexes will not hurt the performance. I also ran\na VACUUM FULL ANALYSE.\n\nThe query that I want to optimise at the moment is the \"Give me all CDs\nwith their tracks, that contain a track with the Title 'foobar'\". 
The\nquery is very expensive, so I try to limit it to 10 cds at once.\n\nMy first idea was:\n\n#+\ncddb=# EXPLAIN ANALYSE SELECT cd.cd_id,cd.title,cd.artist,tracks.title FROM tracks JOIN (SELECT cd.cd_id,cd.artist,cd.title FROM cd JOIN tracks USING (cd_id) WHERE tracks.tstitle @@ plainto_tsquery('simple','education') LIMIT 10) AS cd USING (cd_id);\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..3852.42 rows=11974 width=91) (actual time=310.983..972.739 rows=136 loops=1)\n -> Limit (cost=0.00..121.94 rows=10 width=46) (actual time=264.797..650.178 rows=10 loops=1)\n -> Nested Loop (cost=0.00..227602.43 rows=18665 width=46) (actual time=264.793..650.165 rows=10 loops=1)\n -> Index Scan using tstitle_tracks_idx on tracks (cost=0.00..73402.74 rows=18665 width=4) (actual time=155.516..155.578 rows=10 loops=1)\n Index Cond: (tstitle @@ '''education'''::tsquery)\n -> Index Scan using cd_id_key on cd (cost=0.00..8.25 rows=1 width=46) (actual time=49.452..49.453 rows=1 loops=10)\n Index Cond: (public.cd.cd_id = public.tracks.cd_id)\n -> Index Scan using cdid_tracks_idx on tracks (cost=0.00..358.08 rows=1197 width=27) (actual time=29.588..32.239 rows=14 loops=10)\n Index Cond: (public.tracks.cd_id = cd.cd_id)\n Total runtime: 972.917 ms\n(10 rows)\n#v-\n\n\nThe query is fast enough, but erroneous. If a cd contains more than one\ntrack, that matches the condition, the inner SELECT will return more\nthan one cd and therefore the whole query will shield duplicate cds.\n\nThe solution is to either insert DISTINCT into the above query or use\nEXISTS as condition, but both queries show a terrible performance:\n\n#v+\ncddb=# EXPLAIN ANALYSE SELECT cd.cd_id,cd.title,cd.artist,tracks.title FROM tracks JOIN (SELECT DISTINCT cd.cd_id,cd.artist,cd.title FROM cd JOIN tracks USING (cd_id) WHERE tracks.tstitle @@ plainto_tsquery('simple','education') LIMIT 10) AS cd USING (cd_id);\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=152390.12..156120.71 rows=11974 width=91) (actual time=37356.517..37605.073 rows=137 loops=1)\n -> Limit (cost=152390.12..152390.22 rows=10 width=46) (actual time=37289.598..37289.643 rows=10 loops=1)\n -> Unique (cost=152390.12..152576.77 rows=18665 width=46) (actual time=37289.594..37289.629 rows=10 loops=1)\n -> Sort (cost=152390.12..152436.79 rows=18665 width=46) (actual time=37289.590..37289.601 rows=12 loops=1)\n Sort Key: public.cd.cd_id, public.cd.artist, public.cd.title\n -> Hash Join (cost=78926.50..151066.02 rows=18665 width=46) (actual time=36214.504..37285.974 rows=811 loops=1)\n Hash Cond: (public.tracks.cd_id = public.cd.cd_id)\n -> Bitmap Heap Scan on tracks (cost=536.76..59707.31 rows=18665 width=4) (actual time=0.724..39.253 rows=811 loops=1)\n Recheck Cond: (tstitle @@ '''education'''::tsquery)\n -> Bitmap Index Scan on tstitle_tracks_idx (cost=0.00..532.09 rows=18665 width=0) (actual time=0.492..0.492 rows=811 loops=1)\n Index Cond: (tstitle @@ '''education'''::tsquery)\n -> Hash (cost=49111.33..49111.33 rows=1344433 width=46) (actual time=36211.598..36211.598 rows=1344433 loops=1)\n -> Seq Scan on cd (cost=0.00..49111.33 rows=1344433 width=46) (actual time=31.094..19813.716 rows=1344433 loops=1)\n -> Index Scan using cdid_tracks_idx on tracks (cost=0.00..358.08 rows=1197 
width=27) (actual time=31.294..31.527 rows=14 loops=10)\n Index Cond: (public.tracks.cd_id = cd.cd_id)\n Total runtime: 37614.523 ms\n(16 rows)\n\ncddb=# EXPLAIN ANALYSE SELECT cd.cd_id,cd.artist,cd.title,tracks.title FROM tracks JOIN (SELECT cd.cd_id,cd.artist,cd.title FROM cd WHERE EXISTS (SELECT 1 FROM tracks WHERE tracks.cd_id = cd.cd_id AND tracks.tstitle @@ plainto_tsquery('simple','education')) LIMIT 10) as cd USING (cd_id);\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..10023.37 rows=11974 width=91) (actual time=126.607..40853.563 rows=148 loops=1)\n -> Limit (cost=0.00..6292.89 rows=10 width=46) (actual time=126.587..40853.072 rows=10 loops=1)\n -> Seq Scan on cd (cost=0.00..423018283.46 rows=672216 width=46) (actual time=126.584..40853.035 rows=10 loops=1)\n Filter: (subplan)\n SubPlan\n -> Index Scan using cdid_tracks_idx on tracks (cost=0.00..314.61 rows=1 width=0) (actual time=1.025..1.025 rows=0 loops=39706)\n Index Cond: (cd_id = $0)\n Filter: (tstitle @@ '''education'''::tsquery)\n -> Index Scan using cdid_tracks_idx on tracks (cost=0.00..358.08 rows=1197 width=27) (actual time=0.011..0.029 rows=15 loops=10)\n Index Cond: (tracks.cd_id = cd.cd_id)\n Total runtime: 40853.789 ms\n(11 rows)\n#v-\n\nRephrasing the EXISTS-query as an IN-query did not help the\nperformance, either.\n\nI get the impression, that I am blind and cannot find the obvious\nsolution, do you have any idea how to accomplish, what I am trying?\n\nBest Regards,\n\nTilo\n", "msg_date": "Sat, 7 Apr 2007 12:47:13 +0200", "msg_from": "Tilo Buschmann <[email protected]>", "msg_from_op": true, "msg_subject": "fast DISTINCT or EXIST" }, { "msg_contents": "Can't you use something like this? Or is the distinct on the t.cd_id \nstill causing the major slowdown here?\n\nSELECT ... FROM cd\n JOIN tracks ...\nWHERE cd.id IN (SELECT DISTINCT t.cd_id FROM tracks t\n WHERE t.tstitle @@ plainto_tsquery('simple','education') LIMIT 10)\n\nIf that is your main culprit, you could also use two limits based on the \nfact that there will be at most X songs per cd which would match your \ntitle (my not very educated guess is 3x). Its a bit ugly... but if that \nis what it takes to make postgresql not scan your entire index, so be it...\n\nSELECT ... FROM cd\n JOIN tracks ...\nWHERE cd.id IN (SELECT DISTINCT cd_id FROM (SELECT t.cd_id FROM tracks t\n WHERE t.tstitle @@ plainto_tsquery('simple','education') LIMIT 30) \nas foo LIMIT 10)\n\n\nBest regards,\n\nArjen\n\nOn 7-4-2007 12:47 Tilo Buschmann wrote:\n> Hello,\n> \n> I am trying to build a application to search CDs and their tracks and I\n> am experiencing some performance difficulties.\n> \n> The database is very simple at the moment, two tables \"cd\" and \"tracks\"\n> contain the CD-information and their respective tracks. 
A column\n> \"cd_id\" in public.tracks is the foreign key to the cd table.\n> \n> #v+\n> Table \"public.cd\"\n> Column | Type | Modifiers\n> -------------+-------------------+----------------------------------------------------\n> revision | integer | not null default 0\n> disc_length | integer |\n> via | character varying |\n> cd_id | integer | not null default nextval('cd_cd_id_seq'::regclass)\n> discid | integer | not null\n> title | character varying | not null\n> artist | character varying | not null\n> year | smallint |\n> genre | character varying |\n> ext | character varying |\n> tstitle | tsvector |\n> tsartist | tsvector |\n> Indexes:\n> \"cd_id_key\" PRIMARY KEY, btree (cd_id)\n> \"discid_key\" UNIQUE, btree (discid)\n> \"tsartist_cd_idx\" gist (tsartist)\n> \"tstitle_cd_idx\" gist (tstitle)\n> Check constraints:\n> \"year_check\" CHECK (\"year\" IS NULL OR \"year\" >= 0 AND \"year\" <= 10000)\n> Tablespace: \"d_separate\"\n> \n> Table \"public.tracks\"\n> Column | Type | Modifiers \n> ----------+-------------------+-----------------------------------------------------------\n> track_id | integer | not null default nextval('tracks_track_id_seq'::regclass)\n> cd_id | integer | not null\n> title | character varying | \n> artist | character varying | \n> ext | character varying | \n> length | integer | \n> number | smallint | not null default 0\n> tstitle | tsvector | \n> tsartist | tsvector | \n> Indexes:\n> \"tracks_pkey\" PRIMARY KEY, btree (track_id)\n> \"cdid_tracks_idx\" btree (cd_id)\n> \"tsartist_tracks_idx\" gist (tsartist)\n> \"tstitle_tracks_idx\" gin (tstitle)\n> Foreign-key constraints:\n> \"tracks_cd_id_fkey\" FOREIGN KEY (cd_id) REFERENCES cd(cd_id) ON UPDATE RESTRICT ON DELETE RESTRICT\n> Tablespace: \"d_separate\"\n> \n> #v-\n> \n> I am using tsearch2 to be able to search very fast for CD and track\n> artists and titles.\n> \n> The database is created only once and I expect SELECTS to happen very\n> often, therefore the indexes will not hurt the performance. I also ran\n> a VACUUM FULL ANALYSE.\n> \n> The query that I want to optimise at the moment is the \"Give me all CDs\n> with their tracks, that contain a track with the Title 'foobar'\". 
The\n> query is very expensive, so I try to limit it to 10 cds at once.\n> \n> My first idea was:\n> \n> #+\n> cddb=# EXPLAIN ANALYSE SELECT cd.cd_id,cd.title,cd.artist,tracks.title FROM tracks JOIN (SELECT cd.cd_id,cd.artist,cd.title FROM cd JOIN tracks USING (cd_id) WHERE tracks.tstitle @@ plainto_tsquery('simple','education') LIMIT 10) AS cd USING (cd_id);\n> QUERY PLAN \n> ----------------------------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=0.00..3852.42 rows=11974 width=91) (actual time=310.983..972.739 rows=136 loops=1)\n> -> Limit (cost=0.00..121.94 rows=10 width=46) (actual time=264.797..650.178 rows=10 loops=1)\n> -> Nested Loop (cost=0.00..227602.43 rows=18665 width=46) (actual time=264.793..650.165 rows=10 loops=1)\n> -> Index Scan using tstitle_tracks_idx on tracks (cost=0.00..73402.74 rows=18665 width=4) (actual time=155.516..155.578 rows=10 loops=1)\n> Index Cond: (tstitle @@ '''education'''::tsquery)\n> -> Index Scan using cd_id_key on cd (cost=0.00..8.25 rows=1 width=46) (actual time=49.452..49.453 rows=1 loops=10)\n> Index Cond: (public.cd.cd_id = public.tracks.cd_id)\n> -> Index Scan using cdid_tracks_idx on tracks (cost=0.00..358.08 rows=1197 width=27) (actual time=29.588..32.239 rows=14 loops=10)\n> Index Cond: (public.tracks.cd_id = cd.cd_id)\n> Total runtime: 972.917 ms\n> (10 rows)\n> #v-\n> \n> \n> The query is fast enough, but erroneous. If a cd contains more than one\n> track, that matches the condition, the inner SELECT will return more\n> than one cd and therefore the whole query will shield duplicate cds.\n> \n> The solution is to either insert DISTINCT into the above query or use\n> EXISTS as condition, but both queries show a terrible performance:\n> \n> #v+\n> cddb=# EXPLAIN ANALYSE SELECT cd.cd_id,cd.title,cd.artist,tracks.title FROM tracks JOIN (SELECT DISTINCT cd.cd_id,cd.artist,cd.title FROM cd JOIN tracks USING (cd_id) WHERE tracks.tstitle @@ plainto_tsquery('simple','education') LIMIT 10) AS cd USING (cd_id);\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=152390.12..156120.71 rows=11974 width=91) (actual time=37356.517..37605.073 rows=137 loops=1)\n> -> Limit (cost=152390.12..152390.22 rows=10 width=46) (actual time=37289.598..37289.643 rows=10 loops=1)\n> -> Unique (cost=152390.12..152576.77 rows=18665 width=46) (actual time=37289.594..37289.629 rows=10 loops=1)\n> -> Sort (cost=152390.12..152436.79 rows=18665 width=46) (actual time=37289.590..37289.601 rows=12 loops=1)\n> Sort Key: public.cd.cd_id, public.cd.artist, public.cd.title\n> -> Hash Join (cost=78926.50..151066.02 rows=18665 width=46) (actual time=36214.504..37285.974 rows=811 loops=1)\n> Hash Cond: (public.tracks.cd_id = public.cd.cd_id)\n> -> Bitmap Heap Scan on tracks (cost=536.76..59707.31 rows=18665 width=4) (actual time=0.724..39.253 rows=811 loops=1)\n> Recheck Cond: (tstitle @@ '''education'''::tsquery)\n> -> Bitmap Index Scan on tstitle_tracks_idx (cost=0.00..532.09 rows=18665 width=0) (actual time=0.492..0.492 rows=811 loops=1)\n> Index Cond: (tstitle @@ '''education'''::tsquery)\n> -> Hash (cost=49111.33..49111.33 rows=1344433 width=46) (actual time=36211.598..36211.598 rows=1344433 loops=1)\n> -> Seq Scan on cd (cost=0.00..49111.33 rows=1344433 width=46) (actual time=31.094..19813.716 rows=1344433 loops=1)\n> -> Index Scan 
using cdid_tracks_idx on tracks (cost=0.00..358.08 rows=1197 width=27) (actual time=31.294..31.527 rows=14 loops=10)\n> Index Cond: (public.tracks.cd_id = cd.cd_id)\n> Total runtime: 37614.523 ms\n> (16 rows)\n> \n> cddb=# EXPLAIN ANALYSE SELECT cd.cd_id,cd.artist,cd.title,tracks.title FROM tracks JOIN (SELECT cd.cd_id,cd.artist,cd.title FROM cd WHERE EXISTS (SELECT 1 FROM tracks WHERE tracks.cd_id = cd.cd_id AND tracks.tstitle @@ plainto_tsquery('simple','education')) LIMIT 10) as cd USING (cd_id);\n> QUERY PLAN \n> --------------------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=0.00..10023.37 rows=11974 width=91) (actual time=126.607..40853.563 rows=148 loops=1)\n> -> Limit (cost=0.00..6292.89 rows=10 width=46) (actual time=126.587..40853.072 rows=10 loops=1)\n> -> Seq Scan on cd (cost=0.00..423018283.46 rows=672216 width=46) (actual time=126.584..40853.035 rows=10 loops=1)\n> Filter: (subplan)\n> SubPlan\n> -> Index Scan using cdid_tracks_idx on tracks (cost=0.00..314.61 rows=1 width=0) (actual time=1.025..1.025 rows=0 loops=39706)\n> Index Cond: (cd_id = $0)\n> Filter: (tstitle @@ '''education'''::tsquery)\n> -> Index Scan using cdid_tracks_idx on tracks (cost=0.00..358.08 rows=1197 width=27) (actual time=0.011..0.029 rows=15 loops=10)\n> Index Cond: (tracks.cd_id = cd.cd_id)\n> Total runtime: 40853.789 ms\n> (11 rows)\n> #v-\n> \n> Rephrasing the EXISTS-query as an IN-query did not help the\n> performance, either.\n> \n> I get the impression, that I am blind and cannot find the obvious\n> solution, do you have any idea how to accomplish, what I am trying?\n> \n> Best Regards,\n> \n> Tilo\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n", "msg_date": "Sat, 07 Apr 2007 14:32:37 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fast DISTINCT or EXIST" }, { "msg_contents": "Arjen van der Meijden <[email protected]> writes:\n> If that is your main culprit, you could also use two limits based on the \n> fact that there will be at most X songs per cd which would match your \n> title (my not very educated guess is 3x). Its a bit ugly... but if that \n> is what it takes to make postgresql not scan your entire index, so be it...\n\n> SELECT ... FROM cd\n> JOIN tracks ...\n> WHERE cd.id IN (SELECT DISTINCT cd_id FROM (SELECT t.cd_id FROM tracks t\n> WHERE t.tstitle @@ plainto_tsquery('simple','education') LIMIT 30) \n> as foo LIMIT 10)\n\nI think that's the only way. There is no plan type in Postgres that\nwill generate unique-ified output without scanning the whole input\nfirst, except for Uniq on pre-sorted input, which we can't use here\nbecause the tsearch scan isn't going to deliver the rows in cd_id order.\n\nI can see how to build one: make a variant of HashAggregate that returns\neach input row immediately after hashing it, *if* it isn't a duplicate\nof one already in the hash table. 
But it'd be a lot of work for what\nseems a rather specialized need.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 07 Apr 2007 11:54:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fast DISTINCT or EXIST " }, { "msg_contents": "Hi everyone,\n\nOn Sat, 07 Apr 2007 11:54:08 -0400\nTom Lane <[email protected]> wrote:\n\n> Arjen van der Meijden <[email protected]> writes:\n> > If that is your main culprit, you could also use two limits based on the \n> > fact that there will be at most X songs per cd which would match your \n> > title (my not very educated guess is 3x). Its a bit ugly... but if that \n> > is what it takes to make postgresql not scan your entire index, so be it...\n> \n> > SELECT ... FROM cd\n> > JOIN tracks ...\n> > WHERE cd.id IN (SELECT DISTINCT cd_id FROM (SELECT t.cd_id FROM tracks t\n> > WHERE t.tstitle @@ plainto_tsquery('simple','education') LIMIT 30) \n> > as foo LIMIT 10)\n> \n> I think that's the only way. There is no plan type in Postgres that\n> will generate unique-ified output without scanning the whole input\n> first, except for Uniq on pre-sorted input, which we can't use here\n> because the tsearch scan isn't going to deliver the rows in cd_id order.\n\nUnfortunately, the query above will definitely not work correctly, if\nsomeone searches for \"a\" or \"the\". \n\nThe correct query does not perform as well as I hoped. \n\n#v+\ncddb=# EXPLAIN ANALYSE SELECT cd.cd_id,cd.artist,cd.title,tracks.title FROM cd JOIN tracks USING (cd_id) WHERE cd_id IN (SELECT DISTINCT tracks.cd_id FROM tracks WHERE tracks.tstitle @@ plainto_tsquery('simple','sympathy') LIMIT 10);\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=61031.41..64906.58 rows=139 width=69) (actual time=31236.562..31810.940 rows=166 loops=1)\n -> Nested Loop (cost=61031.41..61176.20 rows=10 width=50) (actual time=31208.649..31388.289 rows=10 loops=1)\n -> Limit (cost=61031.41..61089.74 rows=10 width=4) (actual time=31185.972..31186.024 rows=10 loops=1)\n -> Unique (cost=61031.41..61124.74 rows=16 width=4) (actual time=31185.967..31186.006 rows=10 loops=1)\n -> Sort (cost=61031.41..61078.07 rows=18665 width=4) (actual time=31185.961..31185.977 rows=11 loops=1)\n Sort Key: public.tracks.cd_id\n -> Bitmap Heap Scan on tracks (cost=536.76..59707.31 rows=18665 width=4) (actual time=146.222..30958.057 rows=1677 loops=1)\n Recheck Cond: (tstitle @@ '''sympathy'''::tsquery)\n -> Bitmap Index Scan on tstitle_tracks_idx (cost=0.00..532.09 rows=18665 width=0) (actual time=126.328..126.328 rows=1677 loops=1)\n Index Cond: (tstitle @@ '''sympathy'''::tsquery)\n -> Index Scan using cd_id_key on cd (cost=0.00..8.62 rows=1 width=46) (actual time=20.218..20.219 rows=1 loops=10)\n Index Cond: (cd.cd_id = \"IN_subquery\".cd_id)\n -> Index Scan using cdid_tracks_idx on tracks (cost=0.00..358.08 rows=1197 width=27) (actual time=39.935..42.247 rows=17 loops=10)\n Index Cond: (cd.cd_id = public.tracks.cd_id)\n Total runtime: 31811.256 ms\n(15 rows)\n#v-\n\nIt gets better when the rows are in memory (down to 10.452 ms), but\nMurphy tells me, that the content that I need will never be in memory.\n\nI think I disregarded this variant at first, because it limits the\npossibility to restrict the cd artist and title.\n\n> I can see how to build one: make a variant of HashAggregate that returns\n> each input row immediately after hashing 
it, *if* it isn't a duplicate\n> of one already in the hash table. But it'd be a lot of work for what\n> seems a rather specialized need.\n\nD'oh.\n\nActually, I hoped to find an alternative, that does not involve\nDISTINCT.\n\nBest Regards,\n\nTilo\n", "msg_date": "Sat, 7 Apr 2007 18:24:07 +0200", "msg_from": "Tilo Buschmann <[email protected]>", "msg_from_op": true, "msg_subject": "Re: fast DISTINCT or EXIST" }, { "msg_contents": "Tilo Buschmann <[email protected]> writes:\n>> Arjen van der Meijden <[email protected]> writes:\n>>> SELECT ... FROM cd\n>>> JOIN tracks ...\n>>> WHERE cd.id IN (SELECT DISTINCT cd_id FROM (SELECT t.cd_id FROM tracks t\n>>> WHERE t.tstitle @@ plainto_tsquery('simple','education') LIMIT 30) \n>>> as foo LIMIT 10)\n\n> Unfortunately, the query above will definitely not work correctly, if\n> someone searches for \"a\" or \"the\". \n\nWell, the \"incorrectness\" is only that it might deliver fewer than the\nhoped-for ten CDs ... but that was a completely arbitrary cutoff anyway,\nno? I think in practice this'd give perfectly acceptable results.\n\n> Actually, I hoped to find an alternative, that does not involve\n> DISTINCT.\n\nYou could try playing around with GROUP BY rather than DISTINCT; those\nare separate code paths and will probably give you different plans.\nBut I don't think you'll find that GROUP BY does any better on this\nparticular measure of yielding rows before the full input has been\nscanned.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 07 Apr 2007 12:39:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fast DISTINCT or EXIST " }, { "msg_contents": "On 7-4-2007 18:24 Tilo Buschmann wrote:\n> Unfortunately, the query above will definitely not work correctly, if\n> someone searches for \"a\" or \"the\". \n\nThat are two words you may want to consider not searching on at all.\n\nAs Tom said, its not very likely to be fixed in PostgreSQL. But you can \nalways consider using application logic (or a pgpsql function, you could \neven use a set returning function to replace the double-limit subselects \nin your in-statement) which will automatically fetch more records when \nthe initial guess turns out to be wrong, obviously using something like \na NOT IN to remove the initially returned cd.id's for the next batches.\nThen again, even 'a' or 'the' will not likely be in *all* tracks of a \ncd, so you can also use the 'average amount of tracks per cd' (about 10 \nor 11?) as your multiplier rather than my initial 3. Obviously you'll \nloose performance with each increment of that value.\n\nBest regards,\n\nArjen\n", "msg_date": "Sat, 07 Apr 2007 19:28:52 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: fast DISTINCT or EXIST" } ]
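Arjen's retry idea above can be packaged as a set-returning PL/pgSQL function: start with a small LIMIT on the tsearch scan and keep widening it until enough distinct cd_id values have been seen. The sketch below is only an illustration of that approach, not something posted in the thread; the function name, the initial multiplier of 3 and the cap of p_wanted * 100 are arbitrary choices, and a word that matches most tracks can still force a fairly wide scan before the cap stops the loop.

CREATE OR REPLACE FUNCTION first_matching_cds(p_query text, p_wanted integer)
RETURNS SETOF integer AS $$
DECLARE
    v_limit    integer := p_wanted * 3;  -- initial guess: roughly 3 matching tracks per cd
    v_distinct integer;
    r          record;
BEGIN
    -- widen the inner LIMIT until it covers at least p_wanted distinct cds
    LOOP
        SELECT count(DISTINCT sub.cd_id) INTO v_distinct
          FROM (SELECT t.cd_id
                  FROM tracks t
                 WHERE t.tstitle @@ plainto_tsquery('simple', p_query)
                 LIMIT v_limit) AS sub;
        EXIT WHEN v_distinct >= p_wanted OR v_limit >= p_wanted * 100;
        v_limit := v_limit * 2;
    END LOOP;

    -- emit at most p_wanted distinct cd ids found inside that window
    FOR r IN
        SELECT DISTINCT sub.cd_id
          FROM (SELECT t.cd_id
                  FROM tracks t
                 WHERE t.tstitle @@ plainto_tsquery('simple', p_query)
                 LIMIT v_limit) AS sub
         LIMIT p_wanted
    LOOP
        RETURN NEXT r.cd_id;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql STABLE;

Used as WHERE cd.cd_id IN (SELECT * FROM first_matching_cds('education', 10)), it should let the outer join stay a cheap nested loop over the cd_id indexes while still returning ten distinct cds when one cd has several matching tracks.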
[ { "msg_contents": "Hi,\nI am trying to figure out how to debug a performance problem / use psql explain.\nThe table in question is:\n# \\d word_association;\n Table \"public.word_association\"\n Column | Type | Modifiers\n--------+------------------------+--------------------\n word1 | character varying(128) | not null\n word2 | character varying(128) | not null\n count | integer | not null default 0\nIndexes:\n \"word1_word2_comb_unique\" unique, btree (word1, word2)\n \"word1_hash_index\" hash (word1)\n \"word2_hash_index\" hash (word2)\n \"word_association_count_index\" btree (count)\n \"word_association_index1_1\" btree (word1)\n \"word_association_index2_1\" btree (word2)\n\nIt has multiple indices since i wanted to see which one the planner choses.\n\n\n# explain select * FROM word_association WHERE (word1 = 'bdss' OR\nword2 = 'bdss') AND count >= 10;\n QUERY PLAN\n------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on word_association (cost=11.53..1192.09 rows=155 width=22)\n Recheck Cond: (((word1)::text = 'bdss'::text) OR ((word2)::text =\n'bdss'::text))\n Filter: (count >= 10)\n -> BitmapOr (cost=11.53..11.53 rows=364 width=0)\n -> Bitmap Index Scan on word_association_index1_1\n(cost=0.00..5.79 rows=190 width=0)\n Index Cond: ((word1)::text = 'bdss'::text)\n -> Bitmap Index Scan on word_association_index2_1\n(cost=0.00..5.67 rows=174 width=0)\n Index Cond: ((word2)::text = 'bdss'::text)\n(8 rows)\n\nThe questions:\n1. i can undestand where the cost=11.53 came from but where did the\n1192.09 come form? The values are in milli right ?\n2. the query takes in reality much longer than 1 second.\n\nIn short, it feels like something is very wrong here (i tried vacuum\nanalyze and it didn't do much diff).\nany ideas ?\n", "msg_date": "Mon, 9 Apr 2007 02:09:53 -0700", "msg_from": "\"s d\" <[email protected]>", "msg_from_op": true, "msg_subject": "Beginner Question" }, { "msg_contents": "On Monday 09 April 2007 05:09:53 s d wrote:\n> Hi,\n> I am trying to figure out how to debug a performance problem / use psql\n> explain. 
The table in question is:\n> # \\d word_association;\n> Table \"public.word_association\"\n> Column | Type | Modifiers\n> --------+------------------------+--------------------\n> word1 | character varying(128) | not null\n> word2 | character varying(128) | not null\n> count | integer | not null default 0\n> Indexes:\n> \"word1_word2_comb_unique\" unique, btree (word1, word2)\n> \"word1_hash_index\" hash (word1)\n> \"word2_hash_index\" hash (word2)\n> \"word_association_count_index\" btree (count)\n> \"word_association_index1_1\" btree (word1)\n> \"word_association_index2_1\" btree (word2)\n>\n> It has multiple indices since i wanted to see which one the planner choses.\n>\n>\n> # explain select * FROM word_association WHERE (word1 = 'bdss' OR\n> word2 = 'bdss') AND count >= 10;\n> QUERY PLAN\n> ---------------------------------------------------------------------------\n>--------------------- Bitmap Heap Scan on word_association \n> (cost=11.53..1192.09 rows=155 width=22) Recheck Cond: (((word1)::text =\n> 'bdss'::text) OR ((word2)::text = 'bdss'::text))\n> Filter: (count >= 10)\n> -> BitmapOr (cost=11.53..11.53 rows=364 width=0)\n> -> Bitmap Index Scan on word_association_index1_1\n> (cost=0.00..5.79 rows=190 width=0)\n> Index Cond: ((word1)::text = 'bdss'::text)\n> -> Bitmap Index Scan on word_association_index2_1\n> (cost=0.00..5.67 rows=174 width=0)\n> Index Cond: ((word2)::text = 'bdss'::text)\n> (8 rows)\n>\n> The questions:\n> 1. i can undestand where the cost=11.53 came from but where did the\n> 1192.09 come form? The values are in milli right ?\n> 2. the query takes in reality much longer than 1 second.\n>\n> In short, it feels like something is very wrong here (i tried vacuum\n> analyze and it didn't do much diff).\n> any ideas ?\n\nYou need an index on (word1, word2, count). In your current setup it will have \nto scan all rows that satisfy word1 and word2 to see if count >= 10.\n\njan\n\n\n-- \n--------------------------------------------------------------\nJan de Visser                     [email protected]\n\n                Baruk Khazad! Khazad ai-menu!\n--------------------------------------------------------------\n", "msg_date": "Mon, 9 Apr 2007 07:51:53 -0400", "msg_from": "\"Jan de Visser\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Beginner Question" }, { "msg_contents": "\"s d\" <[email protected]> writes:\n> 1. i can undestand where the cost=11.53 came from but where did the\n> 1192.09 come form? The values are in milli right ?\n\nNo, the unit of estimated cost is 1 disk page fetch. 
See\nhttp://www.postgresql.org/docs/8.2/static/using-explain.html\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 09 Apr 2007 10:49:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Beginner Question " }, { "msg_contents": "Hi Jan,\nAdding this Index slowed down things by a factor of 4.\n\nAlso, the performance is so horrible (example bellow) that i am\ncertain i am doing something wrong.\n\nDoes the following explain gives any ideas ?\n\nThanks\n\n=# EXPLAIN ANALYZE select * from word_association where (word1 ='the'\nor word2='the') and count > 10;\n\nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on word_association (cost=250.86..7256.59 rows=4624\nwidth=22) (actual time=13.461..211.568 rows=6601 loops=1)\n Recheck Cond: (((word1)::text = 'the'::text) OR ((word2)::text =\n'the'::text))\n Filter: (count > 10)\n -> BitmapOr (cost=250.86..250.86 rows=12243 width=0) (actual\ntime=9.052..9.052 rows=0 loops=1)\n -> Bitmap Index Scan on word_association_index1_1\n(cost=0.00..153.20 rows=7579 width=0) (actual time=5.786..5.786\nrows=7232 loops=1)\n Index Cond: ((word1)::text = 'the'::text)\n -> Bitmap Index Scan on word_association_index2_1\n(cost=0.00..95.34 rows=4664 width=0) (actual time=3.253..3.253\nrows=4073 loops=1)\n Index Cond: ((word2)::text = 'the'::text)\n Total runtime: 219.987 ms\n(9 rows)\n\n\nOn 4/9/07, Jan de Visser <[email protected]> wrote:\n> On Monday 09 April 2007 05:09:53 s d wrote:\n> > Hi,\n> > I am trying to figure out how to debug a performance problem / use psql\n> > explain. The table in question is:\n> > # \\d word_association;\n> > Table \"public.word_association\"\n> > Column | Type | Modifiers\n> > --------+------------------------+--------------------\n> > word1 | character varying(128) | not null\n> > word2 | character varying(128) | not null\n> > count | integer | not null default 0\n> > Indexes:\n> > \"word1_word2_comb_unique\" unique, btree (word1, word2)\n> > \"word1_hash_index\" hash (word1)\n> > \"word2_hash_index\" hash (word2)\n> > \"word_association_count_index\" btree (count)\n> > \"word_association_index1_1\" btree (word1)\n> > \"word_association_index2_1\" btree (word2)\n> >\n> > It has multiple indices since i wanted to see which one the planner choses.\n> >\n> >\n> > # explain select * FROM word_association WHERE (word1 = 'bdss' OR\n> > word2 = 'bdss') AND count >= 10;\n> > QUERY PLAN\n> > ---------------------------------------------------------------------------\n> >--------------------- Bitmap Heap Scan on word_association\n> > (cost=11.53..1192.09 rows=155 width=22) Recheck Cond: (((word1)::text =\n> > 'bdss'::text) OR ((word2)::text = 'bdss'::text))\n> > Filter: (count >= 10)\n> > -> BitmapOr (cost=11.53..11.53 rows=364 width=0)\n> > -> Bitmap Index Scan on word_association_index1_1\n> > (cost=0.00..5.79 rows=190 width=0)\n> > Index Cond: ((word1)::text = 'bdss'::text)\n> > -> Bitmap Index Scan on word_association_index2_1\n> > (cost=0.00..5.67 rows=174 width=0)\n> > Index Cond: ((word2)::text = 'bdss'::text)\n> > (8 rows)\n> >\n> > The questions:\n> > 1. i can undestand where the cost=11.53 came from but where did the\n> > 1192.09 come form? The values are in milli right ?\n> > 2. 
the query takes in reality much longer than 1 second.\n> >\n> > In short, it feels like something is very wrong here (i tried vacuum\n> > analyze and it didn't do much diff).\n> > any ideas ?\n>\n> You need an index on (word1, word2, count). In your current setup it will have\n> to scan all rows that satisfy word1 and word2 to see if count >= 10.\n>\n> jan\n>\n>\n> --\n> --------------------------------------------------------------\n> Jan de Visser [email protected]\n>\n> Baruk Khazad! Khazad ai-menu!\n> --------------------------------------------------------------\n>\n", "msg_date": "Mon, 9 Apr 2007 17:45:53 -0700", "msg_from": "\"s d\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Beginner Question" }, { "msg_contents": "Yeah, I have a lot of similar problems where an index that I have to \nspeed up one query is used in another query where it actually slows \nit down. Is there any way to ignore indexes for certain queries? \nWe've been appending empty strings and adding zero's to the column \ndata to force it into a filter, but it's a messy hack. I've tried \nordering the joins in the the most efficent way with a \njoin_collapse_limit of 1, but it still does uses this index in \nparallel with searching an index on another table (i guess the \nplanner figures it's saving some time up front).\n\n-Mike\n\nOn Apr 9, 2007, at 8:45 PM, s d wrote:\n\n> Hi Jan,\n> Adding this Index slowed down things by a factor of 4.\n>\n> Also, the performance is so horrible (example bellow) that i am\n> certain i am doing something wrong.\n>\n> Does the following explain gives any ideas ?\n>\n> Thanks\n>\n> =# EXPLAIN ANALYZE select * from word_association where (word1 ='the'\n> or word2='the') and count > 10;\n>\n> QUERY PLAN\n> ---------------------------------------------------------------------- \n> ---------------------------------------------------------------------- \n> ----\n> Bitmap Heap Scan on word_association (cost=250.86..7256.59 rows=4624\n> width=22) (actual time=13.461..211.568 rows=6601 loops=1)\n> Recheck Cond: (((word1)::text = 'the'::text) OR ((word2)::text =\n> 'the'::text))\n> Filter: (count > 10)\n> -> BitmapOr (cost=250.86..250.86 rows=12243 width=0) (actual\n> time=9.052..9.052 rows=0 loops=1)\n> -> Bitmap Index Scan on word_association_index1_1\n> (cost=0.00..153.20 rows=7579 width=0) (actual time=5.786..5.786\n> rows=7232 loops=1)\n> Index Cond: ((word1)::text = 'the'::text)\n> -> Bitmap Index Scan on word_association_index2_1\n> (cost=0.00..95.34 rows=4664 width=0) (actual time=3.253..3.253\n> rows=4073 loops=1)\n> Index Cond: ((word2)::text = 'the'::text)\n> Total runtime: 219.987 ms\n> (9 rows)\n>\n>\n> On 4/9/07, Jan de Visser <[email protected]> wrote:\n>> On Monday 09 April 2007 05:09:53 s d wrote:\n>> > Hi,\n>> > I am trying to figure out how to debug a performance problem / \n>> use psql\n>> > explain. 
The table in question is:\n>> > # \\d word_association;\n>> > Table \"public.word_association\"\n>> > Column | Type | Modifiers\n>> > --------+------------------------+--------------------\n>> > word1 | character varying(128) | not null\n>> > word2 | character varying(128) | not null\n>> > count | integer | not null default 0\n>> > Indexes:\n>> > \"word1_word2_comb_unique\" unique, btree (word1, word2)\n>> > \"word1_hash_index\" hash (word1)\n>> > \"word2_hash_index\" hash (word2)\n>> > \"word_association_count_index\" btree (count)\n>> > \"word_association_index1_1\" btree (word1)\n>> > \"word_association_index2_1\" btree (word2)\n>> >\n>> > It has multiple indices since i wanted to see which one the \n>> planner choses.\n>> >\n>> >\n>> > # explain select * FROM word_association WHERE (word1 = 'bdss' OR\n>> > word2 = 'bdss') AND count >= 10;\n>> > QUERY PLAN\n>> > \n>> --------------------------------------------------------------------- \n>> ------\n>> >--------------------- Bitmap Heap Scan on word_association\n>> > (cost=11.53..1192.09 rows=155 width=22) Recheck Cond: \n>> (((word1)::text =\n>> > 'bdss'::text) OR ((word2)::text = 'bdss'::text))\n>> > Filter: (count >= 10)\n>> > -> BitmapOr (cost=11.53..11.53 rows=364 width=0)\n>> > -> Bitmap Index Scan on word_association_index1_1\n>> > (cost=0.00..5.79 rows=190 width=0)\n>> > Index Cond: ((word1)::text = 'bdss'::text)\n>> > -> Bitmap Index Scan on word_association_index2_1\n>> > (cost=0.00..5.67 rows=174 width=0)\n>> > Index Cond: ((word2)::text = 'bdss'::text)\n>> > (8 rows)\n>> >\n>> > The questions:\n>> > 1. i can undestand where the cost=11.53 came from but where did the\n>> > 1192.09 come form? The values are in milli right ?\n>> > 2. the query takes in reality much longer than 1 second.\n>> >\n>> > In short, it feels like something is very wrong here (i tried \n>> vacuum\n>> > analyze and it didn't do much diff).\n>> > any ideas ?\n>>\n>> You need an index on (word1, word2, count). In your current setup \n>> it will have\n>> to scan all rows that satisfy word1 and word2 to see if count >= 10.\n>>\n>> jan\n>>\n>>\n>> --\n>> --------------------------------------------------------------\n>> Jan de Visser [email protected]\n>>\n>> Baruk Khazad! Khazad ai-menu!\n>> --------------------------------------------------------------\n>>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n>\n\n", "msg_date": "Tue, 10 Apr 2007 09:15:33 -0400", "msg_from": "Mike Gargano <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Beginner Question" }, { "msg_contents": "In your first post you said that the query is taking much longer than a\nsecond, and in your second post you say the performance is horrible, but\nexplain analyze shows the query runs in 219 milliseconds, which doesn't seem\ntoo bad to me. I wonder if the slow part for you is returning all the rows\nto the client? How are you running this query? (JDBC, ODBC, C library?)\nDo you really need all the rows? 
Maybe you could use a cursor to page\nthrough the rows?\n\nDave\n\n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of s d\n> Sent: Monday, April 09, 2007 7:46 PM\n> To: Jan de Visser\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Beginner Question\n> \n> \n> Hi Jan,\n> Adding this Index slowed down things by a factor of 4.\n> \n> Also, the performance is so horrible (example bellow) that i am\n> certain i am doing something wrong.\n> \n> Does the following explain gives any ideas ?\n> \n> Thanks\n> \n> =# EXPLAIN ANALYZE select * from word_association where (word1 ='the'\n> or word2='the') and count > 10;\n> \n> QUERY PLAN\n> --------------------------------------------------------------\n> --------------------------------------------------------------\n> --------------------\n> Bitmap Heap Scan on word_association (cost=250.86..7256.59 rows=4624\n> width=22) (actual time=13.461..211.568 rows=6601 loops=1)\n> Recheck Cond: (((word1)::text = 'the'::text) OR ((word2)::text =\n> 'the'::text))\n> Filter: (count > 10)\n> -> BitmapOr (cost=250.86..250.86 rows=12243 width=0) (actual\n> time=9.052..9.052 rows=0 loops=1)\n> -> Bitmap Index Scan on word_association_index1_1\n> (cost=0.00..153.20 rows=7579 width=0) (actual time=5.786..5.786\n> rows=7232 loops=1)\n> Index Cond: ((word1)::text = 'the'::text)\n> -> Bitmap Index Scan on word_association_index2_1\n> (cost=0.00..95.34 rows=4664 width=0) (actual time=3.253..3.253\n> rows=4073 loops=1)\n> Index Cond: ((word2)::text = 'the'::text)\n> Total runtime: 219.987 ms\n> (9 rows)\n\n", "msg_date": "Tue, 10 Apr 2007 09:11:57 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Beginner Question" } ]
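Two concrete follow-ups to the suggestions in this thread, written as a sketch rather than a tested fix: composite indexes that mirror each arm of the OR so the count filter can be checked inside the index (a variation on Jan's single three-column index), and Dave's idea of paging through the result with a cursor instead of pulling every row at once. The index and cursor names are invented and the batch size of 100 is arbitrary.

-- one index per arm of the OR, each also covering the count column
CREATE INDEX word_assoc_w1_count_idx ON word_association (word1, count);
CREATE INDEX word_assoc_w2_count_idx ON word_association (word2, count);

-- fetch the matches in pages of 100 rows
BEGIN;
DECLARE word_cur CURSOR FOR
    SELECT word1, word2, count
      FROM word_association
     WHERE (word1 = 'the' OR word2 = 'the')
       AND count > 10;
FETCH 100 FROM word_cur;   -- repeat until the FETCH returns no rows
CLOSE word_cur;
COMMIT;

Whether the extra indexes pay off depends on how selective the count filter is; as Dave notes, much of the elapsed time may simply be spent shipping the several thousand result rows to the client.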
[ { "msg_contents": "I have an odd performance issue on 8.2 that I'd thought I'd document\nhere. I have a workaround, but I'm if there is something that I'm not\nseeing.\n\nok, for starters:\nI have a large table that is basically organized like this:\ncreate table big\n(\n key1 int,\n key2 int,\n ts timestamp\n [other fields]\n);\n\nand a view most_recent_big which lists for each combination of key1\nand key2, the '[other fields]' that are behind the highest (most\nrecent) timestamp. The original view implementation involved a self\njoin which is the classic sql approach to pulling values from a\ndenormalized table (the real solution of course is to normalize the\ndata but I can't do that for various reasons). This wasn't very fast,\nso I wrote a custom aggregate to optimize the view (there are usuallly\nvery small #s of records for key1, key2 pair:\n\ncreate view latest_big_view as\n select key1, key2, max_other_fields[other fields]\n from big\n group by key1, key2;\n\nThis worked very well, but sometimes the index on key1, key2 does not\nget utilized when joining against latest_big_view. Let's say I have a\nnumber of key1, key2 pairs in another table:\n\nfor example:\nselect * from foo, latest_big_view using (key1, key2);\nbreaks down.\n\nhere is a example of the 'breakdown' plan on real tables. selecting a\nsingle record from the view is very fast...1ms or less. The join\ncan't 'see through' the view to filter the index.\n\ndev20400=# explain analyze select * from foo join latest_download\nusing (host_id, software_binary_id);\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=15.35..4616.65 rows=25 width=90) (actual\ntime=229.623..10601.317 rows=494 loops=1)\n Hash Cond: ((latest_download.host_id = foo.host_id) AND\n(latest_download.software_binary_id = foo.software_binary_id))\n -> GroupAggregate (cost=0.00..4499.01 rows=4535 width=94) (actual\ntime=0.346..10370.383 rows=37247 loops=1)\n -> Index Scan using software_download_idx on\nsoftware_download (cost=0.00..2526.53 rows=45342 width=94) (actual\ntime=0.028..344.591\n SubPlan\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual\ntime=0.006..0.011 rows=1 loops=37247)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual\ntime=0.006..0.011 rows=1 loops=37247)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual\ntime=0.005..0.011 rows=1 loops=37247)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual\ntime=0.006..0.011 rows=1 loops=37247)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual\ntime=0.006..0.011 rows=1 loops=37247)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual\ntime=0.006..0.011 rows=1 loops=37247)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual\ntime=0.005..0.011 rows=1 loops=37247)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual\ntime=0.005..0.011 rows=1 loops=37247)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual\ntime=0.006..0.011 rows=1 loops=37247)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual\ntime=0.005..0.010 rows=1 loops=37247)\n -> Hash (cost=7.94..7.94 rows=494 width=8) (actual\ntime=5.568..5.568 rows=494 loops=1)\n -> Seq Scan on foo (cost=0.00..7.94 rows=494 width=8)\n(actual time=0.018..2.686 rows=494 loops=1)\n Total runtime: 10604.260 ms\n(18 rows)\n\n\nHere is the same query but on the root table, instead of the view:\ndev20400=# explain analyze select * from foo join software_download\nusing (host_id, software_binary_id);\n QUERY 
PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..1521.60 rows=19 width=94) (actual\ntime=0.084..24.992 rows=607 loops=1)\n -> Seq Scan on foo (cost=0.00..7.94 rows=494 width=8) (actual\ntime=0.044..2.753 rows=494 loops=1)\n -> Index Scan using software_download_idx on software_download\n(cost=0.00..3.05 rows=1 width=94) (actual time=0.011..0.019 rows=1\nloops=49\n Index Cond: ((foo.host_id = software_download.host_id) AND\n(foo.software_binary_id = software_download.software_binary_id))\n Total runtime: 28.385 ms\n(5 rows)\n\nI can use a trick with a function to make the view give out reasonalbe results:\n\ncreate function foo(int, int) returns latest_download as\n$$ select * from latest_download where software_binary_id = $1 and\nhost_id = $2; $$ language sql;\n\ndev20400=# explain analyze select (v).* from (select\nfoo(software_binary_id, host_id) as v from foo) q;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------\n Subquery Scan q (cost=0.00..14.12 rows=494 width=32) (actual\ntime=1.436..139.644 rows=494 loops=1)\n -> Seq Scan on foo (cost=0.00..9.18 rows=494 width=8) (actual\ntime=1.414..131.144 rows=494 loops=1)\n Total runtime: 142.887 ms\n(3 rows)\n\nTime: 144.306 ms\n\nmerlin\n", "msg_date": "Mon, 9 Apr 2007 15:05:17 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "join to view over custom aggregate seems like it should be faster" }, { "msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> I have an odd performance issue on 8.2 that I'd thought I'd document\n> here. I have a workaround, but I'm if there is something that I'm not\n> seeing.\n\nIt's hard to comment on this without seeing the full details of the view\nand tables. I'm wondering where the SubPlans are coming from, for instance.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 09 Apr 2007 17:07:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: join to view over custom aggregate seems like it should be faster" }, { "msg_contents": "On 4/9/07, Tom Lane <[email protected]> wrote:\n> \"Merlin Moncure\" <[email protected]> writes:\n> > I have an odd performance issue on 8.2 that I'd thought I'd document\n> > here. I have a workaround, but I'm if there is something that I'm not\n> > seeing.\n>\n> It's hard to comment on this without seeing the full details of the view\n> and tables. I'm wondering where the SubPlans are coming from, for instance.\n\nok, this is really odd. 
I was in the process of busting all that out\nfor you when I noticed this:\n\nhere is the source sql for the view\ncreate or replace view latest_download as\n select software_binary_id, host_id,\n ((\n select latest_software_download(\n (bds_status_id,\n mtime,\n dl_window_open,\n dl_window_close,\n download_start,\n download_stop,\n info,\n userupgradeable,\n overrideflag,\n percent_complete)::software_download_data)\n )::software_download_data).*\n from software_download group by host_id, software_binary_id;\n\nhere is what psql \\d shows:\n\nSELECT software_download.software_binary_id,\nsoftware_download.host_id, ((SELECT\nlatest_software_download(ROW(software_download.bds_status_id,\nsoftware_download.mtime, software_download.dl_window_open,\nsoftware_download.dl_window_close, software_download.download_start,\nsoftware_download.download_stop, software_download.info,\nsoftware_download.userupgradeable, software_download.overrideflag,\nsoftware_download.percent_complete)::software_download_data) AS\nlatest_software_download)).bds_status_id AS bds_status_id, ((SELECT l\n[snip]\n\nthis is repeated several more times...I replace the view just to be safe.\n\nfor posterity:\ncreate or replace function max_software_download(l\nsoftware_download_data, r software_download_data) returns\nsoftware_download_data as\n$$\n begin\n if l.mtime > r.mtime then\n return l;\n end if;\n\n return r;\n end;\n$$ language plpgsql;\n\nCREATE TYPE software_download_data as\n(\n bds_status_id integer,\n mtime timestamp with time zone,\n dl_window_open time without time zone,\n dl_window_close time without time zone,\n download_start timestamp with time zone,\n download_stop timestamp with time zone,\n info text,\n userupgradeable boolean,\n overrideflag boolean,\n percent_complete integer\n);\n\nCREATE AGGREGATE latest_software_download\n(\n BASETYPE=software_download_data,\n SFUNC=max_software_download,\n STYPE=software_download_data\n);\n\nmerlin\n", "msg_date": "Mon, 9 Apr 2007 17:34:39 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: join to view over custom aggregate seems like it should be faster" }, { "msg_contents": "On 4/9/07, Tom Lane <[email protected]> wrote:\n> \"Merlin Moncure\" <[email protected]> writes:\n> > I have an odd performance issue on 8.2 that I'd thought I'd document\n> > here. I have a workaround, but I'm if there is something that I'm not\n> > seeing.\n>\n> It's hard to comment on this without seeing the full details of the view\n> and tables. I'm wondering where the SubPlans are coming from, for instance.\n\nah, it looks like the aggregate is being re-expanded for each field\nreturned by the aggregate. I notice this for non-trivial record\nreturning functions also. standard m.o. is to push into a subquery\nand expand afterwords.\n\nmerlin\n", "msg_date": "Mon, 9 Apr 2007 17:36:47 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: join to view over custom aggregate seems like it should be faster" }, { "msg_contents": "On 4/9/07, Merlin Moncure <[email protected]> wrote:\n> On 4/9/07, Tom Lane <[email protected]> wrote:\n> > \"Merlin Moncure\" <[email protected]> writes:\n> > > I have an odd performance issue on 8.2 that I'd thought I'd document\n> > > here. I have a workaround, but I'm if there is something that I'm not\n> > > seeing.\n> >\n> > It's hard to comment on this without seeing the full details of the view\n> > and tables. 
I'm wondering where the SubPlans are coming from, for instance.\n>\n> ah, it looks like the aggregate is being re-expanded for each field\n> returned by the aggregate. I notice this for non-trivial record\n> returning functions also. standard m.o. is to push into a subquery\n> and expand afterwords.\n\n[sorry for the deluge of info]\n\nI cleaned up the view from:\ncreate or replace view latest_download as\n select software_binary_id, host_id,\n ((\n select latest_software_download(\n (bds_status_id,\n mtime,\n dl_window_open,\n dl_window_close,\n download_start,\n download_stop,\n info,\n userupgradeable,\n overrideflag,\n percent_complete)::software_download_data)\n )::software_download_data).*\n from software_download group by host_id, software_binary_id;\n\nto this:\ncreate or replace view latest_download as\n select software_binary_id, host_id, (v).* from\n (\n select\n software_binary_id, host_id,\n latest_software_download(\n (bds_status_id,\n mtime,\n dl_window_open,\n dl_window_close,\n download_start,\n download_stop,\n info,\n userupgradeable,\n overrideflag,\n percent_complete)::software_download_data) as v\n from software_download group by host_id, software_binary_id\n ) q;\n\nthis cleaned up the odd subplans but is still slow:\ndev20400=# explain analyze select * from foo join latest_download\nusing (host_id, software_binary_id);\n\nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=1308.84..1467.81 rows=25 width=40) (actual\ntime=1472.668..1914.799 rows=494 loops=1)\n Hash Cond: ((q.host_id = foo.host_id) AND (q.software_binary_id =\nfoo.software_binary_id))\n -> HashAggregate (cost=1293.48..1350.17 rows=4535 width=94)\n(actual time=1467.002..1700.388 rows=37247 loops=1)\n -> Seq Scan on software_download (cost=0.00..953.42\nrows=45342 width=94) (actual time=0.014..274.747 rows=45342 loops=1)\n -> Hash (cost=7.94..7.94 rows=494 width=8) (actual\ntime=5.028..5.028 rows=494 loops=1)\n -> Seq Scan on foo (cost=0.00..7.94 rows=494 width=8)\n(actual time=0.022..2.507 rows=494 loops=1)\n Total runtime: 1918.721 ms\n\ncompare it to this:\ndev20400=# explain analyze select * from foo f where exists (select *\nfrom latest_download where host_id = f.host_id and software_binary_id\n= f.software_binary_id);\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on foo f (cost=0.00..3122.01 rows=247 width=8) (actual\ntime=0.152..45.941 rows=494 loops=1)\n Filter: (subplan)\n SubPlan\n -> Subquery Scan q (cost=0.00..6.30 rows=1 width=40) (actual\ntime=0.081..0.081 rows=1 loops=494)\n -> GroupAggregate (cost=0.00..6.29 rows=1 width=94)\n(actual time=0.065..0.065 rows=1 loops=494)\n -> Index Scan using software_download_idx on\nsoftware_download (cost=0.00..6.27 rows=1 width=94) (actual\ntime=0.013..0.021 r\n Index Cond: ((host_id = $0) AND\n(software_binary_id = $1))\n Total runtime: 48.323 ms\n(8 rows)\n\nTime: 49.851 ms\n\nI since I need both sides, I can't figure out a way to force the index\nto be used during the join except to use a function to look up the\nview based on the key, which works:\ndev20400=# explain analyze select latest_download(host_id,\nsoftware_binary_id) from foo;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------\n Seq Scan on foo (cost=0.00..9.18 rows=494 width=8) 
(actual\ntime=0.566..51.605 rows=494 loops=1)\n Total runtime: 54.290 ms\n(2 rows)\n\ndev20400=# explain analyze select * from latest_download where host_id\n= 1 and software_binary_id = 12345;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Subquery Scan q (cost=0.00..6.30 rows=1 width=40) (actual\ntime=0.046..0.046 rows=0 loops=1)\n -> GroupAggregate (cost=0.00..6.29 rows=1 width=94) (actual\ntime=0.035..0.035 rows=0 loops=1)\n -> Index Scan using software_download_idx on\nsoftware_download (cost=0.00..6.27 rows=1 width=94) (actual\ntime=0.024..0.024 rows=0 lo\n Index Cond: ((host_id = 1) AND (software_binary_id = 12345))\n Total runtime: 0.134 ms\n\nFor some reason, I can't get the index to be used on the table sitting\nunder a view during a join, even though it should be, or at least it\nseems....\n\nmerlin\n", "msg_date": "Tue, 10 Apr 2007 08:59:23 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: join to view over custom aggregate seems like it should be faster" }, { "msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> For some reason, I can't get the index to be used on the table sitting\n> under a view during a join, even though it should be, or at least it\n> seems....\n\nNope, that's not going to work, because the aggregate keeps the subquery\nfrom being flattened into the upper query, which is what would have to\nhappen for a nestloop-with-inner-indexscan join to be considered.\nAFAICS you've got to structure it so that the aggregation happens above\nthe join.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Apr 2007 13:03:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: join to view over custom aggregate seems like it should be faster" }, { "msg_contents": "On 4/10/07, Tom Lane <[email protected]> wrote:\n> \"Merlin Moncure\" <[email protected]> writes:\n> > For some reason, I can't get the index to be used on the table sitting\n> > under a view during a join, even though it should be, or at least it\n> > seems....\n>\n> Nope, that's not going to work, because the aggregate keeps the subquery\n> from being flattened into the upper query, which is what would have to\n> happen for a nestloop-with-inner-indexscan join to be considered.\n> AFAICS you've got to structure it so that the aggregation happens above\n> the join.\n\nright, i see that it's actually the 'group by' that does it:\n\nselect a, b from foo join (select a, b from bar group by a,b) q using (a,b);\n\nis enough to keep it from using the index on a,b from bar. thats too bad...\n\nmerlin\n", "msg_date": "Tue, 10 Apr 2007 13:53:02 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: join to view over custom aggregate seems like it should be faster" }, { "msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> right, i see that it's actually the 'group by' that does it:\n> select a, b from foo join (select a, b from bar group by a,b) q using (a,b);\n> is enough to keep it from using the index on a,b from bar. thats too bad...\n\nSome day it'd be nice to be able to reorder grouping/aggregation steps\nrelative to joins, the way we can now reorder outer joins. Don't hold\nyour breath though ... 
I think it'll take some pretty major surgery on\nthe planner.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Apr 2007 14:21:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: join to view over custom aggregate seems like it should be faster" } ]
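One way to apply Tom's advice, doing the aggregation above the join instead of inside the view, is to join foo directly to the base table and group afterwards. This is only a sketch assembled from the view definition quoted earlier in the thread; it assumes the (host_id, software_binary_id) pairs in foo are unique, so that regrouping after the join yields the same rows the view join would.

SELECT sd.host_id,
       sd.software_binary_id,
       latest_software_download(
           (sd.bds_status_id, sd.mtime, sd.dl_window_open, sd.dl_window_close,
            sd.download_start, sd.download_stop, sd.info, sd.userupgradeable,
            sd.overrideflag, sd.percent_complete)::software_download_data) AS latest
  FROM foo f
  JOIN software_download sd
    ON sd.host_id = f.host_id
   AND sd.software_binary_id = f.software_binary_id
 GROUP BY sd.host_id, sd.software_binary_id;

Because the grouping now sits above the join, the planner is free to drive the join with a nested loop over software_download_idx, which is what the fast hand-written query in this thread already does. The composite result can still be expanded into individual columns with (latest).* in an outer select, the same subquery trick used to clean up the view.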
[ { "msg_contents": "We have a query which generates a small set of rows (~1,000) which are\nto be used in a DELETE on the same table. The problem we have is that\nwe need to join on 5 different columns and it takes far too long. I\nhave a solution but I'm not sure it's the right one. Instead of joining\non 5 columns in the DELETE the join uses the ctid column.\n \nBEGIN;\nCREATE INDEX gregs_table_ctid_idx ON gregs_table(ctid);\nDELETE FROM gregs_table gt\n USING (SELECT ctid FROM gregs_table WHERE ...) as s\n WHERE gt.ctid=s.ctid;\nDROP INDEX gregs_table_ctid_idx;\nCOMMIT;\n \nThe difference to me is a 20+ minute to a ~5 second transaction. The\ntable is loaded using COPY, never INSERT, never UPDATE'd. COPY, SELECT\nand DELETE is its life. PostgreSQL 8.2.1 on RedHat ES 4.0 is the target\nplatform.\n \nAny possible issues with using ctid in the DELETE and transaction? I\nunderstand ctid is \"useless\" in the long run as the documentation points\nout but for the short term and within a transaction it seems to work\nwell.\n \nThoughts?\n \nGreg\n \n \n--\n Greg Spiegelberg\n [email protected] <mailto:[email protected]> \n 614.318.4314, office\n 614.431.8388, fax\n ISOdx Product Development Manager\n Cranel, Inc.\n \n \n\n\n\n\n\nWe have a query which generates a small set of rows (~1,000) which are to \nbe used in a DELETE on the same table.  The problem we have is that we need \nto join on 5 different columns and it takes far too long.  \nI have a \nsolution but I'm not sure it's the right one.  Instead of joining on 5 columns in the DELETE the \njoin uses the ctid column.\n \nBEGIN;\nCREATE INDEX \ngregs_table_ctid_idx ON gregs_table(ctid);\nDELETE FROM \ngregs_table gt\n  \n USING \n(SELECT ctid FROM gregs_table WHERE ...) as s\n   WHERE \ngt.ctid=s.ctid;\nDROP INDEX \ngregs_table_ctid_idx;\nCOMMIT;\n \nThe difference to me \nis a 20+ minute to a ~5 second transaction.  The table is loaded using COPY, \nnever INSERT, never UPDATE'd.  COPY, SELECT and DELETE is its life.  \nPostgreSQL 8.2.1 \non RedHat ES 4.0 is the target platform.\n \nAny possible issues with using ctid in the DELETE and \ntransaction?  I understand ctid is \"useless\" in the long run as the \ndocumentation points out but for the short term and within a transaction it \nseems to work well.\n \nThoughts?\n \nGreg\n \n \n--\n Greg \nSpiegelberg\n [email protected]\n 614.318.4314, office\n 614.431.8388, fax\n ISOdx Product \nDevelopment Manager\n Cranel, \nInc.", "msg_date": "Mon, 9 Apr 2007 16:01:53 -0400", "msg_from": "\"Spiegelberg, Greg\" <[email protected]>", "msg_from_op": true, "msg_subject": "DELETE with filter on ctid" }, { "msg_contents": "\"Spiegelberg, Greg\" <[email protected]> writes:\n> We have a query which generates a small set of rows (~1,000) which are\n> to be used in a DELETE on the same table. The problem we have is that\n> we need to join on 5 different columns and it takes far too long. I\n> have a solution but I'm not sure it's the right one. Instead of joining\n> on 5 columns in the DELETE the join uses the ctid column.\n\n> BEGIN;\n> CREATE INDEX gregs_table_ctid_idx ON gregs_table(ctid);\n> DELETE FROM gregs_table gt\n> USING (SELECT ctid FROM gregs_table WHERE ...) as s\n> WHERE gt.ctid=s.ctid;\n> DROP INDEX gregs_table_ctid_idx;\n> COMMIT;\n\nForget the index, it's useless here (hint: ctid is a physical address).\nI'm wondering though why you don't just transpose the subquery's WHERE\ncondition into the DELETE's WHERE? 
Or is this example oversimplified?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 09 Apr 2007 16:55:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DELETE with filter on ctid " }, { "msg_contents": "Spiegelberg, Greg wrote:\n> We have a query which generates a small set of rows (~1,000) which are \n> to be used in a DELETE on the same table. The problem we have is that \n> we need to join on 5 different columns and it takes far too long.\n\nYou may have encountered the same problem I did: You *must* run ANALYZE on a temporary table before you use in another query. It's surprising that this is true even for very small tables (a few hundred to a few thousand rows), but it is. I had a case where I created a \"scratch\" table like yours, and the before/after ANALYZE performance was the difference between 30 seconds and a few milliseconds for the same query.\n\nCraig\n", "msg_date": "Mon, 09 Apr 2007 14:57:35 -0700", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DELETE with filter on ctid" }, { "msg_contents": "Tom et al,\n\nSometimes it takes a look from someone on the outside to get the job\ndone right.\n\nBelow is, I believe, everything pertinent to this problem. First is the\ntable in question, second is the problematic and original query, and\nfinal is the transaction that I have working today with the CTID\nimplementation.\n\nI would welcome any feedback.\n\nTIA,\nGreg\n\n\n\ncranel=# \\d sid2.data_id_table\n Table \"sid2.data_id_table\"\n Column | Type | Modifiers\n-------------+---------+---------------\n point_id | bigint |\n dtype_id | bigint |\n segment_id | bigint |\n key1_id | bigint | not null\n key2_id | bigint |\n data_id | bigint | not null\n deleted | boolean | default false\n removed | boolean | default false\n added | boolean | default false\n persist | boolean | default false\nIndexes:\n \"data_id_table_data_id_indx\" btree (data_id)\n \"data_id_table_dtype_id_indx\" btree (dtype_id)\n \"data_id_table_dtype_ss_id_indx\" btree (dtype_id, point_id)\n \"data_id_table_key1_id_indx\" btree (key1_id)\n \"data_id_table_key2_id_indx\" btree (key2_id)\n \"data_id_table_mod_dtype_ss_id_indx\" btree (segment_id, dtype_id,\npoint_id)\n \"data_id_table_segment_id_indx\" btree (segment_id)\n \"data_id_table_point_id_indx\" btree (point_id)\n\ncranel=# explain analyze\nDELETE FROM sid2.data_id_table AS dd\n USING public.points AS ss,\n (SELECT markeddel.*\n FROM (SELECT d.*\n FROM sid2.data_id_table d,public.points s\n WHERE s.systems_id=2 AND s.id<2 AND s.permpoint=FALSE\nAND s.id=d.point_id AND d.persist=FALSE\n AND d.dtype_id=3) AS markeddel\n JOIN\n (SELECT DISTINCT ON (d.key1_id,d.key2_id) d.*\n FROM sid2.data_id_table d,public.points s\n WHERE s.systems_id=2 AND s.id<=2 AND s.id=d.point_id\nAND d.dtype_id=3\n ORDER BY d.key1_id,d.key2_id,d.point_id DESC) AS rollup\n ON\n(markeddel.key1_id,markeddel.key2_id)=(rollup.key1_id,rollup.key2_id)\n WHERE markeddel.point_id<>rollup.point_id) ru\n WHERE ss.systems_id=2 AND ss.id<2 AND ss.permpoint=FALSE AND\nss.id=dd.point_id\n AND dd.persist=FALSE AND dd.dtype_id=3\n AND\n(dd.point_id,dd.key1_id,dd.key2_id)=(ru.point_id,ru.key1_id,ru.key2_id);\n \nQUERY PLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n------------------------------------\n Nested Loop (cost=1037.06..1130.46 rows=1 width=6) (actual\ntime=33291.639..678047.543 rows=564 
loops=1)\n Join Filter: ((dd.point_id = d.point_id) AND (d.point_id <>\nrollup.point_id))\n -> Merge Join (cost=1028.10..1117.47 rows=1 width=70) (actual\ntime=1775.971..3721.991 rows=156750 loops=1)\n Merge Cond: ((rollup.key1_id = dd.key1_id) AND (rollup.key2_id\n= dd.key2_id))\n -> Unique (cost=629.66..659.24 rows=3944 width=52) (actual\ntime=896.293..1571.591 rows=156779 loops=1)\n -> Sort (cost=629.66..639.52 rows=3944 width=52)\n(actual time=896.285..1080.444 rows=157342 loops=1)\n Sort Key: d.key1_id, d.key2_id, d.point_id\n -> Nested Loop (cost=0.00..394.10 rows=3944\nwidth=52) (actual time=8.846..529.901 rows=157352 loops=1)\n -> Seq Scan on points s (cost=0.00..1.72\nrows=1 width=8) (actual time=0.064..0.096 rows=2 loops=1)\n Filter: ((systems_id = 2) AND (id <=\n2))\n -> Index Scan using\ndata_id_table_point_id_indx on data_id_table d (cost=0.00..339.79\nrows=4207 width=52) (actual time=4.649..155.174 rows=78676 loops=2)\n Index Cond: (s.id = d.point_id)\n Filter: (dtype_id = 3)\n -> Sort (cost=398.44..398.64 rows=82 width=46) (actual\ntime=879.658..1109.830 rows=156750 loops=1)\n Sort Key: dd.key1_id, dd.key2_id\n -> Nested Loop (cost=0.00..395.83 rows=82 width=46)\n(actual time=5.197..549.873 rows=156750 loops=1)\n -> Nested Loop (cost=0.00..3.45 rows=1 width=16)\n(actual time=0.055..0.107 rows=1 loops=1)\n Join Filter: (ss.id = s.id)\n -> Seq Scan on points ss (cost=0.00..1.72\nrows=1 width=8) (actual time=0.037..0.052 rows=1 loops=1)\n Filter: ((systems_id = 2) AND (id < 2)\nAND (NOT permpoint))\n -> Seq Scan on points s (cost=0.00..1.72\nrows=1 width=8) (actual time=0.006..0.039 rows=1 loops=1)\n Filter: ((systems_id = 2) AND (id < 2)\nAND (NOT permpoint))\n -> Index Scan using data_id_table_point_id_indx on\ndata_id_table dd (cost=0.00..339.79 rows=4207 width=30) (actual\ntime=5.135..342.406 rows=156750 loops=1)\n Index Cond: (ss.id = dd.point_id)\n Filter: ((NOT persist) AND (dtype_id = 3))\n -> Bitmap Heap Scan on data_id_table d (cost=8.96..12.97 rows=1\nwidth=24) (actual time=4.289..4.290 rows=1 loops=156750)\n Recheck Cond: ((d.key1_id = rollup.key1_id) AND (d.key2_id =\nrollup.key2_id))\n Filter: ((NOT persist) AND (dtype_id = 3))\n -> BitmapAnd (cost=8.96..8.96 rows=1 width=0) (actual\ntime=4.280..4.280 rows=0 loops=156750)\n -> Bitmap Index Scan on data_id_table_key1_id_indx\n(cost=0.00..4.32 rows=4 width=0) (actual time=0.020..0.020 rows=31\nloops=156750)\n Index Cond: (d.key1_id = rollup.key1_id)\n -> Bitmap Index Scan on data_id_table_key2_id_indx\n(cost=0.00..4.38 rows=13 width=0) (actual time=4.254..4.254 rows=26187\nloops=156750)\n Index Cond: (d.key2_id = rollup.key2_id)\n Total runtime: 678063.873 ms\n(34 rows)\n\ncranel=# \\timing\nTiming is on.\ncranel=# BEGIN;\nBEGIN\nTime: 0.340 ms\n\ncranel=# CREATE INDEX data_id_table_ctid_idx ON\nsid2.data_id_table(ctid);\nCREATE INDEX\nTime: 648.911 ms\n\ncranel=# explain analyze\nDELETE FROM sid2.data_id_table AS dd\n USING public.points AS ss,\n (SELECT markeddel.ctid\n FROM (SELECT d.ctid,d.*\n FROM sid2.data_id_table d,public.points s\n WHERE s.systems_id=2\n AND s.id<2\n AND s.permpoint=FALSE\n AND s.id=d.point_id\n AND d.persist=FALSE\n AND d.dtype_id=3) AS markeddel\n JOIN\n (SELECT DISTINCT ON (d.key1_id,d.key2_id) d.*\n FROM sid2.data_id_table d,public.points s\n WHERE s.systems_id=2\n AND s.id<=2\n AND s.id=d.point_id\n AND d.dtype_id=3\n ORDER BY d.key1_id,d.key2_id,d.point_id DESC) AS rollup\n ON\n(markeddel.key1_id,markeddel.key2_id)=(rollup.key1_id,rollup.key2_id)\n WHERE 
markeddel.point_id<>rollup.point_id) ru\n WHERE ss.systems_id=2\n AND ss.id<2\n AND ss.permpoint=FALSE\n AND ss.id=dd.point_id\n AND dd.persist=FALSE\n AND dd.dtype_id=3\n AND dd.ctid=ru.ctid;\n \nQUERY PLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n------------------------------------------\n Nested Loop (cost=1259.33..1378.37 rows=1 width=6) (actual\ntime=1807.429..2625.722 rows=562 loops=1)\n -> Nested Loop (cost=1259.33..1378.08 rows=1 width=14) (actual\ntime=1807.372..2619.592 rows=562 loops=1)\n -> Merge Join (cost=1259.33..1377.66 rows=1 width=6) (actual\ntime=1807.228..2606.901 rows=562 loops=1)\n Merge Cond: ((rollup.key1_id = d.key1_id) AND\n(rollup.key2_id = d.key2_id))\n Join Filter: (d.point_id <> rollup.point_id)\n -> Unique (cost=629.66..659.24 rows=3944 width=52)\n(actual time=911.409..1271.121 rows=156779 loops=1)\n -> Sort (cost=629.66..639.52 rows=3944 width=52)\n(actual time=911.403..1024.775 rows=157342 loops=1)\n Sort Key: d.key1_id, d.key2_id, d.point_id\n -> Nested Loop (cost=0.00..394.10 rows=3944\nwidth=52) (actual time=6.036..548.119 rows=157352 loops=1)\n -> Seq Scan on points s\n(cost=0.00..1.72 rows=1 width=8) (actual time=0.114..0.137 rows=2\nloops=1)\n Filter: ((systems_id = 2) AND (id\n<= 2))\n -> Index Scan using\ndata_id_table_point_id_indx on data_id_table d (cost=0.00..339.79\nrows=4207 width=52) (actual time=3.216..155.284\nrows=78676 loops=2)\n Index Cond: (s.id = d.point_id)\n Filter: (dtype_id = 3)\n -> Sort (cost=629.66..639.52 rows=3944 width=30)\n(actual time=875.213..980.618 rows=156750 loops=1)\n Sort Key: d.key1_id, d.key2_id\n -> Nested Loop (cost=0.00..394.10 rows=3944\nwidth=30) (actual time=5.864..553.290 rows=156750 loops=1)\n -> Seq Scan on points s (cost=0.00..1.72\nrows=1 width=8) (actual time=0.022..0.053 rows=1 loops=1)\n Filter: ((systems_id = 2) AND (id < 2)\nAND (NOT permpoint))\n -> Index Scan using\ndata_id_table_point_id_indx on data_id_table d (cost=0.00..339.79\nrows=4207 width=30) (actual time=5.831..355.139 rows=156750 loops=1)\n Index Cond: (s.id = d.point_id)\n Filter: ((NOT persist) AND (dtype_id =\n3))\n -> Index Scan using data_id_table_ctid_idx on data_id_table dd\n(cost=0.00..0.41 rows=1 width=14) (actual time=0.017..0.019 rows=1\nloops=562)\n Index Cond: (dd.ctid = d.ctid)\n Filter: ((NOT persist) AND (dtype_id = 3))\n -> Index Scan using points_pkey on points ss (cost=0.00..0.28\nrows=1 width=8) (actual time=0.005..0.007 rows=1 loops=562)\n Index Cond: ((ss.id < 2) AND (ss.id = dd.point_id))\n Filter: ((systems_id = 2) AND (NOT permpoint))\n Total runtime: 2641.820 ms\n(29 rows)\nTime: 2652.940 ms\n\ncranel=# DROP INDEX data_id_table_ctid_idx;\nDROP INDEX\nTime: 33.653 ms\n\ncranel=# DELETE FROM sid2.data_id_table AS dd WHERE dd.point_id=2 AND\ndd.dtype_id=3 AND dd.deleted AND NOT dd.persist;\nDELETE 0\nTime: 0.960 ms\n\ncranel=# COMMIT;\nTime: 20.500 ms\n\n\n\n\n \n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Monday, April 09, 2007 4:55 PM\nTo: Spiegelberg, Greg\nCc: [email protected]\nSubject: Re: [PERFORM] DELETE with filter on ctid \n\n\"Spiegelberg, Greg\" <[email protected]> writes:\n> We have a query which generates a small set of rows (~1,000) which are\n> to be used in a DELETE on the same table. The problem we have is that\n> we need to join on 5 different columns and it takes far too long. I\n> have a solution but I'm not sure it's the right one. 
Instead of\njoining\n> on 5 columns in the DELETE the join uses the ctid column.\n\n> BEGIN;\n> CREATE INDEX gregs_table_ctid_idx ON gregs_table(ctid);\n> DELETE FROM gregs_table gt\n> USING (SELECT ctid FROM gregs_table WHERE ...) as s\n> WHERE gt.ctid=s.ctid;\n> DROP INDEX gregs_table_ctid_idx;\n> COMMIT;\n\nForget the index, it's useless here (hint: ctid is a physical address).\nI'm wondering though why you don't just transpose the subquery's WHERE\ncondition into the DELETE's WHERE? Or is this example oversimplified?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Apr 2007 09:35:14 -0400", "msg_from": "\"Spiegelberg, Greg\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DELETE with filter on ctid " }, { "msg_contents": "Craig,\n\nI'm not using a TEMP TABLE in this DELETE however I have tried an\nANALYZE prior to the DELETE but it hardly makes a dent in the time.\n\nPlease look at the other follow-up email I just sent for full details.\n\nGreg\n \n\n-----Original Message-----\nFrom: Craig A. James [mailto:[email protected]] \nSent: Monday, April 09, 2007 5:58 PM\nTo: Spiegelberg, Greg\nCc: [email protected]\nSubject: Re: [PERFORM] DELETE with filter on ctid\n\nSpiegelberg, Greg wrote:\n> We have a query which generates a small set of rows (~1,000) which are\n\n> to be used in a DELETE on the same table. The problem we have is that\n\n> we need to join on 5 different columns and it takes far too long.\n\nYou may have encountered the same problem I did: You *must* run ANALYZE\non a temporary table before you use in another query. It's surprising\nthat this is true even for very small tables (a few hundred to a few\nthousand rows), but it is. I had a case where I created a \"scratch\"\ntable like yours, and the before/after ANALYZE performance was the\ndifference between 30 seconds and a few milliseconds for the same query.\n\nCraig\n", "msg_date": "Tue, 10 Apr 2007 09:37:34 -0400", "msg_from": "\"Spiegelberg, Greg\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DELETE with filter on ctid" }, { "msg_contents": "\"Spiegelberg, Greg\" <[email protected]> writes:\n> Below is, I believe, everything pertinent to this problem. First is the\n> table in question, second is the problematic and original query, and\n> final is the transaction that I have working today with the CTID\n> implementation.\n\nSo the basic issue here is that data_id_table hasn't got a primary key\nyou could use as a join key? I won't lecture you about that, but a lot\nof people think it's bad practice not to have a recognizable primary key.\n\nThe slow query's problem seems to be mostly that the rowcount estimates\nare horribly bad, leading to inappropriate choices of nestloop joins.\nAre the statistics up-to-date? You might try increasing the stats target\nfor data_id_table in particular. 
A really brute-force test would be to\nsee what happens with that query if you just set enable_nestloop = 0.\n\nAs for the CTID query, my initial reaction that you shouldn't need an\nindex was wrong; looking into the code I see\n\n * There is currently no special support for joins involving CTID; in\n * particular nothing corresponding to best_inner_indexscan().\tSince it's\n * not very useful to store TIDs of one table in another table, there\n * doesn't seem to be enough use-case to justify adding a lot of code\n * for that.\n\nMaybe we should revisit that sometime, though I'm still not entirely\nconvinced by this example.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Apr 2007 12:46:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DELETE with filter on ctid " } ]
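A footnote on this thread: the longer-term fix Tom hints at is giving data_id_table a recognizable primary key, so the second pass can join on an ordinary column instead of on ctid. The sketch below is one possible shape of that; it is untested against the real schema, the data_id column name is invented for illustration, and the subquery keeps only the filters visible in the thread (the EXCEPT/rollup restriction would still have to be folded back in). Adding a serial column to a table of this size will likely rewrite it, so it is a one-time cost.

    -- hypothetical surrogate key, assuming the schema can be changed
    ALTER TABLE sid2.data_id_table ADD COLUMN data_id bigserial;
    CREATE UNIQUE INDEX data_id_table_data_id_idx
        ON sid2.data_id_table (data_id);

    -- the cleanup can then join on the key rather than on ctid
    DELETE FROM sid2.data_id_table dd
    USING (SELECT d.data_id
             FROM points s
             JOIN sid2.data_id_table d ON d.point_id = s.id
            WHERE s.systems_id = 2
              AND s.id < 2
              AND NOT s.permpoint
              AND d.dtype_id = 3
              AND NOT d.persist
              -- ... plus the EXCEPT/rollup logic from the original query ...
          ) del
    WHERE dd.data_id = del.data_id;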
[ { "msg_contents": "\nAnd by the subject, I mean: please provide a \"factual\" answer, as opposed\nto the more or less obvious answer which would be \"no one in their sane\nmind would even consider doing such thing\" :-)\n\n1) Would it be possible to entirely disable WAL? (something like setting a\nsymlink so that pg_xlog points to /dev/null, perhaps?)\n\n2) What would be the real implications of doing that?\n\nAs I understand it, the WAL provide some sort of redundancy for fault-\ntolerance purposes; if my understanding is correct, then what kind of\ndata loss are we talking about if I entirely disabled WAL and the machine\nrunning the DB cluster were to crash/freeze? What about a physical\nreboot (either a UPSless power-down, or someone pressing the reset\nbutton)? Or are we talking about the entire DB cluster possibly\nbecoming corrupt beyond repair with 0% of the data being recoverable?\n\n\nAgain, please bear with me ... I do know and understand the \"no one in\ntheir sane mind would even consider doing such thing\" aspect ... I'm\nfor now interested in knowing more in detail the real facts behind this\nfeature --- allow me to think outside the box for a little while :-)\n\n\nThanks,\n\nCarlos\n--\n\n", "msg_date": "Mon, 09 Apr 2007 16:05:26 -0400", "msg_from": "Carlos Moreno <[email protected]>", "msg_from_op": true, "msg_subject": "Please humor me ... " }, { "msg_contents": "On Mon, 2007-04-09 at 16:05 -0400, Carlos Moreno wrote:\n> And by the subject, I mean: please provide a \"factual\" answer, as opposed\n> to the more or less obvious answer which would be \"no one in their sane\n> mind would even consider doing such thing\" :-)\n> \n> 1) Would it be possible to entirely disable WAL? (something like setting a\n> symlink so that pg_xlog points to /dev/null, perhaps?)\n\nYou can't disable WAL, but you can disable fsync.\n\n> 2) What would be the real implications of doing that?\n> \n\nA good chance that you lose your entire database cluster if the power\nfails.\n\nIt's not just your tables that require WAL, it's also the system\ncatalogs. If you were to disable it, and a system catalog became\ncorrupt, the database would not know what to do.\n\nThere's always a chance you could recover some of that data by a manual\nprocess (i.e. open up the database files in a hex editor and look for\nyour data), but there would be no guarantee.\n\nRegards,\n\tJeff Davis\n\n", "msg_date": "Mon, 09 Apr 2007 14:05:44 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Please humor me ..." }, { "msg_contents": "On Mon, 2007-04-09 at 16:05 -0400, Carlos Moreno wrote:\n> 2) What would be the real implications of doing that?\n\nMany people ask, hence why a whole chapter of the manual is devoted to\nthis important topic.\n\nhttp://developer.postgresql.org/pgdocs/postgres/wal.html\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 13 Apr 2007 21:35:09 +0100", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Please humor me ..." } ]
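In configuration terms, the takeaway from the thread above is that WAL itself cannot be pointed at /dev/null, but how hard it is pushed to disk can be relaxed. The snippet below is only a sketch of the relevant postgresql.conf knobs from that era, with the trade-off spelled out in the comments; a later release also added synchronous_commit = off as a safer middle ground that risks losing the most recent commits rather than the integrity of the whole cluster.

    # sketch: trade durability for speed instead of trying to remove WAL
    fsync = off              # commits no longer wait for WAL to reach disk;
                             # an OS crash or power loss can corrupt the cluster
    full_page_writes = off   # skip full-page images written after checkpoints;
                             # same class of corruption risk from torn pages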
[ { "msg_contents": "I have 2 tables (A,B) joined in a many-to-many relationship via a \njoin table (\"membership\"), where updating table A based on table B \ntakes a very long time.\n\nTables A and B have oid primary keys (a_id and b_id respectively).\nThe join table, \"membership\", has its own oid primary key \n(membership_id), as well as foreign keys \"a_id\" and \"b_id\".\n\nA SELECT query across all 3 tables takes 12 seconds.\n\"SELECT count(*) FROM a JOIN membership USING(a_id) JOIN b USING \n(b_id) WHERE b.is_public = true\"\n\nBut a simple UPDATE using the same SELECT query takes 30 minutes to \nan hour.\n\"UPDATE A set is_public=true WHERE a_id IN (SELECT count(*) FROM a \nJOIN membership USING(a_id) JOIN b USING(b_id) WHERE b.is_public = \ntrue)\".\n\nWhat am I doing wrong here? I'm not sure how to diagnose this further.\n\nHere's the output from explain:\ndb=# EXPLAIN SELECT a_id FROM a JOIN membership USING(a_id) JOIN b \nUSING(b_id) WHERE b.is_public = true;\n------------------------------------------------------------------------ \n-----------------------------------------\nHash Join (cost=167154.78..173749.48 rows=51345 width=4)\n Hash Cond: (a.a_id = membership.a_id)\n -> Function Scan on a (cost=0.00..12.50 rows=1000 width=4)\n -> Hash (cost=144406.48..144406.48 rows=1819864 width=4)\n -> Hash Join (cost=417.91..144406.48 rows=1819864 width=4)\n Hash Cond: (membership.b_id = b.b_id)\n -> Seq Scan on membership (cost=0.00..83623.83 \nrows=4818983 width=8)\n -> Hash (cost=348.52..348.52 rows=5551 width=4)\n -> Index Scan using b_is_public on b \n(cost=0.00..348.52 rows=5551 width=4)\n Index Cond: (is_public = true)\n Filter: is_public\n(11 rows)\n\n\ndb=# EXPLAIN UPDATE a SET is_public = true WHERE a_id IN\n ( SELECT a_id FROM a JOIN membership USING(a_id) JOIN b USING \n(b_id) WHERE b.is_public = true);\n------------------------------------------------------------------------ \n-----------------------------------------\nhash in join (cost=281680.17..370835.63 rows=1819864 width=90)\n hash cond: (public.a.a_id = public.a.a_id)\n -> seq scan on a (cost=0.00..47362.09 rows=2097309 width=90)\n -> hash (cost=258931.87..258931.87 rows=1819864 width=8)\n -> hash join (cost=73996.36..258931.87 rows=1819864 width=8)\n hash cond: (membership.a_id = public.a.a_id)\n -> hash join (cost=417.91..144406.48 rows=1819864 \nwidth=4)\n hash cond: (membership.b_id = b.b_id)\n -> seq scan on membership \n(cost=0.00..83623.83 rows=4818983 width=8)\n -> hash (cost=348.52..348.52 rows=5551 width=4)\n -> index scan using \nloc_submission_is_public on b (cost=0.00..348.52 rows=5551 width=4)\n index cond: (is_public = true)\n filter: is_public\n -> hash (cost=47362.09..47362.09 rows=2097309 width=4)\n -> seq scan on a (cost=0.00..47362.09 \nrows=2097309 width=4)\n\nThanks,\n\nDrew\n", "msg_date": "Mon, 9 Apr 2007 13:46:59 -0700", "msg_from": "Drew Wilson <[email protected]>", "msg_from_op": true, "msg_subject": "how to efficiently update tuple in many-to-many relationship?" 
}, { "msg_contents": "On 4/9/07, Drew Wilson <[email protected]> wrote:\n> I have 2 tables (A,B) joined in a many-to-many relationship via a\n> join table (\"membership\"), where updating table A based on table B\n> takes a very long time.\n>\n> Tables A and B have oid primary keys (a_id and b_id respectively).\n> The join table, \"membership\", has its own oid primary key\n> (membership_id), as well as foreign keys \"a_id\" and \"b_id\".\n>\n> A SELECT query across all 3 tables takes 12 seconds.\n> \"SELECT count(*) FROM a JOIN membership USING(a_id) JOIN b USING\n> (b_id) WHERE b.is_public = true\"\n>\n> But a simple UPDATE using the same SELECT query takes 30 minutes to\n> an hour.\n> \"UPDATE A set is_public=true WHERE a_id IN (SELECT count(*) FROM a\n> JOIN membership USING(a_id) JOIN b USING(b_id) WHERE b.is_public =\n> true)\".\n>\n> What am I doing wrong here? I'm not sure how to diagnose this further.\n>\n> Here's the output from explain:\n> db=# EXPLAIN SELECT a_id FROM a JOIN membership USING(a_id) JOIN b\n> USING(b_id) WHERE b.is_public = true;\n> ------------------------------------------------------------------------\n> -----------------------------------------\n> Hash Join (cost=167154.78..173749.48 rows=51345 width=4)\n> Hash Cond: (a.a_id = membership.a_id)\n> -> Function Scan on a (cost=0.00..12.50 rows=1000 width=4)\n> -> Hash (cost=144406.48..144406.48 rows=1819864 width=4)\n> -> Hash Join (cost=417.91..144406.48 rows=1819864 width=4)\n> Hash Cond: (membership.b_id = b.b_id)\n> -> Seq Scan on membership (cost=0.00..83623.83\n> rows=4818983 width=8)\n> -> Hash (cost=348.52..348.52 rows=5551 width=4)\n> -> Index Scan using b_is_public on b\n> (cost=0.00..348.52 rows=5551 width=4)\n> Index Cond: (is_public = true)\n> Filter: is_public\n> (11 rows)\n>\n>\n> db=# EXPLAIN UPDATE a SET is_public = true WHERE a_id IN\n> ( SELECT a_id FROM a JOIN membership USING(a_id) JOIN b USING\n> (b_id) WHERE b.is_public = true);\n> ------------------------------------------------------------------------\n> -----------------------------------------\n> hash in join (cost=281680.17..370835.63 rows=1819864 width=90)\n> hash cond: (public.a.a_id = public.a.a_id)\n> -> seq scan on a (cost=0.00..47362.09 rows=2097309 width=90)\n> -> hash (cost=258931.87..258931.87 rows=1819864 width=8)\n> -> hash join (cost=73996.36..258931.87 rows=1819864 width=8)\n> hash cond: (membership.a_id = public.a.a_id)\n> -> hash join (cost=417.91..144406.48 rows=1819864\n> width=4)\n> hash cond: (membership.b_id = b.b_id)\n> -> seq scan on membership\n> (cost=0.00..83623.83 rows=4818983 width=8)\n> -> hash (cost=348.52..348.52 rows=5551 width=4)\n> -> index scan using\n> loc_submission_is_public on b (cost=0.00..348.52 rows=5551 width=4)\n> index cond: (is_public = true)\n> filter: is_public\n> -> hash (cost=47362.09..47362.09 rows=2097309 width=4)\n> -> seq scan on a (cost=0.00..47362.09\n> rows=2097309 width=4)\n\n\nwhy don't you rewrite your update statement to use joins (joins >\nwhere exists > where in)?\n\nWHERE a_id IN (SELECT count(*) FROM a\nthe above looks wrong maybe?\n\nmerlin\n", "msg_date": "Mon, 9 Apr 2007 17:08:02 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to efficiently update tuple in many-to-many relationship?" 
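Written out, the join-based UPDATE Merlin suggests would look roughly like the statement below. It is a sketch only, reusing the table and column names from the original post; because membership can match the same row of a several times, it leans on the fact that assigning the same constant value more than once is harmless.

    UPDATE a
       SET is_public = true
      FROM membership m
      JOIN b ON b.b_id = m.b_id
     WHERE m.a_id = a.a_id
       AND b.is_public = true;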
}, { "msg_contents": "Drew Wilson <[email protected]> writes:\n> I have 2 tables (A,B) joined in a many-to-many relationship via a \n> join table (\"membership\"), where updating table A based on table B \n> takes a very long time.\n> ...\n> -> Function Scan on a (cost=0.00..12.50 rows=1000 width=4)\n\nI think you've left out some relevant details ... there's nothing\nin what you said about a set-returning function ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 09 Apr 2007 17:43:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to efficiently update tuple in many-to-many relationship? " }, { "msg_contents": "My apologies. That function call was some test code to verify that my \nsubselect was only being called once.\n\nLet me try again, please.\n\nHere's the query plan for a SELECT statement that returns 1,207,161 \nrows in 6 seconds.\nMatchBox=# explain select count(translation_pair_id) from \ntranslation_pair_data\n join instance i using(translation_pair_id)\n join loc_submission ls using(loc_submission_id)\n where ls.is_public = true;\n QUERY PLAN\n------------------------------------------------------------------------ \n-----------------------------------\nAggregate (cost=299276.72..299276.73 rows=1 width=4)\n -> Hash Join (cost=59962.72..294036.83 rows=2095954 width=4)\n Hash Cond: (i.translation_pair_id = \ntranslation_pair_data.translation_pair_id)\n -> Hash Join (cost=369.15..177405.01 rows=2095954 width=4)\n Hash Cond: (i.loc_submission_id = ls.loc_submission_id)\n -> Seq Scan on instance i (cost=0.00..99016.16 \nrows=5706016 width=8)\n -> Hash (cost=296.92..296.92 rows=5778 width=4)\n -> Index Scan using loc_submission_is_public \non loc_submission ls (cost=0.00..296.92 rows=5778 width=4)\n Index Cond: (is_public = true)\n Filter: is_public\n -> Hash (cost=31861.92..31861.92 rows=1690292 width=4)\n -> Seq Scan on translation_pair_data \n(cost=0.00..31861.92 rows=1690292 width=4)\n\n\nAnd here's the query plan for the UPDATE query that seems to never \ncomplete. 
(Execution time > 30 minutes.)\nMatchBox=# explain update translation_pair_data set is_public = true\n where translation_pair_id in\n (select translation_pair_id from translation_pair_data\n join instance i using(translation_pair_id)\n join loc_submission ls using(loc_submission_id)\n where ls.is_public = true);\n QUERY PLAN\n------------------------------------------------------------------------ \n-------------------------------------------------------------\nHash IN Join (cost=328000.49..453415.65 rows=1690282 width=90)\n Hash Cond: (public.translation_pair_data.translation_pair_id = \npublic.translation_pair_data.translation_pair_id)\n -> Seq Scan on translation_pair_data (cost=0.00..31861.82 \nrows=1690282 width=90)\n -> Hash (cost=293067.74..293067.74 rows=2067660 width=8)\n -> Hash Join (cost=59958.35..293067.74 rows=2067660 width=8)\n Hash Cond: (i.translation_pair_id = \npublic.translation_pair_data.translation_pair_id)\n -> Hash Join (cost=365.00..177117.92 rows=2067660 \nwidth=4)\n Hash Cond: (i.loc_submission_id = \nls.loc_submission_id)\n -> Seq Scan on instance i \n(cost=0.00..99016.16 rows=5706016 width=8)\n -> Hash (cost=293.75..293.75 rows=5700 width=4)\n -> Index Scan using \nloc_submission_is_public on loc_submission ls (cost=0.00..293.75 \nrows=5700 width=4)\n Index Cond: (is_public = true)\n Filter: is_public\n -> Hash (cost=31861.82..31861.82 rows=1690282 width=4)\n -> Seq Scan on translation_pair_data \n(cost=0.00..31861.82 rows=1690282 width=4)\n\n\nI figure I must be doing something wrong here. Thanks for the help,\n\nDrew\n\nOn Apr 9, 2007, at 2:43 PM, Tom Lane wrote:\n\n> Drew Wilson <[email protected]> writes:\n>> I have 2 tables (A,B) joined in a many-to-many relationship via a\n>> join table (\"membership\"), where updating table A based on table B\n>> takes a very long time.\n>> ...\n>> -> Function Scan on a (cost=0.00..12.50 rows=1000 width=4)\n>\n> I think you've left out some relevant details ... there's nothing\n> in what you said about a set-returning function ...\n>\n> \t\t\tregards, tom lane\n\n", "msg_date": "Mon, 9 Apr 2007 18:29:41 -0700", "msg_from": "Drew Wilson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: how to efficiently update tuple in many-to-many relationship? " }, { "msg_contents": "Drew Wilson <[email protected]> writes:\n> Here's the query plan for a SELECT statement that returns 1,207,161 \n> rows in 6 seconds.\n> ...\n> And here's the query plan for the UPDATE query that seems to never \n> complete. (Execution time > 30 minutes.)\n\nWell, the subplan is certainly the same as before, so it seems there are\ntwo possibilities:\n\n* there's something unreasonably inefficient about the hash join being\nused to perform the IN (work_mem too small? inefficient-to-compare\ndatatype? bad data distribution?)\n\n* the time is actually going into the UPDATE operation proper, or\nperhaps some triggers it fires (have you got any foreign keys involving\nthis table? what's checkpoint_segments set to?)\n\nYou could narrow it down by checking the runtime for\nselect count(*) from translation_pair_data\n where translation_pair_id in\n (select translation_pair_id from translation_pair_data ...\n\nIf that's slow it's the topmost hash join's fault, else we have\nto look at the UPDATE's side effects.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 09 Apr 2007 22:13:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to efficiently update tuple in many-to-many relationship? 
" }, { "msg_contents": "Thanks for the suggestions, Tom. But I'm still stumped.\n\nOn Apr 9, 2007, at 7:13 PM, Tom Lane wrote:\n\n> Drew Wilson <[email protected]> writes:\n>> Here's the query plan for a SELECT statement that returns 1,207,161\n>> rows in 6 seconds.\n>> ...\n>> And here's the query plan for the UPDATE query that seems to never\n>> complete. (Execution time > 30 minutes.)\n>\n> Well, the subplan is certainly the same as before, so it seems \n> there are\n> two possibilities:\n>\n> * there's something unreasonably inefficient about the hash join being\n> used to perform the IN (work_mem too small? inefficient-to-compare\n> datatype? bad data distribution?)\nI'm not sure why. The ids are OIDs generated from a sequence, with no \ndeletions.\n\n> * the time is actually going into the UPDATE operation proper, or\n> perhaps some triggers it fires (have you got any foreign keys \n> involving\n> this table? what's checkpoint_segments set to?)\n\n> You could narrow it down by checking the runtime for\n> select count(*) from translation_pair_data\n> where translation_pair_id in\n> (select translation_pair_id from translation_pair_data ...\nAfter I increasing work_mem from 1M to 32M, checkpoint_segments from \n3 to 8, (and reloading), the UPDATE operation still takes about 15 \nminutes (944 seconds) to update 637,712 rows.\n\nWhereas replacing the the \"UPDATE ... WHERE translation_pair_id IN\" \nwith \"SELECT count(*) WHERE translation_pair_id IN\" drops the time \nfrom 15 minutes to 19 seconds (returning the same 637712 rows.)\n\n> If that's slow it's the topmost hash join's fault, else we have\n> to look at the UPDATE's side effects.\n\nThe SELECT is not slow, so its a side effect of the update... Looking \nat the table definition, there is a \"BEFORE ON DELETE\" trigger \ndefined, two CHECK constraints for this table, and three foreign \nkeys. Nothing looks suspicious to me.\nAny clues in the table description below?\n\nHere's the table definition. 
(And I've appended updated query plans \ndescriptions.)\n\nMatchBox=# \\d translation_pair_data Table \n\"public.translation_pair_data\"\n Column | Type | Modifiers\n---------------------+-----------------------------+---------------\ntranslation_pair_id | oid | not null\ntranslation_id | oid | not null\nhistory_id | oid | not null\nsource_id | oid | not null\ncreated_ts | timestamp without time zone | default now()\nlast_added_ts | timestamp without time zone | default now()\nobsolete | boolean |\nstyle | character(1) |\nlocalizability | boolean |\nui_restricted | boolean |\nlinguistic | boolean |\ngender | character(1) |\nplatforms | character varying[] |\nis_public | boolean |\nIndexes:\n \"translation_pair_pkey\" PRIMARY KEY, btree (translation_pair_id)\n \"translation_pair_source_id_key\" UNIQUE, btree (source_id, \ntranslation_id)\n \"translation_pair_created_date\" btree (date(created_ts))\n \"translation_pair_data_is_public\" btree (is_public)\n \"translation_pair_source_id\" btree (source_id)\n \"translation_pair_source_id_is_not_obsolete\" btree (source_id, \nobsolete) WHERE obsolete IS NOT TRUE\n \"translation_pair_translation_id\" btree (translation_id)\nCheck constraints:\n \"translation_pair_gender_check\" CHECK (gender = 'M'::bpchar OR \ngender = 'F'::bpchar OR gender = 'N'::bpchar)\n \"translation_pair_style_check\" CHECK (style = 'P'::bpchar OR \nstyle = 'O'::bpchar OR style = 'N'::bpchar)\nForeign-key constraints:\n \"translation_pair_history_id_fkey\" FOREIGN KEY (history_id) \nREFERENCES history(history_id)\n \"translation_pair_source_id_fkey\" FOREIGN KEY (source_id) \nREFERENCES source_data(source_id)\n \"translation_pair_translation_id_fkey\" FOREIGN KEY \n(translation_id) REFERENCES translation_data(translation_id)\nTriggers:\n del_tp_prodtype BEFORE DELETE ON translation_pair_data FOR EACH \nROW EXECUTE PROCEDURE eme_delete_tp_prodtype()\n\n\nThanks for all your help,\n\nDrew\n\np.s. 
here are the updated query plans after bumping work_mem to 32M.\n\nMatchBox=# explain select count(*) from \ntranslation_pair_data \n \n where translation_pair_id in (select \ntranslation_pair_id from translation_pair_data join instance i using \n(translation_pair_id) join loc_submission ls using(loc_submission_id) \nwhere ls.is_public = true);\n \nQUERY PLAN\n------------------------------------------------------------------------ \n-------------------------------------------------------------------\nAggregate (cost=424978.46..424978.47 rows=1 width=0)\n -> Hash IN Join (cost=324546.91..420732.64 rows=1698329 width=0)\n Hash Cond: \n(public.translation_pair_data.translation_pair_id = \npublic.translation_pair_data.translation_pair_id)\n -> Seq Scan on translation_pair_data (cost=0.00..38494.29 \nrows=1698329 width=4)\n -> Hash (cost=290643.93..290643.93 rows=2006718 width=8)\n -> Hash Join (cost=66710.78..290643.93 rows=2006718 \nwidth=8)\n Hash Cond: (i.translation_pair_id = \npublic.translation_pair_data.translation_pair_id)\n -> Hash Join (cost=352.38..169363.36 \nrows=2006718 width=4)\n Hash Cond: (i.loc_submission_id = \nls.loc_submission_id)\n -> Seq Scan on instance i \n(cost=0.00..99016.16 rows=5706016 width=8)\n -> Hash (cost=283.23..283.23 rows=5532 \nwidth=4)\n -> Index Scan using \nloc_submission_is_public on loc_submission ls (cost=0.00..283.23 \nrows=5532 width=4)\n Index Cond: (is_public = true)\n Filter: is_public\n -> Hash (cost=38494.29..38494.29 rows=1698329 \nwidth=4)\n -> Seq Scan on translation_pair_data \n(cost=0.00..38494.29 rows=1698329 width=4)\n\nThe SELECT above takes approx 20s, whereas this UPDATE below takes \n944s (15 minutes)\n\nMatchBox=# explain update translation_pair_data set is_public = true \nwhere translation_pair_id in (select translation_pair_id from \ntranslation_pair_data join instance i using(translation_pair_id) join \nloc_submission ls using(loc_submission_id) where ls.is_public = true);\n QUERY PLAN\n------------------------------------------------------------------------ \n-------------------------------------------------------------\nHash IN Join (cost=324546.91..457218.64 rows=1698329 width=90)\n Hash Cond: (public.translation_pair_data.translation_pair_id = \npublic.translation_pair_data.translation_pair_id)\n -> Seq Scan on translation_pair_data (cost=0.00..38494.29 \nrows=1698329 width=90)\n -> Hash (cost=290643.93..290643.93 rows=2006718 width=8)\n -> Hash Join (cost=66710.78..290643.93 rows=2006718 width=8)\n Hash Cond: (i.translation_pair_id = \npublic.translation_pair_data.translation_pair_id)\n -> Hash Join (cost=352.38..169363.36 rows=2006718 \nwidth=4)\n Hash Cond: (i.loc_submission_id = \nls.loc_submission_id)\n -> Seq Scan on instance i \n(cost=0.00..99016.16 rows=5706016 width=8)\n -> Hash (cost=283.23..283.23 rows=5532 width=4)\n -> Index Scan using \nloc_submission_is_public on loc_submission ls (cost=0.00..283.23 \nrows=5532 width=4)\n Index Cond: (is_public = true)\n Filter: is_public\n -> Hash (cost=38494.29..38494.29 rows=1698329 width=4)\n -> Seq Scan on translation_pair_data \n(cost=0.00..38494.29 rows=1698329 width=4)\n\n", "msg_date": "Mon, 9 Apr 2007 22:46:29 -0700", "msg_from": "Drew Wilson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: how to efficiently update tuple in many-to-many relationship? " }, { "msg_contents": "Drew Wilson <[email protected]> writes:\n> The SELECT is not slow, so its a side effect of the update... 
Looking \n> at the table definition, there is a \"BEFORE ON DELETE\" trigger \n> defined, two CHECK constraints for this table, and three foreign \n> keys. Nothing looks suspicious to me.\n\nSince this is an update we can ignore the before-delete trigger, and\nthe check constraints don't look expensive to test. Outgoing foreign\nkey references are normally not a problem either, since there must\nbe an index on the other end. But *incoming* foreign key references\nmight be an issue --- are there any linking to this table?\n\nAlso, the seven indexes seem a bit excessive. I'm not sure if that's\nwhere the update time is going, but they sure aren't helping, and\nsome of them seem redundant anyway. In particular I think that the\npartial index WHERE obsolete IS NOT TRUE is probably a waste (do you\nhave any queries you know use it? what do they look like?) and you\nprobably don't need all three combinations of source_id and\ntranslation_id --- see discussion here:\nhttp://www.postgresql.org/docs/8.2/static/indexes-bitmap-scans.html\n\nBTW, I don't think you ever mentioned what PG version this is exactly?\nIf it's 8.1 or later it would be worth slogging through EXPLAIN ANALYZE\non the update, or maybe an update of 10% or so of the rows if you're\nimpatient. That would break out the time spent in the triggers, which\nwould let us eliminate them (or not) as the cause of the problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Apr 2007 09:54:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to efficiently update tuple in many-to-many relationship? " }, { "msg_contents": "\nOn Apr 10, 2007, at 6:54 AM, Tom Lane wrote:\n\n> Drew Wilson <[email protected]> writes:\n>> The SELECT is not slow, so its a side effect of the update... Looking\n>> at the table definition, there is a \"BEFORE ON DELETE\" trigger\n>> defined, two CHECK constraints for this table, and three foreign\n>> keys. Nothing looks suspicious to me.\n>\n> Since this is an update we can ignore the before-delete trigger, and\n> the check constraints don't look expensive to test. Outgoing foreign\n> key references are normally not a problem either, since there must\n> be an index on the other end. But *incoming* foreign key references\n> might be an issue --- are there any linking to this table?\nThere is only one incoming foreign key - the one coming in from the \nmany-to-many join table ('instance').\n\n>\n> Also, the seven indexes seem a bit excessive. I'm not sure if that's\n> where the update time is going, but they sure aren't helping, and\n> some of them seem redundant anyway. In particular I think that the\n> partial index WHERE obsolete IS NOT TRUE is probably a waste (do you\n> have any queries you know use it? what do they look like?) and you\n> probably don't need all three combinations of source_id and\n> translation_id --- see discussion here:\n> http://www.postgresql.org/docs/8.2/static/indexes-bitmap-scans.html\n99% of our queries use obsolete IS NOT TRUE, so we have an index on \nthis.\n\n> BTW, I don't think you ever mentioned what PG version this is exactly?\n> If it's 8.1 or later it would be worth slogging through EXPLAIN \n> ANALYZE\n> on the update, or maybe an update of 10% or so of the rows if you're\n> impatient. That would break out the time spent in the triggers, which\n> would let us eliminate them (or not) as the cause of the problem.\nSorry. 
I'm using 8.2.3 on Mac OS X 10.4.9, w/ 2.Ghz Intel Core Duo, \nand 2G RAM.\n\nIf I understand the EXPLAIN ANALYZE results below, it looks like the \ntime spent applying the \"set is_public = true\" is much much more than \nthe fetch. I don't see any triggers firing. Is there something else I \ncan look for in the logs?\n\nHere is the explain analyze output:\nMatchBox=# EXPLAIN ANALYZE UPDATE translation_pair_data SET is_public \n= true\n WHERE translation_pair_id IN\n (SELECT translation_pair_id FROM translation_pair_data\n JOIN instance i using(translation_pair_id)\n JOIN loc_submission ls using(loc_submission_id)\n WHERE ls.is_public = true);\n QUERY PLAN\n------------------------------------------------------------------------ \n---------------------------------------------\nHash IN Join (cost=324546.91..457218.64 rows=1698329 width=90) \n(actual time=12891.309..33621.801 rows=637712 loops=1)\n Hash Cond: (public.translation_pair_data.translation_pair_id = \npublic.translation_pair_data.translation_pair_id)\n -> Seq Scan on translation_pair_data (cost=0.00..38494.29 \nrows=1698329 width=90) (actual time=0.045..19352.184 rows=1690272 \nloops=1)\n -> Hash (cost=290643.93..290643.93 rows=2006718 width=8) \n(actual time=10510.411..10510.411 rows=1207161 loops=1)\n -> Hash Join (cost=66710.78..290643.93 rows=2006718 \nwidth=8) (actual time=1810.299..9821.862 rows=1207161 loops=1)\n Hash Cond: (i.translation_pair_id = \npublic.translation_pair_data.translation_pair_id)\n -> Hash Join (cost=352.38..169363.36 rows=2006718 \nwidth=4) (actual time=11.369..6273.439 rows=1207161 loops=1)\n Hash Cond: (i.loc_submission_id = \nls.loc_submission_id)\n -> Seq Scan on instance i \n(cost=0.00..99016.16 rows=5706016 width=8) (actual \ntime=0.029..3774.705 rows=5705932 loops=1)\n -> Hash (cost=283.23..283.23 rows=5532 \nwidth=4) (actual time=11.277..11.277 rows=5563 loops=1)\n -> Index Scan using \nloc_submission_is_public on loc_submission ls (cost=0.00..283.23 \nrows=5532 width=4) (actual time=0.110..7.717 rows=5563 loops=1)\n Index Cond: (is_public = true)\n Filter: is_public\n -> Hash (cost=38494.29..38494.29 rows=1698329 \nwidth=4) (actual time=1796.574..1796.574 rows=1690272 loops=1)\n -> Seq Scan on translation_pair_data \n(cost=0.00..38494.29 rows=1698329 width=4) (actual \ntime=0.012..917.006 rows=1690272 loops=1)\nTotal runtime: 1008985.005 ms\n\nThanks for your help,\n\nDrew\n", "msg_date": "Tue, 10 Apr 2007 08:37:31 -0700", "msg_from": "Drew Wilson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: how to efficiently update tuple in many-to-many relationship? " }, { "msg_contents": "Drew Wilson <[email protected]> writes:\n> If I understand the EXPLAIN ANALYZE results below, it looks like the \n> time spent applying the \"set is_public = true\" is much much more than \n> the fetch. I don't see any triggers firing.\n\nNope, there aren't any. 8.2 is smart enough to bypass firing FK\ntriggers on UPDATE if the key columns didn't change. Of course that\ncheck takes a certain amount of time, but I don't think it's a large\namount. So basically we're looking at index update and WAL logging\nas the major costs here, I think.\n\n[ thinks for a bit... ] Part of the problem might be that the working\nset for updating all the indexes is too large. What do you have\nshared_buffers set to, and can you increase it?\n\nOh, and checkpoint_segments at 8 is probably still not nearly enough;\nif you have disk space to spare, try something like 30 (which'd eat\nabout 1Gb of disk space). 
It might be educational to set\ncheckpoint_warning to 300 or so and watch the log to see how often\ncheckpoints happen during the update --- you want them at least a couple\nminutes apart even during max load.\n\nAlso, bumping up wal_buffers a little might help some, for large updates\nlike this. I've heard people claim that values as high as 64 are helpful.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Apr 2007 12:08:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to efficiently update tuple in many-to-many relationship? " }, { "msg_contents": "Hey there;\n\nI'm trying to tune the memory usage of a new machine that has a -lot- of \nmemory in it (32 gigs). We're upgrading from a machine that had 16 gigs \nof RAM and using a database that's around 130-some gigs on disc. Our \nlargest tables have in the order of close to 10 million rows.\n\nProblem is, the postgres documentation isn't always clear about what \ndifferent memory things are used for and it's definitely not clear about \nwhat 'useful values' would be for various things. Further, looking \nonline, gets a lot of random stuff and most of the configuration \ninformation out there is for pre-8.1 versions that don't have all these \nnew and strange values :)\n\nThis machine exists only for the database. With that in mind, a few \nquestions.\n\n\n- I've set up a configuration (I'll show important values below), and \nI\"m wondering if there's any way I can actually see the distribution of \nmemory in the DB and how the memory is being used.\n\n- What is temp_buffers used for exactly? Does this matter for, say, \nnested queries or anything in specific? Is there any case where having \nthis as a large number actually -helps-?\n\n- Do full_page_writes and wal_buffers settings matter AT ALL for a machine \nwhere fysnc = off ?\n\n- What does wal_buffers mean and does increasing this value actually help \nanything?\n\n- Any idea if this is a smart configuration for this machine? It's a \nRedhat Enterprise Linux machine (kernel 2.6.18), 8 dual-core AMD 64bit \nprocessors, 32 gigs of RAM, 4x 176 (or whatever the exact number is) gig \nSCSI hard drives in a stripe. Only values I have modified are mentioned, \neverything else left at default:\n\nshared_buffers = 16GB\ntemp_buffers = 128MB\nmax_prepared_transactions = 0\n\n# This value is going to probably set off cries of using this as a set\n# command instead of a big global value; however there's more big queries\n# than small ones and the number of simultaneous users is very small so\n# 'for now' this can be set globally big and if it shows improvement\n# I'll implement it as set commands later.\n#\n# Question; does this mean 2 gigs will be immediately allocated to\n# every query, or is this just how big the work memory is allowed to\n# grow per transaction?\nwork_mem=2G\n\nmaintenance_work_mem = 4GB\nmax_stack_depth = 16MB\n\n# Vacuum suggested I make this 'over 3600000' on the old machine, so\n# I use this value; if it's too big, this is a symptom of another problem,\n# I'd be interested to know :)\nmax_fsm_pages = 5000000\n\n# For a lot of reasons, it doesn't make any sense to use fsync for this\n# DB. 
Read-only during the day, backed up daily, UPS'd, etc.\nfsync = off\nfull_page_writes = off\nwal_buffers = 512MB\n\n# Leaving this low makes the DB complain, but I'm not sure what's \n# reasonable.\ncheckpoint_segments = 128\n\nrandom_page_cost = 1.5\ncpu_tuple_cost = 0.001\ncpu_index_tuple_cost = 0.0005\ncpu_operator_cost = 0.00025\neffective_cache_size = 8GB\n\ndefault_statistics_target = 100\n\n\n\n\nThanks for all your help!\n\nSteve\n", "msg_date": "Tue, 10 Apr 2007 15:28:48 -0400 (EDT)", "msg_from": "Steve <[email protected]>", "msg_from_op": false, "msg_subject": "Question about memory allocations" }, { "msg_contents": "Steve <[email protected]> writes:\n> - What is temp_buffers used for exactly?\n\nTemporary tables. Pages of temp tables belonging to your own backend\ndon't ever get loaded into the main shared-buffers arena, they are read\ninto backend-local memory. temp_buffers is the max amount (per backend)\nof local memory to use for this purpose.\n\n> - Do full_page_writes and wal_buffers settings matter AT ALL for a machine \n> where fysnc = off ?\n\nYes.\n\n> - What does wal_buffers mean and does increasing this value actually help \n> anything?\n\nIt's the amount of space available to buffer WAL log data that's not\nbeen written to disk. If you have a lot of short transactions then\nthere's not much benefit to increasing it (because the WAL will be\ngetting forced to disk frequently anyway) but I've heard reports that\nfor workloads involving long single transactions bumping it up to 64\nor 100 or so helps.\n\n> - Any idea if this is a smart configuration for this machine?\n\nUm ... you didn't mention which PG version?\n\n> # This value is going to probably set off cries of using this as a set\n> # command instead of a big global value;\n\nNo kidding. You do NOT want work_mem that high, at least not without an\nextremely predictable, simple workload.\n\n> wal_buffers = 512MB\n\nI haven't heard any reports that there's a point in values even as high\nas 1 meg for this.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 12 Apr 2007 00:31:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about memory allocations " }, { "msg_contents": "\n> Steve <[email protected]> writes:\n>> - What is temp_buffers used for exactly?\n>\n> Temporary tables. Pages of temp tables belonging to your own backend\n> don't ever get loaded into the main shared-buffers arena, they are read\n> into backend-local memory. temp_buffers is the max amount (per backend)\n> of local memory to use for this purpose.\n\n \tAre these only tables explicitly stated as 'temporary' (which as I \nrecall is a create table option) or are temporary tables used for other \nthings to like, say, nested queries or other lil in the background things?\n\n>> - Any idea if this is a smart configuration for this machine?\n>\n> Um ... you didn't mention which PG version?\n>\n\n \tThe latest and greatest stable as downloaded a couple days ago. \n8.2.3. :)\n\n\nThanks for the info!\n\n\nSteve\n", "msg_date": "Thu, 12 Apr 2007 12:46:00 -0400 (EDT)", "msg_from": "Steve <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about memory allocations " }, { "msg_contents": "On Tue, 10 Apr 2007, Steve wrote:\n\n> - I've set up a configuration (I'll show important values below), and I\"m \n> wondering if there's any way I can actually see the distribution of memory in \n> the DB and how the memory is being used.\n\nI didn't notice anyone address this for you yet. 
There is a tool in \ncontrib/pg_buffercache whose purpose in life is to show you what the \nshared buffer cache has inside it. The documentation in that directory \nleads through installing it. The additional variable you'll likely never \nknow is what additional information is inside the operating system's \nbuffer cache.\n\n> # Leaving this low makes the DB complain, but I'm not sure what's # \n> reasonable.\n> checkpoint_segments = 128\n\nThat's a reasonable setting for a large server. The main downside to \nsetting it that high is longer recovery periods after a crash, but I doubt \nthat's a problem for you if you're so brazen as to turn off fsync.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 12 Apr 2007 22:50:08 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about memory allocations" }, { "msg_contents": "> I didn't notice anyone address this for you yet. There is a tool in \n> contrib/pg_buffercache whose purpose in life is to show you what the shared \n> buffer cache has inside it. The documentation in that directory leads \n> through installing it. The additional variable you'll likely never know is \n> what additional information is inside the operating system's buffer cache.\n\n \tOkay -- thanks! I'll take a look at this.\n\n>> # Leaving this low makes the DB complain, but I'm not sure what's # \n>> reasonable.\n>> checkpoint_segments = 128\n>\n> That's a reasonable setting for a large server. The main downside to setting \n> it that high is longer recovery periods after a crash, but I doubt that's a \n> problem for you if you're so brazen as to turn off fsync.\n\n \tHahaha yeah. It's 100% assumed that if something goes bad we're \nrestoring from the previous day's backup. However because the DB is read \nonly for -most- of the day and only read/write at night it's acceptable \nrisk for us anyway. But good to know that's a reasonable value.\n\n\nSteve\n", "msg_date": "Thu, 12 Apr 2007 23:57:24 -0400 (EDT)", "msg_from": "Steve <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about memory allocations" }, { "msg_contents": "On Tue, 2007-04-10 at 15:28 -0400, Steve wrote:\n> \n> I'm trying to tune the memory usage of a new machine that has a -lot- of \n> memory in it (32 gigs).\n\n...\n> \n> shared_buffers = 16GB\n\nReally?\n\nWow!\n\nCommon wisdom in the past has been that values above a couple of hundred\nMB will degrade performance. Have you done any benchmarks on 8.2.x that\nshow that you get an improvement from this, or did you just take the\n\"too much of a good thing is wonderful\" approach?\n\nCheers,\n\t\t\t\t\tAndrew.\n\n-------------------------------------------------------------------------\nAndrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)803-2201 MOB: +64(272)DEBIAN OFFICE: +64(4)499-2267\n You have an unusual equipment for success. Be sure to use it properly.\n-------------------------------------------------------------------------", "msg_date": "Fri, 13 Apr 2007 20:35:55 +1200", "msg_from": "Andrew McMillan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about memory allocations" }, { "msg_contents": "> Really?\n>\n> Wow!\n>\n> Common wisdom in the past has been that values above a couple of hundred\n> MB will degrade performance. 
Have you done any benchmarks on 8.2.x that\n> show that you get an improvement from this, or did you just take the\n> \"too much of a good thing is wonderful\" approach?\n>\n\n \tNot to be rude, but there's more common wisdom on this particular \nsubject than anything else in postgres I'd say ;) I think I recently read \nsomeone else on this list who's laundry-listed the recommended memory \nvalues that are out there these days and pretty much it ranges from \nwhat you've just said to \"half of system memory\".\n\n \tI've tried many memory layouts, and in my own experience with \nthis huge DB, more -does- appear to be better but marginally so; more \nmemory alone won't fix a speed problem. It may be a function of how much \nreading/writing is done to the DB and if fsync is used or not if that \nmakes any sense :) Seems there's no \"silver bullet\" to the shared_memory \nquestion. Or if there is, nobody can agree on it ;)\n\n\nAnyway, talk to you later!\n\n\nSteve\n", "msg_date": "Fri, 13 Apr 2007 12:38:17 -0400 (EDT)", "msg_from": "Steve <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about memory allocations" }, { "msg_contents": "At 12:38 PM 4/13/2007, Steve wrote:\n>>Really?\n>>\n>>Wow!\n>>\n>>Common wisdom in the past has been that values above a couple of hundred\n>>MB will degrade performance. Have you done any benchmarks on 8.2.x that\n>>show that you get an improvement from this, or did you just take the\n>>\"too much of a good thing is wonderful\" approach?\n>\n> Not to be rude, but there's more common wisdom on this \n> particular subject than anything else in postgres I'd say ;) I \n> think I recently read someone else on this list who's \n> laundry-listed the recommended memory values that are out there \n> these days and pretty much it ranges from what you've just said to \n> \"half of system memory\".\n>\n> I've tried many memory layouts, and in my own experience \n> with this huge DB, more -does- appear to be better but marginally \n> so; more memory alone won't fix a speed problem. It may be a \n> function of how much reading/writing is done to the DB and if fsync \n> is used or not if that makes any sense :) Seems there's no \"silver \n> bullet\" to the shared_memory question. Or if there is, nobody can \n> agree on it ;)\n\nOne of the reasons for the wide variance in suggested values for pg \nmemory use is that pg 7.x and pg 8.x are =very= different beasts.\n\nIf you break the advice into pg 7.x and pg 8.x categories, you find \nthat there is far less variation in the suggestions.\n\nBottom line: pg 7.x could not take advantage of larger sums of memory \nanywhere near as well as pg 8.x can.\n\nCheers,\nRon \n\n", "msg_date": "Fri, 13 Apr 2007 14:23:08 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about memory allocations" }, { "msg_contents": "Steve wrote:\n>>\n>> Common wisdom in the past has been that values above a couple of hundred\n>> MB will degrade performance. \n\nThe annotated config file talks about setting shared_buffers to a third \nof the\navailable memory --- well, it says \"it should be no more than 1/3 of the \ntotal\namount of memory\" (quoting off the top of my head). Don't recall seeing\nany warning about not exceeding a few hundred megabytes.\n\nMy eternal curiosity when it comes to this memory and shared_buffers thing:\n\nHow does PG take advantage of the available memory? I mean, if I have a\nmachine with, say, 4 or 8GB of memory, how will those GBs would end\nup being used? 
They just do?? (I mean, I would find that a vaild \nanswer;\nbut I ask, because this configuration parameters stuff makes me think that\nperhaps PG does not simply use whatever memory is in there, but it has\nto go through the parameters in the config file to allocate whatever it has\nto use).\n\nSo, is it just like that? We put more memory and PG will automatically\nmake use of it?\n\nCarlos\n--\n\n", "msg_date": "Fri, 13 Apr 2007 14:53:53 -0400", "msg_from": "Carlos Moreno <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about memory allocations" }, { "msg_contents": "On Friday 13 April 2007 14:53:53 Carlos Moreno wrote:\n> How does PG take advantage of the available memory?  I mean, if I have a\n> machine with, say, 4 or 8GB of memory, how will those GBs would end\n> up being used?   They just do??   (I mean, I would find that a vaild\n> answer;\n\nOn linux the filesystem cache will gobble them up, which means indirectly \npgsql profits as well (assuming no other apps poison the fs cache).\n\njan\n\n-- \n--------------------------------------------------------------\nJan de Visser                     [email protected]\n\n                Baruk Khazad! Khazad ai-menu!\n--------------------------------------------------------------\n", "msg_date": "Fri, 13 Apr 2007 16:24:55 -0400", "msg_from": "\"Jan de Visser\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about memory allocations" }, { "msg_contents": "Ron <[email protected]> writes:\n> One of the reasons for the wide variance in suggested values for pg \n> memory use is that pg 7.x and pg 8.x are =very= different beasts.\n> If you break the advice into pg 7.x and pg 8.x categories, you find \n> that there is far less variation in the suggestions.\n> Bottom line: pg 7.x could not take advantage of larger sums of memory \n> anywhere near as well as pg 8.x can.\n\nActually I think it was 8.1 that really broke the barrier in terms of\nscalability of shared_buffers. Pre-8.1, the buffer manager just didn't\nscale well enough to make it useful to use more than a few hundred meg.\n(In fact, we never even bothered to fix the shared-memory-sizing\ncalculations to be able to deal with >2GB shared memory until 8.1;\nif you try it in 8.0 it'll probably just crash.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 14 Apr 2007 12:37:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about memory allocations " } ]
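Picking up Greg Smith's earlier pointer to contrib/pg_buffercache: once that module is installed, a query along the following lines shows which relations of the current database are sitting in shared_buffers, which is one answer to Steve's original question about seeing how the memory is actually being used. This is adapted from the module's documentation and is only a sketch; column details can differ slightly between releases.

    SELECT c.relname, count(*) AS buffers
      FROM pg_buffercache b
      JOIN pg_class c ON c.relfilenode = b.relfilenode
     WHERE b.reldatabase = (SELECT oid FROM pg_database
                             WHERE datname = current_database())
     GROUP BY c.relname
     ORDER BY buffers DESC
     LIMIT 10;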
[ { "msg_contents": "Hi,\n\nI'm using RHEL4 and wondering if I need to upgrade the php and php-pgsql\npackages when upgrading from Postgres 7.4.1 to 8.2.3.\n\nAny Help?\n\nThanks\n\nMike\n\nHi,I'm using RHEL4 and wondering if I need to upgrade the php and php-pgsql packages when upgrading from Postgres 7.4.1 to 8.2.3.Any Help?ThanksMike", "msg_date": "Tue, 10 Apr 2007 15:44:52 -0400", "msg_from": "\"Michael Dengler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Do I need to rebuild php-pgsql for 8.2.3" }, { "msg_contents": "On 4/10/07, Michael Dengler <[email protected]> wrote:\n> I'm using RHEL4 and wondering if I need to upgrade the php and php-pgsql\n> packages when upgrading from Postgres 7.4.1 to 8.2.3.\n\nNo you don't. Devrim Gunduz provides compat RPM for a long time now.\n\nSee http://developer.postgresql.org/~devrim/rpms/compat/ and choose\nthe correct package for your architecture.\n\n--\nGuillaume\n", "msg_date": "Tue, 10 Apr 2007 22:55:03 +0200", "msg_from": "\"Guillaume Smet\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I need to rebuild php-pgsql for 8.2.3" }, { "msg_contents": "Hi,\n\nOn Tue, 2007-04-10 at 22:55 +0200, Guillaume Smet wrote:\n> See http://developer.postgresql.org/~devrim/rpms/compat/ and choose\n> the correct package for your architecture. \n\n... or better, each RHEL4 directory in our FTP site has compat package\n(that directory is not up2date now).\n\nRegards,\n-- \nDevrim GÜNDÜZ\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: plPHP, ODBCng - http://www.commandprompt.com/", "msg_date": "Wed, 11 Apr 2007 00:01:22 +0300", "msg_from": "Devrim =?ISO-8859-1?Q?G=DCND=DCZ?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I need to rebuild php-pgsql for 8.2.3" }, { "msg_contents": "Hi,\n\nThanks for the info. One more thing....I am in rpm hell. When I try to\n# rpm -Uvh postgresql-libs-8.2.3-1PGDG.i686.rpm\nI get:\nerror: Failed dependencies:\n libpq.so.3 is needed by (installed) perl-DBD-Pg-1.31-6.i386\n libpq.so.3 is needed by (installed)\npostgresql-python-7.4.13-2.RHEL4.1.i386\n libpq.so.3 is needed by (installed) php-pgsql-4.3.9-3.15.i386\nand when I try:\n# rpm -ivh compat-postgresql-libs-3-3PGDG.i686.rpm\nI get:\nerror: Failed dependencies:\n postgresql-libs < 8.0.2 conflicts with\ncompat-postgresql-libs-3-3PGDG.i686\ngrrrrr...\nshould just force the upgrade (ie. --nodeps)?\n\nThanks\n\nMike\n\n\nOn 4/10/07, Devrim GÜNDÜZ <[email protected]> wrote:\n>\n> Hi,\n>\n> On Tue, 2007-04-10 at 22:55 +0200, Guillaume Smet wrote:\n> > See http://developer.postgresql.org/~devrim/rpms/compat/ and choose\n> > the correct package for your architecture.\n>\n> ... or better, each RHEL4 directory in our FTP site has compat package\n> (that directory is not up2date now).\n>\n> Regards,\n> --\n> Devrim GÜNDÜZ\n> PostgreSQL Replication, Consulting, Custom Development, 24x7 support\n> Managed Services, Shared and Dedicated Hosting\n> Co-Authors: plPHP, ODBCng - http://www.commandprompt.com/\n>\n>\n>\n>\n\nHi,Thanks for the info. One more thing....I am in rpm hell. 
When I try to\n# rpm -Uvh postgresql-libs-8.2.3-1PGDG.i686.rpm\nI get:\nerror: Failed dependencies:\n libpq.so.3 is needed by (installed) perl-DBD-Pg-1.31-6.i386\n libpq.so.3 is needed by (installed)\npostgresql-python-7.4.13-2.RHEL4.1.i386\n libpq.so.3 is needed by (installed) php-pgsql-4.3.9-3.15.i386\nand when I try:\n# rpm -ivh compat-postgresql-libs-3-3PGDG.i686.rpm\nI get:\nerror: Failed dependencies:\n postgresql-libs < 8.0.2 conflicts with\ncompat-postgresql-libs-3-3PGDG.i686\ngrrrrr...\nshould just force the upgrade (ie. --nodeps)?\n\nThanks\n\nMike\n\n\nOn 4/10/07, Devrim GÜNDÜZ <[email protected]> wrote:\n>\n> Hi,\n>\n> On Tue, 2007-04-10 at 22:55 +0200, Guillaume Smet wrote:\n> > See http://developer.postgresql.org/~devrim/rpms/compat/ and choose\n> > the correct package for your architecture.\n>\n> ... or better, each RHEL4 directory in our FTP site has compat package\n> (that directory is not up2date now).\n>\n> Regards,\n> --\n> Devrim GÜNDÜZ\n> PostgreSQL Replication, Consulting, Custom Development, 24x7 support\n> Managed Services, Shared and Dedicated Hosting\n> Co-Authors: plPHP, ODBCng - http://www.commandprompt.com/\n>\n>\n>\n>\n", "msg_date": "Wed, 11 Apr 2007 13:25:25 -0400", "msg_from": "\"Michael Dengler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Do I need to rebuild php-pgsql for 8.2.3" }, { "msg_contents": "Here's what I do... \n \n1) Install postgresql-libs from the RHEL source\n2) Install compat-postgresql-libs from postgresql.org (install, not\nupgrade, use rpm -hiv) use force if necessary\n3) Install postgresq-libs from postgresql.org (again, install, not\nupgrade, use rpm-hiv) use force if necessary\n \nIf done correctly, you'll end up with all 3 client versions:\n \n/usr/lib/libpq.so.3\n/usr/lib/libpq.so.4\n/usr/lib/libpq.so.5\n \n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Michael\nDengler\nSent: Wednesday, April 11, 2007 12:25 PM\nTo: Devrim GÜNDÜZ\nCc: pgsql-performance; Guillaume Smet\nSubject: Re: [PERFORM] Do I need to rebuild php-pgsql for 8.2.3\n\n\nHi,\n\nThanks for the info. One more thing....I am in rpm hell. When I try to \n# rpm -Uvh postgresql-libs-8.2.3-1PGDG.i686.rpm\nI get:\nerror: Failed dependencies:\n libpq.so.3 is needed by (installed) perl-DBD-Pg-1.31-6.i386\n libpq.so.3 is needed by (installed)\npostgresql-python-7.4.13-2.RHEL4.1.i386\n libpq.so.3 is needed by (installed) php-pgsql-4.3.9-3.15.i386\nand when I try:\n# rpm -ivh compat-postgresql-libs-3-3PGDG.i686.rpm \nI get:\nerror: Failed dependencies:\n postgresql-libs < 8.0.2 conflicts with\ncompat-postgresql-libs-3-3PGDG.i686\ngrrrrr...\nshould just force the upgrade (ie. --nodeps)?\n\nThanks\n\nMike\n\n\n\nOn 4/10/07, Devrim GÜNDÜZ <[email protected]> wrote: \n\nHi,\n\nOn Tue, 2007-04-10 at 22:55 +0200, Guillaume Smet wrote:\n> See http://developer.postgresql.org/~devrim/rpms/compat/ and choose\n> the correct package for your architecture. \n\n... or better, each RHEL4 directory in our FTP site has compat package\n(that directory is not up2date now).\n\nRegards,\n--\nDevrim GÜNDÜZ\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support \nManaged Services, Shared and Dedicated Hosting\nCo-Authors: plPHP, ODBCng - http://www.commandprompt.com/\n", "msg_date": "Wed, 11 Apr 2007 12:53:39 -0500", "msg_from": "\"Adam Rich\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Do I need to rebuild php-pgsql for 8.2.3" }, { "msg_contents": "Hi,\n\nOn Wed, 2007-04-11 at 13:25 -0400, Michael Dengler wrote:\n\n> Thanks for the info. One more thing....I am in rpm hell. When I try\n> to \n> # rpm -Uvh postgresql-libs-8.2.3-1PGDG.i686.rpm\n> I get:\n> error: Failed dependencies:\n> libpq.so.3 is needed by (installed) perl-DBD-Pg-1.31-6.i386\n> libpq.so.3 is needed by (installed)\n> postgresql-python-7.4.13-2.RHEL4.1.i386\n> libpq.so.3 is needed by (installed) php-pgsql-4.3.9-3.15.i386\n> and when I try:\n> # rpm -ivh compat-postgresql-libs-3-3PGDG.i686.rpm \n> I get:\n> error: Failed dependencies:\n> postgresql-libs < 8.0.2 conflicts with\n> compat-postgresql-libs-3-3PGDG.i686\n\nIt seems that you already have PostgreSQL installed on your server. 
", "msg_date": "Wed, 11 Apr 2007 17:47:41 -0400", "msg_from": "\"Michael Dengler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Do I need to rebuild php-pgsql for 8.2.3" } ]
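The three-step recipe discussed in this thread, spelled out as concrete commands, looks roughly like the following. It is only a sketch: the file name of the RHEL-supplied postgresql-libs package depends on the RHEL4 update level and architecture, so that name is an assumed example, while the two PGDG file names are the ones quoted in the thread.

    # 1) client libs shipped with RHEL4 (file name is an assumed example)
    rpm -hiv postgresql-libs-7.4.13-2.RHEL4.1.i386.rpm
    # 2) compat package from postgresql.org: install (-i), do not upgrade;
    #    add --force or --nodeps only if rpm complains, as discussed above
    rpm -hiv compat-postgresql-libs-3-3PGDG.i686.rpm
    # 3) 8.2.3 client libs from postgresql.org, again installed alongside
    rpm -hiv postgresql-libs-8.2.3-1PGDG.i686.rpm
    # afterwards /usr/lib should hold libpq.so.3, libpq.so.4 and libpq.so.5

The point of the compat package is to keep libpq.so.3 on disk, so the already-installed perl-DBD-Pg, postgresql-python and php-pgsql packages continue to work next to the new 8.2 client library.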
[ { "msg_contents": "We had problems again, caused by long running transactions. I'm\nmonitoring the pg_stat_activity view, checking the query_start of all\nrequests that are not idle - but this one slipped under the radar as the\napplication was running frequent queries towards the database.\n\nThat's not what concerns me most. We had two databases running under\npostgres at this host - like, main production database (A) and a\nseparate smaller database for a separate project (B). As far as I\nunderstood postgres philosophy, the databases should be isolated from\neach other, i.e. one are not allowed to create a query that goes across\nthe database borders (select * from A.customers join B.logins ...). So,\nI was surprised to see that the application working towards database B\nmanaged to jam up database A, to the extent that we couldn't get A\nvacuumed properly.\n", "msg_date": "Wed, 11 Apr 2007 00:50:37 +0200", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": true, "msg_subject": "Long running transactions again ..." }, { "msg_contents": "On Wed, Apr 11, 2007 at 12:50:37AM +0200, Tobias Brox wrote:\n> We had problems again, caused by long running transactions. I'm\n> monitoring the pg_stat_activity view, checking the query_start of all\n> requests that are not idle - but this one slipped under the radar as the\n> application was running frequent queries towards the database.\n> \n> That's not what concerns me most. We had two databases running under\n> postgres at this host - like, main production database (A) and a\n> separate smaller database for a separate project (B). As far as I\n> understood postgres philosophy, the databases should be isolated from\n> each other, i.e. one are not allowed to create a query that goes across\n> the database borders (select * from A.customers join B.logins ...). So,\n> I was surprised to see that the application working towards database B\n> managed to jam up database A, to the extent that we couldn't get A\n> vacuumed properly.\n\nVacuums do ignore other databases, except for shared relations such as\npg_database. If one of the databases wasn't being vacuumed properly it\nmeans there was in fact a transaction open. Note that until recently,\nvacuums wouldn't ignore other vacuums, so a long-running vacuum would\nprevent repeated vacuums on the same table from accomplishing much.\n\nAre you sure that your monitoring doesn't accidentally ignore\nbackends marked as <IDLE> in transaction?\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n", "msg_date": "Wed, 18 Apr 2007 13:25:53 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Long running transactions again ..." } ]
[ { "msg_contents": "Hello all,\n\nMy website has been having issues with our new Linux/PostgreSQL \nserver being somewhat slow. I have done tests using Apache Benchmark \nand for pages that do not connect to Postgres, the speeds are much \nfaster (334 requests/second v. 1-2 requests/second), so it seems that \nPostgres is what's causing the problem and not Apache. I did some \nreserach, and it seems that the bottleneck is in fact the hard \ndrives! Here's an excerpt from vmstat:\n\nprocs -----------memory---------- ---swap-- -----io---- --system-- \n-----cpu------\nr b swpd free buff cache si so bi bo in cs us \nsy id wa st\n1 1 140 24780 166636 575144 0 0 0 3900 1462 3299 1 \n4 49 48 0\n0 1 140 24780 166636 575144 0 0 0 3828 1455 3391 0 \n4 48 48 0\n1 1 140 24780 166636 575144 0 0 0 2440 960 2033 0 \n3 48 48 0\n0 1 140 24780 166636 575144 0 0 0 2552 1001 2131 0 \n2 50 49 0\n0 1 140 24780 166636 575144 0 0 0 3188 1233 2755 0 \n3 49 48 0\n0 1 140 24780 166636 575144 0 0 0 2048 868 1812 0 \n2 49 49 0\n0 1 140 24780 166636 575144 0 0 0 2720 1094 2386 0 \n3 49 49 0\n\nAs you can see, almost 50% of the CPU is waiting on I/O. This doesn't \nseem like it should be happening, however, since we are using a RAID \n1 setup (160+160). We have 1GB ram, and have upped shared_buffers to \n13000 and work_mem to 8096. What would cause the computer to only use \nsuch a small percentage of the CPU, with more than half of it waiting \non I/O requests?\n\nThanks a lot\nJason\n\n", "msg_date": "Wed, 11 Apr 2007 18:02:48 -0400", "msg_from": "Jason Lustig <[email protected]>", "msg_from_op": true, "msg_subject": "Slow Postgresql server" }, { "msg_contents": "Jason Lustig skrev:\n> and work_mem to 8096. What would cause the computer to only use such a \n> small percentage of the CPU, with more than half of it waiting on I/O \n> requests?\n\nDo your webpages write things to the database on each connect?\n\nMaybe it do a bunch of writes each individually commited? For every \ncommit pg will wait for the data to be written down to the disk platter \nbefore it move on. So if you do several writes you want to do them in \none transaction so you only need one commit.\n\n/Dennis\n", "msg_date": "Thu, 12 Apr 2007 06:34:14 +0200", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Postgresql server" }, { "msg_contents": "On Wed, 11 Apr 2007, Jason Lustig wrote:\n\n> Hello all,\n>\n> My website has been having issues with our new Linux/PostgreSQL server being \n> somewhat slow. I have done tests using Apache Benchmark and for pages that do \n> not connect to Postgres, the speeds are much faster (334 requests/second v. \n> 1-2 requests/second), so it seems that Postgres is what's causing the problem \n> and not Apache. I did some reserach, and it seems that the bottleneck is in \n> fact the hard drives! 
Here's an excerpt from vmstat:\n>\n> procs -----------memory---------- ---swap-- -----io---- --system-- \n> -----cpu------\n> r b swpd free buff cache si so bi bo in cs us sy id wa \n> st\n> 1 1 140 24780 166636 575144 0 0 0 3900 1462 3299 1 4 49 48 \n> 0\n> 0 1 140 24780 166636 575144 0 0 0 3828 1455 3391 0 4 48 48 \n> 0\n> 1 1 140 24780 166636 575144 0 0 0 2440 960 2033 0 3 48 48 \n> 0\n> 0 1 140 24780 166636 575144 0 0 0 2552 1001 2131 0 2 50 49 \n> 0\n> 0 1 140 24780 166636 575144 0 0 0 3188 1233 2755 0 3 49 48 \n> 0\n> 0 1 140 24780 166636 575144 0 0 0 2048 868 1812 0 2 49 49 \n> 0\n> 0 1 140 24780 166636 575144 0 0 0 2720 1094 2386 0 3 49 49 \n> 0\n>\n> As you can see, almost 50% of the CPU is waiting on I/O. This doesn't seem \n> like it should be happening, however, since we are using a RAID 1 setup \n> (160+160). We have 1GB ram, and have upped shared_buffers to 13000 and \n> work_mem to 8096. What would cause the computer to only use such a small \n> percentage of the CPU, with more than half of it waiting on I/O requests?\n\nWell, the simple answer is a slow disk subsystem. Is it hardware or software \nRAID1? If hardware, what's the RAID controller? Based on your vmstat output, \nI'd guess that this query activity is all writes since I see only blocks out. \nCan you identify what the slow queries are? What version of postgres? How \nlarge is the database? Can you post the non-default values in your \npostgresql.conf?\n\nI'd suggest you test your disk subsystem to see if it's as performant as you \nthink with bonnie++. Here's some output from my RAID1 test server:\n\nVersion 1.03 ------Sequential Output------ --Sequential Input- --Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\npgtest 4G 47090 92 52348 11 30954 6 41838 65 73396 8 255.9 1\n ------Sequential Create------ --------Random Create--------\n -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n 16 894 2 +++++ +++ 854 1 817 2 +++++ +++ 969 2\n\nSo, that's 52MB/sec block writes and 73MB/sec block reads. That's typical of \na RAID1 on 2 semi-fast SATA drives.\n\nIf you're doing writes to the DB on every web page, you might consider playing \nwith the commit_delay and commit_siblings parameters in the postgresql.conf. \nAlso, if you're doing multiple inserts as separate transactions, you should \nconsider batching them up in one transaction.\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n", "msg_date": "Wed, 11 Apr 2007 22:33:12 -0700 (PDT)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Postgresql server" }, { "msg_contents": "1= RAID 1improves data =intregrity=, not IO performance.\nYour HD IO performance is essentially that of 1 160GB HD of whatever \nperformance one of those HDs have.\n(what kind of HDs are they anyway? 
For instance 7200rpm 160GB HDs \nare not particularly \"high performance\")\nBEST case is streaming IO involving no seeks => ~50 MBps.\nYou can't get even that as the back end of a website.\n\n2= 1GB of RAM is -small- for a DB server.\n\nYou need to buy RAM and HD.\n\nBoost the RAM to 4GB, change pg config parameters appropriately and \nsee how much it helps.\nNon ECC RAM is currently running ~$60-$75 per GB for 1 or 2 GB sticks\nECC RAM prices will be ~ 1.5x - 2x that, $120 - $150 per GB for 1 or \n2 GB sticks.\n(do !not! buy 4GB sticks unless you have a large budget. Their price \npr GB is still too high)\n\nIf adding RAM helps as much as I suspect it will, find out how big \nthe \"hot\" section of your DB is and see if you can buy enough RAM to \nmake it RAM resident.\nIf you can do this, it will result in the lowest term DB maintenance.\n\nIf you can't do that for whatever reason, the next step is to improve \nyour HD subsystem.\nCheap RAID cards with enough BB cache to allow writes to be coalesced \ninto larger streams (reducing seeks) will help, but you fundamentally \nyou will need more HDs.\n\nRAID 5 is an reasonable option for most website DBs workloads.\nTo hit the 300MBps speeds attainable by the cheap RAID cards, you are \ngoing to at least 7 HDs (6 HDs * 50MBps ASTR = 300MBps ASTR + the \nequivalent of 1 HD gets used for the \"R\" in RAID). A minimum of 8 \nHDs are need for this performance if you want to use RAID 6.\nMost tower case (not mini-tower, tower) cases can hold this internally.\nPrice per MBps of HD is all over the map. The simplest (but not \nnecessarily best) option is to buy more of the 160GB HDs you already have.\nOptimizing the money spent when buying HDs for a RAID set is a bit \nmore complicated than doing so for RAM. Lot's of context dependent \nthings affect the final decision.\n\nI see you are mailing from Brandeis. I'm local. Drop me some \nprivate email at the address I'm posting from if you want and I'll \nsend you further contact info so we can talk in more detail.\n\nCheers,\nRon Peacetree\n\n\nAt 06:02 PM 4/11/2007, Jason Lustig wrote:\n>Hello all,\n>\n>My website has been having issues with our new Linux/PostgreSQL\n>server being somewhat slow. I have done tests using Apache Benchmark\n>and for pages that do not connect to Postgres, the speeds are much\n>faster (334 requests/second v. 1-2 requests/second), so it seems that\n>Postgres is what's causing the problem and not Apache. I did some\n>reserach, and it seems that the bottleneck is in fact the hard\n>drives! Here's an excerpt from vmstat:\n>\n>procs -----------memory---------- ---swap-- -----io---- --system--\n>-----cpu------\n>r b swpd free buff cache si so bi bo in cs us\n>sy id wa st\n>1 1 140 24780 166636 575144 0 0 0 3900 1462 3299 1\n>4 49 48 0\n>0 1 140 24780 166636 575144 0 0 0 3828 1455 3391 0\n>4 48 48 0\n>1 1 140 24780 166636 575144 0 0 0 2440 960 2033 0\n>3 48 48 0\n>0 1 140 24780 166636 575144 0 0 0 2552 1001 2131 0\n>2 50 49 0\n>0 1 140 24780 166636 575144 0 0 0 3188 1233 2755 0\n>3 49 48 0\n>0 1 140 24780 166636 575144 0 0 0 2048 868 1812 0\n>2 49 49 0\n>0 1 140 24780 166636 575144 0 0 0 2720 1094 2386 0\n>3 49 49 0\n>\n>As you can see, almost 50% of the CPU is waiting on I/O. This doesn't\n>seem like it should be happening, however, since we are using a RAID\n>1 setup (160+160). We have 1GB ram, and have upped shared_buffers to\n>13000 and work_mem to 8096. 
What would cause the computer to only use\n>such a small percentage of the CPU, with more than half of it waiting\n>on I/O requests?\n>\n>Thanks a lot\n>Jason\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n", "msg_date": "Thu, 12 Apr 2007 09:26:24 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Postgresql server" }, { "msg_contents": "On 12.04.2007, at 07:26, Ron wrote:\n\n> You need to buy RAM and HD.\n\nBefore he does that, wouldn't it be more useful, to find out WHY he \nhas so much IO?\n\nHave I missed that or has nobody suggested finding the slow queries \n(when you have much IO on them, they might be slow at least with a \nhigh shared memory setting).\n\nSo, my first idea is, to turn on query logging for queries longer \nthan a xy milliseconds, \"explain analyse\" these queries and see \nwether there are a lot of seq scans involved, which would explain the \nhigh IO.\n\nJust an idea, perhaps I missed that step in that discussion \nsomewhere ...\n\nBut yes, it might also be, that the server is swapping, that's \nanother thing to find out.\n\ncug\n", "msg_date": "Thu, 12 Apr 2007 08:08:03 -0600", "msg_from": "Guido Neitzer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Postgresql server" }, { "msg_contents": "At 10:08 AM 4/12/2007, Guido Neitzer wrote:\n>On 12.04.2007, at 07:26, Ron wrote:\n>\n>>You need to buy RAM and HD.\n>\n>Before he does that, wouldn't it be more useful, to find out WHY he\n>has so much IO?\n\n1= Unless I missed something, the OP described pg being used as a \nbackend DB for a webserver.\n\nI know the typical IO demands of that scenario better than I sometimes want to.\n:-(\n\n\n2= 1GB of RAM + effectively 1 160GB HD = p*ss poor DB IO support.\n~ 1/2 that RAM is going to be used for OS stuff, leaving only ~512MB \nof RAM to be used supporting pg.\nThat RAID 1 set is effectively 1 HD head that all IO requests are \ngoing to contend for.\nEven if the HD in question is a 15Krpm screamer, that level of HW \ncontention has very adverse implications.\n\n\nCompletely agree that at some point the queries need to be examined \n(ditto the table schema, etc), but this system is starting off in a \nBad Place for its stated purpose IME.\nSome minimum stuff is obvious even w/o spending time looking at \nanything beyond the HW config.\n\nCheers,\nRon Peacetree\n\n\n>Have I missed that or has nobody suggested finding the slow queries\n>(when you have much IO on them, they might be slow at least with a\n>high shared memory setting).\n>\n>So, my first idea is, to turn on query logging for queries longer\n>than a xy milliseconds, \"explain analyse\" these queries and see\n>wether there are a lot of seq scans involved, which would explain the\n>high IO.\n>\n>Just an idea, perhaps I missed that step in that discussion\n>somewhere ...\n>\n>But yes, it might also be, that the server is swapping, that's\n>another thing to find out.\n>\n>cug\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n\n", "msg_date": "Thu, 12 Apr 2007 10:59:10 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Postgresql server" }, { "msg_contents": "On 12.04.2007, at 
08:59, Ron wrote:\n\n> 1= Unless I missed something, the OP described pg being used as a \n> backend DB for a webserver.\n\nYep.\n\n> I know the typical IO demands of that scenario better than I \n> sometimes want to.\n> :-(\n\nYep. Same here. ;-)\n\n> 2= 1GB of RAM + effectively 1 160GB HD = p*ss poor DB IO support.\n\nAbsolutely right. Depending a little bit on the DB and WebSite layout \nand on the actual requirements, but yes - it's not really a kick-ass \nmachine ...\n\n> Completely agree that at some point the queries need to be examined \n> (ditto the table schema, etc), but this system is starting off in a \n> Bad Place for its stated purpose IME.\n> Some minimum stuff is obvious even w/o spending time looking at \n> anything beyond the HW config.\n\nDepends. As I said - if the whole DB fits into the remaining space, \nand a lot of website backend DBs do, it might just work out. But this \nseems not to be the case - either the site is chewing on seq scans \nall the time which will cause I/O or it is bound by the lack of \nmemory and swaps the whole time ... He has to find out.\n\ncug\n", "msg_date": "Thu, 12 Apr 2007 09:19:57 -0600", "msg_from": "Guido Neitzer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Postgresql server" }, { "msg_contents": "On Thu, 12 Apr 2007, Jason Lustig wrote:\n\n> 0 <-- BM starts here\n> 10 0 180 700436 16420 91740 0 0 0 176 278 2923 59 41 0 \n> 0 0\n> 11 0 180 696736 16420 91740 0 0 0 0 254 2904 57 43 0 \n> 0 0\n> 12 0 180 691272 16420 91740 0 0 0 0 255 3043 60 39 1 \n> 0 0\n> 9 0 180 690396 16420 91740 0 0 0 0 254 3078 63 36 2 0 \n> 0\n>\n> Obviously, I've turned off logging now but I'd like to get it running again \n> (without bogging down the server) so that I can profile the system and find \n> out which queries I need to optimize. My logging settings (with unnecessary \n> comments taken out) were:\n\nSo what did you get in the logs when you had logging turned on? If you have \nthe statement logging, perhaps it's worth running through pgfouine to generate \na report.\n\n>\n> log_destination = 'syslog' # Valid values are combinations of\n> redirect_stderr = off # Enable capturing of stderr into log\n> log_min_duration_statement = 0 # -1 is disabled, 0 logs all \n> statements\n> silent_mode = on # DO NOT USE without syslog or\n> log_duration = off\n> log_line_prefix = 'user=%u,db=%d' # Special values:\n> log_statement = 'none' # none, ddl, mod, all\n>\n\nPerhaps you just want to log slow queries > 100ms? But since you don't seem \nto know what queries you're running on each web page, I'd suggest you just \nturn on the following and run your benchmark against it, then turn it back \noff:\n\nlog_duration = on\nlog_statement = 'all'\n\nThen go grab pgfouine and run the report against the logs to see what queries \nare chewing up all your time.\n\n> So you know, we're using Postgres 8.2.3. The database currently is pretty \n> small (we're just running a testing database right now with a few megabytes \n> of data). No doubt some of our queries are slow, but I was concerned because \n> no matter how slow the queries were (at most the worst were taking a couple \n> of msecs anyway), I was getting ridiculously slow responses from the server. \n> Outside of logging, our only other non-default postgresql.conf items are:\n>\n> shared_buffers = 13000 # min 128kB or max_connections*16kB\n> work_mem = 8096 # min 64kB\n>\n> In terms of the server itself, I think that it uses software raid. How can I \n> tell? 
Our hosting company set it up with the server so I guess I could ask \n> them, but is there a program I can run which will tell me the information? I \n> also ran bonnie++ and got this output:\n>\n> Version 1.03 ------Sequential Output------ --Sequential Input- \n> --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- \n> --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec \n> %CP\n> pgtest 2000M 29277 67 33819 15 15446 4 35144 62 48887 5 152.7 0\n> ------Sequential Create------ --------Random \n> Create--------\n> -Create-- --Read--- -Delete-- -Create-- --Read--- \n> -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec \n> %CP\n> 16 17886 77 +++++ +++ +++++ +++ 23258 99 +++++ +++ +++++ \n> +++\n>\n> So I'm getting 33MB and 48MB write/read respectively. Is this slow? Is there \n> anything I should be doing to optimize our RAID configuration?\n>\n\nIt's not fast, but at least it's about the same speed as an average IDE drive \nfrom this era. More disks would help, but since you indicate the DB fits in \nRAM with plenty of room to spare, how about you update your \neffective_cache_size to something reasonable. You can use the output of the \n'free' command and take the cache number and divide by 8 to get a reasonable \nvalue on linux. Then turn on logging and run your benchmark. After that, run \na pgfouine report against the log and post us the explain analyze from your \nslow queries.\n\nAnd if Ron is indeed local, it might be worthwhile to contact him. Someone \nonsite would likely get this taken care of much faster than we can on the \nmailing list.\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n", "msg_date": "Thu, 12 Apr 2007 08:35:24 -0700 (PDT)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Postgresql server" }, { "msg_contents": "On Thu, 2007-04-12 at 10:19, Guido Neitzer wrote:\n> On 12.04.2007, at 08:59, Ron wrote:\n\n> \n> Depends. As I said - if the whole DB fits into the remaining space, \n> and a lot of website backend DBs do, it might just work out. But this \n> seems not to be the case - either the site is chewing on seq scans \n> all the time which will cause I/O or it is bound by the lack of \n> memory and swaps the whole time ... He has to find out.\n\nIt could also be something as simple as a very bloated data store.\n\nI'd ask the user what vacuum verbose says at the end\n", "msg_date": "Thu, 12 Apr 2007 11:56:04 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Postgresql server" }, { "msg_contents": "On Thu, 12 Apr 2007, Scott Marlowe wrote:\n\n> On Thu, 2007-04-12 at 10:19, Guido Neitzer wrote:\n>> On 12.04.2007, at 08:59, Ron wrote:\n>\n>>\n>> Depends. As I said - if the whole DB fits into the remaining space,\n>> and a lot of website backend DBs do, it might just work out. But this\n>> seems not to be the case - either the site is chewing on seq scans\n>> all the time which will cause I/O or it is bound by the lack of\n>> memory and swaps the whole time ... 
He has to find out.\n>\n> It could also be something as simple as a very bloated data store.\n>\n> I'd ask the user what vacuum verbose says at the end\n\nYou know, I should answer emails at night...we didn't ask when the last time \nthe data was vacuumed or analyzed and I believe he indicated that the only \nnon-default values were memory related, so no autovacuum running.\n\nJason,\n\nBefore you go any further, run 'vacuum analyze;' on your DB if you're not \ndoing this with regularity and strongly consider enabling autovacuum.\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n", "msg_date": "Thu, 12 Apr 2007 10:03:15 -0700 (PDT)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Postgresql server" }, { "msg_contents": "Jeff Frost wrote:\n>\n> You know, I should answer emails at night...\n\nIndeed you shouldN'T ;-)\n\nCarlos\n--\n\n", "msg_date": "Thu, 12 Apr 2007 15:10:18 -0400", "msg_from": "Carlos Moreno <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Postgresql server" }, { "msg_contents": "Hey there;\n\nOn a Postgres 8.2.3 server, I've got a query that is running very slow in \nsome cases. With some work, I've determined the 'slow part' of the query. \n:) This is a query on a table with like 10 million rows or something like \nthat. encounter_id is an integer and receipt is of type 'date'.\n\nThis query runs really slow [minutes] (query and explain below):\n\nselect extract(epoch from ds.receipt) from detail_summary ds where\nds.receipt >= '1998-12-30 0:0:0' and\nds.encounter_id in\n(8813186,8813187,8813188,8813189,8813190,8813191,8813192,\n8813193,8813194,8813195,8813196,8813197,8813198,8813199,\n8813200,8813201,8813202,8813203,8813204,8813205,8813206,\n8813207,8813208,8813209,8813210,8813211,8813212,8813213,\n8813214,8813215,8813216,8813217,8813218,8813219,8813220,\n8813221,8813222,8813223,8813224,8813225,8813226,8813227,\n8813228,8813229,8813230,8813231,8813232,8813233,8813234,\n8813235,8813236,8813237,8813238,8813239,8813240,8813241,\n8813242,8813243,8813244,8813245,8813246,8813247,8813248,\n8813249,8813250,8813251,8813252,8813253,8813254,8813255,\n8813256,8813257,8813258,8813259,8813260,8813261,8813262,\n8813263,8813264,8813265,8813266,8813267,8813268,8813269,\n8813270,8813271,8813272,8813273,8813274,8813275,8813276,\n8813277,8813278,8813279,8813280,8813281,8813282,8813283,\n8813284,8815534)\n\nResults in the 'explain' :\n\n Seq Scan on detail_summary ds (cost=0.00..1902749.83 rows=9962 width=4)\n Filter: ((receipt >= '1998-12-30'::date) AND (encounter_id = ANY \n('{8813186,8813187,8813188,8813189,8813190,8813191,8813192,8813193,8813194,8813195,8813196,8813197,8813198,8813199,8813200,8813201,8813202,8813203,8813204,8813205,8813206,8813207,8813208,8813209,8813210,8813211,8813212,8813213,8813214,8813215,8813216,8813217,8813218,8813219,8813220,8813221,8813222,8813223,8813224,8813225,8813226,8813227,8813228,8813229,8813230,8813231,8813232,8813233,8813234,8813235,8813236,8813237,8813238,8813239,8813240,8813241,8813242,8813243,8813244,8813245,8813246,8813247,8813248,8813249,8813250,8813251,8813252,8813253,8813254,8813255,8813256,8813257,8813258,8813259,8813260,8813261,8813262,8813263,8813264,8813265,8813266,8813267,8813268,8813269,8813270,8813271,8813272,8813273,8813274,8813275,8813276,8813277,8813278,8813279,8813280,8813281,8813282,8813283,8813284,8815534}'::integer[])))\n(2 rows)\n\n\nTurning enable_seqscan to off 
results in a slightly more interesting \nexplain, but an equally slow query.\n\n\nHOWEVER! The simple removal of the receipt date paramater results in a \nfast query, as such:\n\nselect extract(epoch from ds.receipt) from detail_summary ds where\nds.encounter_id in\n(8813186,8813187,8813188,8813189,8813190,8813191,8813192,\n8813193,8813194,8813195,8813196,8813197,8813198,8813199,\n8813200,8813201,8813202,8813203,8813204,8813205,8813206,\n8813207,8813208,8813209,8813210,8813211,8813212,8813213,\n8813214,8813215,8813216,8813217,8813218,8813219,8813220,\n8813221,8813222,8813223,8813224,8813225,8813226,8813227,\n8813228,8813229,8813230,8813231,8813232,8813233,8813234,\n8813235,8813236,8813237,8813238,8813239,8813240,8813241,\n8813242,8813243,8813244,8813245,8813246,8813247,8813248,\n8813249,8813250,8813251,8813252,8813253,8813254,8813255,\n8813256,8813257,8813258,8813259,8813260,8813261,8813262,\n8813263,8813264,8813265,8813266,8813267,8813268,8813269,\n8813270,8813271,8813272,8813273,8813274,8813275,8813276,\n8813277,8813278,8813279,8813280,8813281,8813282,8813283,\n8813284 ,8815534)\n\nThis query returns instantly and explains as:\n\n Bitmap Heap Scan on detail_summary ds (cost=161.00..14819.81 rows=9963 \nwidth=4)\n Recheck Cond: (encounter_id = ANY \n('{8813186,8813187,8813188,8813189,8813190,8813191,8813192,8813193,8813194,8813195,8813196,8813197,8813198,8813199,8813200,8813201,8813202,8813203,8813204,8813205,8813206,8813207,8813208,8813209,8813210,8813211,8813212,8813213,8813214,8813215,8813216,8813217,8813218,8813219,8813220,8813221,8813222,8813223,8813224,8813225,8813226,8813227,8813228,8813229,8813230,8813231,8813232,8813233,8813234,8813235,8813236,8813237,8813238,8813239,8813240,8813241,8813242,8813243,8813244,8813245,8813246,8813247,8813248,8813249,8813250,8813251,8813252,8813253,8813254,8813255,8813256,8813257,8813258,8813259,8813260,8813261,8813262,8813263,8813264,8813265,8813266,8813267,8813268,8813269,8813270,8813271,8813272,8813273,8813274,8813275,8813276,8813277,8813278,8813279,8813280,8813281,8813282,8813283,8813284,8815534}'::integer[]))\n -> Bitmap Index Scan on detail_summary_encounter_id_idx \n(cost=0.00..160.75 rows=9963 width=0)\n Index Cond: (encounter_id = ANY \n('{8813186,8813187,8813188,8813189,8813190,8813191,8813192,8813193,8813194,8813195,8813196,8813197,8813198,8813199,8813200,8813201,8813202,8813203,8813204,8813205,8813206,8813207,8813208,8813209,8813210,8813211,8813212,8813213,8813214,8813215,8813216,8813217,8813218,8813219,8813220,8813221,8813222,8813223,8813224,8813225,8813226,8813227,8813228,8813229,8813230,8813231,8813232,8813233,8813234,8813235,8813236,8813237,8813238,8813239,8813240,8813241,8813242,8813243,8813244,8813245,8813246,8813247,8813248,8813249,8813250,8813251,8813252,8813253,8813254,8813255,8813256,8813257,8813258,8813259,8813260,8813261,8813262,8813263,8813264,8813265,8813266,8813267,8813268,8813269,8813270,8813271,8813272,8813273,8813274,8813275,8813276,8813277,8813278,8813279,8813280,8813281,8813282,8813283,8813284,8815534}'::integer[]))\n\n\nAny thoughts? Both encounter_id and receipt date are indexed columns. \nI've vacuumed and analyzed the table. 
I tried making a combined index of \nencounter_id and receipt and it hasn't worked out any better.\n\n\nThanks!\n\n\nSteve\n", "msg_date": "Thu, 12 Apr 2007 17:03:17 -0400 (EDT)", "msg_from": "Steve <[email protected]>", "msg_from_op": false, "msg_subject": "Strangely Variable Query Performance" }, { "msg_contents": "Steve <[email protected]> writes:\n> On a Postgres 8.2.3 server, I've got a query that is running very slow in \n> some cases.\n\nCould we see the exact definition of that table and its indexes?\nIt looks like the planner is missing the bitmap scan for some reason,\nbut I've not seen a case like that before.\n\nAlso, I assume the restriction on receipt date is very nonselective?\nIt doesn't seem to have changed the estimated rowcount much.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 12 Apr 2007 17:24:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strangely Variable Query Performance " }, { "msg_contents": "> Could we see the exact definition of that table and its indexes?\n> It looks like the planner is missing the bitmap scan for some reason,\n> but I've not seen a case like that before.\n>\n> Also, I assume the restriction on receipt date is very nonselective?\n> It doesn't seem to have changed the estimated rowcount much.\n>\n\n \tThis is true -- This particular receipt date is actually quite \nmeaningless. It's equivalent to saying 'all receipt dates'. I don't \nthink there's even any data that goes back before 2005.\n\nHere's the table and it's indexes. Before looking, a note; there's \nseveral 'revop' indexes, this is for sorting. The customer insisted on, \nfrankly, meaninglessly complicated sorts. I don't think any of that \nmatters for our purposes here though :)\n\n Column | Type | \nModifiers\n-----------------------+------------------------+--------------------------------------------------------------------\n detailsummary_id | integer | not null default \nnextval(('detailsummary_id_seq'::text)::regclass)\n detailgroup_id | integer |\n receipt | date |\n batchnum | integer |\n encounternum | integer |\n procedureseq | integer |\n procedurecode | character varying(5) |\n wrong_procedurecode | character varying(5) |\n batch_id | integer |\n encounter_id | integer |\n procedure_id | integer |\n carrier_id | integer |\n product_line | integer |\n provider_id | integer |\n member_num | character varying(20) |\n wrong_member_num | character varying(20) |\n member_name | character varying(40) |\n patient_control | character varying(20) |\n rendering_prov_id | character varying(15) |\n rendering_prov_name | character varying(30) |\n referring_prov_id | character varying(15) |\n referring_prov_name | character varying(30) |\n servicedate | date |\n wrong_servicedate | date |\n diagnosis_codes | character varying(5)[] |\n wrong_diagnosis_codes | character varying(5)[] |\n ffs_charge | double precision |\n export_date | date |\n hedis_date | date |\n raps_date | date |\n diagnosis_pointers | character(1)[] |\n modifiers | character(2)[] |\n units | double precision |\n pos | character(2) |\n isduplicate | boolean |\n duplicate_id | integer |\n encounter_corrected | boolean |\n procedure_corrected | boolean |\n numerrors | integer |\n encerrors_codes | integer[] |\n procerror_code | integer |\n error_servicedate | text |\n e_duplicate_id | integer |\n ecode_counts | integer[] |\n p_record_status | integer |\n e_record_status | integer |\n e_delete_date | date |\n p_delete_date | date |\n b_record_status | integer |\n 
b_confirmation | character varying(20) |\n b_carrier_cobol_id | character varying(16) |\n b_provider_cobol_id | character varying(20) |\n b_provider_tax_id | character varying(16) |\n b_carrier_name | character varying(50) |\n b_provider_name | character varying(50) |\n b_submitter_file_id | character varying(40) |\n e_hist_carrier_id | integer |\n p_hist_carrier_id | integer |\n e_duplicate_id_orig | character varying(25) |\n p_duplicate_id_orig | character varying(25) |\n num_procerrors | integer |\n num_encerrors | integer |\n export_id | integer |\n raps_id | integer |\n hedis_id | integer |\nIndexes:\n \"detail_summary_b_record_status_idx\" btree (b_record_status)\n \"detail_summary_batch_id_idx\" btree (batch_id)\n \"detail_summary_batchnum_idx\" btree (batchnum)\n \"detail_summary_carrier_id_idx\" btree (carrier_id)\n \"detail_summary_duplicate_id_idx\" btree (duplicate_id)\n \"detail_summary_e_record_status_idx\" btree (e_record_status)\n \"detail_summary_encounter_id_idx\" btree (encounter_id)\n \"detail_summary_encounternum_idx\" btree (encounternum)\n \"detail_summary_export_date_idx\" btree (export_date)\n \"detail_summary_hedis_date_idx\" btree (hedis_date)\n \"detail_summary_member_name_idx\" btree (member_name)\n \"detail_summary_member_num_idx\" btree (member_num)\n \"detail_summary_p_record_status_idx\" btree (p_record_status)\n \"detail_summary_patient_control_idx\" btree (patient_control)\n \"detail_summary_procedurecode_idx\" btree (procedurecode)\n \"detail_summary_product_line_idx\" btree (product_line)\n \"detail_summary_provider_id_idx\" btree (provider_id)\n \"detail_summary_raps_date_idx\" btree (raps_date)\n \"detail_summary_receipt_encounter_idx\" btree (receipt, encounter_id)\n \"detail_summary_receipt_id_idx\" btree (receipt)\n \"detail_summary_referrering_prov_id_idx\" btree (referring_prov_id)\n \"detail_summary_rendering_prov_id_idx\" btree (rendering_prov_id)\n \"detail_summary_rendering_prov_name_idx\" btree (rendering_prov_name)\n \"detail_summary_servicedate_idx\" btree (servicedate)\n \"ds_sort_1\" btree (receipt date_revop, carrier_id, batchnum, \nencounternum, procedurecode, encounter_id)\n \"ds_sort_10\" btree (receipt date_revop, carrier_id, batchnum, \nencounternum, procedurecode, encounter_id, procedure_id)\n \"ed_cbee_norev\" btree (export_date, carrier_id, batchnum, \nencounternum, encounter_id)\n \"ed_cbee_norev_p\" btree (export_date, carrier_id, batchnum, \nencounternum, encounter_id, procedure_id)\n \"ed_cbee_rev\" btree (export_date date_revop, carrier_id, batchnum, \nencounternum, encounter_id)\n \"ed_cbee_rev_p\" btree (export_date date_revop, carrier_id, batchnum, \nencounternum, encounter_id, procedure_id)\n \"mcbe\" btree (member_name, carrier_id, batchnum, encounternum, \nencounter_id)\n \"mcbe_p\" btree (member_name, carrier_id, batchnum, encounternum, \nencounter_id, procedure_id)\n \"mcbe_rev\" btree (member_name text_revop, carrier_id, batchnum, \nencounternum, encounter_id)\n \"mcbe_rev_p\" btree (member_name text_revop, carrier_id, batchnum, \nencounternum, encounter_id, procedure_id)\n \"mcbee_norev\" btree (member_num, carrier_id, batchnum, encounternum, \nencounter_id)\n \"mcbee_norev_p\" btree (member_num, carrier_id, batchnum, encounternum, \nencounter_id, procedure_id)\n \"mcbee_rev\" btree (member_num text_revop, carrier_id, batchnum, \nencounternum, encounter_id)\n \"mcbee_rev_p\" btree (member_num text_revop, carrier_id, batchnum, \nencounternum, encounter_id, procedure_id)\n \"pcbee_norev\" btree (patient_control, 
carrier_id, batchnum, \nencounternum, encounter_id)\n \"pcbee_norev_p\" btree (patient_control, carrier_id, batchnum, \nencounternum, encounter_id, procedure_id)\n \"pcbee_rev\" btree (patient_control text_revop, carrier_id, batchnum, \nencounternum, encounter_id)\n \"pcbee_rev_p\" btree (patient_control text_revop, carrier_id, batchnum, \nencounternum, encounter_id, procedure_id)\n \"rcbee_norev\" btree (receipt, carrier_id, batchnum, encounternum, \nencounter_id)\n \"rcbee_norev_p\" btree (receipt, carrier_id, batchnum, encounternum, \nencounter_id, procedure_id)\n \"rp_cbee_norev\" btree (rendering_prov_name, carrier_id, batchnum, \nencounternum, encounter_id)\n \"rp_cbee_norev_p\" btree (rendering_prov_name, carrier_id, batchnum, \nencounternum, encounter_id, procedure_id)\n \"rp_cbee_rev\" btree (rendering_prov_name text_revop, carrier_id, \nbatchnum, encounternum, encounter_id)\n \"rp_cbee_rev_p\" btree (rendering_prov_name text_revop, carrier_id, \nbatchnum, encounternum, encounter_id, procedure_id)\n \"sd_cbee_norev\" btree (servicedate, carrier_id, batchnum, \nencounternum, encounter_id)\n \"sd_cbee_norev_p\" btree (servicedate, carrier_id, batchnum, \nencounternum, encounter_id, procedure_id)\n \"sd_cbee_rev\" btree (servicedate date_revop, carrier_id, batchnum, \nencounternum, encounter_id)\n \"sd_cbee_rev_p\" btree (servicedate date_revop, carrier_id, batchnum, \nencounternum, encounter_id, procedure_id)\n \"testrev\" btree (receipt date_revop, carrier_id, batchnum, \nencounternum, encounter_id)\n \"testrev_p\" btree (receipt date_revop, carrier_id, batchnum, \nencounternum, encounter_id, procedure_id)\n\n", "msg_date": "Thu, 12 Apr 2007 17:40:28 -0400 (EDT)", "msg_from": "Steve <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strangely Variable Query Performance " }, { "msg_contents": "On Thu, 2007-04-12 at 16:03, Steve wrote:\n> Hey there;\n> \n> On a Postgres 8.2.3 server, I've got a query that is running very slow in \n> some cases. With some work, I've determined the 'slow part' of the query. \n> :) This is a query on a table with like 10 million rows or something like \n> that. encounter_id is an integer and receipt is of type 'date'.\n\nSNIP\n\n> Seq Scan on detail_summary ds (cost=0.00..1902749.83 rows=9962 width=4)\n> Filter: ((receipt >= '1998-12-30'::date) AND (encounter_id = ANY \n> ('{8813186,8813187,8813188,8813189,8813190,8813191,8813192,8813193,8813194,8813195,8813196,8813197,8813198,8813199,8813200,8813201,8813202,8813203,8813204,8813205,8813206,8813207,8813208,8813209,8813210,8813211,8813212,8813213,8813214,8813215,8813216,8813217,8813218,8813219,8813220,8813221,8813222,8813223,8813224,8813225,8813226,8813227,8813228,8813229,8813230,8813231,8813232,8813233,8813234,8813235,8813236,8813237,8813238,8813239,8813240,8813241,8813242,8813243,8813244,8813245,8813246,8813247,8813248,8813249,8813250,8813251,8813252,8813253,8813254,8813255,8813256,8813257,8813258,8813259,8813260,8813261,8813262,8813263,8813264,8813265,8813266,8813267,8813268,8813269,8813270,8813271,8813272,8813273,8813274,8813275,8813276,8813277,8813278,8813279,8813280,8813281,8813282,8813283,8813284,8815534}'::integer[])))\n> (2 rows)\n\nHow accurate is the row estimate made by the planner? (explain analyze\nto be sure)\n", "msg_date": "Thu, 12 Apr 2007 16:52:04 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strangely Variable Query Performance" }, { "msg_contents": "Hi all,\n\nWow! That's a lot to respond to. 
Let me go through some of the \nideas... First, I just turned on autovacuum, I forgot to do that. I'm \nnot seeing a major impact though. Also, I know that it's not optimal \nfor a dedicated server. It's not just for postgres, it's also got our \napache server on it. We're just getting started and didn't want to \nmake the major investment right now in getting the most expensive \nserver we can get. Within the next year, as our traffic grows, we \nwill most likely upgrade, but for now when we're in the beginning \nphases of our project, we're going to work with this server.\n\nIn terms of RAID not helping speed-wise (only making an impact in \ndata integrity) - I was under the impression that even a mirrored \ndisk set improves speed, because read requests can be sent to either \nof the disk controllers. Is this incorrect?\n\nI turned on logging again, only logging queries > 5ms. and it caused \nthe same problems. I think it might be an issue within the OS's \nlogging facilities, since it's going through stderr.\n\nSome of the queries are definitely making an impact on the speed. We \nare constantly trying to improve performance, and part of that is \nreassessing our indexes and denormalizing data where it would help. \nWe're also doing work with memcached to cache the results of some of \nthe more expensive operations.\n\nThanks for all your help guys - it's really fantastic to see the \ncommunity here! I've got a lot of database experience (mostly with ms \nsql and mysql) but this is my first time doing serious work with \npostgres and it's really a great system with great people too.\n\nJason\n\nOn Apr 12, 2007, at 11:35 AM, Jeff Frost wrote:\n\n> On Thu, 12 Apr 2007, Jason Lustig wrote:\n>\n>> 0 <-- BM starts here\n>> 10 0 180 700436 16420 91740 0 0 0 176 278 2923 \n>> 59 41 0 0 0\n>> 11 0 180 696736 16420 91740 0 0 0 0 254 2904 \n>> 57 43 0 0 0\n>> 12 0 180 691272 16420 91740 0 0 0 0 255 3043 \n>> 60 39 1 0 0\n>> 9 0 180 690396 16420 91740 0 0 0 0 254 3078 \n>> 63 36 2 0 0\n>>\n>> Obviously, I've turned off logging now but I'd like to get it \n>> running again (without bogging down the server) so that I can \n>> profile the system and find out which queries I need to optimize. \n>> My logging settings (with unnecessary comments taken out) were:\n>\n> So what did you get in the logs when you had logging turned on? If \n> you have the statement logging, perhaps it's worth running through \n> pgfouine to generate a report.\n>\n>>\n>> log_destination = 'syslog' # Valid values are \n>> combinations of\n>> redirect_stderr = off # Enable capturing of \n>> stderr into log\n>> log_min_duration_statement = 0 # -1 is disabled, 0 \n>> logs all statements\n>> silent_mode = on # DO NOT USE without \n>> syslog or\n>> log_duration = off\n>> log_line_prefix = 'user=%u,db=%d' # Special \n>> values:\n>> log_statement = 'none' # none, ddl, mod, all\n>>\n>\n> Perhaps you just want to log slow queries > 100ms? But since you \n> don't seem to know what queries you're running on each web page, \n> I'd suggest you just turn on the following and run your benchmark \n> against it, then turn it back off:\n>\n> log_duration = on\n> log_statement = 'all'\n>\n> Then go grab pgfouine and run the report against the logs to see \n> what queries are chewing up all your time.\n>\n>> So you know, we're using Postgres 8.2.3. The database currently is \n>> pretty small (we're just running a testing database right now with \n>> a few megabytes of data). 
No doubt some of our queries are slow, \n>> but I was concerned because no matter how slow the queries were \n>> (at most the worst were taking a couple of msecs anyway), I was \n>> getting ridiculously slow responses from the server. Outside of \n>> logging, our only other non-default postgresql.conf items are:\n>>\n>> shared_buffers = 13000 # min 128kB or \n>> max_connections*16kB\n>> work_mem = 8096 # min 64kB\n>>\n>> In terms of the server itself, I think that it uses software raid. \n>> How can I tell? Our hosting company set it up with the server so I \n>> guess I could ask them, but is there a program I can run which \n>> will tell me the information? I also ran bonnie++ and got this \n>> output:\n>>\n>> Version 1.03 ------Sequential Output------ --Sequential \n>> Input- --Random-\n>> -Per Chr- --Block-- -Rewrite- -Per Chr- -- \n>> Block-- --Seeks--\n>> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec % \n>> CP /sec %CP\n>> pgtest 2000M 29277 67 33819 15 15446 4 35144 62 48887 5 \n>> 152.7 0\n>> ------Sequential Create------ --------Random \n>> Create--------\n>> -Create-- --Read--- -Delete-- -Create-- -- \n>> Read--- -Delete--\n>> files /sec %CP /sec %CP /sec %CP /sec %CP /sec % \n>> CP /sec %CP\n>> 16 17886 77 +++++ +++ +++++ +++ 23258 99 +++++ ++ \n>> + +++++ +++\n>>\n>> So I'm getting 33MB and 48MB write/read respectively. Is this \n>> slow? Is there anything I should be doing to optimize our RAID \n>> configuration?\n>>\n>\n> It's not fast, but at least it's about the same speed as an average \n> IDE drive from this era. More disks would help, but since you \n> indicate the DB fits in RAM with plenty of room to spare, how about \n> you update your effective_cache_size to something reasonable. You \n> can use the output of the 'free' command and take the cache number \n> and divide by 8 to get a reasonable value on linux. Then turn on \n> logging and run your benchmark. After that, run a pgfouine report \n> against the log and post us the explain analyze from your slow \n> queries.\n>\n> And if Ron is indeed local, it might be worthwhile to contact him. \n> Someone onsite would likely get this taken care of much faster than \n> we can on the mailing list.\n>\n> -- \n> Jeff Frost, Owner \t<[email protected]>\n> Frost Consulting, LLC \thttp://www.frostconsultingllc.com/\n> Phone: 650-780-7908\tFAX: 650-649-1954\n\n", "msg_date": "Thu, 12 Apr 2007 17:58:49 -0400", "msg_from": "Jason Lustig <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow Postgresql server" }, { "msg_contents": "Steve <[email protected]> writes:\n> Here's the table and it's indexes. Before looking, a note; there's \n> several 'revop' indexes, this is for sorting. The customer insisted on, \n> frankly, meaninglessly complicated sorts. I don't think any of that \n> matters for our purposes here though :)\n\nOy vey ... 
I hope this is a read-mostly table, because having that many\nindexes has got to be killing your insert/update performance.\n\nI see that some of the revop indexes might be considered relevant to\nthis query, so how exactly have you got those opclasses defined?\nThere's built-in support for reverse sort as of CVS HEAD, but in\nexisting releases you must have cobbled something together, and I wonder\nif that could be a contributing factor ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 12 Apr 2007 18:01:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strangely Variable Query Performance " }, { "msg_contents": ">\n> Oy vey ... I hope this is a read-mostly table, because having that many\n> indexes has got to be killing your insert/update performance.\n\n \tHahaha yeah these are read-only tables. Nightly inserts/updates. \nTakes a few hours, depending on how many records (between 4 and 10 \nusually). But during the day, while querying, read only.\n\n> I see that some of the revop indexes might be considered relevant to\n> this query, so how exactly have you got those opclasses defined?\n> There's built-in support for reverse sort as of CVS HEAD, but in\n> existing releases you must have cobbled something together, and I wonder\n> if that could be a contributing factor ...\n\nHere's the revops (the c functions are at the bottom):\n\nCREATE FUNCTION ddd_date_revcmp(date, date) RETURNS integer\n AS '/usr/local/pgsql/contrib/cmplib.so', 'ddd_date_revcmp'\n LANGUAGE c STRICT;\n\nCREATE FUNCTION ddd_int_revcmp(integer, integer) RETURNS integer\n AS '/usr/local/pgsql/contrib/cmplib.so', 'ddd_int_revcmp'\n LANGUAGE c STRICT;\n\nCREATE FUNCTION ddd_text_revcmp(text, text) RETURNS integer\n AS '/usr/local/pgsql/contrib/cmplib.so', 'ddd_text_revcmp'\n LANGUAGE c STRICT;\n\nCREATE OPERATOR CLASS date_revop\n FOR TYPE date USING btree AS\n OPERATOR 1 >(date,date) ,\n OPERATOR 2 >=(date,date) ,\n OPERATOR 3 =(date,date) ,\n OPERATOR 4 <=(date,date) ,\n OPERATOR 5 <(date,date) ,\n FUNCTION 1 ddd_date_revcmp(date,date);\n\nCREATE OPERATOR CLASS int4_revop\n FOR TYPE integer USING btree AS\n OPERATOR 1 >(integer,integer) ,\n OPERATOR 2 >=(integer,integer) ,\n OPERATOR 3 =(integer,integer) ,\n OPERATOR 4 <=(integer,integer) ,\n OPERATOR 5 <(integer,integer) ,\n FUNCTION 1 ddd_int_revcmp(integer,integer);\n\nCREATE OPERATOR CLASS text_revop\n FOR TYPE text USING btree AS\n OPERATOR 1 >(text,text) ,\n OPERATOR 2 >=(text,text) ,\n OPERATOR 3 =(text,text) ,\n OPERATOR 4 <=(text,text) ,\n OPERATOR 5 <(text,text) ,\n FUNCTION 1 ddd_text_revcmp(text,text);\n\nDatum ddd_date_revcmp(PG_FUNCTION_ARGS){\n DateADT arg1=PG_GETARG_DATEADT(0);\n DateADT arg2=PG_GETARG_DATEADT(1);\n\n PG_RETURN_INT32(arg2 - arg1);\n}\n\n\nDatum ddd_int_revcmp(PG_FUNCTION_ARGS){\n int32 arg1=PG_GETARG_INT32(0);\n int32 arg2=PG_GETARG_INT32(1);\n\n PG_RETURN_INT32(arg2 - arg1);\n}\n\nDatum ddd_text_revcmp(PG_FUNCTION_ARGS){\n char* arg1=(char*)VARDATA(PG_GETARG_TEXT_P(0));\n char* arg2=(char*)VARDATA(PG_GETARG_TEXT_P(1));\n\n if((*arg1) != (*arg2)){\n PG_RETURN_INT32(*arg2 - *arg1);\n }else{\n PG_RETURN_INT32(strcmp(arg2,arg1));\n }\n}\n\n\n", "msg_date": "Thu, 12 Apr 2007 18:03:47 -0400 (EDT)", "msg_from": "Steve <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strangely Variable Query Performance " }, { "msg_contents": ">> Seq Scan on detail_summary ds (cost=0.00..1902749.83 rows=9962 width=4)\n>> Filter: ((receipt >= '1998-12-30'::date) AND (encounter_id = ANY\n>> 
('{8813186,8813187,8813188,8813189,8813190,8813191,8813192,8813193,8813194,8813195,8813196,8813197,8813198,8813199,8813200,8813201,8813202,8813203,8813204,8813205,8813206,8813207,8813208,8813209,8813210,8813211,8813212,8813213,8813214,8813215,8813216,8813217,8813218,8813219,8813220,8813221,8813222,8813223,8813224,8813225,8813226,8813227,8813228,8813229,8813230,8813231,8813232,8813233,8813234,8813235,8813236,8813237,8813238,8813239,8813240,8813241,8813242,8813243,8813244,8813245,8813246,8813247,8813248,8813249,8813250,8813251,8813252,8813253,8813254,8813255,8813256,8813257,8813258,8813259,8813260,8813261,8813262,8813263,8813264,8813265,8813266,8813267,8813268,8813269,8813270,8813271,8813272,8813273,8813274,8813275,8813276,8813277,8813278,8813279,8813280,8813281,8813282,8813283,8813284,8815534}'::integer[])))\n>> (2 rows)\n>\n> How accurate is the row estimate made by the planner? (explain analyze\n> to be sure)\n>\n\nResults:\n\n Seq Scan on detail_summary ds (cost=0.00..1902749.83 rows=9962 width=4) \n(actual time=62871.386..257258.249 rows=112 loops=1)\n Filter: ((receipt >= '1998-12-30'::date) AND (encounter_id = ANY \n('{8813186,8813187,8813188,8813189,8813190,8813191,8813192,8813193,8813194,8813195,8813196,8813197,8813198,8813199,8813200,8813201,8813202,8813203,8813204,8813205,8813206,8813207,8813208,8813209,8813210,8813211,8813212,8813213,8813214,8813215,8813216,8813217,8813218,8813219,8813220,8813221,8813222,8813223,8813224,8813225,8813226,8813227,8813228,8813229,8813230,8813231,8813232,8813233,8813234,8813235,8813236,8813237,8813238,8813239,8813240,8813241,8813242,8813243,8813244,8813245,8813246,8813247,8813248,8813249,8813250,8813251,8813252,8813253,8813254,8813255,8813256,8813257,8813258,8813259,8813260,8813261,8813262,8813263,8813264,8813265,8813266,8813267,8813268,8813269,8813270,8813271,8813272,8813273,8813274,8813275,8813276,8813277,8813278,8813279,8813280,8813281,8813282,8813283,8813284,8815534}'::integer[])))\n Total runtime: 257258.652 ms\n\n\n", "msg_date": "Thu, 12 Apr 2007 18:04:33 -0400 (EDT)", "msg_from": "Steve <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strangely Variable Query Performance" }, { "msg_contents": "On Thu, 2007-04-12 at 17:04, Steve wrote:\n> >> Seq Scan on detail_summary ds (cost=0.00..1902749.83 rows=9962 width=4)\n> >> Filter: ((receipt >= '1998-12-30'::date) AND (encounter_id = ANY\n> >> ('{8813186,8813187,8813188,8813189,8813190,8813191,8813192,8813193,8813194,8813195,8813196,8813197,8813198,8813199,8813200,8813201,8813202,8813203,8813204,8813205,8813206,8813207,8813208,8813209,8813210,8813211,8813212,8813213,8813214,8813215,8813216,8813217,8813218,8813219,8813220,8813221,8813222,8813223,8813224,8813225,8813226,8813227,8813228,8813229,8813230,8813231,8813232,8813233,8813234,8813235,8813236,8813237,8813238,8813239,8813240,8813241,8813242,8813243,8813244,8813245,8813246,8813247,8813248,8813249,8813250,8813251,8813252,8813253,8813254,8813255,8813256,8813257,8813258,8813259,8813260,8813261,8813262,8813263,8813264,8813265,8813266,8813267,8813268,8813269,8813270,8813271,8813272,8813273,8813274,8813275,8813276,8813277,8813278,8813279,8813280,8813281,8813282,8813283,8813284,8815534}'::integer[])))\n> >> (2 rows)\n> >\n> > How accurate is the row estimate made by the planner? 
(explain analyze\n> > to be sure)\n> >\n> \n> Results:\n> \n> Seq Scan on detail_summary ds (cost=0.00..1902749.83 rows=9962 width=4) \n> (actual time=62871.386..257258.249 rows=112 loops=1)\n> Filter: ((receipt >= '1998-12-30'::date) AND (encounter_id = ANY \n> ('{8813186,8813187,8813188,8813189,8813190,8813191,8813192,8813193,8813194,8813195,8813196,8813197,8813198,8813199,8813200,8813201,8813202,8813203,8813204,8813205,8813206,8813207,8813208,8813209,8813210,8813211,8813212,8813213,8813214,8813215,8813216,8813217,8813218,8813219,8813220,8813221,8813222,8813223,8813224,8813225,8813226,8813227,8813228,8813229,8813230,8813231,8813232,8813233,8813234,8813235,8813236,8813237,8813238,8813239,8813240,8813241,8813242,8813243,8813244,8813245,8813246,8813247,8813248,8813249,8813250,8813251,8813252,8813253,8813254,8813255,8813256,8813257,8813258,8813259,8813260,8813261,8813262,8813263,8813264,8813265,8813266,8813267,8813268,8813269,8813270,8813271,8813272,8813273,8813274,8813275,8813276,8813277,8813278,8813279,8813280,8813281,8813282,8813283,8813284,8815534}'::integer[])))\n> Total runtime: 257258.652 ms\n\nSo there's a misjudgment of the number of rows returned by a factor of\nabout 88. That's pretty big. Since you had the same number without the\nreceipt date (I think...) then it's the encounter_id that's not being\ncounted right.\n\nTry upping the stats target on that column and running analyze again and\nsee if you get closer to 112 in your analyze or not.\n", "msg_date": "Thu, 12 Apr 2007 17:14:37 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strangely Variable Query Performance" }, { "msg_contents": "On 12.04.2007, at 15:58, Jason Lustig wrote:\n\n> Wow! That's a lot to respond to. Let me go through some of the \n> ideas... First, I just turned on autovacuum, I forgot to do that. \n> I'm not seeing a major impact though. Also, I know that it's not \n> optimal for a dedicated server.\n\nHmm, why not? Have you recently vacuumed your db manually so it gets \ncleaned up? Even a vacuum full might be useful if the db is really \nbloated.\n\n> It's not just for postgres, it's also got our apache server on it. \n> We're just getting started and didn't want to make the major \n> investment right now in getting the most expensive server we can get\n\nHmmm, but more RAM would definitely make sense, especially in that \nszenaria. It really sounds like you machine is swapping to dead.\n\nWhat does the system say about memory usage?\n\n> Some of the queries are definitely making an impact on the speed. \n> We are constantly trying to improve performance, and part of that \n> is reassessing our indexes and denormalizing data where it would \n> help. We're also doing work with memcached to cache the results of \n> some of the more expensive operations.\n\nHmmm, that kills you even more, as it uses RAM. I really don't think \nat the moment that it has something to do with PG itself, but with \nnot enough memory for what you want to achieve.\n\nWhat perhaps helps might be connection pooling, so that not so many \nprocesses are created for the requests. It depends on your \"middle- \nware\" what you can do about that. 
pg_pool might be an option.\n\ncug\n\n\n", "msg_date": "Thu, 12 Apr 2007 16:18:14 -0600", "msg_from": "Guido Neitzer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Postgresql server" }, { "msg_contents": "Steve <[email protected]> writes:\n> Datum ddd_text_revcmp(PG_FUNCTION_ARGS){\n> char* arg1=(char*)VARDATA(PG_GETARG_TEXT_P(0));\n> char* arg2=(char*)VARDATA(PG_GETARG_TEXT_P(1));\n\n> if((*arg1) != (*arg2)){\n> PG_RETURN_INT32(*arg2 - *arg1);\n> }else{\n> PG_RETURN_INT32(strcmp(arg2,arg1));\n> }\n> }\n\n[ itch... ] That code is just completely wrong, because the contents\nof a TEXT datum aren't guaranteed null-terminated. It'd be better to\ninvoke bttextcmp and negate its result.\n\nThat's not relevant to your immediate problem, but if you've noticed\nany strange behavior with your text_revop indexes, that's the reason...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 12 Apr 2007 18:22:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strangely Variable Query Performance " }, { "msg_contents": "> [ itch... ] That code is just completely wrong, because the contents\n> of a TEXT datum aren't guaranteed null-terminated. It'd be better to\n> invoke bttextcmp and negate its result.\n>\n> That's not relevant to your immediate problem, but if you've noticed\n> any strange behavior with your text_revop indexes, that's the reason...\n\n \tThe indexes have all worked, though I'll make the change anyway. \nDocumentation on how to code these things is pretty sketchy and I believe \nI followed an example on the site if I remember right. :/\n\nThanks for the info though :)\n\n\nSteve\n", "msg_date": "Thu, 12 Apr 2007 18:24:48 -0400 (EDT)", "msg_from": "Steve <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strangely Variable Query Performance " }, { "msg_contents": ">\n> So there's a misjudgment of the number of rows returned by a factor of\n> about 88. That's pretty big. Since you had the same number without the\n> receipt date (I think...) then it's the encounter_id that's not being\n> counted right.\n>\n> Try upping the stats target on that column and running analyze again and\n> see if you get closer to 112 in your analyze or not.\n>\n\n \tIf I max the statistics targets at 1000, I get:\n\n Seq Scan on detail_summary ds (cost=0.00..1903030.26 rows=1099 width=4)\n Filter: ((receipt >= '1998-12-30'::date) AND (encounter_id = ANY \n('{8813186,8813187,8813188,8813189,8813190,8813191,8813192,8813193,8813194,8813195,8813196,8813197,8813198,8813199,8813200,8813201,8813202,8813203,8813204,8813205,8813206,8813207,8813208,8813209,8813210,8813211,8813212,8813213,8813214,8813215,8813216,8813217,8813218,8813219,8813220,8813221,8813222,8813223,8813224,8813225,8813226,8813227,8813228,8813229,8813230,8813231,8813232,8813233,8813234,8813235,8813236,8813237,8813238,8813239,8813240,8813241,8813242,8813243,8813244,8813245,8813246,8813247,8813248,8813249,8813250,8813251,8813252,8813253,8813254,8813255,8813256,8813257,8813258,8813259,8813260,8813261,8813262,8813263,8813264,8813265,8813266,8813267,8813268,8813269,8813270,8813271,8813272,8813273,8813274,8813275,8813276,8813277,8813278,8813279,8813280,8813281,8813282,8813283,8813284,8815534}'::integer[])))\n\n\n \tSetting it ot 500 makes the estimated rows twice as much. It \nseems to have no effect on anything though, either way. 
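(For anyone following along, the knob in question is the statistics target -- either default_statistics_target in postgresql.conf or the per-column form, e.g. something like:\n\n\tALTER TABLE detail_summary ALTER COLUMN encounter_id SET STATISTICS 1000;\n\tANALYZE detail_summary;\n\nfollowed by re-checking the estimate with EXPLAIN.)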
:)\n\n\nSteve\n", "msg_date": "Thu, 12 Apr 2007 18:41:33 -0400 (EDT)", "msg_from": "Steve <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strangely Variable Query Performance" }, { "msg_contents": "Here's my planner parameters:\n\nseq_page_cost = 1.0 # measured on an arbitrary scale\nrandom_page_cost = 1.5 # same scale as above\ncpu_tuple_cost = 0.001 # same scale as above\ncpu_index_tuple_cost = 0.0005 # same scale as above\ncpu_operator_cost = 0.00025 # same scale as above\neffective_cache_size = 8192MB\n\ndefault_statistics_target = 100 # range 1-1000\n\n\nOn a machine with 16 gigs of RAM. I tried to make it skew towards \nindexes. However, even if I force it to use the indexes \n(enable_seqscan=off) it doesn't make it any faster really :/\n\nSteve\n\nOn Thu, 12 Apr 2007, Tom Lane wrote:\n\n> Scott Marlowe <[email protected]> writes:\n>> So there's a misjudgment of the number of rows returned by a factor of\n>> about 88. That's pretty big. Since you had the same number without the\n>> receipt date (I think...) then it's the encounter_id that's not being\n>> counted right.\n>\n> I don't think that's Steve's problem, though. It's certainly\n> misestimating, but nonetheless the cost estimate for the seqscan is\n> 1902749.83 versus 14819.81 for the bitmap scan; it should've picked\n> the bitmap scan anyway.\n>\n> I tried to duplicate the problem here, without any success; I get\n>\n> Bitmap Heap Scan on detail_summary ds (cost=422.01..801.27 rows=100 width=4)\n> Recheck Cond: (encounter_id = ANY ('{8813186,8813187,8813188,8813189,8813190,8813191,8813192,8813193,8813194,8813195,8813196,8813197,8813198,8813199,8813200,8813201,8813202,8813203,8813204,8813205,8813206,8813207,8813208,8813209,8813210,8813211,8813212,8813213,8813214,8813215,8813216,8813217,8813218,8813219,8813220,8813221,8813222,8813223,8813224,8813225,8813226,8813227,8813228,8813229,8813230,8813231,8813232,8813233,8813234,8813235,8813236,8813237,8813238,8813239,8813240,8813241,8813242,8813243,8813244,8813245,8813246,8813247,8813248,8813249,8813250,8813251,8813252,8813253,8813254,8813255,8813256,8813257,8813258,8813259,8813260,8813261,8813262,8813263,8813264,8813265,8813266,8813267,8813268,8813269,8813270,8813271,8813272,8813273,8813274,8813275,8813276,8813277,8813278,8813279,8813280,8813281,8813282,8813283,8813284,8815534}'::integer[]))\n> Filter: (receipt >= '1998-12-30'::date)\n> -> Bitmap Index Scan on detail_summary_encounter_id_idx (cost=0.00..421.98 rows=100 width=0)\n> Index Cond: (encounter_id = ANY ('{8813186,8813187,8813188,8813189,8813190,8813191,8813192,8813193,8813194,8813195,8813196,8813197,8813198,8813199,8813200,8813201,8813202,8813203,8813204,8813205,8813206,8813207,8813208,8813209,8813210,8813211,8813212,8813213,8813214,8813215,8813216,8813217,8813218,8813219,8813220,8813221,8813222,8813223,8813224,8813225,8813226,8813227,8813228,8813229,8813230,8813231,8813232,8813233,8813234,8813235,8813236,8813237,8813238,8813239,8813240,8813241,8813242,8813243,8813244,8813245,8813246,8813247,8813248,8813249,8813250,8813251,8813252,8813253,8813254,8813255,8813256,8813257,8813258,8813259,8813260,8813261,8813262,8813263,8813264,8813265,8813266,8813267,8813268,8813269,8813270,8813271,8813272,8813273,8813274,8813275,8813276,8813277,8813278,8813279,8813280,8813281,8813282,8813283,8813284,8815534}'::integer[]))\n>\n> so either this has been fixed by a post-8.2.3 bug fix (which I doubt,\n> it doesn't seem familiar at all) or there's some additional contributing\n> factor. 
Steve, are you using any nondefault planner parameters?\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n", "msg_date": "Thu, 12 Apr 2007 18:56:47 -0400 (EDT)", "msg_from": "Steve <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strangely Variable Query Performance " }, { "msg_contents": "Scott Marlowe <[email protected]> writes:\n> So there's a misjudgment of the number of rows returned by a factor of\n> about 88. That's pretty big. Since you had the same number without the\n> receipt date (I think...) then it's the encounter_id that's not being\n> counted right.\n\nI don't think that's Steve's problem, though. It's certainly\nmisestimating, but nonetheless the cost estimate for the seqscan is\n1902749.83 versus 14819.81 for the bitmap scan; it should've picked\nthe bitmap scan anyway.\n\nI tried to duplicate the problem here, without any success; I get\n\nBitmap Heap Scan on detail_summary ds (cost=422.01..801.27 rows=100 width=4)\n Recheck Cond: (encounter_id = ANY ('{8813186,8813187,8813188,8813189,8813190,8813191,8813192,8813193,8813194,8813195,8813196,8813197,8813198,8813199,8813200,8813201,8813202,8813203,8813204,8813205,8813206,8813207,8813208,8813209,8813210,8813211,8813212,8813213,8813214,8813215,8813216,8813217,8813218,8813219,8813220,8813221,8813222,8813223,8813224,8813225,8813226,8813227,8813228,8813229,8813230,8813231,8813232,8813233,8813234,8813235,8813236,8813237,8813238,8813239,8813240,8813241,8813242,8813243,8813244,8813245,8813246,8813247,8813248,8813249,8813250,8813251,8813252,8813253,8813254,8813255,8813256,8813257,8813258,8813259,8813260,8813261,8813262,8813263,8813264,8813265,8813266,8813267,8813268,8813269,8813270,8813271,8813272,8813273,8813274,8813275,8813276,8813277,8813278,8813279,8813280,8813281,8813282,8813283,8813284,8815534}'::integer[]))\n Filter: (receipt >= '1998-12-30'::date)\n -> Bitmap Index Scan on detail_summary_encounter_id_idx (cost=0.00..421.98 rows=100 width=0)\n Index Cond: (encounter_id = ANY ('{8813186,8813187,8813188,8813189,8813190,8813191,8813192,8813193,8813194,8813195,8813196,8813197,8813198,8813199,8813200,8813201,8813202,8813203,8813204,8813205,8813206,8813207,8813208,8813209,8813210,8813211,8813212,8813213,8813214,8813215,8813216,8813217,8813218,8813219,8813220,8813221,8813222,8813223,8813224,8813225,8813226,8813227,8813228,8813229,8813230,8813231,8813232,8813233,8813234,8813235,8813236,8813237,8813238,8813239,8813240,8813241,8813242,8813243,8813244,8813245,8813246,8813247,8813248,8813249,8813250,8813251,8813252,8813253,8813254,8813255,8813256,8813257,8813258,8813259,8813260,8813261,8813262,8813263,8813264,8813265,8813266,8813267,8813268,8813269,8813270,8813271,8813272,8813273,8813274,8813275,8813276,8813277,8813278,8813279,8813280,8813281,8813282,8813283,8813284,8815534}'::integer[]))\n\nso either this has been fixed by a post-8.2.3 bug fix (which I doubt,\nit doesn't seem familiar at all) or there's some additional contributing\nfactor. 
Steve, are you using any nondefault planner parameters?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 12 Apr 2007 19:00:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strangely Variable Query Performance " }, { "msg_contents": "It's a redhat enterprise machine running AMD x64 processors.\n\nLinux ers3.dddcorp.com 2.6.9-42.0.10.ELsmp #1 SMP Fri Feb 16 17:13:42 EST \n2007 x86_64 x86_64 x86_64 GNU/Linux\n\nIt was compiled by me, straight up, nothing weird at all, no odd compiler \noptions or wahtever :)\n\nSo yeah :/ I'm quite baffled as well,\n\nTalk to you later,\n\nSteve\n\n\nOn Thu, 12 Apr 2007, Tom Lane wrote:\n\n> Steve <[email protected]> writes:\n>> Here's my planner parameters:\n>\n> I copied all these, and my 8.2.x still likes the bitmap scan a lot\n> better than the seqscan. Furthermore, I double-checked the CVS history\n> and there definitely haven't been any changes in that area in REL8_2\n> branch since 8.2.3. So I'm a bit baffled. Maybe the misbehavior is\n> platform-specific ... what are you on exactly? Is there anything\n> nonstandard about your Postgres installation?\n>\n> \t\t\tregards, tom lane\n>\n", "msg_date": "Thu, 12 Apr 2007 19:28:02 -0400 (EDT)", "msg_from": "Steve <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strangely Variable Query Performance " }, { "msg_contents": "Steve <[email protected]> writes:\n> Here's my planner parameters:\n\nI copied all these, and my 8.2.x still likes the bitmap scan a lot\nbetter than the seqscan. Furthermore, I double-checked the CVS history\nand there definitely haven't been any changes in that area in REL8_2\nbranch since 8.2.3. So I'm a bit baffled. Maybe the misbehavior is\nplatform-specific ... what are you on exactly? Is there anything\nnonstandard about your Postgres installation?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 12 Apr 2007 19:29:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strangely Variable Query Performance " }, { "msg_contents": "Steve <[email protected]> writes:\n> ... 
even if I force it to use the indexes \n> (enable_seqscan=off) it doesn't make it any faster really :/\n\nDoes that change the plan, or do you still get a seqscan?\n\nBTW, how big is this table really (how many rows)?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 12 Apr 2007 19:32:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strangely Variable Query Performance " }, { "msg_contents": "Table size: 16,037,728 rows\n\nWith enable_seqscan=off I get:\n\n Bitmap Heap Scan on detail_summary ds (cost=4211395.20..4213045.32 \nrows=1099 width=4)\n Recheck Cond: ((receipt >= '1998-12-30'::date) AND (encounter_id = ANY \n('{8813186,8813187,8813188,8813189,8813190,8813191,8813192,8813193,8813194,8813195,8813196,8813197,8813198,8813199,8813200,8813201,8813202,8813203,8813204,8813205,8813206,8813207,8813208,8813209,8813210,8813211,8813212,8813213,8813214,8813215,8813216,8813217,8813218,8813219,8813220,8813221,8813222,8813223,8813224,8813225,8813226,8813227,8813228,8813229,8813230,8813231,8813232,8813233,8813234,8813235,8813236,8813237,8813238,8813239,8813240,8813241,8813242,8813243,8813244,8813245,8813246,8813247,8813248,8813249,8813250,8813251,8813252,8813253,8813254,8813255,8813256,8813257,8813258,8813259,8813260,8813261,8813262,8813263,8813264,8813265,8813266,8813267,8813268,8813269,8813270,8813271,8813272,8813273,8813274,8813275,8813276,8813277,8813278,8813279,8813280,8813281,8813282,8813283,8813284,8815534}'::integer[])))\n -> Bitmap Index Scan on detail_summary_receipt_encounter_idx \n(cost=0.00..4211395.17 rows=1099 width=0)\n Index Cond: ((receipt >= '1998-12-30'::date) AND (encounter_id = \nANY \n('{8813186,8813187,8813188,8813189,8813190,8813191,8813192,8813193,8813194,8813195,8813196,8813197,8813198,8813199,8813200,8813201,8813202,8813203,8813204,8813205,8813206,8813207,8813208,8813209,8813210,8813211,8813212,8813213,8813214,8813215,8813216,8813217,8813218,8813219,8813220,8813221,8813222,8813223,8813224,8813225,8813226,8813227,8813228,8813229,8813230,8813231,8813232,8813233,8813234,8813235,8813236,8813237,8813238,8813239,8813240,8813241,8813242,8813243,8813244,8813245,8813246,8813247,8813248,8813249,8813250,8813251,8813252,8813253,8813254,8813255,8813256,8813257,8813258,8813259,8813260,8813261,8813262,8813263,8813264,8813265,8813266,8813267,8813268,8813269,8813270,8813271,8813272,8813273,8813274,8813275,8813276,8813277,8813278,8813279,8813280,8813281,8813282,8813283,8813284,8815534}'::integer[])))\n\n\nThe explain analyze is pending, running it now (however it doens't really \nappear to be any faster using this plan).\n\n\nSteve\n\nOn Thu, 12 Apr 2007, Tom Lane wrote:\n\n> Steve <[email protected]> writes:\n>> ... 
even if I force it to use the indexes\n>> (enable_seqscan=off) it doesn't make it any faster really :/\n>\n> Does that change the plan, or do you still get a seqscan?\n>\n> BTW, how big is this table really (how many rows)?\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n", "msg_date": "Thu, 12 Apr 2007 19:40:15 -0400 (EDT)", "msg_from": "Steve <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strangely Variable Query Performance " }, { "msg_contents": "Here's the explain analyze with seqscan = off:\n\n Bitmap Heap Scan on detail_summary ds (cost=4211395.20..4213045.32 \nrows=1099 width=4) (actual time=121288.825..121305.908 rows=112 loops=1)\n Recheck Cond: ((receipt >= '1998-12-30'::date) AND (encounter_id = ANY \n('{8813186,8813187,8813188,8813189,8813190,8813191,8813192,8813193,8813194,8813195,8813196,8813197,8813198,8813199,8813200,8813201,8813202,8813203,8813204,8813205,8813206,8813207,8813208,8813209,8813210,8813211,8813212,8813213,8813214,8813215,8813216,8813217,8813218,8813219,8813220,8813221,8813222,8813223,8813224,8813225,8813226,8813227,8813228,8813229,8813230,8813231,8813232,8813233,8813234,8813235,8813236,8813237,8813238,8813239,8813240,8813241,8813242,8813243,8813244,8813245,8813246,8813247,8813248,8813249,8813250,8813251,8813252,8813253,8813254,8813255,8813256,8813257,8813258,8813259,8813260,8813261,8813262,8813263,8813264,8813265,8813266,8813267,8813268,8813269,8813270,8813271,8813272,8813273,8813274,8813275,8813276,8813277,8813278,8813279,8813280,8813281,8813282,8813283,8813284,8815534}'::integer[])))\n -> Bitmap Index Scan on detail_summary_receipt_encounter_idx \n(cost=0.00..4211395.17 rows=1099 width=0) (actual \ntime=121256.681..121256.681 rows=112 loops=1)\n Index Cond: ((receipt >= '1998-12-30'::date) AND (encounter_id = \nANY \n('{8813186,8813187,8813188,8813189,8813190,8813191,8813192,8813193,8813194,8813195,8813196,8813197,8813198,8813199,8813200,8813201,8813202,8813203,8813204,8813205,8813206,8813207,8813208,8813209,8813210,8813211,8813212,8813213,8813214,8813215,8813216,8813217,8813218,8813219,8813220,8813221,8813222,8813223,8813224,8813225,8813226,8813227,8813228,8813229,8813230,8813231,8813232,8813233,8813234,8813235,8813236,8813237,8813238,8813239,8813240,8813241,8813242,8813243,8813244,8813245,8813246,8813247,8813248,8813249,8813250,8813251,8813252,8813253,8813254,8813255,8813256,8813257,8813258,8813259,8813260,8813261,8813262,8813263,8813264,8813265,8813266,8813267,8813268,8813269,8813270,8813271,8813272,8813273,8813274,8813275,8813276,8813277,8813278,8813279,8813280,8813281,8813282,8813283,8813284,8815534}'::integer[])))\n Total runtime: 121306.233 ms\n\n\nYour other question is answered in the other mail along with the \nnon-analyze'd query plan :D\n\nSteve\n\nOn Thu, 12 Apr 2007, Tom Lane wrote:\n\n> Steve <[email protected]> writes:\n>> ... 
even if I force it to use the indexes\n>> (enable_seqscan=off) it doesn't make it any faster really :/\n>\n> Does that change the plan, or do you still get a seqscan?\n>\n> BTW, how big is this table really (how many rows)?\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n", "msg_date": "Thu, 12 Apr 2007 19:42:49 -0400 (EDT)", "msg_from": "Steve <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strangely Variable Query Performance " }, { "msg_contents": "Steve <[email protected]> writes:\n> With enable_seqscan=off I get:\n\n> -> Bitmap Index Scan on detail_summary_receipt_encounter_idx \n> (cost=0.00..4211395.17 rows=1099 width=0)\n> Index Cond: ((receipt >= '1998-12-30'::date) AND (encounter_id = \n> ANY ...\n\n> The explain analyze is pending, running it now (however it doens't really \n> appear to be any faster using this plan).\n\nYeah, that index is nearly useless for this query --- since the receipt\ncondition isn't really eliminating anything, it'll have to look at every\nindex entry :-( ... in fact, do so again for each of the IN arms :-( :-(\nSo it's definitely right not to want to use that plan. Question is, why\nis it seemingly failing to consider the \"right\" index?\n\nI'm busy setting up my test case on an x86_64 machine right now, but\nI rather fear it'll still work just fine for me. Have you got any\nnondefault parameter settings besides the ones you already mentioned?\n\nAnother thing that might be interesting, if you haven't got a problem\nwith exclusive-locking the table for a little bit, is\n\n\tBEGIN;\n\tDROP INDEX each index except detail_summary_encounter_id_idx\n\tEXPLAIN the problem query\n\tROLLBACK;\n\njust to see if it does the right thing when it's not distracted by\nall the \"wrong\" indexes (yeah, I'm grasping at straws here). 
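Concretely, that script might look something like this -- only one DROP is written out, extend it to the rest of the index list, and the SELECT list and the full IN list are elided:\n\n\tBEGIN;\n\tDROP INDEX detail_summary_receipt_encounter_idx;\n\t-- ... likewise for every other index except detail_summary_encounter_id_idx\n\tEXPLAIN select ... from detail_summary ds\n\t\twhere ds.receipt >= '1998-12-30' and ds.encounter_id in ( ... );\n\tROLLBACK;\n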
If you\nset up the above as a SQL script it should only take a second to run.\nPlease try this with both settings of enable_seqscan --- you don't need\nto do \"explain analyze\" though, we just want to know which plan it picks\nand what the cost estimate is.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 12 Apr 2007 20:00:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strangely Variable Query Performance " }, { "msg_contents": "If the other indexes are removed, with enable_seqscan=on:\n\n Bitmap Heap Scan on detail_summary ds (cost=154.10..1804.22 rows=1099 \nwidth=4)\n Recheck Cond: (encounter_id = ANY \n('{8813186,8813187,8813188,8813189,8813190,8813191,8813192,8813193,8813194,8813195,8813196,8813197,8813198,8813199,8813200,8813201,8813202,8813203,8813204,8813205,8813206,8813207,8813208,8813209,8813210,8813211,8813212,8813213,8813214,8813215,8813216,8813217,8813218,8813219,8813220,8813221,8813222,8813223,8813224,8813225,8813226,8813227,8813228,8813229,8813230,8813231,8813232,8813233,8813234,8813235,8813236,8813237,8813238,8813239,8813240,8813241,8813242,8813243,8813244,8813245,8813246,8813247,8813248,8813249,8813250,8813251,8813252,8813253,8813254,8813255,8813256,8813257,8813258,8813259,8813260,8813261,8813262,8813263,8813264,8813265,8813266,8813267,8813268,8813269,8813270,8813271,8813272,8813273,8813274,8813275,8813276,8813277,8813278,8813279,8813280,8813281,8813282,8813283,8813284,8815534}'::integer[]))\n Filter: (receipt >= '1998-12-30'::date)\n -> Bitmap Index Scan on detail_summary_encounter_id_idx \n(cost=0.00..154.07 rows=1099 width=0)\n Index Cond: (encounter_id = ANY \n('{8813186,8813187,8813188,8813189,8813190,8813191,8813192,8813193,8813194,8813195,8813196,8813197,8813198,8813199,8813200,8813201,8813202,8813203,8813204,8813205,8813206,8813207,8813208,8813209,8813210,8813211,8813212,8813213,8813214,8813215,8813216,8813217,8813218,8813219,8813220,8813221,8813222,8813223,8813224,8813225,8813226,8813227,8813228,8813229,8813230,8813231,8813232,8813233,8813234,8813235,8813236,8813237,8813238,8813239,8813240,8813241,8813242,8813243,8813244,8813245,8813246,8813247,8813248,8813249,8813250,8813251,8813252,8813253,8813254,8813255,8813256,8813257,8813258,8813259,8813260,8813261,8813262,8813263,8813264,8813265,8813266,8813267,8813268,8813269,8813270,8813271,8813272,8813273,8813274,8813275,8813276,8813277,8813278,8813279,8813280,8813281,8813282,8813283,8813284,8815534}'::integer[]))\n\n\nWith it off:\n\n Bitmap Heap Scan on detail_summary ds (cost=154.10..1804.22 rows=1099 \nwidth=4)\n Recheck Cond: (encounter_id = ANY \n('{8813186,8813187,8813188,8813189,8813190,8813191,8813192,8813193,8813194,8813195,8813196,8813197,8813198,8813199,8813200,8813201,8813202,8813203,8813204,8813205,8813206,8813207,8813208,8813209,8813210,8813211,8813212,8813213,8813214,8813215,8813216,8813217,8813218,8813219,8813220,8813221,8813222,8813223,8813224,8813225,8813226,8813227,8813228,8813229,8813230,8813231,8813232,8813233,8813234,8813235,8813236,8813237,8813238,8813239,8813240,8813241,8813242,8813243,8813244,8813245,8813246,8813247,8813248,8813249,8813250,8813251,8813252,8813253,8813254,8813255,8813256,8813257,8813258,8813259,8813260,8813261,8813262,8813263,8813264,8813265,8813266,8813267,8813268,8813269,8813270,8813271,8813272,8813273,8813274,8813275,8813276,8813277,8813278,8813279,8813280,8813281,8813282,8813283,8813284,8815534}'::integer[]))\n Filter: (receipt >= '1998-12-30'::date)\n -> Bitmap Index Scan on detail_summary_encounter_id_idx 
\n(cost=0.00..154.07 rows=1099 width=0)\n Index Cond: (encounter_id = ANY \n('{8813186,8813187,8813188,8813189,8813190,8813191,8813192,8813193,8813194,8813195,8813196,8813197,8813198,8813199,8813200,8813201,8813202,8813203,8813204,8813205,8813206,8813207,8813208,8813209,8813210,8813211,8813212,8813213,8813214,8813215,8813216,8813217,8813218,8813219,8813220,8813221,8813222,8813223,8813224,8813225,8813226,8813227,8813228,8813229,8813230,8813231,8813232,8813233,8813234,8813235,8813236,8813237,8813238,8813239,8813240,8813241,8813242,8813243,8813244,8813245,8813246,8813247,8813248,8813249,8813250,8813251,8813252,8813253,8813254,8813255,8813256,8813257,8813258,8813259,8813260,8813261,8813262,8813263,8813264,8813265,8813266,8813267,8813268,8813269,8813270,8813271,8813272,8813273,8813274,8813275,8813276,8813277,8813278,8813279,8813280,8813281,8813282,8813283,8813284,8815534}'::integer[]))\n\n\nEither way, it runs perfectly fast. So it looks like the indexes are \nconfusing this query like you suspected. Any advise? This isn't the only \nquery we run on this table, much as I'd absolutely love to kill off some \nindexes to imrpove our nightly load times I can't foul up the other \nqueries :)\n\n\nThank you very much for all your help on this issue, too!\n\nSteve\n\nOn Thu, 12 Apr 2007, Tom Lane wrote:\n\n> Steve <[email protected]> writes:\n>> With enable_seqscan=off I get:\n>\n>> -> Bitmap Index Scan on detail_summary_receipt_encounter_idx\n>> (cost=0.00..4211395.17 rows=1099 width=0)\n>> Index Cond: ((receipt >= '1998-12-30'::date) AND (encounter_id =\n>> ANY ...\n>\n>> The explain analyze is pending, running it now (however it doens't really\n>> appear to be any faster using this plan).\n>\n> Yeah, that index is nearly useless for this query --- since the receipt\n> condition isn't really eliminating anything, it'll have to look at every\n> index entry :-( ... in fact, do so again for each of the IN arms :-( :-(\n> So it's definitely right not to want to use that plan. Question is, why\n> is it seemingly failing to consider the \"right\" index?\n>\n> I'm busy setting up my test case on an x86_64 machine right now, but\n> I rather fear it'll still work just fine for me. Have you got any\n> nondefault parameter settings besides the ones you already mentioned?\n>\n> Another thing that might be interesting, if you haven't got a problem\n> with exclusive-locking the table for a little bit, is\n>\n> \tBEGIN;\n> \tDROP INDEX each index except detail_summary_encounter_id_idx\n> \tEXPLAIN the problem query\n> \tROLLBACK;\n>\n> just to see if it does the right thing when it's not distracted by\n> all the \"wrong\" indexes (yeah, I'm grasping at straws here). If you\n> set up the above as a SQL script it should only take a second to run.\n> Please try this with both settings of enable_seqscan --- you don't need\n> to do \"explain analyze\" though, we just want to know which plan it picks\n> and what the cost estimate is.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n", "msg_date": "Thu, 12 Apr 2007 20:05:48 -0400 (EDT)", "msg_from": "Steve <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strangely Variable Query Performance " }, { "msg_contents": "Steve <[email protected]> writes:\n> Either way, it runs perfectly fast. So it looks like the indexes are \n> confusing this query like you suspected. 
Any advise?\n\nWow --- sometimes grasping at straws pays off. I was testing here with\njust a subset of the indexes to save build time, but I bet that one of\nthe \"irrelevant\" ones is affecting this somehow. Time to re-test.\n\nIf you have some time to kill, it might be interesting to vary that\nbegin/rollback test script to leave one or two other indexes in place,\nand see if you can identify exactly which other index(es) get it\nconfused.\n\nI'm about to go out to dinner with the wife, but will have a closer\nlook when I get back, or tomorrow morning. We'll figure this out.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 12 Apr 2007 20:20:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strangely Variable Query Performance " }, { "msg_contents": "Okay -- I started leaving indexes on one by one.\n\nThe explain broke when the detail_summary_receipt_encounter_idx index was \nleft on (receipt, encounter_id).\n\nJust dropping that index had no effect, but there's a LOT of indexes that \nrefer to receipt. So on a hunch I tried dropping all indexes that refer \nto receipt date and that worked -- so it's the indexes that contain \nreceipt date that are teh problem.\n\nFor more fun, I tried leaving the index that's just receipt date alone \n(index detail_summary_receipt_id_idx) and THAT produced the correct query; \nit's all these multi-column queries that are fouling things up, it would \nseem!\n\n\n.... So does this mean I should experiment with dropping those indexes? \nI'm not sure if that will result in 'bad things' as there are other \ncomplicated actions like sorts that may go real slow if I drop those \nindexes. BUT I think it'll be easy to convince the customer to drop their \nabsurdly complicated sorts if I can come back with serious results like \nwhat we've worked out here.\n\n\nAnd thanks again -- have a good dinner! :)\n\nSteve\n\n\nOn Thu, 12 Apr 2007, Tom Lane wrote:\n\n> Steve <[email protected]> writes:\n>> Either way, it runs perfectly fast. So it looks like the indexes are\n>> confusing this query like you suspected. Any advise?\n>\n> Wow --- sometimes grasping at straws pays off. I was testing here with\n> just a subset of the indexes to save build time, but I bet that one of\n> the \"irrelevant\" ones is affecting this somehow. Time to re-test.\n>\n> If you have some time to kill, it might be interesting to vary that\n> begin/rollback test script to leave one or two other indexes in place,\n> and see if you can identify exactly which other index(es) get it\n> confused.\n>\n> I'm about to go out to dinner with the wife, but will have a closer\n> look when I get back, or tomorrow morning. We'll figure this out.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n>\n", "msg_date": "Thu, 12 Apr 2007 20:34:18 -0400 (EDT)", "msg_from": "Steve <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strangely Variable Query Performance " }, { "msg_contents": "Steve <[email protected]> writes:\n> Okay -- I started leaving indexes on one by one.\n> ...\n> .... So does this mean I should experiment with dropping those indexes? \n\nNo, I think this means there's a planner bug to fix. 
I haven't quite\nscoped out what it is yet, though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 12 Apr 2007 22:40:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strangely Variable Query Performance " }, { "msg_contents": "Steve <[email protected]> writes:\n> Just dropping that index had no effect, but there's a LOT of indexes that \n> refer to receipt. So on a hunch I tried dropping all indexes that refer \n> to receipt date and that worked -- so it's the indexes that contain \n> receipt date that are teh problem.\n\nI'm still not having any luck reproducing the failure here. Grasping at\nstraws again, I wonder if it's got something to do with the order in\nwhich the planner examines the indexes --- which is OID order. Could\nyou send me the results of \n\nselect indexrelid::regclass from pg_index where indrelid = 'detail_summary'::regclass order by indexrelid;\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 12 Apr 2007 23:49:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strangely Variable Query Performance " }, { "msg_contents": "Here you go:\n\n detail_summary_b_record_status_idx\n detail_summary_batch_id_idx\n detail_summary_batchnum_idx\n detail_summary_carrier_id_idx\n detail_summary_duplicate_id_idx\n detail_summary_e_record_status_idx\n detail_summary_encounter_id_idx\n detail_summary_encounternum_idx\n detail_summary_export_date_idx\n detail_summary_hedis_date_idx\n detail_summary_member_name_idx\n detail_summary_member_num_idx\n detail_summary_p_record_status_idx\n detail_summary_patient_control_idx\n detail_summary_procedurecode_idx\n detail_summary_product_line_idx\n detail_summary_provider_id_idx\n detail_summary_raps_date_idx\n detail_summary_receipt_id_idx\n detail_summary_referrering_prov_id_idx\n detail_summary_rendering_prov_id_idx\n detail_summary_rendering_prov_name_idx\n detail_summary_servicedate_idx\n ds_sort_1\n ds_sort_10\n ed_cbee_norev\n ed_cbee_norev_p\n ed_cbee_rev\n ed_cbee_rev_p\n mcbe\n mcbe_p\n mcbe_rev\n mcbe_rev_p\n mcbee_norev\n mcbee_norev_p\n mcbee_rev\n mcbee_rev_p\n pcbee_norev\n pcbee_norev_p\n pcbee_rev\n pcbee_rev_p\n rcbee_norev\n rcbee_norev_p\n rp_cbee_norev\n rp_cbee_norev_p\n rp_cbee_rev\n rp_cbee_rev_p\n sd_cbee_norev\n sd_cbee_norev_p\n sd_cbee_rev\n sd_cbee_rev_p\n testrev\n testrev_p\n detail_summary_receipt_encounter_idx\n\n\nOn Thu, 12 Apr 2007, Tom Lane wrote:\n\n> Steve <[email protected]> writes:\n>> Just dropping that index had no effect, but there's a LOT of indexes that\n>> refer to receipt. So on a hunch I tried dropping all indexes that refer\n>> to receipt date and that worked -- so it's the indexes that contain\n>> receipt date that are teh problem.\n>\n> I'm still not having any luck reproducing the failure here. Grasping at\n> straws again, I wonder if it's got something to do with the order in\n> which the planner examines the indexes --- which is OID order. Could\n> you send me the results of\n>\n> select indexrelid::regclass from pg_index where indrelid = 'detail_summary'::regclass order by indexrelid;\n>\n> \t\t\tregards, tom lane\n>\n", "msg_date": "Thu, 12 Apr 2007 23:55:00 -0400 (EDT)", "msg_from": "Steve <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strangely Variable Query Performance " }, { "msg_contents": "Steve <[email protected]> writes:\n> On Thu, 12 Apr 2007, Tom Lane wrote:\n>> I'm still not having any luck reproducing the failure here. 
Grasping at\n>> straws again, I wonder if it's got something to do with the order in\n>> which the planner examines the indexes --- which is OID order. Could\n>> you send me the results of\n\n> Here you go:\n\nNope, still doesn't fail for me. [ baffled and annoyed... ] There must\nbe something about your situation that's relevant but we aren't\nrecognizing it. Are you in a position to let me ssh into your machine\nand try to debug it? Or other options like sending me a dump of your\ndatabase? I'm about out of steam for tonight, but let's talk about it\noff-list tomorrow.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 13 Apr 2007 00:38:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strangely Variable Query Performance " }, { "msg_contents": "Steve <[email protected]> writes:\n> [ strange planner misbehavior in 8.2.3 ]\n\nAfter some off-list investigation (thanks, Steve, for letting me poke\nat your machine), the short answer is that the heuristics used by\nchoose_bitmap_and() suck. The problem query is like \n\nselect ... from ds where\nds.receipt >= '1998-12-30 0:0:0' and\nds.encounter_id in ( ... 100 distinct values ... );\n\nand the table has a truly remarkable variety of indexes on encounter_id,\nreceipt, and combinations of them with other columns. The receipt\ncondition is actually in effect a no-op, because all receipt dates are\nlater than that, but because ineq_histogram_selectivity doesn't trust\nhistogram data unreservedly we compute a selectivity of about 0.99997\nfor it. That means that the indexes that cover both receipt and\nencounter_id are given a selectivity score just fractionally better than\nthose involving encounter_id alone, and therefore they sort first in\nchoose_bitmap_and's sort step, and the way that that routine is coded,\nonly combinations of the very first index with other ones will be\nconsidered for a bitmap heap scan. So the possibility of using just the\nindex on encounter_id alone is never considered, even though that\nalternative is vastly cheaper than the alternatives that are considered.\n(It happens that encounter_id is a low-order column in all the indexes\nthat include receipt, and so these scans end up covering the whole index\n... multiple times even. The cost estimation is fine --- the thing\nknows these are expensive --- what's falling down is the heuristic for\nwhich combinations of indexes to consider using in a bitmap scan.)\n\nThe original coding of choose_bitmap_and involved a \"fuzzy\" comparison\nof selectivities, which would have avoided this problem, but we got rid\nof that later because it had its own problems. In fact,\nchoose_bitmap_and has caused us enough problems that I'm thinking we\nneed a fundamental rethink of how it works, rather than just marginal\ntweaks. If you haven't looked at this code before, the comments explain\nthe idea well enough:\n\n/*\n * choose_bitmap_and\n * Given a nonempty list of bitmap paths, AND them into one path.\n *\n * This is a nontrivial decision since we can legally use any subset of the\n * given path set. 
We want to choose a good tradeoff between selectivity\n * and cost of computing the bitmap.\n *\n * The result is either a single one of the inputs, or a BitmapAndPath\n * combining multiple inputs.\n */\n...\n /*\n * In theory we should consider every nonempty subset of the given paths.\n * In practice that seems like overkill, given the crude nature of the\n * estimates, not to mention the possible effects of higher-level AND and\n * OR clauses. As a compromise, we sort the paths by selectivity. We\n * always take the first, and sequentially add on paths that result in a\n * lower estimated cost.\n *\n * We also make some effort to detect directly redundant input paths, as\n * can happen if there are multiple possibly usable indexes. (Another way\n * it can happen is that best_inner_indexscan will find the same OR join\n * clauses that create_or_index_quals has pulled OR restriction clauses\n * out of, and then both versions show up as duplicate paths.) We\n * consider an index redundant if any of its index conditions were already\n * used by earlier indexes. (We could use predicate_implied_by to have a\n * more intelligent, but much more expensive, check --- but in most cases\n * simple pointer equality should suffice, since after all the index\n * conditions are all coming from the same RestrictInfo lists.)\n *\n * You might think the condition for redundancy should be \"all index\n * conditions already used\", not \"any\", but this turns out to be wrong.\n * For example, if we use an index on A, and then come to an index with\n * conditions on A and B, the only way that the second index can be later\n * in the selectivity-order sort is if the condition on B is completely\n * non-selective. In any case, we'd surely be drastically misestimating\n * the selectivity if we count the same condition twice.\n *\n * We include index predicate conditions in the redundancy test. Because\n * the test is just for pointer equality and not equal(), the effect is\n * that use of the same partial index in two different AND elements is\n * considered redundant. (XXX is this too strong?)\n *\n * Note: outputting the selected sub-paths in selectivity order is a good\n * thing even if we weren't using that as part of the selection method,\n * because it makes the short-circuit case in MultiExecBitmapAnd() more\n * likely to apply.\n */\n\n\nOne idea I thought about was to sort by index scan cost, using\nselectivity only as a tiebreaker for cost, rather than the other way\naround as is currently done. This seems fairly plausible because\nindexscans that are cheaper than other indexscans likely return fewer\nrows too, and so selectivity is already accounted for to some extent ---\nat least you can't have an enormously worse selectivity at lower cost,\nwhereas Steve's example proves it doesn't work the other way. But I'm\nworried about breaking the reasoning about redundant indexes that's\nmentioned in the comments.\n\nAnother alternative that would respond to the immediate problem is to\nmaintain the current sort order, but as we come to each index, consider\nusing that one alone, and throw away whatever AND we might have built up\nif that one alone beats the AND-so-far. 
This seems more conservative,\nas it's unlikely to break any cases that work well now, but on the other\nhand it feels like plastering another wart atop a structure that's\nalready rather rickety.\n\nHas anyone got any thoughts about the best way to do this?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 13 Apr 2007 18:48:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "choose_bitmap_and again (was Re: Strangely Variable Query\n Performance)" }, { "msg_contents": "Tom Lane wrote:\n\n> One idea I thought about was to sort by index scan cost, using\n> selectivity only as a tiebreaker for cost, rather than the other way\n> around as is currently done. This seems fairly plausible because\n> indexscans that are cheaper than other indexscans likely return fewer\n> rows too, and so selectivity is already accounted for to some extent ---\n> at least you can't have an enormously worse selectivity at lower cost,\n> whereas Steve's example proves it doesn't work the other way. But I'm\n> worried about breaking the reasoning about redundant indexes that's\n> mentioned in the comments.\n> \n> Another alternative that would respond to the immediate problem is to\n> maintain the current sort order, but as we come to each index, consider\n> using that one alone, and throw away whatever AND we might have built up\n> if that one alone beats the AND-so-far. This seems more conservative,\n> as it's unlikely to break any cases that work well now, but on the other\n> hand it feels like plastering another wart atop a structure that's\n> already rather rickety.\n> \n> Has anyone got any thoughts about the best way to do this?\n\nHow about doing both: sort the index by index scan cost; then pick the\nfirst index on the list and start adding indexes when they lower the\ncost. When adding each index, consider it by itself against the\nalready stacked indexes. If the cost is lower, put this index at the\ntop of the list, and restart the algorithm (after the sorting step of\ncourse).\n\nI think the concern about condition redundancy should be attacked\nseparately. How about just comparing whether they have common prefixes\nof conditions? I admit I don't understand what would happen with\nindexes defined like (lower(A), B, C) versus (A, B) for example.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Fri, 13 Apr 2007 19:24:09 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] choose_bitmap_and again (was Re: Strangely Variable\n\tQuery Performance)" }, { "msg_contents": "On Fri, 2007-04-13 at 18:48 -0400, Tom Lane wrote:\n> Has anyone got any thoughts about the best way to do this?\n\nI don't think we know enough to pick one variant that works in all\ncases. This requires more detailed analysis of various cases.\n\nLets put in a parameter to allow the options to be varied. The purpose\nwould be to look for some more information that allows us to see what\nthe pre-conditions would be for each heuristic.\n\nInitially, we say this is a beta-only feature and may be withdrawn in\nthe production version.\n\nWhy did it not pick my index? is a tiring game, but one that must be\nplayed. 
I hope that the ORDER LIMIT optimization should reduce the\nnumber of indexes chosen to maintain sort order in the output.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 14 Apr 2007 09:24:07 +0100", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: choose_bitmap_and again (was Re: [PERFORM] StrangelyVariable\n\tQuery Performance)" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:pgsql-hackers-\n> [email protected]] On Behalf Of Alvaro Herrera\n> Sent: Friday, April 13, 2007 4:24 PM\n> To: Tom Lane\n> Cc: [email protected]; PostgreSQL Performance; Steve\n> Subject: Re: [HACKERS] choose_bitmap_and again (was Re: [PERFORM]\n> Strangely Variable Query Performance)\n> \n> Tom Lane wrote:\n> \n> > One idea I thought about was to sort by index scan cost, using\n> > selectivity only as a tiebreaker for cost, rather than the other way\n> > around as is currently done. This seems fairly plausible because\n> > indexscans that are cheaper than other indexscans likely return\nfewer\n> > rows too, and so selectivity is already accounted for to some extent\n---\n> > at least you can't have an enormously worse selectivity at lower\ncost,\n> > whereas Steve's example proves it doesn't work the other way. But\nI'm\n> > worried about breaking the reasoning about redundant indexes that's\n> > mentioned in the comments.\n> >\n> > Another alternative that would respond to the immediate problem is\nto\n> > maintain the current sort order, but as we come to each index,\nconsider\n> > using that one alone, and throw away whatever AND we might have\nbuilt up\n> > if that one alone beats the AND-so-far. This seems more\nconservative,\n> > as it's unlikely to break any cases that work well now, but on the\nother\n> > hand it feels like plastering another wart atop a structure that's\n> > already rather rickety.\n> >\n> > Has anyone got any thoughts about the best way to do this?\n> \n> How about doing both: sort the index by index scan cost; then pick the\n> first index on the list and start adding indexes when they lower the\n> cost. When adding each index, consider it by itself against the\n> already stacked indexes. If the cost is lower, put this index at the\n> top of the list, and restart the algorithm (after the sorting step of\n> course).\n> \n> I think the concern about condition redundancy should be attacked\n> separately. How about just comparing whether they have common\nprefixes\n> of conditions? I admit I don't understand what would happen with\n> indexes defined like (lower(A), B, C) versus (A, B) for example.\n\nInstead of sorting, I suggest the quickselect() algorithm, which is\nO(n).\nProbably, if the list is small, it won't matter much, but it might offer\nsome tangible benefit.\n\nHere is an example of the algorithm:\n\n#include <stdlib.h>\ntypedef double Etype; /* Season to taste. */\n\nextern Etype RandomSelect(Etype * A, size_t p, size_t r, size_t i);\nextern size_t RandRange(size_t a, size_t b);\nextern size_t RandomPartition(Etype * A, size_t p, size_t r);\nextern size_t Partition(Etype * A, size_t p, size_t r);\n\n/*\n**\n** In the following code, every reference to CLR means:\n**\n** \"Introduction to Algorithms\"\n** By Thomas H. Cormen, Charles E. Leiserson, Ronald L. 
Rivest\n** ISBN 0-07-013143-0\n*/\n\n\n/*\n** CLR, page 187\n*/\nEtype RandomSelect(Etype A[], size_t p, size_t r, size_t i)\n{\n size_t q,\n k;\n if (p == r)\n return A[p];\n q = RandomPartition(A, p, r);\n k = q - p + 1;\n\n if (i <= k)\n return RandomSelect(A, p, q, i);\n else\n return RandomSelect(A, q + 1, r, i - k);\n}\n\nsize_t RandRange(size_t a, size_t b)\n{\n size_t c = (size_t) ((double) rand() / ((double) RAND_MAX + 1) * (b\n- a));\n return c + a;\n}\n\n/*\n** CLR, page 162\n*/\nsize_t RandomPartition(Etype A[], size_t p, size_t r)\n{\n size_t i = RandRange(p, r);\n Etype Temp;\n Temp = A[p];\n A[p] = A[i];\n A[i] = Temp;\n return Partition(A, p, r);\n}\n\n/*\n** CLR, page 154\n*/\nsize_t Partition(Etype A[], size_t p, size_t r)\n{\n Etype x,\n temp;\n size_t i,\n j;\n\n x = A[p];\n i = p - 1;\n j = r + 1;\n\n for (;;) {\n do {\n j--;\n } while (!(A[j] <= x));\n do {\n i++;\n } while (!(A[i] >= x));\n if (i < j) {\n temp = A[i];\n A[i] = A[j];\n A[j] = temp;\n } else\n return j;\n }\n}\n", "msg_date": "Sat, 14 Apr 2007 02:04:24 -0700", "msg_from": "\"Dann Corbit\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: choose_bitmap_and again (was Re: [PERFORM] Strangely Variable\n\tQuery Performance)" }, { "msg_contents": "\"Dann Corbit\" <[email protected]> writes:\n> Instead of sorting, I suggest the quickselect() algorithm, which is\n> O(n).\n\nWhat for? Common cases have less than half a dozen entries. That is\nnot the place we need to be spending engineering effort --- what we\nneed to worry about is what's the choice algorithm, not implementation\ndetails.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 14 Apr 2007 11:52:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] choose_bitmap_and again (was Re: Strangely Variable\n\tQuery Performance)" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> I think the concern about condition redundancy should be attacked\n> separately. How about just comparing whether they have common prefixes\n> of conditions? I admit I don't understand what would happen with\n> indexes defined like (lower(A), B, C) versus (A, B) for example.\n\nI understand that issue a bit better than I did when I wrote the comment\n(so I suppose I better rewrite it). The $64 reason for rejecting\nAND-combinations of indexes that are using some of the same\nWHERE-conditions is that if we don't, we effectively double-count the\nselectivity of those conditions, causing us to prefer useless\nAND-combinations. An example is \"WHERE A > 42 AND B < 100\" where we\nhave an index on A and one on (A,B). The selectivity calculation\nwill blindly assume that the selectivities of the two indexes are\nindependent and thus prefer to BitmapAnd them, when obviously there\nis no point in using both. Ideally we should improve the selectivity\ncalculation to not get fooled like this, but that seems hard and\nprobably slow. 
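To spell that example out as a toy case (table and index names made up purely for illustration):\n\n\tcreate table t (a int, b int);\n\tcreate index t_a_idx on t (a);\n\tcreate index t_a_b_idx on t (a, b);\n\t-- both indexes can absorb the a > 42 clause, so AND-ing their bitmaps\n\t-- would count that clause's selectivity twice\n\texplain select * from t where a > 42 and b < 100;\n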
So for the moment we have the heuristic that no\nWHERE-clause should be used twice in any AND-combination.\n\nGiven that we are using that heuristic, it becomes important that\nwe visit the indexes in the \"right order\" --- as the code stands,\nin the (A) vs (A,B) case it will consider only the first index it\narrives at in the selectivity sort order, because the second will\nbe rejected on the basis of re-use of the WHERE A > 42 condition.\nSorting by selectivity tends to ensure that we pick the index that\ncan make use of as many WHERE-clauses as possible.\n\nThe idea of considering each index alone fixes the order dependency\nfor cases where a single index is the best answer, but it doesn't\nhelp much for cases where you really do want a BitmapAnd, only not\none using the index with the individually best selectivity.\n\nWe really need a heuristic here --- exhaustive search will be\nimpractical in cases where there are many indexes, because you'd\nbe looking at 2^N combinations. (In Steve's example there are\nactually 38 relevant indexes, which is bad database design if\nyou ask me, but in any case we cannot afford to search through\n2^38 possibilities.) But the one we're using now is too fragile.\n\nMaybe we should use a cutoff similar to the GEQO one: do exhaustive\nsearch if there are less than N relevant indexes, for some N.\nBut that's not going to help Steve; we still need a smarter heuristic\nfor what to look for above the cutoff.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 14 Apr 2007 13:19:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] choose_bitmap_and again (was Re: Strangely Variable\n\tQuery Performance)" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Tom Lane wrote:\n>> Has anyone got any thoughts about the best way to do this?\n\n> How about doing both: sort the index by index scan cost; then pick the\n> first index on the list and start adding indexes when they lower the\n> cost. When adding each index, consider it by itself against the\n> already stacked indexes. If the cost is lower, put this index at the\n> top of the list, and restart the algorithm (after the sorting step of\n> course).\n\nThe \"restart\" part of that bothers me --- it's not entirely clear that\nyou couldn't get into an infinite loop. (Imagine that A+B is better\nthan A alone, so we adopt it, but worse than C alone, so we take C as\nthe new leader and start over. Then perhaps C+B is better than C alone\nbut worse than A alone, so we take A as the leader and start over.\nMaybe this is impossible but I'm unsure.)\n\nI looked back at the gdb results I'd gotten from Steve's example and\nnoticed that for his 38 indexes there were only three distinct index\nselectivity values, and the sort step grouped indexes by cost within\nthose groups. In hindsight of course this is obvious: the selectivity\ndepends on the set of WHERE-clauses used, so with two WHERE-clauses\nthere are three possible selectivities (to within roundoff error anyway)\ndepending on whether the index uses one or both clauses. So the\nexisting algorithm gets one thing right: for any two indexes that make\nuse of just the same set of WHERE-clauses, it will always take the one\nwith cheaper scan cost.\n\nThinking more about this leads me to the following proposal:\n\n1. 
Explicitly group the indexes according to the subset of\nWHERE-conditions (and partial index conditions, if any) they use.\nWithin each such group, discard all but the cheapest-scan-cost one.\n\n2. Sort the remaining indexes according to scan cost.\n\n3. For each index in order, consider it as a standalone scan, and also\nconsider adding it on to the AND-group led by each preceding index,\nusing the same logic as now: reject using any WHERE-condition twice\nin a group, and then add on only if the total cost of the AND-group\nscan is reduced.\n\nThis would be approximately O(N^2) in the number of relevant indexes,\nwhereas the current scheme is roughly linear (handwaving a bit here\nbecause the number of WHERE-clauses is a factor too). But that seems\nunlikely to be a problem for sane numbers of indexes, unlike the O(2^N)\nbehavior of an exhaustive search. It would get rid of (most of) the\norder-sensitivity problem, since we would definitely consider the\nAND-combination of every pair of combinable indexes. I can imagine\ncases where it wouldn't notice useful three-or-more-way combinations\nbecause the preceding two-way combination didn't win, but they seem\npretty remote possibilities.\n\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 14 Apr 2007 14:21:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] choose_bitmap_and again (was Re: Strangely Variable\n\tQuery Performance)" }, { "msg_contents": "I wrote:\n> Thinking more about this leads me to the following proposal:\n\n> 1. Explicitly group the indexes according to the subset of\n> WHERE-conditions (and partial index conditions, if any) they use.\n> Within each such group, discard all but the cheapest-scan-cost one.\n\n> 2. Sort the remaining indexes according to scan cost.\n\n> 3. For each index in order, consider it as a standalone scan, and also\n> consider adding it on to the AND-group led by each preceding index,\n> using the same logic as now: reject using any WHERE-condition twice\n> in a group, and then add on only if the total cost of the AND-group\n> scan is reduced.\n\nHere is a patch along these lines, in fact two patches (HEAD and 8.2\nversions). The 8.2 version picks up some additional partial-index\nintelligence that I added to HEAD on Mar 21 but did not at that time\nrisk back-patching --- since this is a fairly large rewrite of the\nroutine, keeping the branches in sync seems best.\n\nSteve, can you try this out on your queries and see if it makes better\nor worse decisions? It seems to fix your initial complaint but I do\nnot have a large stock of test cases to try.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 14 Apr 2007 18:02:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] choose_bitmap_and again (was Re: [PERFORM] Strangely\n\tVariable Query Performance)" }, { "msg_contents": "> Steve, can you try this out on your queries and see if it makes better\n> or worse decisions? It seems to fix your initial complaint but I do\n> not have a large stock of test cases to try.\n>\n\n \tWow, this is a remarkable difference. Queries that were taking \nminutes to complete are coming up in seconds. 
Good work, I think this'll \nsolve my customer's needs for their demo on the 19th :)\n\nThank you so much!\n\n\nSteve\n\n", "msg_date": "Sat, 14 Apr 2007 18:55:45 -0400 (EDT)", "msg_from": "Steve <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] choose_bitmap_and again (was Re: [PERFORM] Strangely\n\tVariable Query Performance)" }, { "msg_contents": "Steve <[email protected]> writes:\n>> Steve, can you try this out on your queries and see if it makes better\n>> or worse decisions? It seems to fix your initial complaint but I do\n>> not have a large stock of test cases to try.\n\n> \tWow, this is a remarkable difference. Queries that were taking \n> minutes to complete are coming up in seconds. Good work, I think this'll \n> solve my customer's needs for their demo on the 19th :)\n\nCan you find any cases where it makes a worse choice than before?\nAnother thing to pay attention to is whether the planning time gets\nnoticeably worse. If we can't find any cases where it loses badly\non those measures, I'll feel comfortable in applying it...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 15 Apr 2007 01:11:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] choose_bitmap_and again (was Re: [PERFORM] Strangely\n\tVariable Query Performance)" }, { "msg_contents": ">\n> Can you find any cases where it makes a worse choice than before?\n> Another thing to pay attention to is whether the planning time gets\n> noticeably worse. If we can't find any cases where it loses badly\n> on those measures, I'll feel comfortable in applying it...\n>\n\n \tI'll see what I can find -- I'll let you know on Monday if I can \nfind any queries that perform worse. My tests so far have shown \nequivalent or better performance so far but I've only done sort of a \nsurvey so far ... I've got plenty of special cases to test that should \nput this through the paces.\n\n\nSteve\n", "msg_date": "Sun, 15 Apr 2007 04:13:53 -0400 (EDT)", "msg_from": "Steve <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] choose_bitmap_and again (was Re: [PERFORM] Strangely\n\tVariable Query Performance)" }, { "msg_contents": ">\n> Can you find any cases where it makes a worse choice than before?\n> Another thing to pay attention to is whether the planning time gets\n> noticeably worse. If we can't find any cases where it loses badly\n> on those measures, I'll feel comfortable in applying it...\n>\n\n \tOkay, here's the vedict; all the \"extremely slow\" queries (i.e. \nqueries that took more than 30 seconds and upwards of several minutes to \ncomplete) are now running in the realm of reason. In fact, most queries \nthat took between 1 and 4 minutes are now down to taking about 9 seconds \nwhich is obviously a tremendous improvement.\n\n \tA few of the queries that were taking 9 seconds or less are \n\"slightly slower\" -- meaning a second or two slower. 
However most of them \nare running at the same speed they were before, or better.\n\n \tSo I'd say as far as I can tell with my application and my \ndataset, this change is solid and an obvious improvement.\n\n\nTalk to you later,\n\nSteve\n", "msg_date": "Mon, 16 Apr 2007 18:13:54 -0400 (EDT)", "msg_from": "Steve <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] choose_bitmap_and again (was Re: [PERFORM] Strangely\n\tVariable Query Performance)" }, { "msg_contents": "Steve wrote:\n\n> >Can you find any cases where it makes a worse choice than before?\n> >Another thing to pay attention to is whether the planning time gets\n> >noticeably worse. If we can't find any cases where it loses badly\n> >on those measures, I'll feel comfortable in applying it...\n> \n> \tOkay, here's the vedict; all the \"extremely slow\" queries (i.e. \n> queries that took more than 30 seconds and upwards of several minutes to \n> complete) are now running in the realm of reason. In fact, most queries \n> that took between 1 and 4 minutes are now down to taking about 9 seconds \n> which is obviously a tremendous improvement.\n> \n> \tA few of the queries that were taking 9 seconds or less are \n> \"slightly slower\" -- meaning a second or two slower. However most of them \n> are running at the same speed they were before, or better.\n> \n> \tSo I'd say as far as I can tell with my application and my \n> dataset, this change is solid and an obvious improvement.\n\nMaybe it would be interesting to see in detail those cases that got a\nbit slower, to further tweak the heuristic if necessary. Is the extra\ntime, time spent in planning or in execution?\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Mon, 16 Apr 2007 19:01:04 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] choose_bitmap_and again (was Re: [PERFORM] Strangely\n\tVariable Query Performance)" }, { "msg_contents": "Hi Tom / Steve,\n\nAm one of the silent readers of performance issues that come up on this list\n(and are discussed in detail) ... just like this one.\n\nIf and when you do come up with a solution, please do post some details\nabout them here... (i say that coz it seems that for obvious reasons, things\nmust have gone off air after tom's last email, and one can understand that).\nBut an analysis, or atleast a pointer may be of help to someone (like me)\nreading this list.\n\nThanks\nRobins\n\n---------- Forwarded message ----------\nFrom: Tom Lane <[email protected]>\nDate: Apr 13, 2007 10:08 AM\nSubject: Re: [PERFORM] Strangely Variable Query Performance\nTo: Steve <[email protected]>\nCc: Scott Marlowe <[email protected]>, PostgreSQL Performance <\[email protected]>\n\nSteve <[email protected]> writes:\n> On Thu, 12 Apr 2007, Tom Lane wrote:\n>> I'm still not having any luck reproducing the failure here. Grasping at\n>> straws again, I wonder if it's got something to do with the order in\n>> which the planner examines the indexes --- which is OID order. Could\n>> you send me the results of\n\n> Here you go:\n\nNope, still doesn't fail for me. [ baffled and annoyed... ] There must\nbe something about your situation that's relevant but we aren't\nrecognizing it. Are you in a position to let me ssh into your machine\nand try to debug it? Or other options like sending me a dump of your\ndatabase? 
I'm about out of steam for tonight, but let's talk about it\noff-list tomorrow.\n\n regards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: In versions below 8.0, the planner will ignore your desire to\n choose an index scan if your joining column's datatypes do not\n match\n\n\n-- \nRobins\n", "msg_date": "Tue, 17 Apr 2007 08:54:15 +0530", "msg_from": "\"Robins Tharakan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Fwd: Strangely Variable Query Performance" }, { "msg_contents": "Robins <[email protected]> writes:\n> If and when you do come up with a solution, please do post some details\n> about them here... 
(i say that coz it seems that for obvious reasons, things\n> must have gone off air after tom's last email, and one can understand that).\n> But an analysis, or atleast a pointer may be of help to someone (like me)\n> reading this list.\n\nOh, sorry, the subsequent discussion moved over to pgsql-hackers:\nhttp://archives.postgresql.org/pgsql-hackers/2007-04/msg00621.php\nand -patches:\nhttp://archives.postgresql.org/pgsql-patches/2007-04/msg00374.php\n\nThose are good places to look if a discussion on -bugs or other lists\nseems to tail off...\n\n\t\t\tregards, tom lane\n\nPS: the reason I couldn't reproduce the behavior was just that the dummy\ndata I was using didn't have the right statistics.\n", "msg_date": "Tue, 17 Apr 2007 00:44:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: Strangely Variable Query Performance " }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Steve wrote:\n>> So I'd say as far as I can tell with my application and my \n>> dataset, this change is solid and an obvious improvement.\n\n> Maybe it would be interesting to see in detail those cases that got a\n> bit slower, to further tweak the heuristic if necessary. Is the extra\n> time, time spent in planning or in execution?\n\nSince there doesn't seem to be vast interest out there in testing this\nfurther, I'm going to go ahead and apply the patch to get it out of my\nworking directory. We can always tweak it more later if new info\nsurfaces.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 17 Apr 2007 15:04:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] choose_bitmap_and again (was Re: [PERFORM] Strangely\n\tVariable Query Performance)" }, { "msg_contents": ">> Maybe it would be interesting to see in detail those cases that got a\n>> bit slower, to further tweak the heuristic if necessary. Is the extra\n>> time, time spent in planning or in execution?\n>\n> Since there doesn't seem to be vast interest out there in testing this\n> further, I'm going to go ahead and apply the patch to get it out of my\n> working directory. We can always tweak it more later if new info\n> surfaces.\n>\n\n \tDoing my routine patching seems to have exploded my mail server, \nsorry for not replying sooner!\n\n \tI don't actually have planning vs. execution time statistics from \nthe older version for the queries in question -- there were not 'problem \nqueries' and therefore were never really analyzed. My customer's already \ndragging me off to another issue, so I've got to shift gears.\n\nAppreciate all your work -- thanks again!!! :)\n\n\nSteve\n", "msg_date": "Tue, 17 Apr 2007 15:19:25 -0400 (EDT)", "msg_from": "Steve <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] choose_bitmap_and again (was Re: [PERFORM]\n\tStrangely Variable Query Performance)" } ]
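A concrete way to picture the choice the new heuristic has to make is a table carrying several indexes whose usable WHERE-conditions overlap. All names below are invented for illustration only and are not taken from the schema discussed above:

CREATE TABLE detail (
    encounter_id  integer,
    orderedby_id  integer,
    status        integer
);

CREATE INDEX detail_enc        ON detail (encounter_id);
CREATE INDEX detail_enc_status ON detail (encounter_id, status);
CREATE INDEX detail_ord        ON detail (orderedby_id);

-- For the WHERE clause below, detail_enc and detail_enc_status can each use
-- only the encounter_id condition, so step 1 of the proposal puts them in the
-- same group and keeps whichever is cheaper to scan; step 3 then checks
-- whether ANDing detail_ord into the bitmap scan lowers the total cost.
EXPLAIN SELECT * FROM detail
 WHERE encounter_id = 12345 AND orderedby_id = 6789;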
[ { "msg_contents": "Dear All.\n\n \n\nHow to compute the frequency of predicate (e.g. Salary > $70000) in an\nSQL query from a DB's pre-defined indexes?\". I'm specifically looking at\nhow to retrieve information about indices (like number of pages at each\nlevel of index, range of attribute values etc.)\n\n \n\nAny suggestions regarding the same would be great\n\n \n\nThanks,\n\n \n\n \n\nAvdhoot K. Saple\nJunior Research Associate\nHigh Performance & Grid Computing \nInfosys Technologies Ltd., Bangalore\n\n \n\n\n\n**************** CAUTION - Disclaimer *****************\nThis e-mail contains PRIVILEGED AND CONFIDENTIAL INFORMATION intended solely for the use of the addressee(s). If you are not the intended recipient, please notify the sender by e-mail and delete the original message. Further, you are not to copy, disclose, or distribute this e-mail or its contents to any other person and any such actions are unlawful. This e-mail may contain viruses. Infosys has taken every reasonable precaution to minimize this risk, but is not liable for any damage you may sustain as a result of any virus in this e-mail. You should carry out your own virus checks before opening the e-mail or attachment. Infosys reserves the right to monitor and review the content of all messages sent to or from this e-mail address. Messages sent to or from this e-mail address may be stored on the Infosys e-mail system.\n***INFOSYS******** End of Disclaimer ********INFOSYS***\n\n\n\n\n\n\n\n\n\n\n\nDear All.\n \nHow to compute the frequency of predicate (e.g. Salary >\n$70000) in an SQL query from a DB’s pre-defined indexes?”.\nI’m specifically looking at how to retrieve information about indices\n(like number of pages at each level of index, range of attribute values etc.)\n \nAny suggestions regarding the same would be great\n \nThanks,\n \n \nAvdhoot K. Saple\nJunior Research Associate\nHigh Performance & Grid Computing \nInfosys Technologies Ltd., Bangalore", "msg_date": "Fri, 13 Apr 2007 14:29:30 +0530", "msg_from": "\"Avdhoot Kishore Saple\" <[email protected]>", "msg_from_op": true, "msg_subject": "local selectivity estimation - computing frequency of predicates" }, { "msg_contents": "\"Avdhoot Kishore Saple\" <[email protected]> writes:\n> How to compute the frequency of predicate (e.g. Salary > $70000) in an\n> SQL query from a DB's pre-defined indexes?\". I'm specifically looking at\n> how to retrieve information about indices (like number of pages at each\n> level of index, range of attribute values etc.)\n\nI don't think what you're looking for is exposed anywhere. Postgres\ndoesn't rely on indexes for statistical information anyway; the\npg_statistic system catalog (see also pg_stats view) is used for that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 13 Apr 2007 10:13:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: local selectivity estimation - computing frequency of predicates " } ]
[ { "msg_contents": "Is there a pg_stat_* table or the like that will show how bloated an index is? \nI am trying to squeeze some disk space and want to track down where the worst \noffenders are before performing a global REINDEX on all tables, as the database \nis rougly 400GB on disk and this takes a very long time to run.\n\nI have been able to do this with tables, using a helpful view posted to this \nlist a few months back, but I'm not sure if I can get the same results on indexes.\n\nThanks\n\n-Dan\n", "msg_date": "Fri, 13 Apr 2007 14:01:40 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Finding bloated indexes?" }, { "msg_contents": "On Apr 13, 2007, at 4:01 PM, Dan Harris wrote:\n\n> Is there a pg_stat_* table or the like that will show how bloated \n> an index is? I am trying to squeeze some disk space and want to \n> track down where the worst offenders are before performing a global \n> REINDEX on all tables, as the database is rougly 400GB on disk and \n> this takes a very long time to run.\n\nI find this as a helpful guide:\n\nselect relname,relkind,relpages from pg_class where relname like 'user \n%';\n\nfor example (obviously change the LIKE clause to something useful to \nyou).\n\nthen with your knowledge of how big your rows are and how many \nrelpages the table itself takes, you can see if your index is too \nbig. It helps to watch these numbers over time.\n\nAlso, running \"analyze verbose\" on the table gives you a hint at how \nsparse the pages are, which might imply something for table bloat. \nI'm not sure.\n\nMore expensive is \"vacuum verbose\" which gives lots of info on how \nmany \"unused pointers\" there are in your indexes. This may be of \nuse. If this is a high number compared to the number of row \nversions, then you probably have bloat there.", "msg_date": "Fri, 13 Apr 2007 17:07:26 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Finding bloated indexes?" }, { "msg_contents": "On Fri, 2007-04-13 at 14:01 -0600, Dan Harris wrote:\n> Is there a pg_stat_* table or the like that will show how bloated an index is? \n> I am trying to squeeze some disk space and want to track down where the worst \n> offenders are before performing a global REINDEX on all tables, as the database \n> is rougly 400GB on disk and this takes a very long time to run.\n> \n> I have been able to do this with tables, using a helpful view posted to this \n> list a few months back, but I'm not sure if I can get the same results on indexes.\n\nUse pgstatindex in contrib/pgstattuple\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 14 Apr 2007 09:27:25 +0100", "msg_from": "\"Simon Riggs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Finding bloated indexes?" } ]
[ { "msg_contents": "Consider these two very similar schemas:\n\nSchema 1:\n\n\nCREATE TABLE foo (\n id serial PRIMARY KEY,\n frobnitz character(varying 100) NOT NULL UNIQUE\n);\n\n\nCREATE TABLE bar (\n id serial PRIMARY KEY,\n foo_id int REFERENCES foo(id)\n)\n\n\nSchema 2:\n\n\nCREATE TABLE foo (\n frobnitz character(varying 100) PRIMARY KEY\n);\n\n\nCREATE TABLE bar (\n id serial PRIMARY KEY,\n frobnitz character(varying 100) REFERENCES foo(frobnitz)\n)\n\n\n\n\nThe two situations are semantically identical: each record in table bar\nrefers to a record in table foo. The difference is that in the first\nschema, this referencing is done through an \"artificial\" serial-integer\nprimary key, while in the second schema this reference is done through a\ndata field that happens to be unique and not null, so it can serve as\nprimary key.\n\n\nI find Schema 1 awkward and unnatural; more specifically, foo.id seems\nunnecessary in light of the non-null uniqueness of foo.frobnitz. But I\nremember once reading that \"long\" fields like foo.frobnitz did not make good\nprimary keys.\n\n\nIs the field foo.id in Schema 1 superfluous? For example, wouldn't the\nreferencing from bar to foo really be done \"behind the scenes\" through some\nhidden field (oid?) instead of through the frobnitz text field? Which of\nthe two schemas would give better perfornance?\n\n\nThanks!\n\n\nkj\n\nConsider these two very similar schemas: Schema 1: CREATE TABLE foo (  id serial PRIMARY KEY,\n  frobnitz character(varying 100) NOT NULL UNIQUE); CREATE TABLE bar (  id serial PRIMARY KEY,\n  foo_id int REFERENCES foo(id)) Schema 2: CREATE TABLE foo (  frobnitz character(varying 100) PRIMARY KEY\n); CREATE TABLE bar (  id serial PRIMARY KEY,  frobnitz character(varying 100) REFERENCES foo(frobnitz)\n)  The two situations are semantically identical: each record in table bar refers to a record in table foo.  The difference is that in the first schema, this referencing is done through an \"artificial\" serial-integer primary key, while in the second schema this reference is done through a data field that happens to be unique and not null, so it can serve as primary key.\n I find Schema 1 awkward and unnatural; more specifically, foo.id seems unnecessary in light of the non-null uniqueness of \nfoo.frobnitz.  But I remember once reading that \"long\" fields like foo.frobnitz did not make good primary keys. Is the field \n\nfoo.id in Schema 1 superfluous?  For example, wouldn't the referencing from bar to foo really be done \"behind the scenes\" through some hidden field (oid?) instead of through the frobnitz text field?  Which of the two schemas would give better perfornance?\n Thanks! 
kj", "msg_date": "Sat, 14 Apr 2007 07:19:30 -0400", "msg_from": "\"Kynn Jones\" <[email protected]>", "msg_from_op": true, "msg_subject": "Basic Q on superfluous primary keys" }, { "msg_contents": "In response to \"Kynn Jones\" <[email protected]>:\n\n> Consider these two very similar schemas:\n> \n> Schema 1:\n> \n> \n> CREATE TABLE foo (\n> id serial PRIMARY KEY,\n> frobnitz character(varying 100) NOT NULL UNIQUE\n> );\n> \n> \n> CREATE TABLE bar (\n> id serial PRIMARY KEY,\n> foo_id int REFERENCES foo(id)\n> )\n> \n> \n> Schema 2:\n> \n> \n> CREATE TABLE foo (\n> frobnitz character(varying 100) PRIMARY KEY\n> );\n> \n> \n> CREATE TABLE bar (\n> id serial PRIMARY KEY,\n> frobnitz character(varying 100) REFERENCES foo(frobnitz)\n> )\n> \n> \n> \n> \n> The two situations are semantically identical: each record in table bar\n> refers to a record in table foo. The difference is that in the first\n> schema, this referencing is done through an \"artificial\" serial-integer\n> primary key, while in the second schema this reference is done through a\n> data field that happens to be unique and not null, so it can serve as\n> primary key.\n\nThe first case is call a \"surrogate key\". A little googling on that term\nwill turn up a wealth of discussion -- both for and against.\n\n> I find Schema 1 awkward and unnatural; more specifically, foo.id seems\n> unnecessary in light of the non-null uniqueness of foo.frobnitz. But I\n> remember once reading that \"long\" fields like foo.frobnitz did not make good\n> primary keys.\n\nI had a discussion about this recently on the Drupal mailing lists, at the\nend of which I promised to do some benchmarking to determine whether or\nnot text keys really do hurt performance of indexes. Unfortunately, I\nstill haven't followed through on that promise -- maybe I'll get to it\ntomorrow.\n\n> Is the field foo.id in Schema 1 superfluous? For example, wouldn't the\n> referencing from bar to foo really be done \"behind the scenes\" through some\n> hidden field (oid?) instead of through the frobnitz text field? Which of\n> the two schemas would give better perfornance?\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n\[email protected]\nPhone: 412-422-3463x4023\n\n****************************************************************\nIMPORTANT: This message contains confidential information and is\nintended only for the individual named. If the reader of this\nmessage is not an intended recipient (or the individual\nresponsible for the delivery of this message to an intended\nrecipient), please be advised that any re-use, dissemination,\ndistribution or copying of this message is prohibited. Please\nnotify the sender immediately by e-mail if you have received\nthis e-mail by mistake and delete this e-mail from your system.\nE-mail transmission cannot be guaranteed to be secure or\nerror-free as information could be intercepted, corrupted, lost,\ndestroyed, arrive late or incomplete, or contain viruses. 
The\nsender therefore does not accept liability for any errors or\nomissions in the contents of this message, which arise as a\nresult of e-mail transmission.\n****************************************************************\n", "msg_date": "Sat, 14 Apr 2007 07:59:47 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Basic Q on superfluous primary keys" }, { "msg_contents": "On 4/14/07, Bill Moran <[email protected]> wrote:\n> In response to \"Kynn Jones\" <[email protected]>:\n> > The two situations are semantically identical: each record in table bar\n> > refers to a record in table foo. The difference is that in the first\n> > schema, this referencing is done through an \"artificial\" serial-integer\n> > primary key, while in the second schema this reference is done through a\n> > data field that happens to be unique and not null, so it can serve as\n> > primary key.\n>\n> I had a discussion about this recently on the Drupal mailing lists, at the\n> end of which I promised to do some benchmarking to determine whether or\n> not text keys really do hurt performance of indexes. Unfortunately, I\n> still haven't followed through on that promise -- maybe I'll get to it\n> tomorrow.\n>\n\nThe main reason why integer indexes are faster than natural\ncounterparts is that the index is smaller and puts less pressure on\ncache. This is however offset by removing joins here and there and\nyou usually just end up indexing the data anyways. Performance is\nkind of tangential to the argument though -- I've seen databases using\nall natural keys and found them to be very clean and performant.\n\nUsing surrogate keys is dangerous and can lead to very bad design\nhabits that are unfortunately so prevalent in the software industry\nthey are virtually taught in schools. Many software frameworks assume\nyou use them and refuse to work without them (avoid!) While there is\nnothing wrong with them in principle (you are exchanging one key for\nanother as a performance optimization), they make it all too easy to\ncreate denormalized designs and tables with no real identifying\ncriteria, etc, and the resultant stinky queries to put it all back\ntogether again, (full of unions, self joins, extraneous groups, case\nstatements, etc).\n\nA good compromise in your designs is to identify your natural key but\nuse the surrogate if you have valid performance reasons:\n\nCREATE TABLE foo (\n frobnitz_id int unique,\n frobnitz character(varying 100) PRIMARY KEY\\\n [...]\n);\n\nfrobnitz_id is of course optional and not necessary in all tables. 
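Spelled out with stock varchar syntax, that compromise might look like the sketch below; the names follow the earlier frobnitz example and the serial surrogate is just one possible choice:

CREATE TABLE foo (
    frobnitz_id serial UNIQUE,              -- surrogate kept around for cheap joins
    frobnitz    varchar(100) PRIMARY KEY    -- the natural key is still enforced
);

CREATE TABLE bar (
    id          serial PRIMARY KEY,
    frobnitz_id integer REFERENCES foo (frobnitz_id)
);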
It\nmay be a pain to relate a large table with a four or five part key and\njudicious use of surrogates may be justified for performance or even\njust to keep your queries smaller:\n\ncreate table order_line_item_discount\n(\n company_name text,\n order_no int,\n line_item_seq_no int,\n discount_code text,\n primary key(company_name, order_no, line_item_seq_no, discount_code)\n)\n\nbecomes\n\ncreate table order_line_item_discount\n(\n order_line_item_id int,\n discount_code text,\n primary key (order_line_item_id, discount_code)\n)\n\nmerlin\n", "msg_date": "Mon, 16 Apr 2007 09:03:42 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Basic Q on superfluous primary keys" }, { "msg_contents": "Merlin Moncure wrote:\n> Using surrogate keys is dangerous and can lead to very bad design\n> habits that are unfortunately so prevalent in the software industry\n> they are virtually taught in schools. ... While there is\n> nothing wrong with them in principle (you are exchanging one key for\n> another as a performance optimization), they make it all too easy to\n> create denormalized designs and tables with no real identifying\n> criteria, etc,...\n\nWow, that's the opposite of everything I've ever been taught, and all my experience in the last few decades.\n\nI can't recall ever seeing a \"natural\" key that was immutable. In my business (chemistry), we've seen several disasterous situations were companies picked keys they thought were natural and immutable, and years down the road they discovered (for example) that chemical compounds they thought were pure were in fact isotopic mixtures, or simply the wrong molecule (as analytical techniques improved). Or during a corporate takeover, they discovered that two companies using the same \"natural\" keys had as much as 10% differences in their multi-million-compound databases. These errors led to six-month to year-long delays, as each of the conflicting chemical record had to be examined by hand by a PhD chemist to reclassify it.\n\nIn other businesses, almost any natural identifier you pick is subject to simple typographical errors. When you discover the errors in a field you've used as a primary key, it can be quite hard to fix, particularly if you have distributed data across several systems and schemas.\n\nWe've always recommended to our customers that all primary keys be completely information free. They should be not based on any information or combination of information from the data records. Every time the customer has not followed this advice, they've later regretted it.\n\nI'm sure there are situations where a natural key is appropriate, but I haven't seen it in my work.\n\nCraig \n", "msg_date": "Mon, 16 Apr 2007 08:02:24 -0700", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Basic Q on superfluous primary keys" }, { "msg_contents": "On 4/16/07, Craig A. James <[email protected]> wrote:\n> Merlin Moncure wrote:\n> > Using surrogate keys is dangerous and can lead to very bad design\n> > habits that are unfortunately so prevalent in the software industry\n> > they are virtually taught in schools. ... While there is\n> > nothing wrong with them in principle (you are exchanging one key for\n> > another as a performance optimization), they make it all too easy to\n> > create denormalized designs and tables with no real identifying\n> > criteria, etc,...\n\n> I can't recall ever seeing a \"natural\" key that was immutable. 
In my business (chemistry), we've seen several disasterous situations were companies picked keys they thought were natural and immutable, and years down the road they discovered (for example) that chemical compounds they thought were pure were in fact isotopic mixtures, or simply the wrong molecule (as analytical techniques improved). Or during a corporate takeover, they discovered that two companies using the same \"natural\" keys had as much as 10% differences in their multi-million-compound databases. These errors led to six-month to year-long delays, as each of the conflicting chemical record had to be examined by hand by a PhD chemist to reclassify it.\n\nwhile your example might be a good case study in proper\nclassification, it has nothing to do with key selection. it is\nespecially unclear how adding an integer to a table will somehow\nmagically solve these problems. are you claiming that a primary key\ncan't be changed?\n\nmutability is strictly a performance argument. since RI handles\ncascading primary key changes, it's simply a matter of if you are\nwilling to wait for RI to do its work or not (if not, swap in the id\nkey as in my example). the performance argument really only applies\nto specific cases, and can be considered a attribute of certain\ntables. extraordinary cases do happen, like a company overhauling its\nnumbering systems, but such cases can be dealt with by a number of\nmethods including letting RI do its thing.\n\nmerlin\n", "msg_date": "Mon, 16 Apr 2007 11:55:29 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Basic Q on superfluous primary keys" }, { "msg_contents": "Craig A. James wrote:\n> Merlin Moncure wrote:\n>> Using surrogate keys is dangerous and can lead to very bad design\n>> habits that are unfortunately so prevalent in the software industry\n>> they are virtually taught in schools. ... While there is\n>> nothing wrong with them in principle (you are exchanging one key for\n>> another as a performance optimization), they make it all too easy to\n>> create denormalized designs and tables with no real identifying\n>> criteria, etc,...\n> \n> Wow, that's the opposite of everything I've ever been taught, and all my\n> experience in the last few decades.\n> \n> ...chemistry...two companies using the same \"natural\"\n> keys had as much as 10% differences in their multi-million-compound\n> databases. These errors led to six-month to year-long delays, as each\n> of the conflicting chemical record had to be examined by hand by a PhD\n> chemist to reclassify it.\n\nThat sounds almost like a feature, not a bug - giving information\nabout what assumptions that went into the \"natural key\" need to be\nreconsidered.\n\nAnd I don't see how it would have been improved by adding a surrogate\nkey - except that the data would have been just as messed up though\nharder to see where the messups were.\n\n> We've always recommended to our customers that all primary keys be\n> completely information free. They should be not based on any\n> information or combination of information from the data records. Every\n> time the customer has not followed this advice, they've later regretted it.\n\nHmm... 
but then do you put a unique index on what the\notherwise-would-have-been-natural-primary-key columns?\n\nIf not, you tend to get into the odd situation of multiple\nrows that only vary in their surrogate key -- and it seems\nthe surrogate key is redundant.\n\n> I'm sure there are situations where a natural key is appropriate, but I\n> haven't seen it in my work.\n\nI've seen both - and indeed usually use surrogate keys for convenience;\nbut also find that having to fix incorrect assumptions in natural primary\nkeys tends to raise business issues that are worth addressing anyway.\n", "msg_date": "Mon, 16 Apr 2007 11:24:15 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Basic Q on superfluous primary keys" }, { "msg_contents": "On Mon, 16 Apr 2007, Merlin Moncure wrote:\n\n> extraordinary cases do happen, like a company overhauling its numbering \n> systems, but such cases can be dealt with by a number of methods \n> including letting RI do its thing.\n\nI think the point Craig was trying to make is that what you refer to here \nas \"extraordinary cases\" are, in fact, rather common. I've never seen a \ndatabase built on natural keys that didn't at some point turn ugly when \nsome internal or external business need suddenly invalidated the believed \nuniqueness of that key.\n\nThe last really bad one I saw was a manufacturing database that used a \ncombination of the customer code and the customer's part number as the \nkey. Surely if the customer changes their part number, we should switch \nours to match so the orders are easy to process, right? When this got fun \nwas when one large customer who released products on a yearly schedule \ndecided to change the bill of material for many of their parts for the new \nyear, but re-used the same part number; oh, and they were still ordering \nthe old parts as well. Hilarity ensued.\n\n> it is especially unclear how adding an integer to a table will somehow \n> magically solve these problems.\n\nIf the key is a integer, it's always possible to figure out a trivial map \nthat renumbers the entire database programmatically in order to merge two \nsets of data.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 16 Apr 2007 23:02:48 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Basic Q on superfluous primary keys" }, { "msg_contents": "On 4/16/07, Greg Smith <[email protected]> wrote:\n> I think the point Craig was trying to make is that what you refer to here\n> as \"extraordinary cases\" are, in fact, rather common. I've never seen a\n> database built on natural keys that didn't at some point turn ugly when\n> some internal or external business need suddenly invalidated the believed\n> uniqueness of that key.\n\nI don't think it's so terrible to add a field to a key...I too have\nworked on a ERP system based on natural keys and was quite amazed on\nhow well organized the database was. When the company decided to\nre-number all the items in the database, it was a minor pain.\nExtending a critical key would be a project any way you organize the\ndatabase IMO. Natural keys are most common in manufacturing and\naccounting systems because of the COBOL heritage, when natural keys\nwere the only way to realistically do it. 
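The earlier point that RI can absorb such a renumbering depends on the references being declared with ON UPDATE CASCADE; a minimal illustration, with invented names:

CREATE TABLE item (
    part_no text PRIMARY KEY
);

CREATE TABLE order_line (
    order_no int,
    part_no  text REFERENCES item (part_no) ON UPDATE CASCADE,
    qty      int
);

-- Renumbering a part then propagates to every referencing row automatically:
UPDATE item SET part_no = 'NEW-100' WHERE part_no = 'OLD-100';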
Unfortunately SQL really\nmissed the boat on keys...otherwise they would behave more like a\ncomposite type.\n\n> The last really bad one I saw was a manufacturing database that used a\n> combination of the customer code and the customer's part number as the\n> key. Surely if the customer changes their part number, we should switch\n> ours to match so the orders are easy to process, right? When this got fun\n> was when one large customer who released products on a yearly schedule\n> decided to change the bill of material for many of their parts for the new\n> year, but re-used the same part number; oh, and they were still ordering\n> the old parts as well. Hilarity ensued.\n\nIn the context of this debate, I see this argument all the time, with\nthe implied suffix: 'If only we used integer keys we would not have\nhad this problem...'. Either the customer identifies parts with a\npart number or they don't...and if they do identify parts with a\nnumber and recycle the numbers, you have a problem...period. Adding a\ninteger key only moves the confusion to a separate place, unless it is\nused by the user to identify the part number and then *becomes* the\nkey, or a part of it. If you hide the id from the user, then I claim\nthe data model is pretty much busted.\n\nmerlin\n", "msg_date": "Tue, 17 Apr 2007 12:17:54 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Basic Q on superfluous primary keys" }, { "msg_contents": "Merlin Moncure wrote:\n> In the context of this debate, I see this argument all the time, with\n> the implied suffix: 'If only we used integer keys we would not have\n> had this problem...'. Either the customer identifies parts with a\n> part number or they don't...and if they do identify parts with a\n> number and recycle the numbers, you have a problem...period.\n\nOn the contrary. You create a new record with the same part number. You mark the old part number \"obsolete\". Everything else (the part's description, and all the relationships that it's in, such as order history, catalog inclusion, revision history, etc.) is unaffected. New orders are placed against the new part number's DB record; for safety the old part number can have a trigger that prevent new orders from being placed.\n\nSince the part number is NOT the primary key, duplicate part numbers are not a problem. If you had instead used the part number as the primary key, you'd be dead in the water.\n\nYou can argue that the customer is making a dumb decision by reusing catalog numbers, and I'd agree. But they do it, and as database designers we have to handle it. In my particular system, we aggregate information from several hundred companies, and this exact scenario happens frequently. Since we're only aggregating information, we have no control over the data that these companies provide. If we'd used catalog numbers for primary keys, we'd have big problems.\n\nCraig\n\n\n\n\n", "msg_date": "Tue, 17 Apr 2007 20:57:01 -0700", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Basic Q on superfluous primary keys" }, { "msg_contents": "Merlin Moncure wrote:\n> In the context of this debate, I see this argument all the time, with\n> the implied suffix: 'If only we used integer keys we would not have\n> had this problem...'. Either the customer identifies parts with a\n> part number or they don't...and if they do identify parts with a\n> number and recycle the numbers, you have a problem...period.\n\nOn the contrary. 
You create a new record with the same part number. You mark the old part number \"obsolete\". Everything else (the part's description, and all the relationships that it's in, such as order history, catalog inclusion, revision history, etc.) is unaffected. New orders are placed against the new part number's DB record; for safety the old part number can have a trigger that prevent new orders from being placed.\n\nSince the part number is NOT the primary key, duplicate part numbers are not a problem. If you had instead used the part number as the primary key, you'd be dead in the water.\n\nYou can argue that the customer is making a dumb decision by reusing catalog numbers, and I'd agree. But they do it, and as database designers we have to handle it. In my particular system, we aggregate information from several hundred companies, and this exact scenario happens frequently. Since we're only aggregating information, we have no control over the data that these companies provide. If we'd used catalog numbers for primary keys, we'd have big problems.\n\nCraig\n\n\n\n\n\n", "msg_date": "Tue, 17 Apr 2007 21:06:15 -0700", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Basic Q on superfluous primary keys" }, { "msg_contents": "Craig A. James wrote:\n\n> Since we're only aggregating information, we have \n> no control over the data that these companies provide.\n\nAnd at the end of the day that's the root of the problem. It's easy to \nbe lulled into \"well it looks like a primary key\" rather than being able \nto guarantee it.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 18 Apr 2007 10:22:36 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Basic Q on superfluous primary keys" }, { "msg_contents": "On Wed, 18 Apr 2007, Richard Huxton wrote:\n\n> And at the end of the day that's the root of the problem. It's easy to be \n> lulled into \"well it looks like a primary key\" rather than being able to \n> guarantee it.\n\nIn some of these cases it is guaranteed to be a primary key given all \navailable information at the time. The funny thing about unexpected \nchanges to a business model is that you never expect them.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 18 Apr 2007 08:19:55 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Basic Q on superfluous primary keys" }, { "msg_contents": "On 4/18/07, Craig A. James <[email protected]> wrote:\n> Merlin Moncure wrote:\n> > In the context of this debate, I see this argument all the time, with\n> > the implied suffix: 'If only we used integer keys we would not have\n> > had this problem...'. Either the customer identifies parts with a\n> > part number or they don't...and if they do identify parts with a\n> > number and recycle the numbers, you have a problem...period.\n>\n> On the contrary. You create a new record with the same part number. You mark the old part number \"obsolete\". Everything else (the part's description, and all the relationships that it's in, such as order history, catalog inclusion, revision history, etc.) is unaffected. New orders are placed against the new part number's DB record; for safety the old part number can have a trigger that prevent new orders from being placed.\n>\n> Since the part number is NOT the primary key, duplicate part numbers are not a problem. 
If you had instead used the part number as the primary key, you'd be dead in the water.\n\nYou are redefining the primary key to be (part_number,\nobsoletion_date). Now, if you had not anticipated that in the\noriginal design (likely enough), you do have to refactor queries that\njoin on the table...so what? If that's too much work, you can use a\nview to take care of the problem (which may be a good idea anyways).\n*you have to refactor the system anyways because you are now allowing\nduplicate part numbers where previously (from the perspective of the\nuser), they were unique *.\n\nThe hidden advantage of pushing the full key through the database is\nit tends to expose holes in the application/business logic. Chances\nare some query is not properly distinguishing obsoleted parts and now\nthe real problems come...surrogate keys do not remove complexity, they\nsimply sweep it under the rug.\n\nmerlin\n", "msg_date": "Wed, 18 Apr 2007 10:44:44 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Basic Q on superfluous primary keys" }, { "msg_contents": "Merlin Moncure wrote:\n>> Since the part number is NOT the primary key, duplicate part numbers \n>> are not a problem. If you had instead used the part number as the \n>> primary key, you'd be dead in the water.\n> \n> You are redefining the primary key to be (part_number,\n> obsoletion_date). Now, if you had not anticipated that in the\n> original design (likely enough), you do have to refactor queries that\n> join on the table...so what? If that's too much work, you can use a\n> view to take care of the problem (which may be a good idea anyways).\n> *you have to refactor the system anyways because you are now allowing\n> duplicate part numbers where previously (from the perspective of the\n> user), they were unique *.\n> \n> The hidden advantage of pushing the full key through the database is\n> it tends to expose holes in the application/business logic. Chances\n> are some query is not properly distinguishing obsoleted parts and now\n> the real problems come...surrogate keys do not remove complexity, they\n> simply sweep it under the rug.\n\nThis really boils down to an object-oriented perspective. I have an object, a customer's catalog entry. It has properties such as catalog number, description, etc, and whether it's obsolete or not. Management of the object (its relation to other objects, its history, etc.) should NOT depend on the object's specific definition.\n\nThis is true whether the object is represented in Lisp, C++, Perl, or (in this case) an SQL schema. Good object oriented design abstracts the object and its behavior from management of the object. In C++, Perl, etc., we manage objects via a pointer or object reference. In SQL, we reference objects by an *arbitrary* integer that is effectively a pointer to the object.\n\nWhat you're suggesting is that I should break the object-oriented encapsulation by pulling out specific fields of the object, exposing those internal object details to the applications, and spreading those details across the whole schema. And I argue that this is wrong, because it breaks encapsulation. By exposing the details of the object, if the details change, *all* of your relationships break, and all of your applications have to change. And I've never seen a system where breaking object-oriented encapsulation was a good long-term solution. 
Systems change, and object-oriented techniques were invented to help manage change.\n\nThis is one of the reasons the Postgres project was started way back when: To bring object-oriented techniques to the relational-database world.\n\nCraig\n", "msg_date": "Wed, 18 Apr 2007 09:05:13 -0700", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Basic Q on superfluous primary keys" }, { "msg_contents": "I think a database with all natural keys is unrealistic. For example if you\nhave a table that refers to people, are you going to use their name as a\nprimary key? Names change all the time due to things like marriage,\ndivorce, or trouble with the law. We have tables with 20 million rows which\nreference back to a table of people, and if I used the person's name as key,\nit would be a major pain when somebody's name changes. Even if there is\nreferential integrity, one person might be referred to by 25% of the 20\nmillion rows, so the update would take quite a long time. Also the table\nwill be filled with dead rows and the indexes will likely be bloated. If I\nwant to clean that up, it will take a vacuum full or a cluster which will\nlock the whole table and run for hours. If I use a surrogate key, I can\nchange their name in one row and be done with it. \n\nJust my 2 cents.\n\nDave\n\n", "msg_date": "Wed, 18 Apr 2007 13:09:57 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Basic Q on superfluous primary keys" }, { "msg_contents": "On Tue, 2007-04-17 at 21:06 -0700, Craig A. James wrote:\n> Merlin Moncure wrote:\n> > In the context of this debate, I see this argument all the time, with\n> > the implied suffix: 'If only we used integer keys we would not have\n> > had this problem...'. Either the customer identifies parts with a\n> > part number or they don't...and if they do identify parts with a\n> > number and recycle the numbers, you have a problem...period.\n> \n> On the contrary. You create a new record with the same part number. You mark the old part number \"obsolete\". Everything else (the part's description, and all the relationships that it's in, such as order history, catalog inclusion, revision history, etc.) is unaffected. New orders are placed against the new part number's DB record; for safety the old part number can have a trigger that prevent new orders from being placed.\n> \n> Since the part number is NOT the primary key, duplicate part numbers are not a problem. If you had instead used the part number as the primary key, you'd be dead in the water.\n> \n> You can argue that the customer is making a dumb decision by reusing catalog numbers, and I'd agree. But they do it, and as database designers we have to handle it. In my particular system, we aggregate information from several hundred companies, and this exact scenario happens frequently. Since we're only aggregating information, we have no control over the data that these companies provide. If we'd used catalog numbers for primary keys, we'd have big problems.\n> \n\nStoring data is easy.\n\nThe difficulty lies in storing data in such a way that your assumptions\nabout the data remain valid and your queries still answer the questions\nthat you think they answer.\n\nBecause an internal ID field has no meaning outside of the database\n(some auto-generated keys do have meaning outside the database, but I'm\nnot talking about those), you can't effectively query based on an\ninternal id any more than you can query by the ctid. 
So what do you\nquery by then? You query by natural keys anyway. Internal id fields are\nan implementation detail related to performance (the real topic of this\ndiscussion).\n\nIf you have two parts with the same part id, what does that mean? Sure,\nyou can store the data, but then the queries that assumed that data was\nunique no longer hold. Sometimes you need two parts with the same part\nid, but you have to know the meaning in order to query based on that\ndata.\n\nLet me ask these questions:\n - Do you think that all of your relations have an internal id? \n - Do you think that all the internal ids you use are unique in the\nrelations in which they appear?\n\nIf you answer \"yes\" to either question, consider that every query on\nthat data is also a relation and so are subselects and intermediate\nresults. Do those all have an id? If not, why not? How do you join a\nvirtual relation to a physical relation if the virtual relation has no\ninternal id? Is the id field still unique in the result of a join or\nCartesian product?\n\nRegards,\n\tJeff Davis\n\n\n", "msg_date": "Wed, 18 Apr 2007 12:08:44 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Basic Q on superfluous primary keys" }, { "msg_contents": "On 4/18/07, Dave Dutcher <[email protected]> wrote:\n> I think a database with all natural keys is unrealistic. For example if you\n> have a table that refers to people, are you going to use their name as a\n> primary key? Names change all the time due to things like marriage,\n> divorce, or trouble with the law. We have tables with 20 million rows which\n> reference back to a table of people, and if I used the person's name as key,\n> it would be a major pain when somebody's name changes. Even if there is\n> referential integrity, one person might be referred to by 25% of the 20\n> million rows, so the update would take quite a long time. Also the table\n> will be filled with dead rows and the indexes will likely be bloated. If I\n> want to clean that up, it will take a vacuum full or a cluster which will\n> lock the whole table and run for hours. If I use a surrogate key, I can\n> change their name in one row and be done with it.\n\nThat's perfectly reasonable (I mentioned this upthread)...there are a\ncouple of corner cases where RI costs too much Exchanging a surrogate\nfor a natural is a valid performance consideration. Usually, the\nperformance win is marginal at best (and your example suggests\npossible normalization issues in the child table), sometimes there is\nno alternative....updating 5 million rows is obviously nasty. That\nsaid -- if the cost of update was zero, would you still do it that\nway? I'm trying to separate performance related issues, which are\nreasonable and valid depending on the situation, with good design\nprinciples.\n\nmerlin\n", "msg_date": "Thu, 19 Apr 2007 05:46:18 +0530", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Basic Q on superfluous primary keys" }, { "msg_contents": "I have a table with 2.5 million real[] arrays. (They are points in a\ntime series.) Given a new array X, I'd like to find, say, the 25\nclosest to X in some sense--for simplification, let's just say in the\nusual vector norm. Speed is critical here, and everything I have tried\nhas been too slow.\n\nI imported the cube contrib package, and I tried creating an index on\na cube of the last 6 elements, which are the most important. 
Then I\ntested the 2.5MM rows for being contained within a tolerance of the\nlast 6 elements of X, +/- 0.1 in each coordinate, figuring that would\nbe an indexed search (which I CLUSTERED on). I then ran the sort on\nthis smaller set. The index was used, but it was still too slow. I\nalso tried creating new columns with rounded int2 values of the last 6\ncoordinates and made a multicolumn index.\n\nFor each X the search is taking about 4-15 seconds which is above my\ntarget at least one order of magnitude. Absolute numbers are dependent\non my hardware and settings, and some of this can be addressed with\nconfiguration tweaks, etc., but first I think I need to know the\noptimum data structure/indexing strategy.\n\nIs anyone on the list experienced with this sort of issue?\n\nThanks.\nAndrew Lazarus [email protected]\n\n", "msg_date": "Fri, 20 Apr 2007 12:07:29 -0700", "msg_from": "Andrew Lazarus <[email protected]>", "msg_from_op": false, "msg_subject": "index structure for 114-dimension vector" }, { "msg_contents": "On Fri, 2007-04-20 at 12:07 -0700, Andrew Lazarus wrote:\n> I have a table with 2.5 million real[] arrays. (They are points in a\n> time series.) Given a new array X, I'd like to find, say, the 25\n> closest to X in some sense--for simplification, let's just say in the\n> usual vector norm. Speed is critical here, and everything I have tried\n> has been too slow.\n> \n> I imported the cube contrib package, and I tried creating an index on\n> a cube of the last 6 elements, which are the most important. Then I\n\nHave you tried just making normal indexes on each of the last 6 elements\nand see if postgresql will use a BitmapAnd to combine them? \n\nRegards,\n Jeff Davis\n\n\n\n", "msg_date": "Fri, 20 Apr 2007 14:41:24 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index structure for 114-dimension vector" }, { "msg_contents": "Jeff Davis wrote:\n> On Fri, 2007-04-20 at 12:07 -0700, Andrew Lazarus wrote:\n>> I have a table with 2.5 million real[] arrays. (They are points in a\n>> time series.) Given a new array X, I'd like to find, say, the 25\n>> closest to X in some sense--for simplification, let's just say in the\n>> usual vector norm. Speed is critical here, and everything I have tried\n>> has been too slow.\n>>\n>> I imported the cube contrib package, and I tried creating an index on\n>> a cube of the last 6 elements, which are the most important. Then I\n> \n> Have you tried just making normal indexes on each of the last 6 elements\n> and see if postgresql will use a BitmapAnd to combine them? \n> \n>\n\nI don't think that will work for the vector norm i.e:\n\n|x - y| = sqrt(sum over j ((x[j] - y[j])^2))\n\n\nCheers\n\nMark\n", "msg_date": "Sat, 21 Apr 2007 11:42:35 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index structure for 114-dimension vector" }, { "msg_contents": "Because I know the 25 closest are going to be fairly close in each\ncoordinate, I did try a multicolumn index on the last 6 columns and\nused a +/- 0.1 or 0.2 tolerance on each. (The 25 best are very probably inside\nthat hypercube on the distribution of data in question.)\n\nThis hypercube tended to have 10-20K records, and took at least 4\nseconds to retrieve. 
I was a little surprised by how long that took.\nSo I'm wondering if my data representation is off the wall.\n\nI should mention I also tried a cube index using gist on all 114\nelements, but CREATE INDEX hadn't finished in 36 hours, when I killed\nit, and I wasn't in retrospect sure an index that took something like\n6GB by itself would be helpful on a 2GB of RAM box.\n\nMK> I don't think that will work for the vector norm i.e:\n\nMK> |x - y| = sqrt(sum over j ((x[j] - y[j])^2))\n\n\nMK> Cheers\n\nMK> Mark\n\n\n-- \nSincerely,\n Andrew Lazarus mailto:[email protected]", "msg_date": "Fri, 20 Apr 2007 17:28:20 -0700", "msg_from": "Andrew Lazarus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index structure for 114-dimension vector" }, { "msg_contents": "Andrew Lazarus wrote:\n> Because I know the 25 closest are going to be fairly close in each\n> coordinate, I did try a multicolumn index on the last 6 columns and\n> used a +/- 0.1 or 0.2 tolerance on each. (The 25 best are very probably inside\n> that hypercube on the distribution of data in question.)\n> \n> This hypercube tended to have 10-20K records, and took at least 4\n> seconds to retrieve. I was a little surprised by how long that took.\n> So I'm wondering if my data representation is off the wall.\n> \n> I should mention I also tried a cube index using gist on all 114\n> elements, but CREATE INDEX hadn't finished in 36 hours, when I killed\n> it, and I wasn't in retrospect sure an index that took something like\n> 6GB by itself would be helpful on a 2GB of RAM box.\n> \n> MK> I don't think that will work for the vector norm i.e:\n> \n> MK> |x - y| = sqrt(sum over j ((x[j] - y[j])^2))\n> \n> \n\nSorry, in that case it probably *is* worth trying out 6 single column \nindexes and seeing if they get bitmap and'ed together...\n\nMark\n", "msg_date": "Sat, 21 Apr 2007 12:42:35 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index structure for 114-dimension vector" }, { "msg_contents": "Andrew Lazarus <[email protected]> writes:\n> Because I know the 25 closest are going to be fairly close in each\n> coordinate, I did try a multicolumn index on the last 6 columns and\n> used a +/- 0.1 or 0.2 tolerance on each. (The 25 best are very probably inside\n> that hypercube on the distribution of data in question.)\n\n> This hypercube tended to have 10-20K records, and took at least 4\n> seconds to retrieve. I was a little surprised by how long that took.\n> So I'm wondering if my data representation is off the wall.\n\nA multicolumn btree index isn't going to be helpful at all. Jeff's idea\nof using six single-column indexes with the above query might work,\nthough.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 20 Apr 2007 20:44:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index structure for 114-dimension vector " }, { "msg_contents": "On Apr 20, 12:07 pm, [email protected] (Andrew Lazarus) wrote:\n> I have a table with 2.5 million real[] arrays. (They are points in a\n> time series.) Given a new array X, I'd like to find, say, the 25\n> closest to X in some sense--for simplification, let's just say in the\n> usualvectornorm. Speed is critical here, and everything I have tried\n> has been too slow.\n>\n> I imported the cube contrib package, and I tried creating an index on\n> a cube of the last 6 elements, which are the most important. 
Then I\n> tested the 2.5MM rows for being contained within a tolerance of the\n> last 6 elements of X, +/- 0.1 in each coordinate, figuring that would\n> be an indexed search (which I CLUSTERED on). I then ran the sort on\n> this smaller set. The index was used, but it was still too slow. I\n> also tried creating new columns with rounded int2 values of the last 6\n> coordinates and made a multicolumn index.\n>\n> For each X the search is taking about 4-15 seconds which is above my\n> target at least one order of magnitude. Absolute numbers are dependent\n> on my hardware and settings, and some of this can be addressed with\n> configuration tweaks, etc., but first I think I need to know the\n> optimum data structure/indexing strategy.\n>\n> Is anyone on the list experienced with this sort of issue?\n>\n> Thanks.\n> Andrew Lazarus [email protected]\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n\nHaving worked in high dimensional spaces a lot in my career I think\nyou'll find that there are\nmathematical limits in terms of speed. In practical terms, a seq_scan\nwill be unavoidable since\non first approximation you are limited to doing an exhaustive search\nin 101-dimensional space unless\nyou make approximations or dimensionality reductions of some kind.\n\nRead up on the Curse of Dimensionality: http://en.wikipedia.org/wiki/Curse_of_dimensionality\n\nHave you considered dimension reduction techniques such as Singular\nValue Decomposition,\nPrincipal Components Analysis, etc.?\n\n", "msg_date": "23 Apr 2007 10:49:31 -0700", "msg_from": "C Storm <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index structure for 114-dimension vector" }, { "msg_contents": "On 4/20/07, Andrew Lazarus <[email protected]> wrote:\n> I have a table with 2.5 million real[] arrays. (They are points in a\n> time series.) Given a new array X, I'd like to find, say, the 25\n> closest to X in some sense--for simplification, let's just say in the\n> usual vector norm. Speed is critical here, and everything I have tried\n> has been too slow.\n\nLet me chime in with the observation that this is a multidimensional\nnearest neighbour (reverse nearest neighbour and its close cousin,\nk-NN) that is well known in statistics, and particularly relevant to\nstatistical learning and classification. Knowing the jargon might help\nyou dig up efficient algorithms to mine your data; there are tons of\nfascinating papers available through Citeseer.\n\nIn particular, I recommend the paper \"Efficient k-NN Search on\nVertically Decomposed Data\" by de Vries et al, SIGMOD 2002 (PDF here:\nhttp://citeseer.ist.psu.edu/618138.html), if only for inspiration. It\nproposes an algorithm called BOND to drastically reduce the search\nspace by probalistic means. They give an example using image\nhistograms, but the algorithm applies to any multidimensional data.\nBriefly put, it points out that proximity comparison can be computed\nvertically, a few dimensions at a time, and entire subsets can be\nthrown away when it's apparent that they are below a statistically\nderived lower bound. 
The only gotcha is that the algorithm derives\nmuch of its performance from the assumption that your data is\nvertically decomposed, one table per dimension, otherwise the search\neffectively incurs a sequential scan of the entire dataset, and then\nyou're pretty much back to square one.\n\nThe most common approach to nearest neighbour search is to use a\nspatial data structure. The classic algorithm is the kd-tree\n(http://en.wikipedia.org/wiki/Kd-tree) and there's the newer K-D-B\ntree, neither of which are available in PostgreSQL. If I remember\ncorrectly, R-trees have also been shown to be useful for high numbers\nof dimensions; with PostgreSQL you have R-trees and even better\nR-tree-equivalent support through GiST. I have no idea whether you can\nactually munge your integer vectors into something GiST can index and\nsearch, but it's a thought. (GiST, presumably, can also theoretically\nindex kd-trees and other spatial trees.)\n\nAlexander.\n", "msg_date": "Fri, 27 Apr 2007 00:34:36 +0200", "msg_from": "\"Alexander Staubo\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index structure for 114-dimension vector" }, { "msg_contents": "On Fri, 27 Apr 2007, Alexander Staubo wrote:\n\n> On 4/20/07, Andrew Lazarus <[email protected]> wrote:\n>> I have a table with 2.5 million real[] arrays. (They are points in a\n>> time series.) Given a new array X, I'd like to find, say, the 25\n>> closest to X in some sense--for simplification, let's just say in the\n>> usual vector norm. Speed is critical here, and everything I have tried\n>> has been too slow.\n>\n> Let me chime in with the observation that this is a multidimensional\n> nearest neighbour (reverse nearest neighbour and its close cousin,\n> k-NN) that is well known in statistics, and particularly relevant to\n> statistical learning and classification. Knowing the jargon might help\n> you dig up efficient algorithms to mine your data; there are tons of\n> fascinating papers available through Citeseer.\n>\n> In particular, I recommend the paper \"Efficient k-NN Search on\n> Vertically Decomposed Data\" by de Vries et al, SIGMOD 2002 (PDF here:\n> http://citeseer.ist.psu.edu/618138.html), if only for inspiration. It\n> proposes an algorithm called BOND to drastically reduce the search\n> space by probalistic means. They give an example using image\n> histograms, but the algorithm applies to any multidimensional data.\n> Briefly put, it points out that proximity comparison can be computed\n> vertically, a few dimensions at a time, and entire subsets can be\n> thrown away when it's apparent that they are below a statistically\n> derived lower bound. The only gotcha is that the algorithm derives\n> much of its performance from the assumption that your data is\n> vertically decomposed, one table per dimension, otherwise the search\n> effectively incurs a sequential scan of the entire dataset, and then\n> you're pretty much back to square one.\n>\n> The most common approach to nearest neighbour search is to use a\n> spatial data structure. The classic algorithm is the kd-tree\n> (http://en.wikipedia.org/wiki/Kd-tree) and there's the newer K-D-B\n> tree, neither of which are available in PostgreSQL. If I remember\n> correctly, R-trees have also been shown to be useful for high numbers\n> of dimensions; with PostgreSQL you have R-trees and even better\n> R-tree-equivalent support through GiST. I have no idea whether you can\n> actually munge your integer vectors into something GiST can index and\n> search, but it's a thought. 
(GiST, presumably, can also theoretically\n> index kd-trees and other spatial trees.)\n\nyou're right, but currently only theoretically due to interface restriction.\nWe have plan to improve it sometime. There was SP-GiST project, which \ncould be used for k-d-b tree, see http://www.cs.purdue.edu/spgist/\nI don't know if it works with 8.2 version. Also, it doesn't supports\nconcurrency and recovery\n\n>\n> Alexander.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Fri, 27 Apr 2007 08:42:37 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index structure for 114-dimension vector" }, { "msg_contents": "On 21-4-2007 1:42 Mark Kirkwood wrote:\n> I don't think that will work for the vector norm i.e:\n> \n> |x - y| = sqrt(sum over j ((x[j] - y[j])^2))\n\nI don't know if this is usefull here, but I was able to rewrite that \nalgorithm for a set of very sparse vectors (i.e. they had very little \noverlapping factors) to something like:\n|x - y| = sum over j (x[j]^2) + sum over j (y[j]^2)\n + for each j where x[j] and y[j] are both non-zero: - (x[j]^2 + \ny[j]^2) + (x[j] - y[j])^2\n\nThe first two parts sums can be calculated only once. So if you have \nvery little overlap, this is therefore much more efficient (if there is \nno overlap at all you end up with x[j]^2 + y[j]^2 anyway). Besides, this \nrewritten calculation allows you to store the X and Y vectors using a \ntrivial table-layout vector(x,i,value) which is only filled with \nnon-zero's and which you can trivially self-join to find the closest \nmatches. You don't care about the j's where there is either no x or \ny-value anyway with this rewrite.\n\nI can compare over 1000 y's of on average 100 elements to two x's of \nover 1000 elements on just a single 1.8Ghz amd processor. (I use it for \na bi-kmeans algorithm, so there are only two buckets to compare to).\n\nSo it might be possible to rewrite your algorithm to be less \ncalculation-intensive. Obviously, with a dense-matrix this isn't going \nto work, but there may be other ways to circumvent parts of the \nalgorithm or to cache large parts of it.\nIt might also help to extract only the 6 relevant columns into a \nseperate temporary table which will have much smaller records and thus \ncan fit more records per page.\n\nBest regards,\n\nArjen\n", "msg_date": "Fri, 27 Apr 2007 08:07:38 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index structure for 114-dimension vector" }, { "msg_contents": "Let me just thank the list, especially for the references. 
(I found\nsimilar papers myself with Google: and to think I have a university\nlibrary alumni card and barely need it any more!)\n\nI'll write again on the sorts of results I get.", "msg_date": "Tue, 1 May 2007 11:09:24 -0700", "msg_from": "Andrew Lazarus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index structure for 114-dimension vector" }, { "msg_contents": "On 5/1/07, Andrew Lazarus <[email protected]> wrote:\n> Let me just thank the list, especially for the references. (I found\n> similar papers myself with Google: and to think I have a university\n> library alumni card and barely need it any more!)\n>\n> I'll write again on the sorts of results I get.\n\nLooking forward to hearing about them. I have worked with such dataset\nproblems, but never attempted to apply them to a relational database\nsuch as PostgreSQL. If you want speed, nothing beats in-memory vectors\non a modern CPU architecture.\n\nAlexander.\n", "msg_date": "Tue, 1 May 2007 22:27:23 +0200", "msg_from": "\"Alexander Staubo\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index structure for 114-dimension vector" } ]
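To make the advice in this thread concrete, here is a minimal sketch of the approach Jeff and Tom suggest: six single-column expression indexes over the last six coordinates, a bounding-box filter the planner can combine with a BitmapAnd, and an explicit sort on squared distance for the final top 25. The table name series, the column vec real[], the element positions 109 through 114, the query-point literals and the 0.1 tolerances are all hypothetical stand-ins for the actual schema; whether the planner really combines the indexes, and whether the box is tight enough to contain the true 25 nearest rows, has to be checked with EXPLAIN ANALYZE on the real data.

  -- hypothetical schema: series(id integer, vec real[]) with 114 elements per row
  CREATE INDEX series_v109_idx ON series ((vec[109]));
  CREATE INDEX series_v110_idx ON series ((vec[110]));
  CREATE INDEX series_v111_idx ON series ((vec[111]));
  CREATE INDEX series_v112_idx ON series ((vec[112]));
  CREATE INDEX series_v113_idx ON series ((vec[113]));
  CREATE INDEX series_v114_idx ON series ((vec[114]));
  ANALYZE series;

  -- query point (x109..x114) written out as literals; the WHERE clause is the
  -- indexable bounding box, the ORDER BY does the exact ranking inside it
  SELECT id,
         (vec[109] - 0.12)^2 + (vec[110] - 0.34)^2 + (vec[111] - 0.56)^2
       + (vec[112] - 0.78)^2 + (vec[113] - 0.90)^2 + (vec[114] - 0.21)^2 AS dist2
    FROM series
   WHERE vec[109] BETWEEN 0.02 AND 0.22
     AND vec[110] BETWEEN 0.24 AND 0.44
     AND vec[111] BETWEEN 0.46 AND 0.66
     AND vec[112] BETWEEN 0.68 AND 0.88
     AND vec[113] BETWEEN 0.80 AND 1.00
     AND vec[114] BETWEEN 0.11 AND 0.31
   ORDER BY dist2
   LIMIT 25;

If the box still returns tens of thousands of rows, the sort itself becomes the dominant cost, which is where the dimension-reduction ideas raised later in the thread (BOND-style pruning, SVD/PCA) come in: rank on a few informative coordinates first and refine only the survivors.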
[ { "msg_contents": "I have performance problem with the following simple update query:\n\n UPDATE posts\n SET num_views = num_views + 1\n WHERE post_id IN (2526,5254,2572,4671,25);\n\nThe table \"posts\" is a large table with a number of foreign keys (FK).\n\nIt seems that the FK triggers for the table are evaluated even though\nnone of the FK columns are altered. In fact, these FK triggers seems to\nconstitute a considerable part of the total execution time. See the\nbelow EXPLAIN ANALYZE.\n\nWhy are these FK triggers evaluated at all and why do they take so much\ntime?\n\n------\n=> EXPLAIN ANALYZE update posts set num_views = num_views + 1 where\npost_id in (2526,5254,2572,4671,25);\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on posts (cost=10.02..29.81 rows=5 width=1230)\n(actual time=0.146..0.253 rows=5 loops=1)\n Recheck Cond: ((post_id = 2526) OR (post_id = 5254) OR (post_id =\n2572) OR (post_id = 4671) OR (post_id = 25))\n -> BitmapOr (cost=10.02..10.02 rows=5 width=0) (actual\ntime=0.105..0.105 rows=0 loops=1)\n -> Bitmap Index Scan on posts_pkey (cost=0.00..2.00 rows=1\nwidth=0) (actual time=0.053..0.053 rows=2 loops=1)\n Index Cond: (post_id = 2526)\n -> Bitmap Index Scan on posts_pkey (cost=0.00..2.00 rows=1\nwidth=0) (actual time=0.012..0.012 rows=2 loops=1)\n Index Cond: (post_id = 5254)\n -> Bitmap Index Scan on posts_pkey (cost=0.00..2.00 rows=1\nwidth=0) (actual time=0.008..0.008 rows=2 loops=1)\n Index Cond: (post_id = 2572)\n -> Bitmap Index Scan on posts_pkey (cost=0.00..2.00 rows=1\nwidth=0) (actual time=0.010..0.010 rows=2 loops=1)\n Index Cond: (post_id = 4671)\n -> Bitmap Index Scan on posts_pkey (cost=0.00..2.00 rows=1\nwidth=0) (actual time=0.011..0.011 rows=2 loops=1)\n Index Cond: (post_id = 25)\n Trigger for constraint posts_question_id_fkey: time=50.031 calls=5\n Trigger for constraint posts_author_id_fkey: time=22.330 calls=5\n Trigger for constraint posts_language_id_fkey: time=1.282 calls=5\n Trigger posts_tsvectorupdate: time=61.659 calls=5\n Total runtime: 174.230 ms\n(18 rows)\n", "msg_date": "Sat, 14 Apr 2007 15:51:37 +0200", "msg_from": "cluster <[email protected]>", "msg_from_op": true, "msg_subject": "FK triggers misused?" }, { "msg_contents": "cluster <[email protected]> writes:\n> It seems that the FK triggers for the table are evaluated even though\n> none of the FK columns are altered.\n\nHm, they're not supposed to be, at least not in reasonably modern\nPG releases (and one that breaks out trigger runtime in EXPLAIN ANALYZE\nshould be modern enough IIRC). Exactly which PG release are you\nrunning? Can you provide a self-contained test case?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 15 Apr 2007 01:29:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FK triggers misused? " }, { "msg_contents": "On 2007-04-15, Tom Lane <[email protected]> wrote:\n> cluster <[email protected]> writes:\n>> It seems that the FK triggers for the table are evaluated even though\n>> none of the FK columns are altered.\n>\n> Hm, they're not supposed to be, at least not in reasonably modern\n> PG releases (and one that breaks out trigger runtime in EXPLAIN ANALYZE\n> should be modern enough IIRC). Exactly which PG release are you\n> running? 
Can you provide a self-contained test case?\n\nLooking at current CVS code the RI check seems to be skipped on update of\nthe _referred to_ table if the old and new values match, but not on update\nof the _referring_ table.\n\n-- \nAndrew, Supernews\nhttp://www.supernews.com - individual and corporate NNTP services\n", "msg_date": "Sun, 15 Apr 2007 05:42:49 -0000", "msg_from": "Andrew - Supernews <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FK triggers misused?" }, { "msg_contents": "Andrew - Supernews <[email protected]> writes:\n> Looking at current CVS code the RI check seems to be skipped on update of\n> the _referred to_ table if the old and new values match, but not on update\n> of the _referring_ table.\n\nNo, both sides are supposed to be tested, see lines 3350-3395 in \nsrc/backend/commands/trigger.c. Or do you see something broken there?\nIt works for me in a quick test.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 15 Apr 2007 11:22:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FK triggers misused? " }, { "msg_contents": "On 2007-04-15, Tom Lane <[email protected]> wrote:\n> Andrew - Supernews <[email protected]> writes:\n>> Looking at current CVS code the RI check seems to be skipped on update of\n>> the _referred to_ table if the old and new values match, but not on update\n>> of the _referring_ table.\n>\n> No, both sides are supposed to be tested, see lines 3350-3395 in \n> src/backend/commands/trigger.c. Or do you see something broken there?\n> It works for me in a quick test.\n\nHm, you're right; I was looking at the logic in the triggers themselves\n(in ri_triggers.c).\n\nSo the next question is, what pg version is the original poster using?\nbecause 8.1.x doesn't report trigger execution times, and 8.2.x would use\na single bitmap index scan with an = ANY condition, not a BitmapOr.\n\n-- \nAndrew, Supernews\nhttp://www.supernews.com - individual and corporate NNTP services\n", "msg_date": "Sun, 15 Apr 2007 22:06:59 -0000", "msg_from": "Andrew - Supernews <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FK triggers misused?" }, { "msg_contents": "> So the next question is, what pg version is the original poster using?\n> because 8.1.x doesn't report trigger execution times, and 8.2.x would use\n> a single bitmap index scan with an = ANY condition, not a BitmapOr.\n\nI have tried 8.1.0 and 8.1.3 for this query.\n", "msg_date": "Tue, 17 Apr 2007 01:12:28 +0200", "msg_from": "cluster <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FK triggers misused?" }, { "msg_contents": "cluster <[email protected]> writes:\n>> So the next question is, what pg version is the original poster using?\n>> because 8.1.x doesn't report trigger execution times, and 8.2.x would use\n>> a single bitmap index scan with an = ANY condition, not a BitmapOr.\n\n> I have tried 8.1.0 and 8.1.3 for this query.\n\nChecking the code, 8.1.x does report trigger times, so AndrewSN is\nmistaken on that point.\n\nHowever, it's also the case that 8.1 does have the suppress-the-trigger\nlogic for FKs, and it works fine for me in a simple test. 
I'm using\n8.1 branch tip, but there are no relevant changes since 8.1.0 as far\nas I can see in the CVS logs.\n\nWhat is that non-FK trigger shown in your results?\n\n> Trigger posts_tsvectorupdate: time=61.659 calls=5\n\nCould it possibly be firing an extra update on the table?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 16 Apr 2007 19:49:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FK triggers misused? " }, { "msg_contents": "Idly thumbing through the code, I came across something that might\npossibly explain your results. Do the rows being updated contain\nNULLs in the foreign-key columns? I see that ri_KeysEqual() treats\ntwo null values as not equal, which might be overzealous respect for\nSQL null semantics in this context.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 16 Apr 2007 19:55:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FK triggers misused? " }, { "msg_contents": " > Do the rows being updated contain\n> NULLs in the foreign-key columns?\n\nNo, all FK columns are non-NULL. It is very strange.\n", "msg_date": "Tue, 17 Apr 2007 22:53:51 +0200", "msg_from": "cluster <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FK triggers misused?" } ]
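For reference, a self-contained test case of the kind Tom asks for can be quite small. Something along these lines, with the schema cut down to a single foreign key and purely hypothetical names, should show whether the RI triggers are being skipped when no key column changes, by looking at the "Trigger for constraint ..." lines that EXPLAIN ANALYZE prints:

  CREATE TABLE authors (
      author_id integer PRIMARY KEY,
      name      text
  );

  CREATE TABLE posts (
      post_id   integer PRIMARY KEY,
      author_id integer NOT NULL REFERENCES authors (author_id),
      num_views integer NOT NULL DEFAULT 0
  );

  INSERT INTO authors VALUES (1, 'test author');
  INSERT INTO posts (post_id, author_id) VALUES (1, 1);
  INSERT INTO posts (post_id, author_id) VALUES (2, 1);

  -- num_views is not part of the foreign key, so on a release where the
  -- keys-unchanged optimization works the FK trigger time should be negligible
  EXPLAIN ANALYZE
  UPDATE posts SET num_views = num_views + 1 WHERE post_id IN (1, 2);

If the constraint trigger still reports tens of milliseconds for a handful of rows on a bare table like this, that is worth reporting along with the exact server version, since it points at something beyond the schema itself.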
[ { "msg_contents": "I have a table where I store email, the bodies are mostly kept in a\ntoast table. The toast table is 940 Meg in size. The whole database\nis about 1.2 Gig in size. When I back the database up using pg_dump in\ncustom output mode, I pipe the output into gzip. My backups are only\nabout 600 meg in size. From this, I assume the that toe toast table\nisn't getting compressed.\n\n \n\nI am keeping the bodies in a column of type \"bytea\". \n\n \n\nIs there any way I can tell for sure if the messages from this column\nare being stored compressed? I know I can set the compression settings\nusing the \"ALTER TABLE ALTER SET STORAGE\" syntax, but is there a way I\ncan see what this value is currently set to?\n\n \n\nDavid \n\n\n\n\n\n\n\n\n\n\nI have a table where I store email,  the bodies are mostly\nkept in a toast table.    The toast table is 940 Meg in size.   The whole database\nis about 1.2 Gig in size.   When I back the database up using pg_dump in custom\noutput mode, I pipe the output into gzip.   My backups are only about 600 meg in\nsize.   From this, I assume the that toe toast table isn’t getting\ncompressed.\n \nI am keeping the bodies in a column of type “bytea”. \n\n \nIs there any way I can tell for sure if the messages from\nthis column are being stored compressed?   I know I can set the compression\nsettings using the “ALTER TABLE ALTER SET STORAGE” syntax, but is\nthere a way I can see what this value is currently set to?\n \nDavid", "msg_date": "Tue, 17 Apr 2007 16:13:36 -0500", "msg_from": "\"David Hinkle\" <[email protected]>", "msg_from_op": true, "msg_subject": "Help with TOAST Compression" }, { "msg_contents": "On Tue, Apr 17, 2007 at 04:13:36PM -0500, David Hinkle wrote:\n> I have a table where I store email, the bodies are mostly kept in a\n> toast table. The toast table is 940 Meg in size. The whole database\n> is about 1.2 Gig in size. When I back the database up using pg_dump in\n> custom output mode, I pipe the output into gzip. My backups are only\n> about 600 meg in size. From this, I assume the that toe toast table\n> isn't getting compressed.\n\nHow are you measuring the toast table and database sizes? Have you\ntaken indexes and uncompressible data and metadata into account?\nThe database compresses only certain data, whereas when you pipe a\ndump into gzip you get compression on the entire dump.\n\nSome of the space might be taken up by dead rows and unused item\npointers. How often do you vacuum? What does \"VACUUM VERBOSE\ntablename\" show?\n\n> Is there any way I can tell for sure if the messages from this column\n> are being stored compressed?\n\nYou could look at a hex/ascii dump of the base and toast tables --\nyou might see runs of legible text but it should be obvious where\nthe data is compressed. See the TOAST section in the documentation\nfor more information about how and when data is compressed:\n\nhttp://www.postgresql.org/docs/8.2/interactive/storage-toast.html\n\nNote that \"The TOAST code is triggered only when a row value to be\nstored in a table is wider than BLCKSZ/4 bytes (normally 2 kB).\"\nAnd I'm no expert at compression algorithms but it's possible that\nthe \"fairly simple and very fast member of the LZ family of compression\ntechniques\" isn't as space-efficient as the algorithm that gzip\nuses (LZ77 according to its manual page). 
Maybe one of the developers\ncan comment.\n\n> I know I can set the compression settings using the \"ALTER TABLE\n> ALTER SET STORAGE\" syntax, but is there a way I can see what this\n> value is currently set to?\n\nYou could query pg_attribute.attstorage:\n\nhttp://www.postgresql.org/docs/8.2/interactive/catalog-pg-attribute.html\n\n-- \nMichael Fuhr\n", "msg_date": "Mon, 23 Apr 2007 02:45:58 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with TOAST Compression" } ]
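A concrete form of the pg_attribute check might look like the following; the table name messages and the column name body are hypothetical, and the single-letter codes are the documented attstorage values ('p' plain, 'm' main, 'x' extended, 'e' external):

  SELECT a.attname,
         a.atttypid::regtype AS data_type,
         a.attstorage
    FROM pg_attribute a
    JOIN pg_class c ON c.oid = a.attrelid
   WHERE c.relname = 'messages'
     AND a.attnum > 0
     AND NOT a.attisdropped;

  -- bytea defaults to EXTENDED (compress, then move out of line if still large);
  -- EXTERNAL disables compression, MAIN prefers compression over out-of-line storage
  ALTER TABLE messages ALTER COLUMN body SET STORAGE EXTENDED;

Keep in mind that only rows written after the ALTER pick up the changed strategy, and that values which are already compressed before they reach the database (gzipped mail parts, images) will not shrink further no matter what attstorage says.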
[ { "msg_contents": "Hi, we're using Postgres 8.1.4.\n\nWe've been seeing deadlock errors of this form, sometimes as often as\nseveral times per hour:\n\nApr 17 13:39:50 postgres[53643]: [4-1] ERROR: deadlock detected\nApr 17 13:39:50 postgres[53643]: [4-2] DETAIL: Process 53643 waits for\nShareLock on transaction 111283280; blocked by process 53447.\nApr 17 13:39:50 postgres[53643]: [4-3] Process 53447 waits for ShareLock\non transaction 111282124; blocked by process 53242.\nApr 17 13:39:50 postgres[53643]: [4-4] Process 53242 waits for ShareLock\non transaction 111282970; blocked by process 53240.\nApr 17 13:39:50 postgres[53643]: [4-5] Process 53240 waits for ShareLock\non transaction 111282935; blocked by process 53168.\nApr 17 13:39:50 postgres[53643]: [4-6] Process 53168 waits for ShareLock\non transaction 111282707; blocked by process 53643.\n\nThe deadlocks almost always seem to involve 4 or 5 processes.\n\nAfter observing the behaviour of the locks table, and searching the\nnewsgroup and elsewhere, I'm fairly certain I know what the problem is.\nThere is extremely high update activity by a dozen or more processes on a\ntable which has FK references into two other tables. Each process may\nupdate 10s or 100s of rows and there is really no predictable access\npattern.\n\nThis blurb is from a previous discussion I found:\n-----\npostgres performs a lock (share lock) on the tuples to which the foreign\nkeys point, apparently to prevent other transactions from modifying the\nforeign key before this transaction commits. it is practically impossible to\ncause the references to be always in the same order, so a deadlock can\noccur.\n-----\n\nI also see claims that this problem is fixed in 8.2, and if the fix is what\nI think it is, it's also in 8.1.6.\n\nRelease 8.1.6\nChanges\n* Fix bug causing needless deadlock errors on row-level locks (Tom)\n\nUpgrading to 8.2 is not realistic at this point of our project cycle, but if\nthe fix is indeed in 8.1.6, I can push to upgrade to 8.1.latest.\n\nCan someone confirm that I've identified the right fix?\n\nThanks,\nSteve\n\nHi, we're using Postgres 8.1.4.\n \nWe've been seeing deadlock errors of this form, sometimes as often as several times per hour:\n \nApr 17 13:39:50 postgres[53643]: [4-1] ERROR:  deadlock detectedApr 17 13:39:50 postgres[53643]: [4-2] DETAIL:  Process 53643 waits for ShareLock on transaction 111283280; blocked by process 53447.Apr 17 13:39:50 postgres[53643]: [4-3]     Process 53447 waits for ShareLock on transaction 111282124; blocked by process 53242.\nApr 17 13:39:50 postgres[53643]: [4-4]     Process 53242 waits for ShareLock on transaction 111282970; blocked by process 53240.Apr 17 13:39:50 postgres[53643]: [4-5]     Process 53240 waits for ShareLock on transaction 111282935; blocked by process 53168.\nApr 17 13:39:50 postgres[53643]: [4-6]     Process 53168 waits for ShareLock on transaction 111282707; blocked by process 53643.\n \nThe deadlocks almost always seem to involve 4 or 5 processes.\n \nAfter observing the behaviour of the locks table, and searching the newsgroup and elsewhere, I'm fairly certain I know what the problem is.  There is extremely high update activity by a dozen or more processes on a table which has FK references into two other tables.  
Each process may update 10s or 100s of rows and there is really no predictable access pattern.\n\n \nThis blurb is from a previous discussion I found:\n-----\npostgres performs a lock (share lock) on the tuples to which the foreign keys point, apparently to prevent other transactions from modifying the foreign key before this transaction commits. it is practically impossible to cause the references to be always in the same order, so a deadlock can occur.\n\n-----\n \nI also see claims that this problem is fixed in 8.2, and if the fix is what I think it is, it's also in 8.1.6.\n \nRelease 8.1.6Changes\n* Fix bug causing needless deadlock errors on row-level locks (Tom) \nUpgrading to 8.2 is not realistic at this point of our project cycle, but if the fix is indeed in 8.1.6, I can push to upgrade to 8.1.latest.\n \nCan someone confirm that I've identified the right fix?\n \nThanks,\nSteve", "msg_date": "Wed, 18 Apr 2007 11:07:08 -0400", "msg_from": "\"Steven Flatt\" <[email protected]>", "msg_from_op": true, "msg_subject": "Foreign Key Deadlocking" }, { "msg_contents": "> Can someone confirm that I've identified the right fix?\n\nI'm pretty sure that won't help you... see:\n\nhttp://archives.postgresql.org/pgsql-general/2006-12/msg00029.php\n\nThe deadlock will be there if you update/insert the child table and\nupdate/insert the parent table in the same transaction (even if you\nupdate some other field on the parent table than the key referenced by\nthe child table). If your transactions always update/insert only one of\nthose tables, it won't deadlock (assuming you order the inserts/updates\nproperly per PK).\n\nCheers,\nCsaba.\n\n\n", "msg_date": "Wed, 18 Apr 2007 17:36:37 +0200", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Foreign Key Deadlocking" }, { "msg_contents": "\"Steven Flatt\" <[email protected]> writes:\n> Hi, we're using Postgres 8.1.4.\n> We've been seeing deadlock errors of this form, sometimes as often as\n> several times per hour:\n> ...\n> I also see claims that this problem is fixed in 8.2, and if the fix is what\n> I think it is, it's also in 8.1.6.\n> Can someone confirm that I've identified the right fix?\n\nHard to say without a lot more data, but as a general rule you should be\non 8.1.latest in any case. 
See\nhttp://www.postgresql.org/support/versioning\nYou might also find ammunition in the release notes:\nhttp://developer.postgresql.org/pgdocs/postgres/release.html\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 18 Apr 2007 13:47:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Foreign Key Deadlocking " }, { "msg_contents": "Thanks for your answers and feedback.\n\nAll things considered, it is easiest (and acceptable) in this case to remove\nRI between the tables where the deadlocks were occurring.\n\nWe are still looking to upgrade to 8.1.latest but that is another matter...\n\nSteve\n\nThanks for your answers and feedback.\n \nAll things considered, it is easiest (and acceptable) in this case to remove RI between the tables where the deadlocks were occurring.\n \nWe are still looking to upgrade to 8.1.latest but that is another matter...\n \nSteve", "msg_date": "Wed, 18 Apr 2007 16:59:03 -0400", "msg_from": "\"Steven Flatt\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Foreign Key Deadlocking" }, { "msg_contents": "Hi Csaba,\n\nI have a similar problem.\n\nIn an attempt to avoid the overhead of select count(*) from mailbox \nwhere uid = somuid I've implemented triggers on insert and delete.\n\nSo there is a\n\nuser table which refers to to an inbox table,\n\nso when people insert into the inbox there is an RI trigger grabbing \nthe shared lock, then the count triggers try to grab an exclusive \nlock resulting in a deadlock.\n\nCan we safely remove the shared locks ?\n\nIs there a right way to implement the count triggers. I've tried \nbefore triggers, and after triggers, both result in different kinds \nof deadlocks.\n\nDave\nOn 18-Apr-07, at 11:36 AM, Csaba Nagy wrote:\n\n>> Can someone confirm that I've identified the right fix?\n>\n> I'm pretty sure that won't help you... see:\n>\n> http://archives.postgresql.org/pgsql-general/2006-12/msg00029.php\n>\n> The deadlock will be there if you update/insert the child table and\n> update/insert the parent table in the same transaction (even if you\n> update some other field on the parent table than the key referenced by\n> the child table). If your transactions always update/insert only \n> one of\n> those tables, it won't deadlock (assuming you order the inserts/ \n> updates\n> properly per PK).\n>\n> Cheers,\n> Csaba.\n>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n\n", "msg_date": "Thu, 19 Apr 2007 10:00:36 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Foreign Key Deadlocking" }, { "msg_contents": "Dave Cramer escribi�:\n> Hi Csaba,\n> \n> I have a similar problem.\n> \n> In an attempt to avoid the overhead of select count(*) from mailbox \n> where uid = somuid I've implemented triggers on insert and delete.\n> \n> So there is a\n> \n> user table which refers to to an inbox table,\n> \n> so when people insert into the inbox there is an RI trigger grabbing \n> the shared lock, then the count triggers try to grab an exclusive \n> lock resulting in a deadlock.\n> \n> Can we safely remove the shared locks ?\n> \n> Is there a right way to implement the count triggers. 
I've tried \n> before triggers, and after triggers, both result in different kinds \n> of deadlocks.\n\nWould it be possible for the triggers to lock the records, before\nstarting the actual operation, in well known orders, to avoid the\ndeadlocks?\n\nA frequently mentioned approach to avoid the point of contention is to\nhave a \"totals\" record and have the triggers insert \"deltas\" records; to\nget the sum, add them all. Periodically, take the deltas and apply them\nto the totals.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Thu, 19 Apr 2007 10:14:42 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Foreign Key Deadlocking" }, { "msg_contents": "\nOn Apr 19, 2007, at 9:00 AM, Dave Cramer wrote:\n>\n> On 18-Apr-07, at 11:36 AM, Csaba Nagy wrote:\n>\n>>> Can someone confirm that I've identified the right fix?\n>>\n>> I'm pretty sure that won't help you... see:\n>>\n>> http://archives.postgresql.org/pgsql-general/2006-12/msg00029.php\n>>\n>> The deadlock will be there if you update/insert the child table and\n>> update/insert the parent table in the same transaction (even if you\n>> update some other field on the parent table than the key \n>> referenced by\n>> the child table). If your transactions always update/insert only \n>> one of\n>> those tables, it won't deadlock (assuming you order the inserts/ \n>> updates\n>> properly per PK).\n>>\n>> Cheers,\n>> Csaba.\n\n> Hi Csaba,\n>\n> I have a similar problem.\n>\n> In an attempt to avoid the overhead of select count(*) from mailbox \n> where uid = somuid I've implemented triggers on insert and delete.\n>\n> So there is a\n>\n> user table which refers to to an inbox table,\n>\n> so when people insert into the inbox there is an RI trigger \n> grabbing the shared lock, then the count triggers try to grab an \n> exclusive lock resulting in a deadlock.\n>\n> Can we safely remove the shared locks ?\n>\n> Is there a right way to implement the count triggers. I've tried \n> before triggers, and after triggers, both result in different kinds \n> of deadlocks.\n>\n> Dave\n\nThe ways I've done this in the past is to have the count triggers \nmake inserts into some interim table rather than try to update the \nactual count field and have another process that continually sweeps \nwhat's in the interim table and makes aggregated updates to the count \ntable. Even if there isn't much to aggregate on any given sweep, \nthis gives you a sequential pattern as your inserts/deletes on the \nmain table don't depend on any locking in another table (well, \nstrictly speaking, your inserts into the interim table would be \nblocked by any exclusive locks on it but you shouldn't need to ever \ndo that anyway).\n\n\nerik jones <[email protected]>\nsoftware developer\n615-296-0838\nemma(r)\n\n\n\n", "msg_date": "Thu, 19 Apr 2007 09:15:45 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Foreign Key Deadlocking" }, { "msg_contents": "> A frequently mentioned approach to avoid the point of contention is to\n> have a \"totals\" record and have the triggers insert \"deltas\" records; to\n> get the sum, add them all. Periodically, take the deltas and apply them\n> to the totals.\n\nThis is what we do here too. 
There is only one exception to this rule,\nin one case we actually need to have the inserted records and the\nupdated parent in one transaction for data consistency, in that case the\ndelta approach won't work... we didn't find any other solution to that\nexcept patching postgres not to lock the parent keys at all, which has\nit's own problems too (occasional breakage of the foreign key\nrelationship when the parent is deleted and a child still slips in, but\nthis is very rare in our case not to cause problems which cannot be\ncleaned up with relative ease - not to mention that there could be other\nproblems we didn't discover yet or our usage patterns are avoiding).\n\nCheers,\nCsaba.\n\n\n", "msg_date": "Thu, 19 Apr 2007 16:23:42 +0200", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Foreign Key Deadlocking" } ]
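A minimal sketch of the deltas pattern described above might look like this, assuming an inbox table with a uid column; all object names here are hypothetical. The trigger only ever appends rows, so it takes no lock that can collide with the RI share locks on the parent row, and a reader adds the rolled-up total to whatever deltas have not been folded in yet:

  CREATE TABLE inbox_totals (
      uid   integer PRIMARY KEY,
      total bigint  NOT NULL DEFAULT 0
  );

  CREATE TABLE inbox_count_deltas (
      uid   integer NOT NULL,
      delta integer NOT NULL
  );

  -- (on older releases, CREATE LANGUAGE plpgsql; may be needed once per database)
  CREATE OR REPLACE FUNCTION inbox_count_delta() RETURNS trigger AS $$
  BEGIN
      IF TG_OP = 'INSERT' THEN
          INSERT INTO inbox_count_deltas (uid, delta) VALUES (NEW.uid, 1);
      ELSE  -- DELETE
          INSERT INTO inbox_count_deltas (uid, delta) VALUES (OLD.uid, -1);
      END IF;
      RETURN NULL;  -- AFTER trigger: return value is ignored
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER inbox_count_delta_trg
      AFTER INSERT OR DELETE ON inbox
      FOR EACH ROW EXECUTE PROCEDURE inbox_count_delta();

  -- current message count for one user
  SELECT COALESCE((SELECT total FROM inbox_totals WHERE uid = 42), 0)
       + COALESCE((SELECT sum(delta) FROM inbox_count_deltas WHERE uid = 42), 0)
       AS num_messages;

A periodic job then folds the delta rows into inbox_totals and deletes them; that step needs a little care (for instance, deleting only the rows it actually summed) so that deltas inserted while it runs are not lost.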
[ { "msg_contents": "Hi,\n\nIs there a query I can perform (perhaps on a system table) to return all of\nthe column names (ONLY the column names) for a given DB? (I seem to\nremember doing this before but cant remember how)\n\nThanks\n\nMike\n\nHi,Is there a query I can perform (perhaps on a system table) to return all of the column names  (ONLY the column names) for a given DB? (I seem to remember doing this before but cant remember how)Thanks\nMike", "msg_date": "Thu, 19 Apr 2007 13:06:25 -0400", "msg_from": "\"Michael Dengler\" <[email protected]>", "msg_from_op": true, "msg_subject": "how to output column names" }, { "msg_contents": "Michael Dengler wrote:\n> Hi,\n> \n> Is there a query I can perform (perhaps on a system table) to return all\n> of the column names (ONLY the column names) for a given DB? (I seem to\n> remember doing this before but cant remember how)\n> \n> Thanks\n> \n> Mike\n> \n\nThe 'columns' view in information_schema should do the trick for you.\n\n-Jon\n\n-- \nSenior Systems Developer\nMedia Matters for America\nhttp://mediamatters.org/\n", "msg_date": "Thu, 19 Apr 2007 13:49:43 -0400", "msg_from": "Jon Sime <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to output column names" }, { "msg_contents": "Perfect! thanks for the help!.\n\nMike\n\n\nOn 4/19/07, Jon Sime <[email protected]> wrote:\n>\n> Michael Dengler wrote:\n> > Hi,\n> >\n> > Is there a query I can perform (perhaps on a system table) to return all\n> > of the column names (ONLY the column names) for a given DB? (I seem to\n> > remember doing this before but cant remember how)\n> >\n> > Thanks\n> >\n> > Mike\n> >\n>\n> The 'columns' view in information_schema should do the trick for you.\n>\n> -Jon\n>\n> --\n> Senior Systems Developer\n> Media Matters for America\n> http://mediamatters.org/\n>\n\nPerfect! thanks for the help!.MikeOn 4/19/07, Jon Sime <[email protected]> wrote:\nMichael Dengler wrote:> Hi,>> Is there a query I can perform (perhaps on a system table) to return all\n> of the column names  (ONLY the column names) for a given DB? (I seem to> remember doing this before but cant remember how)>> Thanks>> Mike>The 'columns' view in information_schema should do the trick for you.\n-Jon--Senior Systems DeveloperMedia Matters for Americahttp://mediamatters.org/", "msg_date": "Thu, 19 Apr 2007 15:25:58 -0400", "msg_from": "\"Michael Dengler\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: how to output column names" } ]
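Spelled out, Jon's suggestion is simply a query against information_schema.columns; filtering out the system schemas keeps the listing to user tables, and the schema and table names in the second query are placeholders:

  SELECT table_name, column_name, data_type
    FROM information_schema.columns
   WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
   ORDER BY table_name, ordinal_position;

  -- or, for a single table:
  SELECT column_name
    FROM information_schema.columns
   WHERE table_schema = 'public'
     AND table_name   = 'mytable'
   ORDER BY ordinal_position;

The view only shows columns of tables the current user has some privilege on, which is usually what an application wants anyway.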
[ { "msg_contents": "Hi,\n\n \n\nI'm currently dealing with performance issues of postgres and looking\nfor some advice.\n\n \n\nPlatform\n\nPostgres: 7.0.2\n\nOS: FreeBSD4.4\n\nDB: size - about 50M, most frequently updated tables are of an average\nsize of 1000-2000 rows and there are not many of them, about 15 in total\n\n \n\nDescription\n\nMy current system load keeps the postgres CPU utilization at the level\nof 90-100%. \n\n'vacuumdb' results in a sharp drop of the CPU usage down to 25-30%, but\nnot for a long period of time - it gets back to 100% within 30 minutes.\n\nDisk IO ratio during the test keeps on about 0.5 MB/s\n\n \n\nQuestions:\n\n1. When reading the 'vacuum analyze' output how to identify which one of\nthe actions had the most effect on reducing the CPU usage - garbage\ncleaning or statistics recalculation for the analyzer?\n\n2. What would be the recommended set of parameters to tune up in order\nto improve the performance over the time, instead of considering an\noption to vacuum every 30 minutes or so?\n\n3. Is it safe to run 'vacuum' as frequently as every 15-30 minutes?\n\n4. Suggestions?\n\n \n\nI know that 7.0.2 is an old version and therefore ran the same test on\n7.3.18 - the performance behavior was similar. \n\n \n\nThank you in advance,\n\nSergey\n\n_________________________________________________\n\nThis message, including any attachments, is confidential and/or\nprivileged and contains information intended only for the person(s)\nnamed above. Any other distribution, copying or disclosure is strictly\nprohibited. If you are not the intended recipient or have received this\nmessage in error, please notify us immediately by reply email and\npermanently delete the original transmission from all of your systems\nand hard drives, including any attachments, without making a copy.\n\n \n\n\n\n\n\n\n\n\n\n\nHi,\n \nI’m currently dealing with performance issues of\npostgres and looking for some advice.\n \nPlatform\nPostgres: 7.0.2\nOS: FreeBSD4.4\nDB: size - about 50M, most frequently updated tables are of\nan average size of 1000-2000 rows and there are not many of them, about 15 in\ntotal\n \nDescription\nMy current system load keeps the postgres CPU utilization at\nthe level of 90-100%. \n‘vacuumdb’ results in a sharp drop of the CPU\nusage down to 25-30%, but not for a long period of time – it gets back to\n100% within 30 minutes.\nDisk IO ratio during the test keeps on about 0.5 MB/s\n \nQuestions:\n1. When reading the ‘vacuum analyze’ output how\nto identify which one of the actions had the most effect on reducing the CPU\nusage – garbage cleaning or statistics recalculation for the analyzer?\n2. What would be the recommended set of parameters to tune\nup in order to improve the performance over the time, instead of considering an\noption to vacuum every 30 minutes or so?\n3. Is it safe to run ‘vacuum’ as frequently as\nevery 15-30 minutes?\n4. Suggestions?\n \nI know that 7.0.2 is an old version and therefore ran the\nsame test on 7.3.18 – the performance behavior was similar. \n \nThank you in advance,\nSergey\n_________________________________________________\nThis\nmessage, including any attachments, is confidential and/or privileged and\ncontains information intended only for the person(s) named above. Any other\ndistribution, copying or disclosure is strictly prohibited. 
If you are not the\nintended recipient or have received this message in error, please notify us\nimmediately by reply email and permanently delete the original transmission\nfrom all of your systems and hard drives, including any attachments, without\nmaking a copy.", "msg_date": "Thu, 19 Apr 2007 15:29:52 -0400", "msg_from": "\"Sergey Tsukinovsky\" <[email protected]>", "msg_from_op": true, "msg_subject": "postgres: 100% CPU utilization " }, { "msg_contents": "\"Sergey Tsukinovsky\" <[email protected]> writes:\n> I'm currently dealing with performance issues of postgres and looking\n> for some advice.\n\n> Postgres: 7.0.2\n\nStop right there. You have *no* business asking for help on an\ninstallation you have not updated in more than six years.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 23 Apr 2007 00:36:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres: 100% CPU utilization " }, { "msg_contents": "On Thu, 19 Apr 2007, Sergey Tsukinovsky wrote:\n\n> I know that 7.0.2 is an old version and therefore ran the same test on\n> 7.3.18 - the performance behavior was similar.\n\nWhy have you choosen just another very old version for performance\ncomparison and not the latest stable release?\n\nKind regards\n\n Andreas.\n\n-- \nhttp://fam-tille.de\n", "msg_date": "Mon, 23 Apr 2007 07:39:02 +0200 (CEST)", "msg_from": "Andreas Tille <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres: 100% CPU utilization " }, { "msg_contents": "Am Donnerstag, 19. April 2007 schrieb Sergey Tsukinovsky:\n> 2. What would be the recommended set of parameters to tune up in order\n> to improve the performance over the time, instead of considering an\n> option to vacuum every 30 minutes or so?\n>\n> 3. Is it safe to run 'vacuum' as frequently as every 15-30 minutes?\nNo problem.\n\n>\n> 4. Suggestions?\nDo yourself a favor and upgrade at least to 8.1.x and use autovacuum.\n\nBest regards\nMario Weilguni\n", "msg_date": "Mon, 23 Apr 2007 10:53:38 +0200", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres: 100% CPU utilization" }, { "msg_contents": "At 04:53 AM 4/23/2007, Mario Weilguni wrote:\n>Am Donnerstag, 19. April 2007 schrieb Sergey Tsukinovsky:\n> > 2. What would be the recommended set of parameters to tune up in order\n> > to improve the performance over the time, instead of considering an\n> > option to vacuum every 30 minutes or so?\n> >\n> > 3. Is it safe to run 'vacuum' as frequently as every 15-30 minutes?\n>No problem.\n>\n> >\n> > 4. 
Suggestions?\n>Do yourself a favor and upgrade at least to 8.1.x and use autovacuum.\nIn fact, I'll go one step further and say that pg improves so much \nfrom release to release that everyone should make superhuman efforts \nto always be running the latest stable release.\n\nEven the differences between 8.1.x and 8.2.x are worth it.\n\n(and the fewer and more modern the releases \"out in the wild\", the \neasier community support is)\nCheers,\nRon Peacetree \n\n", "msg_date": "Mon, 23 Apr 2007 11:06:32 -0400", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres: 100% CPU utilization" }, { "msg_contents": "On Thu, 2007-04-19 at 14:29, Sergey Tsukinovsky wrote:\n> Hi,\n> \n> \n> \n> I’m currently dealing with performance issues of postgres and looking\n> for some advice.\n> \n> \n> \n> Platform\n> \n> Postgres: 7.0.2\n> \n> OS: FreeBSD4.4\n> \n> DB: size - about 50M, most frequently updated tables are of an average\n> size of 1000-2000 rows and there are not many of them, about 15 in\n> total\n\nSNIP\n\n> I know that 7.0.2 is an old version and therefore ran the same test on\n> 7.3.18 – the performance behavior was similar. \n\nSo, are you running this on an Intel 486DX2-50 with a Seagate ST-4096\nwith an ISA based RLL encoding controller, or are you using the more\nadvanced AMD 586 CPU running at 90MHz with an Adaptec ARC2090 PCI based\nSCSI card?\n\nAnd do you have 32 or 64 Megs of memory in that machine?\n\nCause honestly, that's the kinda hardware I was running 7.0.2 on, so you\nmight as well get retro in your hardware department while you're at it.\n", "msg_date": "Mon, 23 Apr 2007 11:09:05 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres: 100% CPU utilization" }, { "msg_contents": "On Apr 23, 2007, at 12:09 PM, Scott Marlowe wrote:\n\n> And do you have 32 or 64 Megs of memory in that machine?\n>\n> Cause honestly, that's the kinda hardware I was running 7.0.2 on, \n> so you\n> might as well get retro in your hardware department while you're at \n> it.\n\nI think you're being too conservative... I recall that those specs \nfor me correspond to running Pg 6.5 as the latest release... talk \nabout performance and corruption issues... :-) He's probably got at \nleast a Pentium II.", "msg_date": "Mon, 23 Apr 2007 16:00:31 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres: 100% CPU utilization" }, { "msg_contents": "On Mon, 2007-04-23 at 15:00, Vivek Khera wrote:\n> On Apr 23, 2007, at 12:09 PM, Scott Marlowe wrote:\n> \n> > And do you have 32 or 64 Megs of memory in that machine?\n> >\n> > Cause honestly, that's the kinda hardware I was running 7.0.2 on, \n> > so you\n> > might as well get retro in your hardware department while you're at \n> > it.\n> \n> I think you're being too conservative... I recall that those specs \n> for me correspond to running Pg 6.5 as the latest release... talk \n> about performance and corruption issues... 
:-) He's probably got at \n> least a Pentium II.\n\nYeah, now that you mention it, I think I was able to come up with a\nPentium 100 with 64 Megs of RAM about the time 6.5 came out, on RedHat\n5.1 then 5.2, and with a pair of 1.2 gig IDE drives under the hood.\n\nThose were the days, huh?\n\nI honestly kinda wondered if the original post came out of a time warp,\nlike some mail relay somewhere held onto it for 4 years or something.\n", "msg_date": "Mon, 23 Apr 2007 15:21:46 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres: 100% CPU utilization" }, { "msg_contents": "Scott Marlowe wrote:\n\n> (snippage) that's the kinda hardware I was running 7.0.2 on, so you\n> might as well get retro in your hardware department while you're at it.\n> \n\nNotice he's running FreeBSD 4.4(!), so it could well be a very old \nmachine...\n\nCheers\n\nMark\n\n", "msg_date": "Tue, 24 Apr 2007 11:59:12 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres: 100% CPU utilization" }, { "msg_contents": "On Mon, 23 Apr 2007, Scott Marlowe wrote:\n\n> I honestly kinda wondered if the original post came out of a time warp,\n> like some mail relay somewhere held onto it for 4 years or something.\n\nThat wouldn't be out of the question if this system is also his mail \nserver.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 23 Apr 2007 23:41:09 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres: 100% CPU utilization" }, { "msg_contents": "Thanks for this reply, Ron.\nThis is almost what I was looking for.\n\nWhile the upgrade to the latest version is out of the question (which\nunfortunately for me became the subject of this discussion) still, I was\nlooking for the ways to improve the performance of the 7.0.2 version. \n\nExtensive use of vacuum was almost obvious, though I was hoping to get\nsome more tips from postrges gurus (or dinosaurs, if you want).\n\nAnyways, the 8.2.4 was not performing so well without auto-vacuum. It\nramped up to 50% of CPU usage in 2 hours under the load.\nWith the auto-vacuum ON I've got what I really need and thus I know what\nto do next.\n\nJust for the record - the hardware that was used for the test has the\nfollowing parameters:\nAMD Opteron 2GHZ\n2GB RAM\nLSI Logic SCSI\n\nThanks everyone for your assistance!\nSergey\n\n\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Ron\nSent: Monday, April 23, 2007 11:07 AM\nTo: Mario Weilguni\nCc: [email protected]\nSubject: Re: [PERFORM] postgres: 100% CPU utilization\n\nAt 04:53 AM 4/23/2007, Mario Weilguni wrote:\n>Am Donnerstag, 19. April 2007 schrieb Sergey Tsukinovsky:\n> > 2. What would be the recommended set of parameters to tune up in\norder\n> > to improve the performance over the time, instead of considering an\n> > option to vacuum every 30 minutes or so?\n> >\n> > 3. Is it safe to run 'vacuum' as frequently as every 15-30 minutes?\n>No problem.\n>\n> >\n> > 4. 
Suggestions?\n>Do yourself a favor and upgrade at least to 8.1.x and use autovacuum.\nIn fact, I'll go one step further and say that pg improves so much \nfrom release to release that everyone should make superhuman efforts \nto always be running the latest stable release.\n\nEven the differences between 8.1.x and 8.2.x are worth it.\n\n(and the fewer and more modern the releases \"out in the wild\", the \neasier community support is)\nCheers,\nRon Peacetree \n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faq\n\n______________________________________________________________________\nThis email has been scanned by the MessageLabs Email Security System.\nFor more information please visit http://www.messagelabs.com/email \n______________________________________________________________________\n", "msg_date": "Tue, 24 Apr 2007 11:30:05 -0400", "msg_from": "\"Sergey Tsukinovsky\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres: 100% CPU utilization" }, { "msg_contents": "On Tue, 2007-04-24 at 10:30, Sergey Tsukinovsky wrote:\n> Thanks for this reply, Ron.\n> This is almost what I was looking for.\n> \n> While the upgrade to the latest version is out of the question (which\n> unfortunately for me became the subject of this discussion) still, I was\n> looking for the ways to improve the performance of the 7.0.2 version. \n> \n> Extensive use of vacuum was almost obvious, though I was hoping to get\n> some more tips from postrges gurus (or dinosaurs, if you want).\n> \n> Anyways, the 8.2.4 was not performing so well without auto-vacuum. It\n> ramped up to 50% of CPU usage in 2 hours under the load.\n> With the auto-vacuum ON I've got what I really need and thus I know what\n> to do next.\n\nCould you give us a better picture of how you were testing 8.2.4? My\nguess is that you were doing something that seemed right to you, but was\nworking against yourself, like constant vacuum fulls and getting index\nbloat, or something else. \n\nWhy were you trying to not use autovacuum, btw? I've found it to be\nquite capable, with only a few situations (high speed queueing) where I\nneeded to manually schedule vacuums. And I've never seen a situation\nsince about 7.4 where regular full vacuums were required.\n\n> Just for the record - the hardware that was used for the test has the\n> following parameters:\n> AMD Opteron 2GHZ\n> 2GB RAM\n> LSI Logic SCSI\n\nNice hardware. I'd really like to hear the logic behind your statement\nthat upgrading to 8.1 or 8.2 is out of the question.\n", "msg_date": "Thu, 26 Apr 2007 17:50:27 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres: 100% CPU utilization" }, { "msg_contents": "Sergey Tsukinovsky wrote:\n> \n> Just for the record - the hardware that was used for the test has the\n> following parameters:\n> AMD Opteron 2GHZ\n> 2GB RAM\n> LSI Logic SCSI\n>\n\nAnd you ran FreeBSD 4.4 on it right? This may be a source of high cpu \nutilization in itself if the box is SMP or dual core, as multi-cpu \nsupport was pretty primitive in that release (4.12 would be better if \nyou are required to stick to the 4.x branch, if not the 6.2 is recommended)!\n\nCheers\n\nMark\n\n\n", "msg_date": "Fri, 27 Apr 2007 13:26:53 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres: 100% CPU utilization" } ]
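For the record, on the 8.1/8.2 releases discussed here autovacuum is not a single switch: it also needs the row-level statistics collector. A minimal postgresql.conf fragment looks roughly like the following; the numeric values are only illustrative, and the table and database names in the cron line are placeholders:

  # postgresql.conf -- 8.1/8.2-era settings
  stats_start_collector = on     # prerequisite for autovacuum on 8.1/8.2
  stats_row_level       = on     # prerequisite for autovacuum on 8.1/8.2
  autovacuum            = on
  autovacuum_naptime    = 60     # seconds between checks of each database
  autovacuum_vacuum_scale_factor  = 0.2   # vacuum after roughly 20% of a table changes
  autovacuum_vacuum_threshold     = 200
  autovacuum_analyze_scale_factor = 0.1

  # supplement for very hot, very small tables: an ordinary (non-FULL) VACUUM
  # from cron is cheap and keeps them from bloating between autovacuum passes
  */15 * * * *  vacuumdb --analyze --table frequently_updated_table mydb

With something along those lines in place, a steady-state load on a handful of 1000-2000 row tables should not need hand-run vacuums at all, and certainly not VACUUM FULL.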
[ { "msg_contents": "\nI have a table that contains a column for keywords that I expect to become\nquite large and will be used for web searches. I will either index the\ncolumn or come up with a simple hashing algorithm add the hash key to the\ntable and index that column.\n\nI am thinking the max length in the keyword column I need to support is 30,\nbut the average would be less than10\n\nAny suggestions on whether to use char(30), varchar(30) or text, would be\nappreciated. I am looking for the best performance option, not necessarily\nthe most economical on disk.\n\nOr any other suggestions would be greatly appreciated.\n-- \nView this message in context: http://www.nabble.com/seeking-advise-on-char-vs-text-or-varchar-in-search-table-tf3618204.html#a10103002\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Fri, 20 Apr 2007 07:23:58 -0700 (PDT)", "msg_from": "chrisj <[email protected]>", "msg_from_op": true, "msg_subject": "seeking advise on char vs text or varchar in search table" }, { "msg_contents": "On 4/20/07, chrisj <[email protected]> wrote:\n>\n> I have a table that contains a column for keywords that I expect to become\n> quite large and will be used for web searches. I will either index the\n> column or come up with a simple hashing algorithm add the hash key to the\n> table and index that column.\n>\n> I am thinking the max length in the keyword column I need to support is 30,\n> but the average would be less than10\n>\n> Any suggestions on whether to use char(30), varchar(30) or text, would be\n> appreciated. I am looking for the best performance option, not necessarily\n> the most economical on disk.\n\nDon't use char...it pads out the string to the length always. It\nalso has no real advantage over varchar in any practical situation.\nThink of varchar as text with a maximum length...its no faster or\nslower but the database will throw out entries based on length (which\ncan be good or a bad thing)...in this case, text feels better.\n\nHave you looked at tsearch2, gist, etc?\n\nmerlin\n", "msg_date": "Mon, 23 Apr 2007 10:46:23 +0530", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seeking advise on char vs text or varchar in search table" }, { "msg_contents": "On Apr 23, 2007, at 7:16 AM, Merlin Moncure wrote:\n> On 4/20/07, chrisj <[email protected]> wrote:\n>>\n>> I have a table that contains a column for keywords that I expect \n>> to become\n>> quite large and will be used for web searches. I will either \n>> index the\n>> column or come up with a simple hashing algorithm add the hash key \n>> to the\n>> table and index that column.\n>>\n>> I am thinking the max length in the keyword column I need to \n>> support is 30,\n>> but the average would be less than10\n>>\n>> Any suggestions on whether to use char(30), varchar(30) or text, \n>> would be\n>> appreciated. I am looking for the best performance option, not \n>> necessarily\n>> the most economical on disk.\n>\n> Don't use char...it pads out the string to the length always. 
It\n> also has no real advantage over varchar in any practical situation.\n> Think of varchar as text with a maximum length...its no faster or\n> slower but the database will throw out entries based on length (which\n> can be good or a bad thing)...in this case, text feels better.\n\nAIUI, char, varchar and text all store their data in *exactly* the \nsame way in the database; char only pads data on output, and in the \nactual tables it still contains the regular varlena header. The only \nreason I've ever used char in other databases is to save the overhead \nof the variable-length information, so I recommend to people to just \nsteer clear of char in PostgreSQL.\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n", "msg_date": "Wed, 25 Apr 2007 19:30:41 +0200", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seeking advise on char vs text or varchar in search table" }, { "msg_contents": "\"Jim Nasby\" <[email protected]> writes:\n\n> AIUI, char, varchar and text all store their data in *exactly* the same way in\n> the database; char only pads data on output, and in the actual tables it still\n> contains the regular varlena header. The only reason I've ever used char in\n> other databases is to save the overhead of the variable-length information, so\n> I recommend to people to just steer clear of char in PostgreSQL.\n\nEverything you said is correct except that char actually pads its data on\ninput, not output. This doesn't actually make a lot of sense since we're\nstoring it as a varlena so we could pad it on output and modify the data type\nfunctions to pretend the spaces are there without storing them.\n\nHowever it would only make a difference if you're storing variable length data\nin a char field in which case I would 100% agree with your conclusion and\nstrongly recommend using varchar. The only reason I would think of using char\nis when the data should always be the same length, like a SSN or md5hash or\nsomething like that. In which case it's purely for the self-documenting\nnotational convenience, not any performance reason.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Thu, 26 Apr 2007 09:35:24 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seeking advise on char vs text or varchar in search table" } ]
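As a concrete, if hypothetical, version of the advice above: a keyword table declared as text plus a couple of indexing options. All names are invented, and the text_pattern_ops index is only needed when LIKE 'prefix%' searches must use an index under a non-C locale.

    CREATE TABLE keywords (
        keyword text NOT NULL   -- stored the same way as varchar; no blank-padding as with char(n)
    );

    -- Plain btree index for exact-match lookups:
    CREATE INDEX keywords_keyword_idx ON keywords (keyword);

    -- Expression index for case-insensitive lookups:
    CREATE INDEX keywords_keyword_lower_idx ON keywords (lower(keyword));
    SELECT * FROM keywords WHERE lower(keyword) = lower('PostgreSQL');

    -- Prefix searches that should use an index in a non-C locale:
    CREATE INDEX keywords_keyword_like_idx ON keywords (keyword text_pattern_ops);
    SELECT * FROM keywords WHERE keyword LIKE 'post%';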
[ { "msg_contents": "I have two tables, staff (530 rows) and location (2.5 million rows). I \ndo a query that joins the two together, as so:\n\nSELECT s.ProprietorId, l.LocationId, s.RoleId\n\tFROM Location l\n\tINNER JOIN (\n\t\tSELECT *\n\t\tFROM Staff\n\t) s ON l.ProprietorId = s.ProprietorId\n\tWHERE s.UserId = 123456\n\tAND s.LocationId IS NULL\n\nIgnore the fact that it's a subquery -- the query plan is the same if \nits a straight JOIN, and I'm going to use the subquery to demonstrate \nsomething interesting.\n\nAnyways, this takes ~45 seconds to run, and returns 525 rows (just about \n1 per record in the Staff table; 5 records are not for that user are so \nare excluded). The EXPLAIN is:\n\nNested Loop (cost=243.50..34315.32 rows=10286 width=12)\n -> Subquery Scan s (cost=0.00..21.93 rows=1 width=8)\n Filter: ((userid = 123456) AND (locationid IS NULL))\n -> Limit (cost=0.00..15.30 rows=530 width=102)\n -> Seq Scan on staff (cost=0.00..15.30 rows=530 width=102)\n -> Bitmap Heap Scan on \"location\" l (cost=243.50..34133.68 \nrows=12777 width=8)\n Recheck Cond: (s.proprietorid = l.proprietorid)\n -> Bitmap Index Scan on idx_location_proprietorid_locationid \n(cost=0.00..240.30 rows=12777 width=0)\n Index Cond: (s.proprietorid = l.proprietorid)\n\nThe EXPLAIN ANALYZE is:\n\nHash Join (cost=23.16..129297.25 rows=2022281 width=12) (actual \ntime=62.315..48632.406 rows=525 loops=1)\n Hash Cond: (l.proprietorid = staff.proprietorid)\n -> Seq Scan on \"location\" l (cost=0.00..101337.11 rows=2057111 \nwidth=8) (actual time=0.056..44504.431 rows=2057111 loops=1)\n -> Hash (cost=16.63..16.63 rows=523 width=8) (actual \ntime=46.411..46.411 rows=525 loops=1)\n -> Seq Scan on staff (cost=0.00..16.63 rows=523 width=8) \n(actual time=0.022..45.428 rows=525 loops=1)\n Filter: ((userid = 123456) AND (locationid IS NULL))\nTotal runtime: 48676.282 ms\n\nNow, the interesting thing is, if I add \"LIMIT 5000\" into that inner \nsubquery on the staff table, it no longer seq scans location, and the \nwhole thing runs in less than a second.\n\nSELECT s.ProprietorId, l.LocationId, s.RoleId\n\tFROM Location l\n\tINNER JOIN (\n\t\tSELECT *\n\t\tFROM Staff\n\t\tLIMIT 5000 \t-- Only change; remember, this table \t\t\t\t\t\t-- only has \n530 rows\n\t) s ON l.ProprietorId = s.ProprietorId\n\tWHERE s.UserId = 123456\n\tAND s.LocationId IS NULL\n\nEXPLAIN:\n\nNested Loop (cost=243.50..34315.32 rows=10286 width=12)\n -> Subquery Scan s (cost=0.00..21.93 rows=1 width=8)\n Filter: ((userid = 123456) AND (locationid IS NULL))\n -> Limit (cost=0.00..15.30 rows=530 width=102)\n -> Seq Scan on staff (cost=0.00..15.30 rows=530 width=102)\n -> Bitmap Heap Scan on \"location\" l (cost=243.50..34133.68 \nrows=12777 width=8)\n Recheck Cond: (s.proprietorid = l.proprietorid)\n -> Bitmap Index Scan on idx_location_proprietorid_locationid \n(cost=0.00..240.30 rows=12777 width=0)\n Index Cond: (s.proprietorid = l.proprietorid)\n\nEXPLAIN ANALYZE:\n\nNested Loop (cost=243.50..34315.32 rows=10286 width=12) (actual \ntime=74.097..569.372 rows=525 loops=1)\n -> Subquery Scan s (cost=0.00..21.93 rows=1 width=8) (actual \ntime=16.452..21.092 rows=525 loops=1)\n Filter: ((userid = 123456) AND (locationid IS NULL))\n -> Limit (cost=0.00..15.30 rows=530 width=102) (actual \ntime=16.434..19.128 rows=530 loops=1)\n -> Seq Scan on staff (cost=0.00..15.30 rows=530 \nwidth=102) (actual time=16.429..17.545 rows=530 loops=1)\n -> Bitmap Heap Scan on \"location\" l (cost=243.50..34133.68 \nrows=12777 width=8) (actual time=1.027..1.029 rows=1 
loops=525)\n Recheck Cond: (s.proprietorid = l.proprietorid)\n -> Bitmap Index Scan on idx_location_proprietorid_locationid \n(cost=0.00..240.30 rows=12777 width=0) (actual time=0.151..0.151 rows=1 \nloops=525)\n Index Cond: (s.proprietorid = l.proprietorid)\nTotal runtime: 570.868 ms\n\nThis confuses me. As far as I can tell, the EXPLAIN output is the same \nregardless of whether LIMIT 5000 is in there or not. However, I don't \nknow why a) the EXPLAIN ANALYZE plan is different in the first case, \nwhere there is no LIMIT 5000, or b) why adding a LIMIT 5000 onto a table \nwould change anything when the table has only 530 rows in it. \nFurthermore, I can repeat this experiment over and over, so I know that \nits not caching. Removing the LIMIT 5000 returns performance to > 45 \nseconds.\n\nI've ANALYZEd both tables, so I'm relatively certain statistics are up \nto date. This is test data, so there are no ongoing \ninserts/updates/deletes -- only selects.\n\nI'd really prefer this query run in < 1 second rather than > 45, but I'd \nreally like to do that without having hacks like adding in pointless \nLIMIT clauses.\n\nAny help would be much appreciated.\n\n--Colin McGuigan\n", "msg_date": "Sat, 21 Apr 2007 00:58:07 -0500", "msg_from": "Colin McGuigan <[email protected]>", "msg_from_op": true, "msg_subject": "Odd problem with planner choosing seq scan" } ]
[ { "msg_contents": "I have investigated a bit now and found the following:\n\nWhen I perform the update the *first* time, the triggers are actually \nnot evaluated. But from the second update they are.\n\nAlso notice that the number of rows changes. Shouldn't that number of \nrows always be 2 as question_id is primary key?\n\nExample:\n\n=> explain analyze update questions set cancelled_time = now() where \nquestion_id in (10,11);\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on questions (cost=4.01..12.04 rows=2 width=112) \n(actual time=0.193..0.205 rows=2 loops=1)\n Recheck Cond: ((question_id = 10) OR (question_id = 11))\n -> BitmapOr (cost=4.01..4.01 rows=2 width=0) (actual \ntime=0.046..0.046 rows=0 loops=1)\n -> Bitmap Index Scan on questions_pkey (cost=0.00..2.00 \nrows=1 width=0) (actual time=0.037..0.037 rows=1 loops=1)\n Index Cond: (question_id = 10)\n -> Bitmap Index Scan on questions_pkey (cost=0.00..2.00 \nrows=1 width=0) (actual time=0.005..0.005 rows=1 loops=1)\n Index Cond: (question_id = 11)\n Trigger for constraint questions_repost_of_fkey: time=0.023 calls=2\n Total runtime: 0.734 ms\n(9 rows)\n\n\n\n=> explain analyze update questions set cancelled_time = now() where \nquestion_id in (10,11);\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on questions (cost=4.01..12.04 rows=2 width=112) \n(actual time=0.085..0.097 rows=2 loops=1)\n Recheck Cond: ((question_id = 10) OR (question_id = 11))\n -> BitmapOr (cost=4.01..4.01 rows=2 width=0) (actual \ntime=0.047..0.047 rows=0 loops=1)\n -> Bitmap Index Scan on questions_pkey (cost=0.00..2.00 \nrows=1 width=0) (actual time=0.036..0.036 rows=2 loops=1)\n Index Cond: (question_id = 10)\n -> Bitmap Index Scan on questions_pkey (cost=0.00..2.00 \nrows=1 width=0) (actual time=0.007..0.007 rows=2 loops=1)\n Index Cond: (question_id = 11)\n Trigger for constraint questions_repost_of_fkey: time=0.025 calls=2\n Trigger for constraint questions_author_id_fkey: time=0.167 calls=2\n Trigger for constraint questions_category_id_fkey: time=0.196 calls=2\n Trigger for constraint questions_lock_user_id_fkey: time=0.116 calls=2\n Total runtime: 1.023 ms\n(12 rows)\n\n\n", "msg_date": "Sat, 21 Apr 2007 12:43:27 +0200", "msg_from": "cluster <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FK triggers misused?" }, { "msg_contents": "\nOn Sat, 21 Apr 2007, cluster wrote:\n\n> I have investigated a bit now and found the following:\n>\n> When I perform the update the *first* time, the triggers are actually\n> not evaluated. But from the second update they are.\n\nAre these in one transaction? If so, then right now after the first\nupdate, the remaining updates will trigger checks if the row modified was\nmodified in this transaction. The comment in trigger.c lists the basic\ncircumstance, although it mentions it in terms of insert and a deferred\nFK, I would guess that if there's ever a possibility of two modifications\n(including multiple updates or on an immediate constraint) before the\nconstraint check occurred the same condition could happen.\n", "msg_date": "Sat, 21 Apr 2007 09:18:16 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FK triggers misused?" } ]
[ { "msg_contents": "Hello everyone,\n\nThis is my first post in here, i am in need of some help...\n\nWel, i am running PostgreSQL 8.2.4, in a single node, in this machine:\n\nDell PowerEdge 1950\nIntel Xeon 2.33 (2 CPUs)\n4GB RAM\nSata HD 160GB\nDebian Distribution\n\nSome relevant parameters:\n\nIn Linux:\n -> SHMMAX: 32MB\nIn postgresql.conf:\n -> shared_buffers: 24MB\n -> maintenance_work_mem: 16MB\n\nI need to: build, load and do some optimization procedures in a TPC-H\nbenchmark database.\n\nSo far, i need to do it in three different scale factors (1, 2 and 5GB\ndatabases).\n\nMy build process comprehends creating the tables without any foreign keys,\nindexes, etc. - Running OK!\nThen, i load the data from the flat files generated through DBGEN software\ninto these tables. - Running OK!\n\nFinally, i run a \"optimize\" script that does the following:\n\n- Alter the tables to add the mandatory foreign keys;\n- Create all mandatory indexes;\n- Cluster the orders table by the orders table index;\n- Cluster the lineitem table by the lineitem table index;\n- Vacuum the database;\n- Analyze statistics.\n\nThis is the step which is causing me some headaches, mainly related to the\n5GB database. I identified that the cluster command over the lineitem table\n(cluster idx_lineitem on lineitem) is the responsible. I got to this\nconclusion because when i run it in the 1GB and 2GB database i am able to\ncomplete this script in 10 and 30 minutes each. But when i run this command\nover the 5GB database, it simply seems as it won't end. I watched it running\nover 12 hours and nothing happened.\n\nTo investigate a bit, i tried to tune these parameters and these parameters\nonly, and re-run the script (rebooted the machine and restarted it all\nover):\n\nIn Linux: SHMMAX -> Tuned it to 2GB via echo \"...\" > /proc/sys/kernel/shmmax\n\nIn postgresql.conf:\n\nshared_buffers: 512MB\nmaintenance_work_mem: 800MB\n\nI thought that this might improve the performance, but as a matter of fact,\nthat's what happened:\n\n1 GB database - cluster command time remains the same (more or less 10\nminutes)\n2 GB database - cluster command now takes 3 hours instead of 30 minutes! BAD\n5 GB database - still can't complete the command in over 12 hours.\n\nTo add some info, i did a top command on the machine, i saw that the\npostmaster consumes all the \"shared_buffers\" configured in the physical\nmemory (13,3% of the RAM --> 512MB of 4GB) but the total free mem is 0% (all\nthe 4GB is used, but not by the postmaster), and no swap is in use, CPU is\naround 1% busy (this 1% is for the postmaster), and this machine is\ndedicate, for my personal use, and there's nothing else running but\nPostgreSQL.\n\nDoes anyone have any clues?\n\nThanks in advance,\nNelson P Kotowski Filho.\n\nHello everyone,This is my first post in here, i am in need of some help...Wel, i am running PostgreSQL 8.2.4, in a single node, in this machine:Dell PowerEdge 1950Intel Xeon 2.33 (2 CPUs)4GB RAM\nSata HD 160GBDebian DistributionSome relevant parameters:In Linux:  -> SHMMAX: 32MBIn postgresql.conf:  -> shared_buffers: 24MB  -> maintenance_work_mem: 16MBI need to: build, load and do some optimization procedures in a TPC-H benchmark database.\nSo far, i need to do it in three different scale factors (1, 2 and 5GB databases).My build process comprehends creating the tables without any foreign keys, indexes, etc. - Running OK! Then, i load the data from the flat files generated through DBGEN software into these tables. 
- Running OK!\nFinally, i run a \"optimize\" script that does the following:- Alter the tables to add the mandatory foreign keys;- Create all mandatory indexes;- Cluster the orders table by the orders table index;\n- Cluster the lineitem table by the lineitem table index;- Vacuum the database;- Analyze statistics.This is the step which is causing me some headaches, mainly related to the 5GB database. I identified that the cluster command over the lineitem table (cluster idx_lineitem on lineitem) is the responsible. I got to this conclusion because when i run it in the 1GB and 2GB database i am able to complete this script in 10 and 30 minutes each. But when i run this command over the 5GB database, it simply seems as it won't end. I watched it running over 12 hours and nothing happened.\nTo investigate a bit, i tried to tune these parameters and these parameters only, and re-run the script (rebooted the machine and restarted it all over):In Linux: SHMMAX -> Tuned it to 2GB via echo \"...\" > /proc/sys/kernel/shmmax\nIn postgresql.conf: shared_buffers: 512MBmaintenance_work_mem: 800MBI thought that this might improve the performance, but as a matter of fact, that's what happened:1 GB database - cluster command time remains the same (more or less 10 minutes)\n2 GB database - cluster command now takes 3 hours instead of 30 minutes! BAD5 GB database - still can't complete the command in over 12 hours.To add some info, i did a top command on the machine, i saw that the postmaster consumes all the \"shared_buffers\" configured in the physical memory (13,3% of the RAM --> 512MB of 4GB) but the total free mem is 0% (all the 4GB is used, but not by the postmaster), and no swap is in use, CPU is around 1% busy (this 1% is for the postmaster), and this machine is dedicate, for my personal use, and there's nothing else running but PostgreSQL.\nDoes anyone have any clues?Thanks in advance,Nelson P Kotowski Filho.", "msg_date": "Sat, 21 Apr 2007 11:54:42 -0300", "msg_from": "\"Nelson Kotowski\" <[email protected]>", "msg_from_op": true, "msg_subject": "TPC-H Scaling Factors X PostgreSQL Cluster Command" }, { "msg_contents": "Nelson Kotowski wrote:\n> So far, i need to do it in three different scale factors (1, 2 and 5GB\n> databases).\n> \n> My build process comprehends creating the tables without any foreign keys,\n> indexes, etc. - Running OK!\n> Then, i load the data from the flat files generated through DBGEN software\n> into these tables. - Running OK!\n> \n> Finally, i run a \"optimize\" script that does the following:\n> \n> - Alter the tables to add the mandatory foreign keys;\n> - Create all mandatory indexes;\n> - Cluster the orders table by the orders table index;\n> - Cluster the lineitem table by the lineitem table index;\n> - Vacuum the database;\n> - Analyze statistics.\n\nCluster will completely rewrite the table and indexes. On step 2, you \nshould only create the indexes you're clustering on, and create the rest \nof them after clustering.\n\nOr even better, generate and load the data in the right order to start \nwith, so you don't need to cluster at all.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 23 Apr 2007 10:46:45 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TPC-H Scaling Factors X PostgreSQL Cluster Command" }, { "msg_contents": "Hi Heikki,\n\nThanks for answering! :)\n\n I don't get how creating only the indexes i cluster on would improve my\ncluster command perfomance. 
I believed that all other indexes wouldn't\ninterfere because so far they're created in a fashionable time and they\ndon't refer to any field/column in the orders/lineitem table. Could you\nexplain me again?\n\nAs for the load, when you say the right order to start, you mean i should\norder the load file by the index field in the table before loading it?\n\nThanks in advance,\nNelson P Kotowski Filho.\n\nOn 4/23/07, Heikki Linnakangas <[email protected]> wrote:\n>\n> Nelson Kotowski wrote:\n> > So far, i need to do it in three different scale factors (1, 2 and 5GB\n> > databases).\n> >\n> > My build process comprehends creating the tables without any foreign\n> keys,\n> > indexes, etc. - Running OK!\n> > Then, i load the data from the flat files generated through DBGEN\n> software\n> > into these tables. - Running OK!\n> >\n> > Finally, i run a \"optimize\" script that does the following:\n> >\n> > - Alter the tables to add the mandatory foreign keys;\n> > - Create all mandatory indexes;\n> > - Cluster the orders table by the orders table index;\n> > - Cluster the lineitem table by the lineitem table index;\n> > - Vacuum the database;\n> > - Analyze statistics.\n>\n> Cluster will completely rewrite the table and indexes. On step 2, you\n> should only create the indexes you're clustering on, and create the rest\n> of them after clustering.\n>\n> Or even better, generate and load the data in the right order to start\n> with, so you don't need to cluster at all.\n>\n> --\n> Heikki Linnakangas\n> EnterpriseDB http://www.enterprisedb.com\n>\n\nHi Heikki,Thanks for answering! :) I don't get how creating only the indexes i cluster on would improve my cluster command perfomance. I believed that all other indexes wouldn't interfere because so far they're created in a fashionable time and they don't refer to any field/column in the orders/lineitem table. Could you explain me again?\nAs for the load, when you say the right order to start, you mean i should order the load file by the index field in the table before loading it?Thanks in advance,Nelson P Kotowski Filho.\nOn 4/23/07, Heikki Linnakangas <[email protected]> wrote:\nNelson Kotowski wrote:> So far, i need to do it in three different scale factors (1, 2 and 5GB> databases).>> My build process comprehends creating the tables without any foreign keys,> indexes, etc. - Running OK!\n> Then, i load the data from the flat files generated through DBGEN software> into these tables. - Running OK!>> Finally, i run a \"optimize\" script that does the following:>\n> - Alter the tables to add the mandatory foreign keys;> - Create all mandatory indexes;> - Cluster the orders table by the orders table index;> - Cluster the lineitem table by the lineitem table index;\n> - Vacuum the database;> - Analyze statistics.Cluster will completely rewrite the table and indexes. 
On step 2, youshould only create the indexes you're clustering on, and create the rest\nof them after clustering.Or even better, generate and load the data in the right order to startwith, so you don't need to cluster at all.--   Heikki Linnakangas   EnterpriseDB   \nhttp://www.enterprisedb.com", "msg_date": "Mon, 23 Apr 2007 12:52:36 -0300", "msg_from": "\"Nelson Kotowski\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: TPC-H Scaling Factors X PostgreSQL Cluster Command" }, { "msg_contents": ">>> On Mon, Apr 23, 2007 at 10:52 AM, in message\n<[email protected]>, \"Nelson\nKotowski\" <[email protected]> wrote: \n> \n> I don't get how creating only the indexes i cluster on would improve my\n> cluster command perfomance. I believed that all other indexes wouldn't\n> interfere because so far they're created in a fashionable time and they\n> don't refer to any field/column in the orders/lineitem table. Could you\n> explain me again?\n \nWhat a CLUSTER command does is to read through the table in the sequence specified by the index (using the index) and copy the data into a new copy of the table. It then applies all of the permissions, constraints, etc. from the original table to the copy and builds all the same indexes as were on the original table. (You can't use the same indexes, because the data is shifted around to new spots.) The new copy of the table then takes the place of the original. If you build indexes and then cluster, you throw away the results of the work from the original build, and do it all over again.\n \n> As for the load, when you say the right order to start, you mean i should\n> order the load file by the index field in the table before loading it?\n \nIf you load the rows in the same order that the index would read them during the cluster, there is no need to cluster and no benefit from doing so.\n \n-Kevin\n \n\n\n", "msg_date": "Mon, 23 Apr 2007 13:26:07 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TPC-H Scaling Factors X PostgreSQL Cluster\n\tCommand" }, { "msg_contents": "On Sat, 21 Apr 2007, Nelson Kotowski wrote:\n\n> I identified that the cluster command over the lineitem table (cluster \n> idx_lineitem on lineitem) is the responsible. I got to this conclusion \n> because when i run it in the 1GB and 2GB database i am able to complete \n> this script in 10 and 30 minutes each. But when i run this command over \n> the 5GB database, it simply seems as it won't end.\n\nHave you looked in the database log files for messages? Unless you \nchanged some other parameters from the defaults that you didn't mention, \nI'd expect you've got a constant series of \"checkpoint occuring too \nfrequently\" errors in there, which would be a huge slowdown on your index \nrebuild. Slowdowns from checkpoints would get worse with an increase of \nshared_buffers, as you report.\n\nThe default setting for checkpoint_segments of 3 is extremely low for even \na 1GB database. Try increasing that to 30, restart the server, and \nrebuild the index to see how much the 1GB case speeds up. 
If it's \nsignificantly faster (it should be), try the 5GB one again.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 23 Apr 2007 23:39:42 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TPC-H Scaling Factors X PostgreSQL Cluster Command" }, { "msg_contents": "Greg Smith wrote:\n> On Sat, 21 Apr 2007, Nelson Kotowski wrote:\n> \n>> I identified that the cluster command over the lineitem table (cluster \n>> idx_lineitem on lineitem) is the responsible. I got to this conclusion \n>> because when i run it in the 1GB and 2GB database i am able to \n>> complete this script in 10 and 30 minutes each. But when i run this \n>> command over the 5GB database, it simply seems as it won't end.\n> \n> Have you looked in the database log files for messages? Unless you \n> changed some other parameters from the defaults that you didn't mention, \n> I'd expect you've got a constant series of \"checkpoint occuring too \n> frequently\" errors in there, which would be a huge slowdown on your \n> index rebuild. Slowdowns from checkpoints would get worse with an \n> increase of shared_buffers, as you report.\n\nIndex builds don't write WAL, unless archive_command has been set. A \nhigher shared_buffers setting can hurt index build performance, but for \na different reason: the memory spent on shared_buffers can't be used for \nsorting and caching the sort tapes.\n\n> The default setting for checkpoint_segments of 3 is extremely low for \n> even a 1GB database. Try increasing that to 30, restart the server, and \n> rebuild the index to see how much the 1GB case speeds up. If it's \n> significantly faster (it should be), try the 5GB one again.\n\nA good advice, but it's unlikely to make a difference at load time.\n\nBTW: With CVS HEAD, if you create the table in the same transaction (or \nTRUNCATE) as you load the data, the COPY will skip writing WAL which can \ngive a nice speedup.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 24 Apr 2007 09:52:38 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TPC-H Scaling Factors X PostgreSQL Cluster Command" } ]
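Pulling the replies above together, the reordered "optimize" step might look roughly like the sketch below. The column lists are guesses (the thread never shows how idx_lineitem is defined; l_orderkey is just the usual TPC-H choice), and the checkpoint_segments increase Greg suggests is a postgresql.conf change not shown here.

    -- 1. Build only the index CLUSTER will follow:
    CREATE INDEX idx_lineitem ON lineitem (l_orderkey);

    -- 2. Rewrite the table in index order (CLUSTER rebuilds every existing index,
    --    which is why nothing else has been created yet):
    CLUSTER idx_lineitem ON lineitem;

    -- 3. Only now add the remaining indexes and the foreign keys:
    CREATE INDEX idx_lineitem_partkey ON lineitem (l_partkey, l_suppkey);             -- illustrative
    ALTER TABLE lineitem ADD FOREIGN KEY (l_orderkey) REFERENCES orders (o_orderkey); -- illustrative

    -- 4. CLUSTER leaves no dead rows behind, so refreshing statistics is enough:
    ANALYZE lineitem;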
[ { "msg_contents": "I have two tables, staff (530 rows) and location (2.5 million rows). I \ndo a query that joins the two together, as so:\n\nSELECT s.ProprietorId, l.LocationId, s.RoleId\n FROM Location l\n INNER JOIN (\n SELECT *\n FROM Staff\n ) s ON l.ProprietorId = s.ProprietorId\n WHERE s.UserId = 123456\n AND s.LocationId IS NULL\n\nIgnore the fact that it's a subquery -- the query plan is the same if \nits a straight JOIN, and I'm going to use the subquery to demonstrate \nsomething interesting.\n\nAnyways, this takes ~45 seconds to run, and returns 525 rows (just about \n1 per record in the Staff table; 5 records are not for that user are so \nare excluded). The EXPLAIN is:\n\nNested Loop (cost=243.50..34315.32 rows=10286 width=12)\n -> Subquery Scan s (cost=0.00..21.93 rows=1 width=8)\n Filter: ((userid = 123456) AND (locationid IS NULL))\n -> Limit (cost=0.00..15.30 rows=530 width=102)\n -> Seq Scan on staff (cost=0.00..15.30 rows=530 width=102)\n -> Bitmap Heap Scan on \"location\" l (cost=243.50..34133.68 \nrows=12777 width=8)\n Recheck Cond: (s.proprietorid = l.proprietorid)\n -> Bitmap Index Scan on idx_location_proprietorid_locationid \n(cost=0.00..240.30 rows=12777 width=0)\n Index Cond: (s.proprietorid = l.proprietorid)\n\nThe EXPLAIN ANALYZE is:\n\nHash Join (cost=23.16..129297.25 rows=2022281 width=12) (actual \ntime=62.315..48632.406 rows=525 loops=1)\n Hash Cond: (l.proprietorid = staff.proprietorid)\n -> Seq Scan on \"location\" l (cost=0.00..101337.11 rows=2057111 \nwidth=8) (actual time=0.056..44504.431 rows=2057111 loops=1)\n -> Hash (cost=16.63..16.63 rows=523 width=8) (actual \ntime=46.411..46.411 rows=525 loops=1)\n -> Seq Scan on staff (cost=0.00..16.63 rows=523 width=8) \n(actual time=0.022..45.428 rows=525 loops=1)\n Filter: ((userid = 123456) AND (locationid IS NULL))\nTotal runtime: 48676.282 ms\n\nNow, the interesting thing is, if I add \"LIMIT 5000\" into that inner \nsubquery on the staff table, it no longer seq scans location, and the \nwhole thing runs in less than a second.\n\nSELECT s.ProprietorId, l.LocationId, s.RoleId\n FROM Location l\n INNER JOIN (\n SELECT *\n FROM Staff\n LIMIT 5000 -- Only change; remember, this \ntable -- only has 530 rows\n ) s ON l.ProprietorId = s.ProprietorId\n WHERE s.UserId = 123456\n AND s.LocationId IS NULL\n\nEXPLAIN:\n\nNested Loop (cost=243.50..34315.32 rows=10286 width=12)\n -> Subquery Scan s (cost=0.00..21.93 rows=1 width=8)\n Filter: ((userid = 123456) AND (locationid IS NULL))\n -> Limit (cost=0.00..15.30 rows=530 width=102)\n -> Seq Scan on staff (cost=0.00..15.30 rows=530 width=102)\n -> Bitmap Heap Scan on \"location\" l (cost=243.50..34133.68 \nrows=12777 width=8)\n Recheck Cond: (s.proprietorid = l.proprietorid)\n -> Bitmap Index Scan on idx_location_proprietorid_locationid \n(cost=0.00..240.30 rows=12777 width=0)\n Index Cond: (s.proprietorid = l.proprietorid)\n\nEXPLAIN ANALYZE:\n\nNested Loop (cost=243.50..34315.32 rows=10286 width=12) (actual \ntime=74.097..569.372 rows=525 loops=1)\n -> Subquery Scan s (cost=0.00..21.93 rows=1 width=8) (actual \ntime=16.452..21.092 rows=525 loops=1)\n Filter: ((userid = 123456) AND (locationid IS NULL))\n -> Limit (cost=0.00..15.30 rows=530 width=102) (actual \ntime=16.434..19.128 rows=530 loops=1)\n -> Seq Scan on staff (cost=0.00..15.30 rows=530 \nwidth=102) (actual time=16.429..17.545 rows=530 loops=1)\n -> Bitmap Heap Scan on \"location\" l (cost=243.50..34133.68 \nrows=12777 width=8) (actual time=1.027..1.029 rows=1 loops=525)\n Recheck Cond: (s.proprietorid 
= l.proprietorid)\n -> Bitmap Index Scan on idx_location_proprietorid_locationid \n(cost=0.00..240.30 rows=12777 width=0) (actual time=0.151..0.151 rows=1 \nloops=525)\n Index Cond: (s.proprietorid = l.proprietorid)\nTotal runtime: 570.868 ms\n\nThis confuses me. As far as I can tell, the EXPLAIN output is the same \nregardless of whether LIMIT 5000 is in there or not. However, I don't \nknow why a) the EXPLAIN ANALYZE plan is different in the first case, \nwhere there is no LIMIT 5000, or b) why adding a LIMIT 5000 onto a table \nwould change anything when the table has only 530 rows in it. \nFurthermore, I can repeat this experiment over and over, so I know that \nits not caching. Removing the LIMIT 5000 returns performance to > 45 \nseconds.\n\nI've ANALYZEd both tables, so I'm relatively certain statistics are up \nto date. This is test data, so there are no ongoing \ninserts/updates/deletes -- only selects.\n\nI'd really prefer this query run in < 1 second rather than > 45, but I'd \nreally like to do that without having hacks like adding in pointless \nLIMIT clauses.\n\nAny help would be much appreciated.\n\n--Colin McGuigan\n\n", "msg_date": "Sat, 21 Apr 2007 10:33:38 -0500", "msg_from": "Colin McGuigan <[email protected]>", "msg_from_op": true, "msg_subject": "Odd problem with planner choosing seq scan" }, { "msg_contents": "Colin McGuigan <[email protected]> writes:\n> -> Subquery Scan s (cost=0.00..21.93 rows=1 width=8)\n> Filter: ((userid = 123456) AND (locationid IS NULL))\n> -> Limit (cost=0.00..15.30 rows=530 width=102)\n> -> Seq Scan on staff (cost=0.00..15.30 rows=530 width=102)\n\nThere does seem to be a bug here, but not the one you think: the rows=1\nestimate for the subquery node seems a bit silly given that it knows\nthere are 530 rows in the underlying query. I'm not sure how bright the\ncode is about finding stats for variables emitted by a subquery, but\neven with totally default estimates it should not come up with a\nselectivity of 1/500 for the filter. Unfortunately, fixing that is\nlikely to bias it further away from the plan you want ...\n\n> Furthermore, I can repeat this experiment over and over, so I know that \n> its not caching.\n\nYou mean it *is* caching.\n\n> I'd really prefer this query run in < 1 second rather than > 45, but I'd \n> really like to do that without having hacks like adding in pointless \n> LIMIT clauses.\n\nThe right way to do it is to adjust the planner cost parameters.\nThe standard values of those are set on the assumption of\ntables-much-bigger-than-memory, a situation in which the planner's\npreferred plan probably would be the best. What you are testing here\nis most likely a situation in which the whole of both tables fits in\nRAM. If that pretty much describes your production situation too,\nthen you should decrease seq_page_cost and random_page_cost. 
I find\nsetting them both to 0.1 produces estimates that are more nearly in\nline with true costs for all-in-RAM situations.\n\n(Pre-8.2, there's no seq_page_cost, so instead set random_page_cost\nto 1 and inflate all the cpu_xxx cost constants by 10.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 21 Apr 2007 13:06:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Odd problem with planner choosing seq scan " }, { "msg_contents": "Tom Lane wrote:\n> The right way to do it is to adjust the planner cost parameters.\n> The standard values of those are set on the assumption of\n> tables-much-bigger-than-memory, a situation in which the planner's\n> preferred plan probably would be the best. What you are testing here\n> is most likely a situation in which the whole of both tables fits in\n> RAM. If that pretty much describes your production situation too,\n> then you should decrease seq_page_cost and random_page_cost. I find\n> setting them both to 0.1 produces estimates that are more nearly in\n> line with true costs for all-in-RAM situations.\n> \nI know I can do it by adjusting cost parameters, but I was really \ncurious as to why adding a \"LIMIT 5000\" onto a SELECT from a table with \nonly 530 rows in it would affect matters at all. The plan the planner \nuses when LIMIT 5000 is on is the one I want, without adjusting any \nperformance costs. It doesn't seem to matter what the limit is -- LIMIT \n99999 also produces the desired plan, whereas no LIMIT produces the \nundesirable plan.\n\n--Colin McGuigan\n", "msg_date": "Sat, 21 Apr 2007 20:48:22 -0500", "msg_from": "Colin McGuigan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Odd problem with planner choosing seq scan" }, { "msg_contents": "Colin McGuigan <[email protected]> writes:\n> I know I can do it by adjusting cost parameters, but I was really \n> curious as to why adding a \"LIMIT 5000\" onto a SELECT from a table with \n> only 530 rows in it would affect matters at all.\n\nThe LIMIT prevents the sub-select from being flattened into the main\nquery. In the current code this has a side-effect of preventing any\nstatistical information from being used to estimate the selectivity\nof the filter conditions --- so you get a default rowcount estimate\nthat's way too small, and that changes the shape of the join plan.\nIt's giving you the \"right\" answer for entirely the wrong reason.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 21 Apr 2007 22:45:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Odd problem with planner choosing seq scan " } ]
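For anyone who wants to try Tom's suggestion directly before touching postgresql.conf, the settings can be changed per session (8.2 syntax; 0.1 is the experimental value mentioned above, not a general recommendation):

    SET seq_page_cost = 0.1;     -- this parameter is new in 8.2
    SET random_page_cost = 0.1;

    EXPLAIN ANALYZE
    SELECT s.proprietorid, l.locationid, s.roleid
      FROM location l
      JOIN staff s ON l.proprietorid = s.proprietorid
     WHERE s.userid = 123456
       AND s.locationid IS NULL;

    -- Put the defaults back when done comparing plans:
    RESET seq_page_cost;
    RESET random_page_cost;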
[ { "msg_contents": "\nHi all,\n\nI'm a bit new to PostgreSQL and database design in general so forgive me\nfor asking stupid questions. ;-)\n\nI've setup a PostgreSQL database on a Linux machine (2 processor, 1GB\nmem) and while the database itself resides on a NetApp filer, via NFS,\nthis doesn't seem to impact the performance to drastically.\n\nI basically use it for indexed tables without any relation between 'em\nso far this has worked perfectly.\n\nFor statistics I've created the following table:\nvolume varchar(30),\nqtree varchar(255),\nfile varchar(512),\nctime timestamp,\nmtime timestamp,\natime timestamp\nannd created separate indexes on the volume and qtree columns.\n\nThis table gets filled with the copy command and about 2 hours and\nsome 40 million records later I issue a reindex command to make sure the\nindexes are accurate. (for good interest, there are some 35 values for\nvolume and some 1450 for qtrees)\n\nWhile filling of this table, my database grows to an (expected) 11.5GB.\n\nThe problems comes when I try to do a query without using a where clause\nbecause by then, it completely discards the indexes and does a complete\ntable scan which takes over half an hour! (40.710.725 rows, 1110258\npages, 1715 seconds)\n\nI've tried several things but doing a query like:\nselect distinct volume from project_access_times\nor\nselect distinct qtree from project_access_times\nalways result in a full sequential table scan even after a 'vacuum' and\n'vacuum analyze'.\n\nI even tried the 'set enable_seqscan = no' but it still does a full\ntable scan instead of using the indexes.\n\nCan anyone tell me if this is normal behaviour (half an hour seems over\nthe top to me) and if not, what I can do about it.\n\nRegards,\n\nJeroen Kleijer\n", "msg_date": "Sat, 21 Apr 2007 22:17:42 +0200", "msg_from": "Jeroen Kleijer <[email protected]>", "msg_from_op": true, "msg_subject": "not using indexes on large table" }, { "msg_contents": "On Saturday 21 April 2007 22:17:42 Jeroen Kleijer wrote:\n> I've tried several things but doing a query like:\n> select distinct volume from project_access_times\n\nI'm new too but an \"order by volume\" could help!\n\nIn any case maybe a different table design with a separate table for the\n\"distinct volumes\" could help even more.\n\n-- \nVincenzo Romano\n----\nMaybe Computers will never become as intelligent as\nHumans. For sure they won't ever become so stupid.\n", "msg_date": "Sat, 21 Apr 2007 22:58:06 +0200", "msg_from": "Vincenzo Romano <[email protected]>", "msg_from_op": false, "msg_subject": "Re: not using indexes on large table" }, { "msg_contents": "* Jeroen Kleijer <[email protected]> [070421 23:10]:\n> \n> Hi all,\n> \n> I'm a bit new to PostgreSQL and database design in general so forgive me\n> for asking stupid questions. 
;-)\n> \n> I've setup a PostgreSQL database on a Linux machine (2 processor, 1GB\n> mem) and while the database itself resides on a NetApp filer, via NFS,\n> this doesn't seem to impact the performance to drastically.\n> \n> I basically use it for indexed tables without any relation between 'em\n> so far this has worked perfectly.\n> \n> For statistics I've created the following table:\n> volume varchar(30),\n> qtree varchar(255),\n> file varchar(512),\n> ctime timestamp,\n> mtime timestamp,\n> atime timestamp\n> annd created separate indexes on the volume and qtree columns.\n> \n> This table gets filled with the copy command and about 2 hours and\n> some 40 million records later I issue a reindex command to make sure the\n> indexes are accurate. (for good interest, there are some 35 values for\n> volume and some 1450 for qtrees)\n> \n> While filling of this table, my database grows to an (expected) 11.5GB.\n> \n> The problems comes when I try to do a query without using a where clause\n> because by then, it completely discards the indexes and does a complete\n> table scan which takes over half an hour! (40.710.725 rows, 1110258\n> pages, 1715 seconds)\n> \n> I've tried several things but doing a query like:\n> select distinct volume from project_access_times\n> or\n> select distinct qtree from project_access_times\n> always result in a full sequential table scan even after a 'vacuum' and\n> 'vacuum analyze'.\n\nTry:\nselect volume from project_access_times group by volume;\n\nAnd in any case, running a database over NFS smells like a dead rat.\n\nHopefully, you've mounted it hard, but still NFS does not have normal\nsemantics, e.g. locking, etc.\n\nNext thing, as you've got only one client for that NFS mount, try to\nmake it cache meta data aggressively. The ac prefixed options in\nnfs(5) come to mind.\n\nAndreas\n", "msg_date": "Sat, 21 Apr 2007 23:17:04 +0200", "msg_from": "Andreas Kostyrka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: not using indexes on large table" }, { "msg_contents": "> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Jeroen Kleijer\n>\n> The problems comes when I try to do a query without using a \n> where clause\n> because by then, it completely discards the indexes and does \n> a complete\n> table scan which takes over half an hour! (40.710.725 rows, 1110258\n> pages, 1715 seconds)\n> \n> I've tried several things but doing a query like:\n> select distinct volume from project_access_times\n> or\n> select distinct qtree from project_access_times\n> always result in a full sequential table scan even after a \n> 'vacuum' and\n> 'vacuum analyze'.\n\nTo my knowledge Postgres doesn't use indexes for distinct queries or\ngrouping. Also you are getting horrible IO performance. Our old slow test\nmachine can scan a table of 12 million rows in 100 seconds, and our\nproduction server can do the same in 20 seconds. If possible, I would try\nrunning the same thing on your local hard drive. That way you can see how\nmuch the netapp and NFS are slowing you down. 
Although in the end if you\nneed very fast distinct queries, you will need to maintain a separate table.\n\nDave\n\n", "msg_date": "Mon, 23 Apr 2007 09:56:46 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: not using indexes on large table" }, { "msg_contents": "On Sat, 2007-04-21 at 15:17, Jeroen Kleijer wrote:\n> Hi all,\n> \n> I'm a bit new to PostgreSQL and database design in general so forgive me\n> for asking stupid questions. ;-)\n> \n> I've setup a PostgreSQL database on a Linux machine (2 processor, 1GB\n> mem) and while the database itself resides on a NetApp filer, via NFS,\n> this doesn't seem to impact the performance to drastically.\n\nWhat does a benchmark like bonnie++ say about your performance? And I\nhope your data's not too important to you, because I've had LOTS of\nproblems with NFS mounts in the past with pgsql. Generally speaking,\nNFS can be moderately fast, or moderately reliable (for databases) but\nit generally isn't both at the same time.\n\nConsidering the cost of a quartet of 80 Gig SATA drives ($59x4) and a\ndecent RAID controller (LSI, Areca at ~$450 or so) you could be getting\nVERY good performance out of your system with real reliability at the\nsame time on a RAID-10 volume. Then use the NetApp for backup. That's\nwhat I'd do.\n\n> I basically use it for indexed tables without any relation between 'em\n> so far this has worked perfectly.\n> \n> For statistics I've created the following table:\n> volume varchar(30),\n> qtree varchar(255),\n> file varchar(512),\n> ctime timestamp,\n> mtime timestamp,\n> atime timestamp\n> annd created separate indexes on the volume and qtree columns.\n\nYou might want to look at setting this up as two or three tables with a\nview and update triggers to look like one table to the user, and the\nqtree and file in their own tables. that would make your main stats\ntable only one varchar(30) and 3 timestamps wide. Especially if qtree\nand file tend to be large. If one of those tends to be small and the\nother large, then look at moving just the large one into its own table. \nThe reasons for this will be obvious later on in this post.\n\n> The problems comes when I try to do a query without using a where clause\n> because by then, it completely discards the indexes and does a complete\n> table scan which takes over half an hour! (40.710.725 rows, 1110258\n> pages, 1715 seconds)\n\nYes it does, and it should.\n\nWhy? Visibility. This has been discussed quite a bit on the lists. \nBecause of the particular design for PostgreSQL's MVCC implementation,\nindexes cannot contain visibility information on tables. Therefore,\nevery time the db looks in an index, it then has to look in the table\nanyway to find the right version of that tuple and to see if it's\nactually valid for your snapshot.\n\n> Can anyone tell me if this is normal behaviour (half an hour seems over\n> the top to me) and if not, what I can do about it.\n\nYes this is normal behaviour. It's just how PostgreSQL works. There\nare some workarounds our there that involve updating extra tables that\ncarry things like counts etc... Each of these cost something in\noverhead.\n\nThere are two distinct problems here. One is that you're tying to use\nPostgreSQL in a role where perhaps a different database might be a\nbetter choice. MSSQL Server or DB2 or even MySQL might be a better\nchoice depending on what you want to do with your data.\n\nThe other problem is that you're using an NFS server. 
Either go whole\nhog and buy a SAN with dual 2G nics in it or put local storage underneath\nyour machine with LOTS of hard drives in RAID-10.\n\nNote that while other databases may be better at some of the queries\nyou're trying to run, it might be that PostgreSQL is still a good choice\nbecause of other queries, and you can do rollups of the data that it's\nslow at while using it for the things it is good at.\n\nI've got a test db on my workstation that's pretty big at 42,463,248\nrows and taking up 12 Gigs just for the table, 7.7 Gigs in indexes, and\na select count(*) on it takes 489 seconds. I try not to do things like\nthat. It covers the last 9 months of statistics. \n\nThis query:\n\nselect a, b, count(*) from summary where atime > '2006-06-16' and\n perspective = 'yada'\n group by a, b\n order by a, b\n\ntook 300 seconds, which is typical.\n\nThis is on a Workstation with one CPU, 2 gigs of ram, and a 150 Gig SATA\ndrive. It's running X Windows, with Evolution, firefox, and a dozen\nother user apps up and running. Our \"real\" server, with 4 disks in a\nRAID 5 on a mediocre RAID controller but with 2 CPUs and 6 gigs of ram,\nstomps my little work station into the ground. \n\nI have the feeling my laptop with 512 Meg of ram and a 1.6 GHz CPU would\nbe faster than your current server.\n", "msg_date": "Thu, 26 Apr 2007 14:17:49 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: not using indexes on large table" } ]
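To round the thread off, the two suggestions in runnable form. The GROUP BY statement is the column-level version of the query proposed above, and the side table (an invented name) only helps if it is acceptable for it to lag the base table until the next refresh after a bulk COPY.

    -- GROUP BY variant of "give me the distinct volumes":
    SELECT volume
      FROM project_access_times
     GROUP BY volume;

    -- Pre-computed list of distinct values, refreshed after each bulk load:
    CREATE TABLE project_volumes AS
        SELECT DISTINCT volume FROM project_access_times;

    -- ...later, after the next COPY into project_access_times:
    TRUNCATE project_volumes;
    INSERT INTO project_volumes
        SELECT DISTINCT volume FROM project_access_times;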