[ { "msg_contents": "Hi there.\n\nIf you just wanted PostgreSQL to go as fast as possible WITHOUT any\ncare for your data (you accept 100% dataloss and datacorruption if any\nerror should occur), what settings should you use then?\n", "msg_date": "Fri, 5 Nov 2010 11:59:43 +0100", "msg_from": "A B <[email protected]>", "msg_from_op": true, "msg_subject": "Running PostgreSQL as fast as possible no matter the consequences" }, { "msg_contents": "A B <gentosaker 'at' gmail.com> writes:\n\n> Hi there.\n>\n> If you just wanted PostgreSQL to go as fast as possible WITHOUT any\n> care for your data (you accept 100% dataloss and datacorruption if any\n> error should occur), what settings should you use then?\n\nDon't use PostgreSQL, just drop your data, you will end up with\nthe same results and be even faster than any use of PostgreSQL.\nIf anyone needs data, then just say you had data corruption, and\nthat since 100% dataloss is accepted, then all's well.\n\n-- \nGuillaume Cottenceau\n", "msg_date": "Fri, 05 Nov 2010 12:11:25 +0100", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running PostgreSQL as fast as possible no matter the consequences" }, { "msg_contents": "On 5 November 2010 10:59, A B <[email protected]> wrote:\n\n> Hi there.\n>\n> If you just wanted PostgreSQL to go as fast as possible WITHOUT any\n> care for your data (you accept 100% dataloss and datacorruption if any\n> error should occur), what settings should you use then?\n>\n>\nTurn off fsync and full_page_writes (i.e. running with scissors).\n\nAlso depends on what you mean by \"as fast as possible\". Fast at doing\nwhat? Bulk inserts, selecting from massive tables?\n\n-- \nThom Brown\nTwitter: @darkixion\nIRC (freenode): dark_ixion\nRegistered Linux user: #516935\n\nOn 5 November 2010 10:59, A B <[email protected]> wrote:\n\nHi there.\n\nIf you just wanted PostgreSQL to go as fast as possible WITHOUT any\ncare for your data (you accept 100% dataloss and datacorruption if any\nerror should occur), what settings should you use then?\nTurn off fsync and full_page_writes (i.e. running with scissors).Also depends on what you mean by \"as fast as possible\".  Fast at doing what?  Bulk inserts, selecting from massive tables?\n-- Thom BrownTwitter: @darkixionIRC (freenode): dark_ixionRegistered Linux user: #516935", "msg_date": "Fri, 5 Nov 2010 11:14:43 +0000", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running PostgreSQL as fast as possible no matter the consequences" }, { "msg_contents": "On 5 November 2010 11:14, Thom Brown <[email protected]> wrote:\n\n> On 5 November 2010 10:59, A B <[email protected]> wrote:\n>\n>> Hi there.\n>>\n>> If you just wanted PostgreSQL to go as fast as possible WITHOUT any\n>> care for your data (you accept 100% dataloss and datacorruption if any\n>> error should occur), what settings should you use then?\n>>\n>>\n> Turn off fsync and full_page_writes (i.e. running with scissors).\n>\n> Also depends on what you mean by \"as fast as possible\". Fast at doing\n> what? 
Bulk inserts, selecting from massive tables?\n>\n>\nOh, and turn synchronous_commit off too.\n\n-- \nThom Brown\nTwitter: @darkixion\nIRC (freenode): dark_ixion\nRegistered Linux user: #516935\n\nOn 5 November 2010 11:14, Thom Brown <[email protected]> wrote:\nOn 5 November 2010 10:59, A B <[email protected]> wrote:\n\n\nHi there.\n\nIf you just wanted PostgreSQL to go as fast as possible WITHOUT any\ncare for your data (you accept 100% dataloss and datacorruption if any\nerror should occur), what settings should you use then?\nTurn off fsync and full_page_writes (i.e. running with scissors).Also depends on what you mean by \"as fast as possible\".  Fast at doing what?  Bulk inserts, selecting from massive tables?\nOh, and turn synchronous_commit off too.-- Thom BrownTwitter: @darkixionIRC (freenode): dark_ixionRegistered Linux user: #516935", "msg_date": "Fri, 5 Nov 2010 11:15:51 +0000", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running PostgreSQL as fast as possible no matter the consequences" }, { "msg_contents": "On 5 November 2010 11:59, A B <[email protected]> wrote:\n\n> Hi there.\n>\n> If you just wanted PostgreSQL to go as fast as possible WITHOUT any\n> care for your data (you accept 100% dataloss and datacorruption if any\n> error should occur), what settings should you use then?\n>\n>\n\nI'm just curious, what do you need that for?\n\n\nregards\nSzymon\n\nOn 5 November 2010 11:59, A B <[email protected]> wrote:\nHi there.\n\nIf you just wanted PostgreSQL to go as fast as possible WITHOUT any\ncare for your data (you accept 100% dataloss and datacorruption if any\nerror should occur), what settings should you use then?I'm just curious, what do you need that for?\nregardsSzymon", "msg_date": "Fri, 5 Nov 2010 12:23:36 +0100", "msg_from": "Szymon Guz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running PostgreSQL as fast as possible no matter the consequences" }, { "msg_contents": "> Turn off fsync and full_page_writes (i.e. running with scissors).\n> Also depends on what you mean by \"as fast as possible\".  Fast at doing\n> what?  Bulk inserts, selecting from massive tables?\n\nI guess some tuning has to be done to make it work well with the\nparticular workload (in this case most selects). But thanks for the\nsuggestions on the more general parameters.\n\n\"running with scissors\" sounds nice :-)\n", "msg_date": "Fri, 5 Nov 2010 12:24:13 +0100", "msg_from": "A B <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Running PostgreSQL as fast as possible no matter the consequences" }, { "msg_contents": "On 05/11/10 18:59, A B wrote:\n> Hi there.\n> \n> If you just wanted PostgreSQL to go as fast as possible WITHOUT any\n> care for your data (you accept 100% dataloss and datacorruption if any\n> error should occur), what settings should you use then?\n\nOthers have suggested appropriate parameters (\"running with scissors\").\n\nI'd like to add something else to the discussion: have you looked at\nmemcached yet? Or pgpool? 
If you haven't, start there.\n\n-- \nCraig Ringer\n\nTech-related writing: http://soapyfrogs.blogspot.com/\n", "msg_date": "Fri, 05 Nov 2010 19:27:41 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running PostgreSQL as fast as possible no matter the consequences" }, { "msg_contents": "On Fri, Nov 5, 2010 at 13:11, Guillaume Cottenceau <[email protected]> wrote:\n> Don't use PostgreSQL, just drop your data, you will end up with\n> the same results and be even faster than any use of PostgreSQL.\n> If anyone needs data, then just say you had data corruption, and\n> that since 100% dataloss is accepted, then all's well.\n\nYou're not helping. There are legitimate reasons for trading off\nsafety for performance.\n\nRegards,\nMarti\n", "msg_date": "Fri, 5 Nov 2010 13:30:41 +0200", "msg_from": "Marti Raudsepp <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running PostgreSQL as fast as possible no matter the consequences" }, { "msg_contents": ">> If you just wanted PostgreSQL to go as fast as possible WITHOUT any\n>> care for your data (you accept 100% dataloss and datacorruption if any\n>> error should occur), what settings should you use then?\n>>\n>\n>\n> I'm just curious, what do you need that for?\n>\n> regards\n> Szymon\n\nI was just thinking about the case where I will have almost 100%\nselects, but still needs something better than a plain key-value\nstorage so I can do some sql queries.\nThe server will just boot, load data, run, hopefully not crash but if\nit would, just start over with load and run.\n", "msg_date": "Fri, 5 Nov 2010 12:32:39 +0100", "msg_from": "A B <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Running PostgreSQL as fast as possible no matter the consequences" }, { "msg_contents": ">> If you just wanted PostgreSQL to go as fast as possible WITHOUT any\n>> care for your data (you accept 100% dataloss and datacorruption if any\n>> error should occur), what settings should you use then?\n>\n> Others have suggested appropriate parameters (\"running with scissors\").\n>\n> I'd like to add something else to the discussion: have you looked at\n> memcached yet? Or pgpool? If you haven't, start there.\n>\n\nmemcahced has been mentioned in some discussions, but I have not studied it yet.\n", "msg_date": "Fri, 5 Nov 2010 12:36:08 +0100", "msg_from": "A B <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Running PostgreSQL as fast as possible no matter the consequences" }, { "msg_contents": "On Fri, Nov 5, 2010 at 13:32, A B <[email protected]> wrote:\n> I was just thinking about the case where I will have almost 100%\n> selects, but still needs something better than a plain key-value\n> storage so I can do some sql queries.\n> The server will just boot, load data, run,  hopefully not crash but if\n> it would, just start over with load and run.\n\nIf you want fast read queries then changing\nfsync/full_page_writes/synchronous_commit won't help you.\n\nJust follow the regular tuning guide. 
shared_buffers,\neffective_cache_size, work_mem, default_statistics_target can make a\ndifference.\n\nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n\nRegards,\nMarti\n", "msg_date": "Fri, 5 Nov 2010 13:36:34 +0200", "msg_from": "Marti Raudsepp <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running PostgreSQL as fast as possible no matter the consequences" }, { "msg_contents": "On 5 November 2010 11:36, Marti Raudsepp <[email protected]> wrote:\n\n> On Fri, Nov 5, 2010 at 13:32, A B <[email protected]> wrote:\n> > I was just thinking about the case where I will have almost 100%\n> > selects, but still needs something better than a plain key-value\n> > storage so I can do some sql queries.\n> > The server will just boot, load data, run, hopefully not crash but if\n> > it would, just start over with load and run.\n>\n> If you want fast read queries then changing\n> fsync/full_page_writes/synchronous_commit won't help you.\n\n\nYes, those will be for write-performance only, so useless in this case.\n\n-- \nThom Brown\nTwitter: @darkixion\nIRC (freenode): dark_ixion\nRegistered Linux user: #516935\n\nOn 5 November 2010 11:36, Marti Raudsepp <[email protected]> wrote:\nOn Fri, Nov 5, 2010 at 13:32, A B <[email protected]> wrote:\n> I was just thinking about the case where I will have almost 100%\n> selects, but still needs something better than a plain key-value\n> storage so I can do some sql queries.\n> The server will just boot, load data, run,  hopefully not crash but if\n> it would, just start over with load and run.\n\nIf you want fast read queries then changing\nfsync/full_page_writes/synchronous_commit won't help you. Yes, those will be for write-performance only, so useless in this case.-- Thom BrownTwitter: @darkixionIRC (freenode): dark_ixion\n\nRegistered Linux user: #516935", "msg_date": "Fri, 5 Nov 2010 11:41:33 +0000", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running PostgreSQL as fast as possible no matter the consequences" }, { "msg_contents": "Marti Raudsepp <marti 'at' juffo.org> writes:\n\n> On Fri, Nov 5, 2010 at 13:11, Guillaume Cottenceau <[email protected]> wrote:\n>> Don't use PostgreSQL, just drop your data, you will end up with\n>> the same results and be even faster than any use of PostgreSQL.\n>> If anyone needs data, then just say you had data corruption, and\n>> that since 100% dataloss is accepted, then all's well.\n>\n> You're not helping. 
There are legitimate reasons for trading off\n> safety for performance.\n\nCccepting 100% dataloss and datacorruption deserves a little\nreasoning, otherwise I'm afraid I'm right in suggesting it makes\nlittle difference to use PG or to drop data altogether.\n\n-- \nGuillaume Cottenceau\n", "msg_date": "Fri, 05 Nov 2010 13:06:25 +0100", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running PostgreSQL as fast as possible no matter the consequences" }, { "msg_contents": "Marti Raudsepp <marti 'at' juffo.org> writes:\n\n> On Fri, Nov 5, 2010 at 13:32, A B <[email protected]> wrote:\n>> I was just thinking about the case where I will have almost 100%\n>> selects, but still needs something better than a plain key-value\n>> storage so I can do some sql queries.\n>> The server will just boot, load data, run, �hopefully not crash but if\n>> it would, just start over with load and run.\n>\n> If you want fast read queries then changing\n> fsync/full_page_writes/synchronous_commit won't help you.\n\nThat illustrates how knowing the reasoning of this particular\nrequests makes new suggestions worthwhile, while previous ones\nare now seen as useless.\n\n-- \nGuillaume Cottenceau\n", "msg_date": "Fri, 05 Nov 2010 13:08:26 +0100", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running PostgreSQL as fast as possible no matter the consequences" }, { "msg_contents": "On Fri, Nov 5, 2010 at 7:08 AM, Guillaume Cottenceau <[email protected]> wrote:\n> Marti Raudsepp <marti 'at' juffo.org> writes:\n>\n>> On Fri, Nov 5, 2010 at 13:32, A B <[email protected]> wrote:\n>>> I was just thinking about the case where I will have almost 100%\n>>> selects, but still needs something better than a plain key-value\n>>> storage so I can do some sql queries.\n>>> The server will just boot, load data, run,  hopefully not crash but if\n>>> it would, just start over with load and run.\n>>\n>> If you want fast read queries then changing\n>> fsync/full_page_writes/synchronous_commit won't help you.\n>\n> That illustrates how knowing the reasoning of this particular\n> requests makes new suggestions worthwhile, while previous ones\n> are now seen as useless.\n\nI disagree that they are useless - the stated mechanism was \"start,\nload data, and run\". 
Changing the params above won't likely change\nmuch in the 'run' stage but would they help in the 'load' stage?\n\n\n-- \nJon\n", "msg_date": "Fri, 5 Nov 2010 07:12:09 -0500", "msg_from": "Jon Nelson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running PostgreSQL as fast as possible no matter the consequences" }, { "msg_contents": "How about either:-\n\na) Size the pool so all your data fits into it.\n\nb) Use a RAM-based filesystem (ie: a memory disk or SSD) for the\ndata storage [memory disk will be faster] with a Smaller pool\n- Your seed data should be a copy of the datastore on disk filesystem;\nat startup time copy the storage files from the physical to memory.\n\nA bigger gain can probably be had if you have a tightly controlled\nsuite of queries that will be run against the database and you can\nspend the time to tune each to ensure it performs no sequential scans\n(ie: Every query uses index lookups).\n\n\nOn 5 November 2010 11:32, A B <[email protected]> wrote:\n>>> If you just wanted PostgreSQL to go as fast as possible WITHOUT any\n>>> care for your data (you accept 100% dataloss and datacorruption if any\n>>> error should occur), what settings should you use then?\n>>>\n>>\n>>\n>> I'm just curious, what do you need that for?\n>>\n>> regards\n>> Szymon\n>\n> I was just thinking about the case where I will have almost 100%\n> selects, but still needs something better than a plain key-value\n> storage so I can do some sql queries.\n> The server will just boot, load data, run,  hopefully not crash but if\n> it would, just start over with load and run.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \n\n\nNick Lello | Web Architect\no +1 503.284.7581 x418 / +44 (0) 8433309374 | m +44 (0) 7917 138319\nEmail: nick.lello at rentrak.com\nRENTRAK | www.rentrak.com | NASDAQ: RENT\n", "msg_date": "Fri, 5 Nov 2010 12:26:40 +0000", "msg_from": "\"Lello, Nick\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running PostgreSQL as fast as possible no matter the consequences" }, { "msg_contents": "[email protected] (A B) writes:\n> If you just wanted PostgreSQL to go as fast as possible WITHOUT any\n> care for your data (you accept 100% dataloss and datacorruption if any\n> error should occur), what settings should you use then?\n\nUse /dev/null. It is web scale, and there are good tutorials.\n\nBut seriously, there *are* cases where \"blind speed\" is of use. When\nloading data into a fresh database is a good time for this; if things\nfall over, it may be pretty acceptable to start \"from scratch\" with\nmkfs/initdb.\n\nI'd:\n- turn off fsync\n- turn off synchronous commit\n- put as much as possible onto Ramdisk/tmpfs/similar as possible\n-- \noutput = reverse(\"moc.liamg\" \"@\" \"enworbbc\")\nhttp://linuxfinances.info/info/lsf.html\n43% of all statistics are worthless.\n", "msg_date": "Fri, 05 Nov 2010 11:13:34 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running PostgreSQL as fast as possible no matter the consequences" }, { "msg_contents": "On Fri, 2010-11-05 at 11:59 +0100, A B wrote:\n> \n> If you just wanted PostgreSQL to go as fast as possible WITHOUT any\n> care for your data (you accept 100% dataloss and datacorruption if any\n> error should occur), what settings should you use then? \n\nYou can initdb to ramdisk, if you have enough RAM. 
It will fast, really.\n\n-- \nDevrim GÜNDÜZ\nPostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer\nPostgreSQL RPM Repository: http://yum.pgrpms.org\nCommunity: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr\nhttp://www.gunduz.org Twitter: http://twitter.com/devrimgunduz", "msg_date": "Fri, 05 Nov 2010 08:23:35 -0700", "msg_from": "Devrim =?ISO-8859-1?Q?G=DCND=DCZ?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running PostgreSQL as fast as possible no matter the consequences" }, { "msg_contents": "Devrim GÜNDÜZ wrote:\n> On Fri, 2010-11-05 at 11:59 +0100, A B wrote:\n> \n>> If you just wanted PostgreSQL to go as fast as possible WITHOUT any\n>> care for your data (you accept 100% dataloss and datacorruption if any\n>> error should occur), what settings should you use then? \n>> \n>\n> You can initdb to ramdisk, if you have enough RAM. It will fast, really.\n>\n> \nThat is approximately the same thing as the answer to the question \nwhether Ford Taurus can reach 200mph.\nIt can, just once, if you run it down the cliff.\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n", "msg_date": "Fri, 05 Nov 2010 12:25:55 -0400", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running PostgreSQL as fast as possible no matter the consequences" }, { "msg_contents": "On 11/05/2010 07:32 PM, A B wrote:\n\n> The server will just boot, load data, run, hopefully not crash but if\n> it would, just start over with load and run.\n\nHave you looked at VoltDB? It's designed for fast in-memory use.\n\n--\nCraig Ringer\n", "msg_date": "Sun, 07 Nov 2010 08:21:18 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running PostgreSQL as fast as possible no matter the consequences" }, { "msg_contents": "\"Lello, Nick\" <[email protected]> writes:\n> A bigger gain can probably be had if you have a tightly controlled\n> suite of queries that will be run against the database and you can\n> spend the time to tune each to ensure it performs no sequential scans\n> (ie: Every query uses index lookups).\n\nGiven a fixed pool of queries, you can prepare them in advance so that\nyou don't usually pay the parsing and planning costs. 
I've found that\nthe planning is easily more expensive than the executing when all data\nfits in RAM.\n\nEnter pgbouncer and preprepare :\n http://wiki.postgresql.org/wiki/PgBouncer\n http://preprepare.projects.postgresql.org/README.html\n\nRegards,\n-- \nDimitri Fontaine\nhttp://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Mon, 08 Nov 2010 16:57:14 +0100", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running PostgreSQL as fast as possible no matter the consequences" }, { "msg_contents": "Use a replicated setup?\n\nOn Nov 8, 2010 4:21 PM, \"Lello, Nick\" <[email protected]> wrote:\n\nHow about either:-\n\na) Size the pool so all your data fits into it.\n\nb) Use a RAM-based filesystem (ie: a memory disk or SSD) for the\ndata storage [memory disk will be faster] with a Smaller pool\n- Your seed data should be a copy of the datastore on disk filesystem;\nat startup time copy the storage files from the physical to memory.\n\nA bigger gain can probably be had if you have a tightly controlled\nsuite of queries that will be run against the database and you can\nspend the time to tune each to ensure it performs no sequential scans\n(ie: Every query uses index lookups).\n\n\n\nOn 5 November 2010 11:32, A B <[email protected]> wrote:\n>>> If you just wanted PostgreSQL to g...\n--\n\n\nNick Lello | Web Architect\no +1 503.284.7581 x418 / +44 (0) 8433309374 | m +44 (0) 7917 138319\nEmail: nick.lello at rentrak.com\nRENTRAK | www.rentrak.com | NASDAQ: RENT\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to y...\n\nUse a replicated setup?\nOn Nov 8, 2010 4:21 PM, \"Lello, Nick\" <[email protected]> wrote:How about either:-\n\na)   Size the pool so all your data fits into it.\n\nb)   Use a RAM-based filesystem (ie: a memory disk or SSD) for the\ndata storage [memory disk will be faster] with a Smaller pool\n- Your seed data should be a copy of the datastore on disk filesystem;\nat startup time copy the storage files from the physical to memory.\n\nA bigger gain can probably be had if you have a tightly controlled\nsuite of queries that will be run against the database and you can\nspend the time to tune each to ensure it performs no sequential scans\n(ie: Every query uses index lookups).\nOn 5 November 2010 11:32, A B <[email protected]> wrote:>>> If you just wanted PostgreSQL to g...--\n\n\nNick Lello | Web Architect\no +1 503.284.7581 x418 / +44 (0) 8433309374 | m +44 (0) 7917 138319\nEmail: nick.lello at rentrak.com\nRENTRAK | www.rentrak.com | NASDAQ: RENT\n-- Sent via pgsql-performance mailing list ([email protected])To make changes to y...", "msg_date": "Mon, 8 Nov 2010 16:58:13 +0100", "msg_from": "Klaus Ita <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running PostgreSQL as fast as possible no matter the consequences" }, { "msg_contents": "On Fri, Nov 5, 2010 at 8:12 AM, Jon Nelson <[email protected]> wrote:\n> On Fri, Nov 5, 2010 at 7:08 AM, Guillaume Cottenceau <[email protected]> wrote:\n>> Marti Raudsepp <marti 'at' juffo.org> writes:\n>>\n>>> On Fri, Nov 5, 2010 at 13:32, A B <[email protected]> wrote:\n>>>> I was just thinking about the case where I will have almost 100%\n>>>> selects, but still needs something better than a plain key-value\n>>>> storage so I can do some sql queries.\n>>>> The server will just boot, load data, run,  hopefully not crash but if\n>>>> it would, just start over with load and run.\n>>>\n>>> If you want fast read queries then 
changing\n>>> fsync/full_page_writes/synchronous_commit won't help you.\n>>\n>> That illustrates how knowing the reasoning of this particular\n>> requests makes new suggestions worthwhile, while previous ones\n>> are now seen as useless.\n>\n> I disagree that they are useless - the stated mechanism was \"start,\n> load data, and run\". Changing the params above won't likely change\n> much in the 'run' stage but would they help in the 'load' stage?\n\nYes, they certainly will. And they might well help in the run stage,\ntoo, if there are temporary tables in use, or checkpoints flushing\nhint bit updates, or such things.\n\nIt's also important to crank up checkpoint_segments and\ncheckpoint_timeout very high, especially for the bulk data load but\neven afterwards if there is any write activity at all. And it's\nimportant to set shared_buffers correctly, too, which helps on\nworkloads of all kinds. But as said upthread, turning off fsync,\nfull_page_writes, and synchronous_commit are the things you can do\nthat specifically trade reliability away to get speed.\n\nIn 9.1, I'm hopeful that we'll have unlogged tables, which will even\nbetter than turning these parameters off, and for which I just posted\na patch to -hackers. Instead of generating WAL and writing WAL to the\nOS and then NOT trying to make sure it hits the disk, we just won't\ngenerate it in the first place. But if PostgreSQL or the machine it's\nrunning on crashes, you won't need to completely blow away the cluster\nand start over; instead, the particular tables that you chose to\ncreate as unlogged will be truncated, and the rest of your data,\nincluding the system catalogs, will still be intact.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Mon, 15 Nov 2010 10:06:04 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running PostgreSQL as fast as possible no matter the consequences" }, { "msg_contents": "On 11/15/2010 9:06 AM, Robert Haas wrote:\n> In 9.1, I'm hopeful that we'll have unlogged tables, which will even\n> better than turning these parameters off, and for which I just posted\n> a patch to -hackers. Instead of generating WAL and writing WAL to the\n> OS and then NOT trying to make sure it hits the disk, we just won't\n> generate it in the first place. But if PostgreSQL or the machine it's\n> running on crashes, you won't need to completely blow away the cluster\n> and start over; instead, the particular tables that you chose to\n> create as unlogged will be truncated, and the rest of your data,\n> including the system catalogs, will still be intact.\n>\n\nif I am reading this right means: we can run our db safely (with fsync \nand full_page_writes enabled) except for tables of our choosing?\n\nIf so, I am very +1 for this!\n\n-Andy\n", "msg_date": "Mon, 15 Nov 2010 13:27:43 -0600", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running PostgreSQL as fast as possible no matter the consequences" }, { "msg_contents": "On Mon, Nov 15, 2010 at 2:27 PM, Andy Colson <[email protected]> wrote:\n> On 11/15/2010 9:06 AM, Robert Haas wrote:\n>>\n>> In 9.1, I'm hopeful that we'll have unlogged tables, which will even\n>> better than turning these parameters off, and for which I just posted\n>> a patch to -hackers.  Instead of generating WAL and writing WAL to the\n>> OS and then NOT trying to make sure it hits the disk, we just won't\n>> generate it in the first place.  
But if PostgreSQL or the machine it's\n>> running on crashes, you won't need to completely blow away the cluster\n>> and start over; instead, the particular tables that you chose to\n>> create as unlogged will be truncated, and the rest of your data,\n>> including the system catalogs, will still be intact.\n>>\n>\n> if I am reading this right means: we can run our db safely (with fsync and\n> full_page_writes enabled) except for tables of our choosing?\n>\n> If so, I am very +1 for this!\n\nYep. But we need some vic^H^Holunteers to reviews and test the patches.\n\nhttps://commitfest.postgresql.org/action/patch_view?id=424\n\nCode review, benchmarking, or just general tinkering and reporting\nwhat you find out on the -hackers thread would be appreciated.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Mon, 15 Nov 2010 14:36:38 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running PostgreSQL as fast as possible no matter the consequences" }, { "msg_contents": "Chris Browne wrote:\n> [email protected] (A B) writes:\n> > If you just wanted PostgreSQL to go as fast as possible WITHOUT any\n> > care for your data (you accept 100% dataloss and datacorruption if any\n> > error should occur), what settings should you use then?\n> \n> Use /dev/null. It is web scale, and there are good tutorials.\n> \n> But seriously, there *are* cases where \"blind speed\" is of use. When\n> loading data into a fresh database is a good time for this; if things\n> fall over, it may be pretty acceptable to start \"from scratch\" with\n> mkfs/initdb.\n> \n> I'd:\n> - turn off fsync\n> - turn off synchronous commit\n> - put as much as possible onto Ramdisk/tmpfs/similar as possible\n\nFYI, we do have a documentation section about how to configure Postgres\nfor improved performance if you don't care about durability:\n\n\thttp://developer.postgresql.org/pgdocs/postgres/non-durability.html\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n", "msg_date": "Wed, 19 Jan 2011 12:07:24 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running PostgreSQL as fast as possible no matter the consequences" }, { "msg_contents": "2011/1/19 Bruce Momjian <[email protected]>\n\n>\n> FYI, we do have a documentation section about how to configure Postgres\n> for improved performance if you don't care about durability:\n>\n> http://developer.postgresql.org/pgdocs/postgres/non-durability.html\n>\n\n\nA sometime ago I wrote in my blog [1] (sorry but available only in\npt-br) how to create an in-memory database with PostgreSQL. This little\narticle is based on post of Mr. 
Robert Haas about this topic [2].\n\n[1]\nhttp://fabriziomello.blogspot.com/2010/06/postgresql-na-memoria-ram-in-memory.html\n[2]\nhttp://rhaas.blogspot.com/2010/06/postgresql-as-in-memory-only-database_24.html\n\n-- \nFabrízio de Royes Mello\n>> Blog sobre TI: http://fabriziomello.blogspot.com\n>> Perfil Linkedin: http://br.linkedin.com/in/fabriziomello\n\n2011/1/19 Bruce Momjian <[email protected]>\n\nFYI, we do have a documentation section about how to configure Postgres\nfor improved performance if you don't care about durability:\n\n        http://developer.postgresql.org/pgdocs/postgres/non-durability.html\nA sometime ago I wrote in my blog [1] (sorry but available only in pt-br) how to create an in-memory database with PostgreSQL. This little article is based on post of Mr. Robert Haas about this topic [2].\n[1] http://fabriziomello.blogspot.com/2010/06/postgresql-na-memoria-ram-in-memory.html\n[2] http://rhaas.blogspot.com/2010/06/postgresql-as-in-memory-only-database_24.html-- \nFabrízio de Royes Mello>> Blog sobre TI: http://fabriziomello.blogspot.com>> Perfil Linkedin: http://br.linkedin.com/in/fabriziomello", "msg_date": "Wed, 19 Jan 2011 15:27:30 -0200", "msg_from": "=?ISO-8859-1?Q?Fabr=EDzio_de_Royes_Mello?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running PostgreSQL as fast as possible no matter the consequences" }, { "msg_contents": "On Wed, Jan 19, 2011 at 12:07 PM, Bruce Momjian <[email protected]> wrote:\n> Chris Browne wrote:\n>> [email protected] (A B) writes:\n>> > If you just wanted PostgreSQL to go as fast as possible WITHOUT any\n>> > care for your data (you accept 100% dataloss and datacorruption if any\n>> > error should occur), what settings should you use then?\n>>\n>> Use /dev/null.  It is web scale, and there are good tutorials.\n>>\n>> But seriously, there *are* cases where \"blind speed\" is of use.  When\n>> loading data into a fresh database is a good time for this; if things\n>> fall over, it may be pretty acceptable to start \"from scratch\" with\n>> mkfs/initdb.\n>>\n>> I'd:\n>> - turn off fsync\n>> - turn off synchronous commit\n>> - put as much as possible onto Ramdisk/tmpfs/similar as possible\n>\n> FYI, we do have a documentation section about how to configure Postgres\n> for improved performance if you don't care about durability:\n>\n>        http://developer.postgresql.org/pgdocs/postgres/non-durability.html\n\nThis sentence looks to me like it should be removed, or perhaps clarified:\n\nThis does affect database crash transaction durability.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Thu, 20 Jan 2011 10:25:39 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running PostgreSQL as fast as possible no matter the consequences" }, { "msg_contents": "Robert Haas wrote:\n> On Wed, Jan 19, 2011 at 12:07 PM, Bruce Momjian <[email protected]> wrote:\n> > Chris Browne wrote:\n> >> [email protected] (A B) writes:\n> >> > If you just wanted PostgreSQL to go as fast as possible WITHOUT any\n> >> > care for your data (you accept 100% dataloss and datacorruption if any\n> >> > error should occur), what settings should you use then?\n> >>\n> >> Use /dev/null. ?It is web scale, and there are good tutorials.\n> >>\n> >> But seriously, there *are* cases where \"blind speed\" is of use. 
?When\n> >> loading data into a fresh database is a good time for this; if things\n> >> fall over, it may be pretty acceptable to start \"from scratch\" with\n> >> mkfs/initdb.\n> >>\n> >> I'd:\n> >> - turn off fsync\n> >> - turn off synchronous commit\n> >> - put as much as possible onto Ramdisk/tmpfs/similar as possible\n> >\n> > FYI, we do have a documentation section about how to configure Postgres\n> > for improved performance if you don't care about durability:\n> >\n> > ? ? ? ?http://developer.postgresql.org/pgdocs/postgres/non-durability.html\n> \n> This sentence looks to me like it should be removed, or perhaps clarified:\n> \n> This does affect database crash transaction durability.\n\nUh, doesn't it affect database crash transaction durability? I have\napplied the attached patch to clarify things. Thanks.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +", "msg_date": "Tue, 25 Jan 2011 20:32:45 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running PostgreSQL as fast as possible no matter the consequences" }, { "msg_contents": "On Tue, Jan 25, 2011 at 5:32 PM, Bruce Momjian <[email protected]> wrote:\n> Robert Haas wrote:\n>> On Wed, Jan 19, 2011 at 12:07 PM, Bruce Momjian <[email protected]> wrote:\n\n>> > ? ? ? ?http://developer.postgresql.org/pgdocs/postgres/non-durability.html\n>>\n>> This sentence looks to me like it should be removed, or perhaps clarified:\n>>\n>>     This does affect database crash transaction durability.\n>\n> Uh, doesn't it affect database crash transaction durability?  I have\n> applied the attached patch to clarify things.  Thanks.\n\nI think the point that was trying to be made there was that the other\nparameters only lose and corrupt data when the machine crashes.\nSynchronous commit turned off will lose data on a mere postgresql\nserver crash, it doesn't require a machine-level crash to cause data\nloss.\n\nIndeed, the currently committed doc is quite misleading.\n\n\" The following are configuration changes you can make\n to improve performance in such cases; they do not invalidate\n commit guarantees related to database crashes, only abrupt operating\n system stoppage, except as mentioned below\"\n\nWe've now removed the thing being mentioned below, but did not remove\nthe promise we would be mentioning those things.\n\nCheers,\n\nJeff\n", "msg_date": "Thu, 27 Jan 2011 08:51:46 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running PostgreSQL as fast as possible no matter the consequences" }, { "msg_contents": "Jeff Janes wrote:\n> On Tue, Jan 25, 2011 at 5:32 PM, Bruce Momjian <[email protected]> wrote:\n> > Robert Haas wrote:\n> >> On Wed, Jan 19, 2011 at 12:07 PM, Bruce Momjian <[email protected]> wrote:\n> \n> >> > ? ? ? ?http://developer.postgresql.org/pgdocs/postgres/non-durability.html\n> >>\n> >> This sentence looks to me like it should be removed, or perhaps clarified:\n> >>\n> >> ? ? This does affect database crash transaction durability.\n> >\n> > Uh, doesn't it affect database crash transaction durability? ?I have\n> > applied the attached patch to clarify things. 
?Thanks.\n> \n> I think the point that was trying to be made there was that the other\n> parameters only lose and corrupt data when the machine crashes.\n> Synchronous commit turned off will lose data on a mere postgresql\n> server crash, it doesn't require a machine-level crash to cause data\n> loss.\n> \n> Indeed, the currently committed doc is quite misleading.\n> \n> \" The following are configuration changes you can make\n> to improve performance in such cases; they do not invalidate\n> commit guarantees related to database crashes, only abrupt operating\n> system stoppage, except as mentioned below\"\n> \n> We've now removed the thing being mentioned below, but did not remove\n> the promise we would be mentioning those things.\n\nExcellent point. The old wording was just too clever and even I forgot\nwhy I was making that point. I have updated the docs to clearly state\nwhy this setting is different from the ones above. Thanks for spotting\nthis.\n\nApplied patch attached.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +", "msg_date": "Thu, 27 Jan 2011 12:07:40 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running PostgreSQL as fast as possible no matter the consequences" } ]
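Taken together, the settings recommended in this thread amount to a short postgresql.conf sketch like the one below. It is only an illustration of the "running with scissors" setup discussed above, for a cluster whose contents are disposable and can be reloaded after any crash; the values are placeholders rather than tuned recommendations, and checkpoint_segments is the 8.4/9.0-era knob (later releases replaced it with max_wal_size).

    # Durability deliberately traded away -- assumes the data can be rebuilt.
    fsync = off                    # do not force WAL writes to disk
    synchronous_commit = off       # do not wait for WAL flush at COMMIT
    full_page_writes = off         # skip torn-page protection
    checkpoint_segments = 64       # fewer, larger checkpoints during bulk loads
    checkpoint_timeout = 30min     # same idea
    shared_buffers = 1GB           # size to the working set; illustrative only

As several replies point out, these knobs mainly help writes; for the read-mostly workload the original poster describes, shared_buffers, effective_cache_size and work_mem (or simply running the cluster from tmpfs) are what actually move the needle.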
[ { "msg_contents": "All,\n\nDomas (of Facebook/Wikipedia, MySQL geek) pointed me to this report:\n\nhttp://www.phoronix.com/scan.php?page=article&item=linux_perf_regressions&num=1\nhttp://www.phoronix.com/scan.php?page=article&item=ext4_then_now&num=6\n\nThe serious problems with this appear to be (a) that Linux/Ext4 PG\nperformance still hasn't fully recovered, and, (b) that RHEL6 is set to\nship with kernel 2.6.32, which means that we'll have a whole generation\nof RHEL which is off-limits to PostgreSQL.\n\nTom, any word from your coworkers on this?\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Fri, 05 Nov 2010 13:15:20 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Major Linux performance regression; shouldn't we be worried about\n\tRHEL6?" }, { "msg_contents": "\n> The serious problems with this appear to be (a) that Linux/Ext4 PG\n> performance still hasn't fully recovered, and, (b) that RHEL6 is set to\n> ship with kernel 2.6.32, which means that we'll have a whole generation\n> of RHEL which is off-limits to PostgreSQL.\n\nOh. Found some other information on the issue. Looks like the problem\nis fixed in later kernels. So the only real issue is: is RHEL6 shipping\nwith 2.6.32?\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Fri, 05 Nov 2010 13:19:37 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Major Linux performance regression; shouldn't we be\n\tworried about RHEL6?" }, { "msg_contents": "On Fri, Nov 5, 2010 at 2:15 PM, Josh Berkus <[email protected]> wrote:\n> All,\n>\n> Domas (of Facebook/Wikipedia, MySQL geek) pointed me to this report:\n>\n> http://www.phoronix.com/scan.php?page=article&item=linux_perf_regressions&num=1\n> http://www.phoronix.com/scan.php?page=article&item=ext4_then_now&num=6\n>\n> The serious problems with this appear to be (a) that Linux/Ext4 PG\n> performance still hasn't fully recovered, and, (b) that RHEL6 is set to\n> ship with kernel 2.6.32, which means that we'll have a whole generation\n> of RHEL which is off-limits to PostgreSQL.\n\nWhy would it be off limits? Is it likely to lose data due to power failure etc?\n\nAre you referring to improvements due to write barrier support getting\nfixed up fr ext4 to run faster but still be safe? I would assume that\nany major patches that increase performance with write barriers\nwithout being dangerous for your data would get back ported by RH as\nusual.\n", "msg_date": "Fri, 5 Nov 2010 14:27:06 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Major Linux performance regression; shouldn't we be\n\tworried about RHEL6?" }, { "msg_contents": "\n> Why would it be off limits? Is it likely to lose data due to power failure etc?\n\nIf fsyncs are taking 5X as long, people can't use PostgreSQL on that\nplatform.\n\n> Are you referring to improvements due to write barrier support getting\n> fixed up fr ext4 to run faster but still be safe? I would assume that\n> any major patches that increase performance with write barriers\n> without being dangerous for your data would get back ported by RH as\n> usual.\n\nHopefully, yes. 
I wouldn't mind confirmation of this, though; it\nwouldn't be the first time RH shipped with known-bad IO performance.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Fri, 05 Nov 2010 13:32:44 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Major Linux performance regression; shouldn't we be\n\tworried about RHEL6?" }, { "msg_contents": "On Friday 05 November 2010 21:15:20 Josh Berkus wrote:\n> All,\n> \n> Domas (of Facebook/Wikipedia, MySQL geek) pointed me to this report:\n> \n> http://www.phoronix.com/scan.php?page=article&item=linux_perf_regressions&n\nI guess thats the O_DSYNC thingy. See the \"Defaulting wal_sync_method to \nfdatasync on Linux for 9.1?\" (performance) and \"Revert default wal_sync_method \nto fdatasync on Linux 2.6.33+\" on hackers.\n\nO_DSYNC got finally properly implemented on linux with 2.6.33 (and thus 2.6.32-\nrc1).\n\n> um=1 http://www.phoronix.com/scan.php?page=article&item=ext4_then_now&num=6\nThat one looks pretty uninteresting. Barriers are slower then no barriers. No \nsurprise there.\n\nAndres\n", "msg_date": "Fri, 5 Nov 2010 21:38:54 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Major Linux performance regression;\n\tshouldn't we be worried about RHEL6?" }, { "msg_contents": "On Fri, Nov 5, 2010 at 2:32 PM, Josh Berkus <[email protected]> wrote:\n>\n>> Why would it be off limits?  Is it likely to lose data due to power failure etc?\n>\n> If fsyncs are taking 5X as long, people can't use PostgreSQL on that\n> platform.\n\nI was under the impression that from 2.6.28 through 2.6.31 or so that\nthe linux kernel just forgot how to fsync, and they turned it back on\nin 2.6.32 and that's why we saw the big slowdown. Dropping from\nthousands of transactions per second to 150 to 175 seems a reasonable\nchange when that happens.\n\n>> Are you referring to improvements due to write barrier support getting\n>> fixed up fr ext4 to run faster but still be safe?  I would assume that\n>> any major patches that increase performance with write barriers\n>> without being dangerous for your data would get back ported by RH as\n>> usual.\n>\n> Hopefully, yes.  I wouldn't mind confirmation of this, though; it\n> wouldn't be the first time RH shipped with known-bad IO performance.\n\ntrue, very true. I will say that with my 2.6.32 based Ubuntu 10.04.1\nLTS servers, running pgsql on an LSI 8888 controller can pull off 7500\ntps quite easily. And quite safely, having survived power off tests\nquite well. That's on ext3 though. I haven't tested them with ext4,\nas when I set them up I still didn't consider it stable enough for\nproduction.\n", "msg_date": "Fri, 5 Nov 2010 14:59:47 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Major Linux performance regression; shouldn't we be\n\tworried about RHEL6?" 
}, { "msg_contents": "Josh Berkus wrote:\n> Domas (of Facebook/Wikipedia, MySQL geek) pointed me to this report:\n>\n> http://www.phoronix.com/scan.php?page=article&item=linux_perf_regressions&num=1\n> http://www.phoronix.com/scan.php?page=article&item=ext4_then_now&num=6\n> \nThe main change here was discussed back in January: \n\nhttp://archives.postgresql.org/message-id/[email protected]\n\nWhat I've been doing about this is the writing leading up to \nhttp://wiki.postgresql.org/wiki/Reliable_Writes so that when RHEL6 does \nship, we have a place to point people toward that makes it better \ndocumented that the main difference here is a reliability improvement \nrather than a performance regression. I'm not sure what else we can do \nhere, other than organizing more testing for kernel bugs in this area on \nRHEL6. The only way to regain the majority of the \"lost\" performance \nhere is to turn off synchronous_commit in the default config.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Fri, 05 Nov 2010 14:26:16 -0700", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Major Linux performance regression; shouldn't we be\n\tworried about RHEL6?" }, { "msg_contents": "\n> The main change here was discussed back in January:\n> http://archives.postgresql.org/message-id/[email protected]\n> \n> What I've been doing about this is the writing leading up to\n> http://wiki.postgresql.org/wiki/Reliable_Writes so that when RHEL6 does\n> ship, we have a place to point people toward that makes it better\n> documented that the main difference here is a reliability improvement\n> rather than a performance regression. I'm not sure what else we can do\n> here, other than organizing more testing for kernel bugs in this area on\n> RHEL6. The only way to regain the majority of the \"lost\" performance\n> here is to turn off synchronous_commit in the default config.\n\nYeah, I was looking at that. However, there seems to be some\nindications that there was a drop in performance specifically in 2.6.32\nwhich went beyond fixing the reliability:\n\nhttp://www.phoronix.com/scan.php?page=article&item=linux_2636_btrfs&num=1\n\nHowever, Phoronix doesn't say what sync option they're using; quite\nlikely it's O_DSYNC. Unfortunately, the fact that users now need to be\naware of the fsync_method again, after having it set automatically for\nthem for the last 4 years, is a usability regression for *us*. Anything\nthat's reasonable for us to do about it?\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Fri, 05 Nov 2010 15:11:43 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Major Linux performance regression; shouldn't we be\n\tworried about RHEL6?" }, { "msg_contents": "\nOn Nov 5, 2010, at 1:19 PM, Josh Berkus wrote:\n\n> \n>> The serious problems with this appear to be (a) that Linux/Ext4 PG\n>> performance still hasn't fully recovered, and, (b) that RHEL6 is set to\n>> ship with kernel 2.6.32, which means that we'll have a whole generation\n>> of RHEL which is off-limits to PostgreSQL.\n> \n> Oh. Found some other information on the issue. Looks like the problem\n> is fixed in later kernels. So the only real issue is: is RHEL6 shipping\n> with 2.6.32?\n> \n\nNo, RHEL 6 is not on any specific upstream Kernel version. 
Its 2.6.32++, with many changes from .33 to .35 also in there. You can probably assume that ALL ext4 changes are in there (since RedHat develops and contributes that). Plus, it will have several features that RedHat has not gotten pushed upstream completely yet, such as the automatic huge pages stuff.\n\nThe likelihood that file system related fixes from the .33 to .35 range did not make RHEL 6 is very low.\n\n\n> -- \n> -- Josh Berkus\n> PostgreSQL Experts Inc.\n> http://www.pgexperts.com\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Fri, 5 Nov 2010 15:30:22 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Major Linux performance regression; shouldn't we be\n\tworried about RHEL6?" } ]
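Because much of the slowdown discussed in this thread comes down to which wal_sync_method a given kernel/PostgreSQL combination ends up using (the -hackers threads Andres cites), one practical follow-up is to pin it explicitly instead of trusting the platform default. A hedged postgresql.conf sketch, with values meant as a starting point to benchmark rather than a recommendation:

    # Check the current value first with:  SHOW wal_sync_method;
    wal_sync_method = fdatasync     # avoid silently switching to open_datasync
                                    # on Linux 2.6.33+ with older defaults
    synchronous_commit = off        # optional: trades a bounded window of data
                                    # loss for commit latency, as noted above

Benchmarking the candidate methods with the test_fsync tool from the PostgreSQL source tree (packaged as pg_test_fsync in later releases) against the actual WAL device is a safer basis for choosing than numbers from a review article.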
[ { "msg_contents": "I have a postgres 8.4.5 instance on CentOS 5 (x86_64) which appears to\ngo crazy with the amount of memory it consumes.\nWhen I run the query below, in a matter of a few seconds memory\nballoons to 5.3G (virtual), 4.6G (resident) and 1840 (shared), and\neventually the oom killer is invoked, killing the entire process.\n\nPhysical memory is 8GB but other processes on the box consume\napproximately 4GB of that.\n\nThe settings changed from their defaults:\n\neffective_cache_size = 4GB\nwork_mem = 16MB\nmaintenance_work_mem = 128MB\nwal_buffers = 16MB\ncheckpoint_segments = 16\nshared_buffers = 384MB\ncheckpoint_segments = 64\n\nand\n\ndefault_statistics_target = 100\n\nThe query is this:\n\ninsert into d_2010_09_13_sum\n select FOO.i, FOO.n, sum(FOO.cc) as cc, sum(FOO.oc) as oc\n from (\n select * from d_2010_09_12_sum\n union all\n select * from d_2010_09_13\n ) AS FOO group by i, n;\n\nhere is the explain:\n\n Subquery Scan \"*SELECT*\" (cost=1200132.06..1201332.06 rows=40000 width=80)\n -> HashAggregate (cost=1200132.06..1200732.06 rows=40000 width=41)\n -> Append (cost=0.00..786531.53 rows=41360053 width=41)\n -> Seq Scan on d_2010_09_12_sum (cost=0.00..520066.48\nrows=27272648 width=42)\n -> Seq Scan on d_2010_09_13 (cost=0.00..266465.05\nrows=14087405 width=40)\n\nBoth source tables freshly vacuum analyze'd.\nThe row estimates are correct for both source tables.\n\nIf I use \"set enable_hashagg = false\" I get this plan:\n\n Subquery Scan \"*SELECT*\" (cost=8563632.73..9081838.25 rows=40000 width=80)\n -> GroupAggregate (cost=8563632.73..9081238.25 rows=40000 width=41)\n -> Sort (cost=8563632.73..8667033.84 rows=41360441 width=41)\n Sort Key: d_2010_09_12_sum.i, d_2010_09_12_sum.n\n -> Result (cost=0.00..786535.41 rows=41360441 width=41)\n -> Append (cost=0.00..786535.41 rows=41360441 width=41)\n -> Seq Scan on d_2010_09_12_sum\n(cost=0.00..520062.04 rows=27272204 width=42)\n -> Seq Scan on d_2010_09_13\n(cost=0.00..266473.37 rows=14088237 width=40)\n\nand postmaster's memory never exceeds (roughly) 548M (virtual), 27M\n(resident), 5M (shared).\n\nI even set default_statistics_target to 1000 and re-ran \"vacuum\nanalyze verbose\" on both tables - no change.\nIf I set work_mem to 1MB (from 16MB) then the GroupAggregate variation\nis chosen instead.\nExperimentally, HashAggregate is chosen when work_mem is 16MB, 8MB,\n6MB, 5MB but not 4MB and on down.\n\nTwo things I don't understand:\n\n1. Why, when hash aggregation is allowed, does memory absolutely\nexplode (eventually invoking the wrath of the oom killer). 16MB for\nwork_mem does not seem outrageously high. For that matter, neither\ndoes 5MB.\n\n2. Why do both HashAggregate and GroupAggregate say the cost estimate\nis 40000 rows?\n\n-- \nJon\n", "msg_date": "Fri, 5 Nov 2010 19:26:48 -0500", "msg_from": "Jon Nelson <[email protected]>", "msg_from_op": true, "msg_subject": "postmaster consuming /lots/ of memory with hash aggregate. why?" }, { "msg_contents": "\n> 2. 
Why do both HashAggregate and GroupAggregate say the cost estimate\n> is 40000 rows?\n\nI've reproduced this :\n\n\nCREATE TABLE popo AS SELECT (x%1000) AS a,(x%1001) AS b FROM \ngenerate_series( 1,1000000 ) AS x;\nVACUUM ANALYZE popo;\nEXPLAIN ANALYZE SELECT a,b,count(*) FROM (SELECT * FROM popo UNION ALL \nSELECT * FROM popo) AS foo GROUP BY a,b;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=43850.00..44350.00 rows=40000 width=8) (actual \ntime=1893.441..2341.780 rows=1000000 loops=1)\n -> Append (cost=0.00..28850.00 rows=2000000 width=8) (actual \ntime=0.025..520.581 rows=2000000 loops=1)\n -> Seq Scan on popo (cost=0.00..14425.00 rows=1000000 width=8) \n(actual time=0.025..142.639 rows=1000000 loops=1)\n -> Seq Scan on popo (cost=0.00..14425.00 rows=1000000 width=8) \n(actual time=0.003..114.257 rows=1000000 loops=1)\n Total runtime: 2438.741 ms\n(5 lignes)\n\nTemps : 2439,247 ms\n\nI guess the row count depends on the correlation of a and b, which pg has \nno idea about. In the first example, there is no correlation, now with \nfull correlation :\n\n\nUPDATE popo SET a=b;\nVACUUM FULL popo;\nVACUUM FULL popo;\nANALYZE popo;\nEXPLAIN ANALYZE SELECT a,b,count(*) FROM (SELECT * FROM popo UNION ALL \nSELECT * FROM popo) AS foo GROUP BY a,b;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=43850.00..44350.00 rows=40000 width=8) (actual \ntime=1226.201..1226.535 rows=1001 loops=1)\n -> Append (cost=0.00..28850.00 rows=2000000 width=8) (actual \ntime=0.008..518.068 rows=2000000 loops=1)\n -> Seq Scan on popo (cost=0.00..14425.00 rows=1000000 width=8) \n(actual time=0.007..128.609 rows=1000000 loops=1)\n -> Seq Scan on popo (cost=0.00..14425.00 rows=1000000 width=8) \n(actual time=0.005..128.502 rows=1000000 loops=1)\n Total runtime: 1226.797 ms\n", "msg_date": "Sat, 06 Nov 2010 09:57:57 +0100", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postmaster consuming /lots/ of memory with hash\n aggregate. why?" }, { "msg_contents": "I also found this. Perhaps it is related?\n\nhttp://postgresql.1045698.n5.nabble.com/Hash-Aggregate-plan-picked-for-very-large-table-out-of-memory-td1883299.html\n\n\n-- \nJon\n", "msg_date": "Sat, 6 Nov 2010 13:37:12 -0500", "msg_from": "Jon Nelson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postmaster consuming /lots/ of memory with hash\n aggregate. why?" 
}, { "msg_contents": "On Fri, Nov 5, 2010 at 7:26 PM, Jon Nelson <[email protected]> wrote:\n> I have a postgres 8.4.5 instance on CentOS 5 (x86_64) which appears to\n> go crazy with the amount of memory it consumes.\n> When I run the query below, in a matter of a few seconds memory\n> balloons to 5.3G (virtual), 4.6G (resident) and 1840 (shared), and\n> eventually the oom killer is invoked, killing the entire process.\n>\n> Physical memory is 8GB but other processes on the box consume\n> approximately 4GB of that.\n>\n> The settings changed from their defaults:\n>\n> effective_cache_size = 4GB\n> work_mem = 16MB\n> maintenance_work_mem = 128MB\n> wal_buffers = 16MB\n> checkpoint_segments = 16\n> shared_buffers = 384MB\n> checkpoint_segments = 64\n>\n> and\n>\n> default_statistics_target = 100\n>\n> The query is this:\n>\n> insert into d_2010_09_13_sum\n>        select FOO.i, FOO.n, sum(FOO.cc) as cc, sum(FOO.oc) as oc\n>        from (\n>          select * from d_2010_09_12_sum\n>          union all\n>          select * from d_2010_09_13\n>        ) AS FOO group by i, n;\n>\n> here is the explain:\n>\n>  Subquery Scan \"*SELECT*\"  (cost=1200132.06..1201332.06 rows=40000 width=80)\n>   ->  HashAggregate  (cost=1200132.06..1200732.06 rows=40000 width=41)\n>         ->  Append  (cost=0.00..786531.53 rows=41360053 width=41)\n>               ->  Seq Scan on d_2010_09_12_sum  (cost=0.00..520066.48\n> rows=27272648 width=42)\n>               ->  Seq Scan on d_2010_09_13  (cost=0.00..266465.05\n> rows=14087405 width=40)\n>\n> Both source tables freshly vacuum analyze'd.\n> The row estimates are correct for both source tables.\n>\n> If I use \"set enable_hashagg = false\" I get this plan:\n>\n>  Subquery Scan \"*SELECT*\"  (cost=8563632.73..9081838.25 rows=40000 width=80)\n>   ->  GroupAggregate  (cost=8563632.73..9081238.25 rows=40000 width=41)\n>         ->  Sort  (cost=8563632.73..8667033.84 rows=41360441 width=41)\n>               Sort Key: d_2010_09_12_sum.i, d_2010_09_12_sum.n\n>               ->  Result  (cost=0.00..786535.41 rows=41360441 width=41)\n>                     ->  Append  (cost=0.00..786535.41 rows=41360441 width=41)\n>                           ->  Seq Scan on d_2010_09_12_sum\n> (cost=0.00..520062.04 rows=27272204 width=42)\n>                           ->  Seq Scan on d_2010_09_13\n> (cost=0.00..266473.37 rows=14088237 width=40)\n>\n> and postmaster's memory never exceeds (roughly) 548M (virtual), 27M\n> (resident), 5M (shared).\n>\n> I even set default_statistics_target to 1000 and re-ran \"vacuum\n> analyze verbose\" on both tables - no change.\n> If I set work_mem to 1MB (from 16MB) then the GroupAggregate variation\n> is chosen instead.\n> Experimentally, HashAggregate is chosen when work_mem is 16MB, 8MB,\n> 6MB, 5MB but not 4MB and on down.\n>\n> Two things I don't understand:\n>\n> 1. Why, when hash aggregation is allowed, does memory absolutely\n> explode (eventually invoking the wrath of the oom killer). 16MB for\n> work_mem does not seem outrageously high. For that matter, neither\n> does 5MB.\n>\n> 2. Why do both HashAggregate and GroupAggregate say the cost estimate\n> is 40000 rows?\n\nUnfortunately, I've found that as my database size grows, I've\ngenerally had to disable hash aggregates for fear of even simple\nseeming queries running out of memory, even with work_mem = 1MB.\n\nIn some cases I saw memory usage (with hashagg) grow to well over 5GB\nand with group aggregate it barely moves. Am *I* doing something\nwrong? 
Some of these queries are on partitioned tables (typically\nquerying the parent) and the resulting UNION or UNION ALL really\nstarts to hurt, and when the server runs out of memory and kills of\nthe postmaster process a few minutes or even hours into the query it\ndoesn't make anybody very happy.\n\nIs there some setting I can turn on to look to see when memory is\nbeing allocated (and, apparently, not deallocated)?\n\nThe latest query has a HashAggregate that looks like this:\nHashAggregate (cost=19950525.30..19951025.30 rows=40000 width=37)\nbut there are, in reality, approximately 200 million rows (when I run\nthe query with GroupAggregate, that's what I get).\n\nWhy does it keep choosing 40,000 rows?\n\nI suppose I could use the newly-learned ALTER USER trick to disable\nhash aggregation for the primary user, because disabling hash\naggregation system-wide sounds fairly drastic. However, if I *don't*\ndisable it, the query quickly balloons memory usage to the point where\nthe process is killed off.\n\n-- \nJon\n", "msg_date": "Thu, 11 Nov 2010 20:30:56 -0600", "msg_from": "Jon Nelson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postmaster consuming /lots/ of memory with hash aggregate. why?" }, { "msg_contents": "Hello\n\nlook on EXPLAIN ANALYZE command. Probably your statistic are out, and\nthen planner can be confused. EXPLAIN ANALYZE statement show it.\n\nRegards\n\nPavel Stehule\n\n2010/11/12 Jon Nelson <[email protected]>:\n> On Fri, Nov 5, 2010 at 7:26 PM, Jon Nelson <[email protected]> wrote:\n>> I have a postgres 8.4.5 instance on CentOS 5 (x86_64) which appears to\n>> go crazy with the amount of memory it consumes.\n>> When I run the query below, in a matter of a few seconds memory\n>> balloons to 5.3G (virtual), 4.6G (resident) and 1840 (shared), and\n>> eventually the oom killer is invoked, killing the entire process.\n>>\n>> Physical memory is 8GB but other processes on the box consume\n>> approximately 4GB of that.\n>>\n>> The settings changed from their defaults:\n>>\n>> effective_cache_size = 4GB\n>> work_mem = 16MB\n>> maintenance_work_mem = 128MB\n>> wal_buffers = 16MB\n>> checkpoint_segments = 16\n>> shared_buffers = 384MB\n>> checkpoint_segments = 64\n>>\n>> and\n>>\n>> default_statistics_target = 100\n>>\n>> The query is this:\n>>\n>> insert into d_2010_09_13_sum\n>>        select FOO.i, FOO.n, sum(FOO.cc) as cc, sum(FOO.oc) as oc\n>>        from (\n>>          select * from d_2010_09_12_sum\n>>          union all\n>>          select * from d_2010_09_13\n>>        ) AS FOO group by i, n;\n>>\n>> here is the explain:\n>>\n>>  Subquery Scan \"*SELECT*\"  (cost=1200132.06..1201332.06 rows=40000 width=80)\n>>   ->  HashAggregate  (cost=1200132.06..1200732.06 rows=40000 width=41)\n>>         ->  Append  (cost=0.00..786531.53 rows=41360053 width=41)\n>>               ->  Seq Scan on d_2010_09_12_sum  (cost=0.00..520066.48\n>> rows=27272648 width=42)\n>>               ->  Seq Scan on d_2010_09_13  (cost=0.00..266465.05\n>> rows=14087405 width=40)\n>>\n>> Both source tables freshly vacuum analyze'd.\n>> The row estimates are correct for both source tables.\n>>\n>> If I use \"set enable_hashagg = false\" I get this plan:\n>>\n>>  Subquery Scan \"*SELECT*\"  (cost=8563632.73..9081838.25 rows=40000 width=80)\n>>   ->  GroupAggregate  (cost=8563632.73..9081238.25 rows=40000 width=41)\n>>         ->  Sort  (cost=8563632.73..8667033.84 rows=41360441 width=41)\n>>               Sort Key: d_2010_09_12_sum.i, d_2010_09_12_sum.n\n>>               ->  Result  
(cost=0.00..786535.41 rows=41360441 width=41)\n>>                     ->  Append  (cost=0.00..786535.41 rows=41360441 width=41)\n>>                           ->  Seq Scan on d_2010_09_12_sum\n>> (cost=0.00..520062.04 rows=27272204 width=42)\n>>                           ->  Seq Scan on d_2010_09_13\n>> (cost=0.00..266473.37 rows=14088237 width=40)\n>>\n>> and postmaster's memory never exceeds (roughly) 548M (virtual), 27M\n>> (resident), 5M (shared).\n>>\n>> I even set default_statistics_target to 1000 and re-ran \"vacuum\n>> analyze verbose\" on both tables - no change.\n>> If I set work_mem to 1MB (from 16MB) then the GroupAggregate variation\n>> is chosen instead.\n>> Experimentally, HashAggregate is chosen when work_mem is 16MB, 8MB,\n>> 6MB, 5MB but not 4MB and on down.\n>>\n>> Two things I don't understand:\n>>\n>> 1. Why, when hash aggregation is allowed, does memory absolutely\n>> explode (eventually invoking the wrath of the oom killer). 16MB for\n>> work_mem does not seem outrageously high. For that matter, neither\n>> does 5MB.\n>>\n>> 2. Why do both HashAggregate and GroupAggregate say the cost estimate\n>> is 40000 rows?\n>\n> Unfortunately, I've found that as my database size grows, I've\n> generally had to disable hash aggregates for fear of even simple\n> seeming queries running out of memory, even with work_mem = 1MB.\n>\n> In some cases I saw memory usage (with hashagg) grow to well over 5GB\n> and with group aggregate it barely moves.  Am *I* doing something\n> wrong? Some of these queries are on partitioned tables (typically\n> querying the parent) and the resulting UNION or UNION ALL really\n> starts to hurt, and when the server runs out of memory and kills of\n> the postmaster process a few minutes or even hours into the query it\n> doesn't make anybody very happy.\n>\n> Is there some setting I can turn on to look to see when memory is\n> being allocated (and, apparently, not deallocated)?\n>\n> The latest query has a HashAggregate that looks like this:\n> HashAggregate  (cost=19950525.30..19951025.30 rows=40000 width=37)\n> but there are, in reality, approximately 200 million rows (when I run\n> the query with GroupAggregate, that's what I get).\n>\n> Why does it keep choosing 40,000 rows?\n>\n> I suppose I could use the newly-learned ALTER USER trick to disable\n> hash aggregation for the primary user, because disabling hash\n> aggregation system-wide sounds fairly drastic. However, if I *don't*\n> disable it, the query quickly balloons memory usage to the point where\n> the process is killed off.\n>\n> --\n> Jon\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Fri, 12 Nov 2010 05:26:04 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postmaster consuming /lots/ of memory with hash\n aggregate. why?" }, { "msg_contents": "On Thu, Nov 11, 2010 at 10:26 PM, Pavel Stehule <[email protected]> wrote:\n> Hello\n>\n> look on EXPLAIN ANALYZE command. Probably your statistic are out, and\n> then planner can be confused. EXPLAIN ANALYZE statement show it.\n\nAs I noted earlier, I did set statistics to 1000 an re-ran vacuum\nanalyze and the plan did not change.\n\nWhat other diagnostics can I provide? This still doesn't answer the\n40000 row question, though. 
It seems absurd to me that the planner\nwould give up and just use 40000 rows (0.02 percent of the actual\nresult).\n\n-- \nJon\n", "msg_date": "Thu, 11 Nov 2010 22:33:06 -0600", "msg_from": "Jon Nelson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postmaster consuming /lots/ of memory with hash\n aggregate. why?" }, { "msg_contents": "2010/11/12 Jon Nelson <[email protected]>:\n> On Thu, Nov 11, 2010 at 10:26 PM, Pavel Stehule <[email protected]> wrote:\n>> Hello\n>>\n>> look on EXPLAIN ANALYZE command. Probably your statistic are out, and\n>> then planner can be confused. EXPLAIN ANALYZE statement show it.\n>\n> As I noted earlier, I did set statistics to 1000 an re-ran vacuum\n> analyze and the plan did not change.\n\nthis change can do nothing. this is default in config. did you use\nALTER TABLE ALTER COLUMN SET STATISTIC = ... ? and ANALYZE\n\n>\n> What other diagnostics can I provide? This still doesn't answer the\n> 40000 row question, though. It seems absurd to me that the planner\n> would give up and just use 40000 rows (0.02 percent of the actual\n> result).\n>\n\nthere can be some not well supported operation, then planner use a\nsome % from rows without statistic based estimation\n\nPavel\n> --\n> Jon\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Fri, 12 Nov 2010 05:38:35 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postmaster consuming /lots/ of memory with hash\n aggregate. why?" }, { "msg_contents": "On Thu, Nov 11, 2010 at 10:38 PM, Pavel Stehule <[email protected]> wrote:\n> 2010/11/12 Jon Nelson <[email protected]>:\n>> On Thu, Nov 11, 2010 at 10:26 PM, Pavel Stehule <[email protected]> wrote:\n>>> Hello\n>>>\n>>> look on EXPLAIN ANALYZE command. Probably your statistic are out, and\n>>> then planner can be confused. EXPLAIN ANALYZE statement show it.\n>>\n>> As I noted earlier, I did set statistics to 1000 an re-ran vacuum\n>> analyze and the plan did not change.\n>\n> this change can do nothing. this is default in config. did you use\n> ALTER TABLE ALTER COLUMN SET STATISTIC = ... ? and ANALYZE\n\nNo. To be clear: are you saying that changing the value for\ndefault_statistics_target, restarting postgresql, and re-running\nVACUUM ANALYZE does *not* change the statistics for columns\ncreated/populated *prior* to the sequence of operations, and that one\n/must/ use ALTER TABLE ALTER COLUMN SET STATISTICS ... and re-ANALYZE?\n\nThat does not jive with the documentation, which appears to suggest\nthat setting a new default_statistics_target, restarting postgresql,\nand then re-ANALYZE'ing a table should be sufficient (provided the\ncolumns have not had a statistics target explicitly set).\n\n>> What other diagnostics can I provide? This still doesn't answer the\n>> 40000 row question, though. It seems absurd to me that the planner\n>> would give up and just use 40000 rows (0.02 percent of the actual\n>> result).\n>>\n>\n> there can be some not well supported operation, then planner use a\n> some % from rows without statistic based estimation\n\nThe strange thing is that the value 40000 keeps popping up in totally\ndiffferent contexts, with different tables, databases, etc... 
I tried\ndigging through the code and the only thing I found was that numGroups\nwas being set to 40000 but I couldn't see where.\n\n-- \nJon\n", "msg_date": "Fri, 12 Nov 2010 09:33:21 -0600", "msg_from": "Jon Nelson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postmaster consuming /lots/ of memory with hash\n aggregate. why?" }, { "msg_contents": "2010/11/12 Jon Nelson <[email protected]>:\n> On Thu, Nov 11, 2010 at 10:38 PM, Pavel Stehule <[email protected]> wrote:\n>> 2010/11/12 Jon Nelson <[email protected]>:\n>>> On Thu, Nov 11, 2010 at 10:26 PM, Pavel Stehule <[email protected]> wrote:\n>>>> Hello\n>>>>\n>>>> look on EXPLAIN ANALYZE command. Probably your statistic are out, and\n>>>> then planner can be confused. EXPLAIN ANALYZE statement show it.\n>>>\n>>> As I noted earlier, I did set statistics to 1000 an re-ran vacuum\n>>> analyze and the plan did not change.\n>>\n>> this change can do nothing. this is default in config. did you use\n>> ALTER TABLE ALTER COLUMN SET STATISTIC = ... ? and ANALYZE\n>\n> No. To be clear: are you saying that changing the value for\n> default_statistics_target, restarting postgresql, and re-running\n> VACUUM ANALYZE does *not* change the statistics for columns\n> created/populated *prior* to the sequence of operations, and that one\n> /must/ use ALTER TABLE ALTER COLUMN SET STATISTICS ... and re-ANALYZE?\n>\n\nyes.\n\nbut I was wrong. Documentation is correct. Problem is elsewhere.\n\n> That does not jive with the documentation, which appears to suggest\n> that setting a new default_statistics_target, restarting postgresql,\n> and then re-ANALYZE'ing a table should be sufficient (provided the\n> columns have not had a statistics target explicitly set).\n>\n\n>>> What other diagnostics can I provide? This still doesn't answer the\n>>> 40000 row question, though. It seems absurd to me that the planner\n>>> would give up and just use 40000 rows (0.02 percent of the actual\n>>> result).\n>>>\n>>\n>> there can be some not well supported operation, then planner use a\n>> some % from rows without statistic based estimation\n>\n> The strange thing is that the value 40000 keeps popping up in totally\n> diffferent contexts, with different tables, databases, etc... I tried\n> digging through the code and the only thing I found was that numGroups\n> was being set to 40000 but I couldn't see where.\n>\n\n\nif I remember well, you can set a number of group by ALTER TABLE ALTER\nCOLUMN SET n_distinct = ..\n\nmaybe you use it.\n\nRegards\n\nPavel Stehule\n\nhttp://www.postgresql.org/docs/9.0/interactive/sql-altertable.html\n\n\n\n\n> --\n> Jon\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Fri, 12 Nov 2010 17:12:26 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postmaster consuming /lots/ of memory with hash\n aggregate. why?" }, { "msg_contents": "On Fri, Nov 12, 2010 at 11:12 AM, Pavel Stehule <[email protected]> wrote:\n> if I remember well, you can set a number of group by ALTER TABLE ALTER\n> COLUMN SET n_distinct = ..\n>\n> maybe you use it.\n\nI'm not sure where the number 40,000 is coming from either, but I\nthink Pavel's suggestion is a good one. 
If you're grouping on a\ncolumn with N distinct values, then it stands to reason there will be\nN groups, and the planner is known to underestimate n_distinct on large\ntables, even with very high statistics targets, which is why 9.0\nallows a manual override.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Tue, 23 Nov 2010 22:11:18 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postmaster consuming /lots/ of memory with hash\n aggregate. why?" } ]
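For what it's worth, the recurring 40000 figure is consistent with the planner having no usable statistics for the output of the UNION ALL subquery and falling back to its default assumption of 200 distinct values per grouping column: 200 x 200 = 40000 for a two-column GROUP BY. A rough sketch of the workarounds discussed in this thread follows; the table and column names are taken from the original report, the role name is a placeholder, and the statistics target and distinct count are only illustrative:

-- Raise the per-column statistics target, then re-gather statistics:
ALTER TABLE d_2010_09_12_sum ALTER COLUMN i SET STATISTICS 1000;
ALTER TABLE d_2010_09_12_sum ALTER COLUMN n SET STATISTICS 1000;
ANALYZE d_2010_09_12_sum;

-- 9.0 and later: override the distinct-value estimate directly
-- (the override is applied by the next ANALYZE):
ALTER TABLE d_2010_09_12_sum ALTER COLUMN i SET (n_distinct = 1000000);  -- illustrative; use a realistic figure
ANALYZE d_2010_09_12_sum;

-- Avoid the memory-hungry plan without disabling hash aggregation system-wide:
ALTER USER report_user SET enable_hashagg = off;   -- per-role; placeholder name
SET enable_hashagg = off;                          -- or per-session, just for the big query

Whether the statistics changes actually move the estimate here depends on whether the planner can see through the UNION ALL at all, so the per-role or per-session setting is the more reliable stopgap.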
[ { "msg_contents": "Question regarding the operation of the shared_buffers cache and implications of the pg_X_stat_tables|pg_X_stat_indexes stats.\n( I am also aware that this is all complicated by the kernel cache behavior, however, if, for the purpose of these questions, you wouldn't mind assuming that we don't have a kernel cache, and therefore just focus on the behavior of the db cache as an isolated component, it will help - thanks in advance).\n\nWhat is the procedure that postgres uses to decide whether or not a table/index block will be left in the shared_buffers cache at the end of the operation?\n\nAre there any particular types of *table* access operations that will cause postgres to choose not to retain the table pages in shared_buffers at the end of the operation?\nIn particular, the activity tracked by:\n\n- Seq_scan\n\n- Seq_tup_read\n\n- Idx_tup_read\n\n- Idx_tup_fetch\n\nAre there any particular types of *index* access operations that will cause postgres to choose not to retain the index pages in shared_buffers at the end of the operation?\nIn particular, the activity tracked by:\n\n- idx_scan\n\n- Idx_tup_read\n\n- Idx_tup_fetch\n\n\n\n\n\nQuestion regarding the operation of the shared_buffers cache and implications of the pg_X_stat_tables|pg_X_stat_indexes stats.( I am also aware that this is all complicated by the kernel cache behavior, however, if, for the purpose of these questions, you wouldn’t mind assuming that we don’t have a kernel cache, and therefore just focus on the behavior of the db cache as an isolated component, it will help – thanks in advance). What is the procedure that postgres uses to decide whether or not a table/index block will be left in the shared_buffers cache at the end of the operation? Are there any particular types of *table* access operations that will cause postgres to choose not to retain the table pages in shared_buffers at the end of the operation?In particular, the activity tracked by:-          Seq_scan-          Seq_tup_read-          Idx_tup_read-          Idx_tup_fetch Are there any particular types of *index* access operations that will cause postgres to choose not to retain the index pages in shared_buffers at the end of the operation?In particular, the activity tracked by:-          idx_scan-          Idx_tup_read-          Idx_tup_fetch", "msg_date": "Sun, 7 Nov 2010 12:33:05 -0800", "msg_from": "Mark Rostron <[email protected]>", "msg_from_op": true, "msg_subject": "questions regarding shared_buffers behavior" }, { "msg_contents": "Mark Rostron wrote:\n>\n> What is the procedure that postgres uses to decide whether or not a \n> table/index block will be left in the shared_buffers cache at the end \n> of the operation?\n>\n\nThere is no such procedure. When a table or index page is used, its \nusage count goes up, which means it's more likely to stay in the cache \nfor longer afterwards. Processing trying to allocate pages are \nconstantly circling the buffer cache looking for pages where the usage \ncount is at 0 to re-use. 
The only special cases are for sequential \nscans and VACUUM, which use continuously re-use a small section of the \nbuffer cache in some cases instead.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Sun, 07 Nov 2010 18:30:14 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: questions regarding shared_buffers behavior" }, { "msg_contents": "> >\n> > What is the procedure that postgres uses to decide whether or not a \n> > table/index block will be left in the shared_buffers cache at the end \n> > of the operation?\n> >\n>\n> The only special cases are for sequential scans and VACUUM, which use continuously re-use a small section of the buffer cache in some cases instead.\n\nThanks - the part about sequential scans and the re-use of a small section of shared_buffers is the bit I was interested in.\nI don't suppose you would be able to tell me how large that re-useable area might be?\n\nNow, with regard to the behavior of table sequential scans: do the stat values in seq_scan and seq_tup_read reflect actual behavior.\nI assume they do, but I'm just checking - these would be updated as the result of real I/O as opposed to fuzzy estimates?\n\nObviously, the reason I am asking this is that I am noticing high machine io levels that would only result from sequential scan activity.\nThe explain output says otherwise, but the seq_scan stat value for the table kinda correlates.\nHence my enquiry.\n\nThanks in advance.\nMr\n\n\n\n", "msg_date": "Sun, 7 Nov 2010 16:33:37 -0800", "msg_from": "Mark Rostron <[email protected]>", "msg_from_op": true, "msg_subject": "Re: questions regarding shared_buffers behavior" }, { "msg_contents": "2010/11/8 Mark Rostron <[email protected]>:\n>> >\n>> > What is the procedure that postgres uses to decide whether or not a\n>> > table/index block will be left in the shared_buffers cache at the end\n>> > of the operation?\n>> >\n>>\n>> The only special cases are for sequential scans and VACUUM, which use continuously re-use a small section of the buffer cache in some cases instead.\n>\n> Thanks - the part about sequential scans and the re-use of a small section of shared_buffers is the bit I was interested in.\n> I don't suppose you would be able to tell me how large that re-useable area might be?\n\nThere are 256KB per seqscan and 256KB per vacuum.\n\nI suggest you to go reading src/backend/storage/buffer/README\n\n>\n> Now, with regard to the behavior of table sequential scans: do the stat values in seq_scan and seq_tup_read reflect actual behavior.\n> I assume they do, but I'm just checking - these would be updated as the result of real I/O as opposed to fuzzy estimates?\n\nThey represent the real stat for hit/read from shared_buffers, *not*\nfrom OS buffers.\n\nGetting real statistic from OS has a cost because postgresql don't use\n(for other reason) mmap to get data.\n\n>\n> Obviously, the reason I am asking this is that I am noticing high machine io levels that would only result from sequential scan activity\n\nYou may want to start inspect your postgresql buffer cache with the\ncontrib module pg_buffercache.\nhttp://www.postgresql.org/docs/9.0/static/pgbuffercache.html\n\nThen if it is not enough you can inspect more precisely your OS cache\nwith pgfincore but it migh be useless in your situation.\nhttp://villemain.org/projects/pgfincore\n\n> The 
explain output says otherwise, but the seq_scan stat value for the table kinda correlates.\n\nStarting with 9.0, the contrib module pg_stat_statements provide a lot\nof information about buffer access (from shared buffers usage, but\nstill very valuable information) you should have a look at it if you\nhave such postgresql version installed.\n\n-- \nCédric Villemain               2ndQuadrant\nhttp://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Mon, 8 Nov 2010 04:03:37 +0100", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: questions regarding shared_buffers behavior" }, { "msg_contents": "On Sun, Nov 7, 2010 at 10:03 PM, Cédric Villemain\n<[email protected]> wrote:\n> 2010/11/8 Mark Rostron <[email protected]>:\n>>> >\n>>> > What is the procedure that postgres uses to decide whether or not a\n>>> > table/index block will be left in the shared_buffers cache at the end\n>>> > of the operation?\n>>> >\n>>>\n>>> The only special cases are for sequential scans and VACUUM, which use continuously re-use a small section of the buffer cache in some cases instead.\n>>\n>> Thanks - the part about sequential scans and the re-use of a small section of shared_buffers is the bit I was interested in.\n>> I don't suppose you would be able to tell me how large that re-useable area might be?\n>\n> There are 256KB per seqscan and 256KB per vacuum.\n>\n> I suggest you to go reading src/backend/storage/buffer/README\n\nNote that there is a different, higher limit for the \"bulk write\"\nstrategy when using COPY IN or CTAS.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Fri, 12 Nov 2010 16:07:47 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: questions regarding shared_buffers behavior" } ]
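To make the pg_buffercache suggestion above concrete, this is roughly the sort of query that shows which relations currently occupy shared_buffers (the module has to be installed from contrib first; the 8192 assumes the default 8kB block size, and the relfilenode join is the usual approximation):

SELECT c.relname,
       count(*) AS buffers,
       pg_size_pretty(count(*) * 8192) AS cached
FROM pg_buffercache b
JOIN pg_class c ON c.relfilenode = b.relfilenode
WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                            WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;

Comparing that output before and after a given workload is a direct way to answer the original question of what actually stays cached, including whether a large sequential scan pushed everything else out or stayed within its small ring.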
[ { "msg_contents": "Hi,\nI have a table employee with 33 columns.\nThe table have 200 records now.\nSelect * from employee takes 15 seconds to fetch the data!!!\nWhich seems to be very slow.\nBut when I say select id,name from empoyee it executes in 30ms.\nSame pefromance if I say select count(*) from emloyee.\n\nWhy the query is slow if I included all the columns in the table.\nAs per my understanding , number of columns should not be having a major\nimpact on the query performance.\n\nI have increased the shared_buffres to 1024MB, but no improvement.\nI have noticed that the query \"show shared_buffers\" always show 8MB.Why is\nthis? Does it mean that changing the shared_buffers in config file have no\nimpact?\n\nCan anybody help?\n\nShaiju\n-- \nView this message in context: http://postgresql.1045698.n5.nabble.com/Select-is-very-slow-tp3254568p3254568.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\nHi,\nI have a table employee with 33 columns.\nThe table have 200 records now.\nSelect * from employee takes 15 seconds to fetch the data!!!\nWhich seems to be very slow.\nBut when I say select id,name from empoyee it executes in 30ms.\nSame pefromance if I say select count(*) from emloyee.\n\nWhy the query is slow if I included all the columns in the table.\nAs per my understanding , number of columns should not be having a major impact on the query performance.\n\nI have increased the shared_buffres to 1024MB, but no improvement.\nI have noticed that the query \"show shared_buffers\" always show 8MB.Why is this? Does it mean that changing the shared_buffers in config file have no impact?\n\nCan anybody help?\n\nShaiju\n\nView this message in context: Select * is very slow\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.", "msg_date": "Sun, 7 Nov 2010 22:16:56 -0800 (PST)", "msg_from": "\"shaiju.ck\" <[email protected]>", "msg_from_op": true, "msg_subject": "Select * is very slow" }, { "msg_contents": "Hello\n\ndo you use a VACUUM statement?\n\nRegards\n\nPavel Stehule\n\n2010/11/8 shaiju.ck <[email protected]>:\n> Hi, I have a table employee with 33 columns. The table have 200 records now.\n> Select * from employee takes 15 seconds to fetch the data!!! Which seems to\n> be very slow. But when I say select id,name from empoyee it executes in\n> 30ms. Same pefromance if I say select count(*) from emloyee. Why the query\n> is slow if I included all the columns in the table. As per my understanding\n> , number of columns should not be having a major impact on the query\n> performance. I have increased the shared_buffres to 1024MB, but no\n> improvement. I have noticed that the query \"show shared_buffers\" always show\n> 8MB.Why is this? Does it mean that changing the shared_buffers in config\n> file have no impact? Can anybody help? Shaiju\n> ________________________________\n> View this message in context: Select * is very slow\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n", "msg_date": "Mon, 8 Nov 2010 16:23:32 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select * is very slow" }, { "msg_contents": "On 8 November 2010 06:16, shaiju.ck <[email protected]> wrote:\n\n> Hi, I have a table employee with 33 columns. The table have 200 records\n> now. Select * from employee takes 15 seconds to fetch the data!!! Which\n> seems to be very slow. But when I say select id,name from empoyee it\n> executes in 30ms. 
Same pefromance if I say select count(*) from emloyee. Why\n> the query is slow if I included all the columns in the table. As per my\n> understanding , number of columns should not be having a major impact on the\n> query performance. I have increased the shared_buffres to 1024MB, but no\n> improvement. I have noticed that the query \"show shared_buffers\" always show\n> 8MB.Why is this? Does it mean that changing the shared_buffers in config\n> file have no impact? Can anybody help? Shaiju\n>\n\nCould you run an EXPLAIN ANALYZE on the query? And what do the columns\ncontain? For instance, if you have 10 columns each returning massive XML\ndocuments, each hundreds of megs, the bottleneck would be I/O bandwidth.\n\n-- \nThom Brown\nTwitter: @darkixion\nIRC (freenode): dark_ixion\nRegistered Linux user: #516935\n", "msg_date": "Mon, 8 Nov 2010 15:30:30 +0000", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select * is very slow" }, { "msg_contents": "On Mon, Nov 8, 2010 at 1:16 AM, shaiju.ck <[email protected]> wrote:\n> [....] I have increased the shared_buffres to 1024MB, but no\n> improvement. I have noticed that the query \"show shared_buffers\" always show\n> 8MB.Why is this? Does it mean that changing the shared_buffers in config\n> file have no impact? Can anybody help? Shaiju\n\nHave you restarted PostgreSQL? Changing that setting requires a\ncomplete restart for it to take effect.\n", "msg_date": "Mon, 8 Nov 2010 10:37:36 -0500", "msg_from": "Justin Pitts <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select * is very slow" }, { "msg_contents": "\"shaiju.ck\" <[email protected]> wrote:\n \n> The table have 200 records now.\n> Select * from employee takes 15 seconds to fetch the data!!!\n> Which seems to be very slow.\n> But when I say select id,name from empoyee it executes in 30ms.\n> Same pefromance if I say select count(*) from emloyee.\n \nYou haven't given nearly enough information for anyone to diagnose\nthe issues with any certainty. Earlier responses have asked for\nsome particularly important information, and I would add a request\nto see the output from `VACUUM VERBOSE employee;`. 
Beyond that, you\nmight want to review this page for checks you can make yourself, and\ninformation which you could provide to allow people to give more\ninformed advice:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \n-Kevin\n", "msg_date": "Mon, 08 Nov 2010 11:01:46 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select * is very slow" }, { "msg_contents": "\"shaiju.ck\" <[email protected]> wrote:\n \n> I have increased the shared_buffres to 1024MB, but no improvement.\n> I have noticed that the query \"show shared_buffers\" always show\n> 8MB.Why is this? Does it mean that changing the shared_buffers in\n> config file have no impact?\n \nDid you signal PostgreSQL to \"reload\" its configuration after making\nthe change?\n \nOh, and please show us the result of running `select version();` and\ntell us about the hardware and OS.\n \n-Kevin\n", "msg_date": "Mon, 08 Nov 2010 11:08:43 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select * is very slow" }, { "msg_contents": "Kevin Grittner, 08.11.2010 18:01:\n> \"shaiju.ck\"<[email protected]> wrote:\n>\n>> The table have 200 records now.\n>> Select * from employee takes 15 seconds to fetch the data!!!\n>> Which seems to be very slow.\n>> But when I say select id,name from empoyee it executes in 30ms.\n>> Same pefromance if I say select count(*) from emloyee.\n>\n> You haven't given nearly enough information for anyone to diagnose\n> the issues with any certainty. Earlier responses have asked for\n> some particularly important information, and I would add a request\n> to see the output from `VACUUM VERBOSE employee;`. Beyond that, you\n> might want to review this page for checks you can make yourself, and\n> information which you could provide to allow people to give more\n> informed advice:\n\n\nDo you really think that VACCUM is the problem? If the OP only selects two columns it is apparently fast.\nIf he selects all columns it's slow, so I wouldn't suspect dead tuples here.\n\nMy bet is that there are some really large text columns in there...\n\nHe has asked the same question here:\nhttp://forums.devshed.com/postgresql-help-21/select-is-very-slow-761130.html\n\nbut has also failed to answer the question about the table details...\n\nRegards\nThomas\n\n", "msg_date": "Mon, 08 Nov 2010 18:09:21 +0100", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select * is very slow" }, { "msg_contents": "Thomas Kellerer <[email protected]> wrote:\n> Kevin Grittner, 08.11.2010 18:01:\n \n>> I would add a request to see the output from `VACUUM VERBOSE\n>> employee;`.\n \n> Do you really think that VACCUM is the problem? 
If the OP only\n> selects two columns it is apparently fast.\n> If he selects all columns it's slow, so I wouldn't suspect dead\n> tuples here.\n> \n> My bet is that there are some really large text columns in\n> there...\n \nThat's something we can infer pretty well from the verbose output.\n \n-Kevin\n", "msg_date": "Mon, 08 Nov 2010 11:15:56 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select * is very slow" }, { "msg_contents": "\n> The table have 200 records now.\n> Select * from employee takes 15 seconds to fetch the data!!!\n> Which seems to be very slow.\n> But when I say select id,name from empoyee it executes in 30ms.\n\n30 ms is also amazingly slow for so few records and so little data.\n\n- please provide results of \"EXPLAIN ANALYZE SELECT id FROM table\"\n- huge bloat (table never vacuumed ?) => VACUUM VERBOSE\n- bad network cable, network interface reverting to 10 Mbps, badly \nconfigured network, etc ? (test it and test ping to server, throughput, \netc)\n- server overloaded (swapping, etc) ? (vmstat, iostat, top, etc)\n", "msg_date": "Mon, 08 Nov 2010 19:41:53 +0100", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select * is very slow" } ]
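Pulling together the requests made in this thread, a minimal diagnostic session would look something like the following (the table name comes from the original post; note that a change to shared_buffers only takes effect after a full server restart, not a reload):

SELECT version();
SHOW shared_buffers;
SHOW work_mem;

EXPLAIN ANALYZE SELECT * FROM employee;
EXPLAIN ANALYZE SELECT id, name FROM employee;

VACUUM VERBOSE employee;
SELECT pg_size_pretty(pg_total_relation_size('employee')) AS total_size;

If the EXPLAIN ANALYZE time for SELECT * is small but the client still waits 15 seconds, the time is going into detoasting and transferring very wide column values rather than into the scan itself, which is exactly the scenario the XML-document question above is probing for.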
[ { "msg_contents": "Hello together,\n\nI get an out of memory problem I don't understand.\nThe installed Postgres-Version is:\nPostgreSQL 8.3.7 on i486-pc-linux-gnu, compiled by GCC gcc-4.3.real \n(Debian 4.3.3-5) 4.3.3\nIt is running on a 32bit Debian machine with 4GB RAM.\n\nThanks for any help in advance\n\nTill\n\n-- \n-----------------------------------------------------------------------------------------------------------------------------\n\nMain settings are as follows:\ncheckpoint_segments 16\ncheckpoint_timeout 120s\neffective_cache_size 128MB\nmaintenance_work_mem 128MB\nmax_fsm_pages 153600\nshared_buffers 1GB\nwal_buffers 256MB\nwork_mem 256MB\n\n-- \n-----------------------------------------------------------------------------------------------------------------------------\n\nUsed query is:\n CREATE TABLE temp.bwi_atkis0809_forestland AS\n SELECT\n b.gid AS bwi_gid,\n a.dlm0809id,\n a.objart_08,\n a.objart_09\n FROM\n bwi.bwi_pkt AS b,\n atkis.atkis0809_forestland AS a\n WHERE\n b.the_geom && a.the_geom AND ST_Within(b.the_geom, a.the_geom)\n ;\n COMMIT;\n\n(The JOIN is a Spatial one using PostGIS-Functions)\n\n-- \n-----------------------------------------------------------------------------------------------------------------------------\n\nFull Table Sizes:\natkis0809_forestland 2835mb\nbwi_pkt 47mb\n\n-- \n-----------------------------------------------------------------------------------------------------------------------------\n\nError Message is:\nFEHLER: Speicher aufgebraucht\nDETAIL: Fehler bei Anfrage mit Grᅵᅵe 32.\n\n********** Fehler **********\n\nFEHLER: Speicher aufgebraucht\nSQL Status:53200\nDetail:Fehler bei Anfrage mit Grᅵᅵe 32.\n\nin english:\nERROR: out of memory\ndetail: error for request with size 32\n\n-- \n-----------------------------------------------------------------------------------------------------------------------------\n\nThe LOG looks as follows:\n\nTopMemoryContext: 42800 total in 5 blocks; 4816 free (5 chunks); 37984 used\n CFuncHash: 8192 total in 1 blocks; 4936 free (0 chunks); 3256 used\n TopTransactionContext: 8192 total in 1 blocks; 5520 free (0 chunks); \n2672 used\n Operator class cache: 8192 total in 1 blocks; 3848 free (0 chunks); \n4344 used\n Operator lookup cache: 24576 total in 2 blocks; 14072 free (6 \nchunks); 10504 used\n MessageContext: 65536 total in 4 blocks; 35960 free (10 chunks); \n29576 used\n smgr relation table: 8192 total in 1 blocks; 2808 free (0 chunks); \n5384 used\n TransactionAbortContext: 32768 total in 1 blocks; 32752 free (0 \nchunks); 16 used\n Portal hash: 8192 total in 1 blocks; 3912 free (0 chunks); 4280 used\n PortalMemory: 8192 total in 1 blocks; 8040 free (0 chunks); 152 used\n PortalHeapMemory: 1024 total in 1 blocks; 896 free (0 chunks); 128 used\n ExecutorState: 1833967692 total in 230 blocks; 9008 free (3 \nchunks); 1833958684 used\n GiST temporary context: 8192 total in 1 blocks; 8176 free (0 \nchunks); 16 used\n ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used\n ExprContext: 8192 total in 1 blocks; 8176 free (9 chunks); 16 used\n ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used\n ExprContext: 8192 total in 1 blocks; 3880 free (4 chunks); 4312 \nused\n Relcache by OID: 8192 total in 1 blocks; 2856 free (0 chunks); 5336 used\n CacheMemoryContext: 667472 total in 20 blocks; 195408 free (3 \nchunks); 472064 used\n pg_toast_12241534_index: 1024 total in 1 blocks; 240 free (0 \nchunks); 784 used\n pg_shdepend_depender_index: 1024 total in 1 blocks; 152 free (0 \nchunks); 
872 used\n pg_shdepend_reference_index: 1024 total in 1 blocks; 240 free (0 \nchunks); 784 used\n pg_depend_depender_index: 1024 total in 1 blocks; 152 free (0 \nchunks); 872 used\n pg_depend_reference_index: 1024 total in 1 blocks; 152 free (0 \nchunks); 872 used\n idx_atkis0809_forestland_the_geom_gist: 1024 total in 1 blocks; 136 \nfree (0 chunks); 888 used\n atkis0809_forestland_pkey: 1024 total in 1 blocks; 344 free (0 \nchunks); 680 used\n btree_bwi_pkt_enr: 1024 total in 1 blocks; 344 free (0 chunks); 680 \nused\n btree_bwi_pkt_tnr: 1024 total in 1 blocks; 344 free (0 chunks); 680 \nused\n rtree_bwi_pkt: 1024 total in 1 blocks; 136 free (0 chunks); 888 used\n bwi_pkt_pkey: 1024 total in 1 blocks; 344 free (0 chunks); 680 used\n pg_attrdef_adrelid_adnum_index: 1024 total in 1 blocks; 240 free (0 \nchunks); 784 used\n pg_constraint_conrelid_index: 1024 total in 1 blocks; 304 free (0 \nchunks); 720 used\n pg_database_datname_index: 1024 total in 1 blocks; 344 free (0 \nchunks); 680 used\n pg_index_indrelid_index: 1024 total in 1 blocks; 304 free (0 \nchunks); 720 used\n pg_ts_dict_oid_index: 1024 total in 1 blocks; 344 free (0 chunks); \n680 used\n pg_aggregate_fnoid_index: 1024 total in 1 blocks; 344 free (0 \nchunks); 680 used\n pg_language_name_index: 1024 total in 1 blocks; 344 free (0 \nchunks); 680 used\n pg_statistic_relid_att_index: 1024 total in 1 blocks; 240 free (0 \nchunks); 784 used\n pg_ts_dict_dictname_index: 1024 total in 1 blocks; 280 free (0 \nchunks); 744 used\n pg_namespace_nspname_index: 1024 total in 1 blocks; 304 free (0 \nchunks); 720 used\n pg_opfamily_oid_index: 1024 total in 1 blocks; 344 free (0 chunks); \n680 used\n pg_opclass_oid_index: 1024 total in 1 blocks; 304 free (0 chunks); \n720 used\n pg_ts_parser_prsname_index: 1024 total in 1 blocks; 280 free (0 \nchunks); 744 used\n pg_amop_fam_strat_index: 1024 total in 1 blocks; 88 free (0 \nchunks); 936 used\n pg_opclass_am_name_nsp_index: 1024 total in 1 blocks; 192 free (0 \nchunks); 832 used\n pg_trigger_tgrelid_tgname_index: 1024 total in 1 blocks; 240 free \n(0 chunks); 784 used\n pg_cast_source_target_index: 1024 total in 1 blocks; 240 free (0 \nchunks); 784 used\n pg_auth_members_role_member_index: 1024 total in 1 blocks; 280 free \n(0 chunks); 744 used\n pg_attribute_relid_attnum_index: 1024 total in 1 blocks; 240 free \n(0 chunks); 784 used\n pg_ts_config_cfgname_index: 1024 total in 1 blocks; 280 free (0 \nchunks); 744 used\n pg_authid_oid_index: 1024 total in 1 blocks; 304 free (0 chunks); \n720 used\n pg_ts_config_oid_index: 1024 total in 1 blocks; 344 free (0 \nchunks); 680 used\n pg_conversion_default_index: 1024 total in 1 blocks; 128 free (0 \nchunks); 896 used\n pg_language_oid_index: 1024 total in 1 blocks; 344 free (0 chunks); \n680 used\n pg_enum_oid_index: 1024 total in 1 blocks; 344 free (0 chunks); 680 \nused\n pg_proc_proname_args_nsp_index: 1024 total in 1 blocks; 152 free (0 \nchunks); 872 used\n pg_ts_parser_oid_index: 1024 total in 1 blocks; 344 free (0 \nchunks); 680 used\n pg_database_oid_index: 1024 total in 1 blocks; 304 free (0 chunks); \n720 used\n pg_conversion_name_nsp_index: 1024 total in 1 blocks; 280 free (0 \nchunks); 744 used\n pg_class_relname_nsp_index: 1024 total in 1 blocks; 240 free (0 \nchunks); 784 used\n pg_attribute_relid_attnam_index: 1024 total in 1 blocks; 240 free \n(0 chunks); 784 used\n pg_class_oid_index: 1024 total in 1 blocks; 304 free (0 chunks); \n720 used\n pg_amproc_fam_proc_index: 1024 total in 1 blocks; 88 free (0 \nchunks); 936 used\n 
pg_operator_oprname_l_r_n_index: 1024 total in 1 blocks; 88 free (0 \nchunks); 936 used\n pg_index_indexrelid_index: 1024 total in 1 blocks; 304 free (0 \nchunks); 720 used\n pg_type_oid_index: 1024 total in 1 blocks; 304 free (0 chunks); 720 \nused\n pg_rewrite_rel_rulename_index: 1024 total in 1 blocks; 280 free (0 \nchunks); 744 used\n pg_authid_rolname_index: 1024 total in 1 blocks; 304 free (0 \nchunks); 720 used\n pg_auth_members_member_role_index: 1024 total in 1 blocks; 280 free \n(0 chunks); 744 used\n pg_enum_typid_label_index: 1024 total in 1 blocks; 280 free (0 \nchunks); 744 used\n pg_constraint_oid_index: 1024 total in 1 blocks; 344 free (0 \nchunks); 680 used\n pg_conversion_oid_index: 1024 total in 1 blocks; 344 free (0 \nchunks); 680 used\n pg_ts_template_tmplname_index: 1024 total in 1 blocks; 280 free (0 \nchunks); 744 used\n pg_ts_config_map_index: 1024 total in 1 blocks; 192 free (0 \nchunks); 832 used\n pg_namespace_oid_index: 1024 total in 1 blocks; 344 free (0 \nchunks); 680 used\n pg_type_typname_nsp_index: 1024 total in 1 blocks; 240 free (0 \nchunks); 784 used\n pg_operator_oid_index: 1024 total in 1 blocks; 304 free (0 chunks); \n720 used\n pg_amop_opr_fam_index: 1024 total in 1 blocks; 240 free (0 chunks); \n784 used\n pg_proc_oid_index: 1024 total in 1 blocks; 304 free (0 chunks); 720 \nused\n pg_opfamily_am_name_nsp_index: 1024 total in 1 blocks; 192 free (0 \nchunks); 832 used\n pg_ts_template_oid_index: 1024 total in 1 blocks; 344 free (0 \nchunks); 680 used\n MdSmgr: 8192 total in 1 blocks; 7312 free (0 chunks); 880 used\n LOCALLOCK hash: 8192 total in 1 blocks; 3912 free (0 chunks); 4280 used\n Timezones: 48616 total in 2 blocks; 5968 free (0 chunks); 42648 used\n ErrorContext: 8192 total in 1 blocks; 8176 free (4 chunks); 16 used\n2010-11-09 11:36:10 CET FEHLER: Speicher aufgebraucht\n2010-11-09 11:36:10 CET DETAIL: Fehler bei Anfrage mit Grᅵᅵe 32.\n2010-11-09 11:36:10 CET ANWEISUNG: BEGIN;\n CREATE TABLE temp.bwi_atkis0809_forestland AS\n SELECT\n b.gid AS bwi_gid,\n a.dlm0809id,\n a.objart_08,\n a.objart_09\n FROM\n bwi.bwi_pkt AS b,\n atkis.atkis0809_forestland AS a\n WHERE\n b.the_geom && a.the_geom AND ST_Within(b.the_geom, a.the_geom)\n ;\n COMMIT;\n\n", "msg_date": "Tue, 09 Nov 2010 11:39:48 +0100", "msg_from": "Till Kirchner <[email protected]>", "msg_from_op": true, "msg_subject": "out of memory problem" }, { "msg_contents": "Till Kirchner <[email protected]> writes:\n> I get an out of memory problem I don't understand.\n\nIt's pretty clear that something is leaking memory in the per-query\ncontext:\n\n> ExecutorState: 1833967692 total in 230 blocks; 9008 free (3 \n> chunks); 1833958684 used\n\nThere doesn't seem to be anything in your query that is known to cause\nthat sort of thing, so I'm guessing that the leak is being caused by\nthe postgis functions you're using. 
You might ask about this on the\npostgis lists.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 09 Nov 2010 10:22:33 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: out of memory problem " }, { "msg_contents": "Be sure that you are starting PostgreSQL using an account with sufficient memory limits:\n\n ulimit -m\n\nIf the account has memory limit below the server's configuration you may get the out of memory error.\n\nBob Lunney\n\n--- On Tue, 11/9/10, Till Kirchner <[email protected]> wrote:\n\n> From: Till Kirchner <[email protected]>\n> Subject: [PERFORM] out of memory problem\n> To: [email protected]\n> Date: Tuesday, November 9, 2010, 5:39 AM\n> Hello together,\n> \n> I get an out of memory problem I don't understand.\n> The installed Postgres-Version is:\n> PostgreSQL 8.3.7 on i486-pc-linux-gnu, compiled by GCC\n> gcc-4.3.real (Debian 4.3.3-5) 4.3.3\n> It is running on a 32bit Debian machine with 4GB RAM.\n> \n> Thanks for any help in advance\n> \n> Till\n> \n> --\n> -----------------------------------------------------------------------------------------------------------------------------\n> \n> Main settings are as follows:\n> checkpoint_segments 16\n> checkpoint_timeout 120s\n> effective_cache_size 128MB\n> maintenance_work_mem 128MB\n> max_fsm_pages 153600\n> shared_buffers 1GB\n> wal_buffers 256MB\n> work_mem 256MB\n> \n> --\n> -----------------------------------------------------------------------------------------------------------------------------\n> \n> Used query is:\n>     CREATE TABLE temp.bwi_atkis0809_forestland\n> AS\n>     SELECT\n>     b.gid AS bwi_gid,\n>     a.dlm0809id,\n>     a.objart_08,\n>     a.objart_09\n>     FROM\n>     bwi.bwi_pkt AS b,\n>     atkis.atkis0809_forestland AS a\n>     WHERE\n>     b.the_geom && a.the_geom AND\n> ST_Within(b.the_geom, a.the_geom)\n>     ;\n>     COMMIT;\n> \n> (The JOIN is a Spatial one using PostGIS-Functions)\n> \n> --\n> -----------------------------------------------------------------------------------------------------------------------------\n> \n> Full Table Sizes:\n> atkis0809_forestland 2835mb\n> bwi_pkt 47mb\n> \n> --\n> -----------------------------------------------------------------------------------------------------------------------------\n> \n> Error Message is:\n> FEHLER:  Speicher aufgebraucht\n> DETAIL:  Fehler bei Anfrage mit Größe 32.\n> \n> ********** Fehler **********\n> \n> FEHLER: Speicher aufgebraucht\n> SQL Status:53200\n> Detail:Fehler bei Anfrage mit Größe 32.\n> \n> in english:\n> ERROR: out of memory\n> detail: error for request with size 32\n> \n> --\n> -----------------------------------------------------------------------------------------------------------------------------\n> \n> The LOG looks as follows:\n> \n> TopMemoryContext: 42800 total in 5 blocks; 4816 free (5\n> chunks); 37984 used\n>   CFuncHash: 8192 total in 1 blocks; 4936 free (0\n> chunks); 3256 used\n>   TopTransactionContext: 8192 total in 1 blocks; 5520\n> free (0 chunks); 2672 used\n>   Operator class cache: 8192 total in 1 blocks; 3848\n> free (0 chunks); 4344 used\n>   Operator lookup cache: 24576 total in 2 blocks;\n> 14072 free (6 chunks); 10504 used\n>   MessageContext: 65536 total in 4 blocks; 35960 free\n> (10 chunks); 29576 used\n>   smgr relation table: 8192 total in 1 blocks; 2808\n> free (0 chunks); 5384 used\n>   TransactionAbortContext: 32768 total in 1 blocks;\n> 32752 free (0 chunks); 16 used\n>   Portal hash: 8192 total in 1 blocks; 3912 free (0\n> chunks); 
4280 used\n>   PortalMemory: 8192 total in 1 blocks; 8040 free (0\n> chunks); 152 used\n>     PortalHeapMemory: 1024 total in 1 blocks; 896\n> free (0 chunks); 128 used\n>       ExecutorState: 1833967692 total in 230\n> blocks; 9008 free (3 chunks); 1833958684 used\n>         GiST temporary context: 8192\n> total in 1 blocks; 8176 free (0 chunks); 16 used\n>         ExprContext: 0 total in 0\n> blocks; 0 free (0 chunks); 0 used\n>         ExprContext: 8192 total in 1\n> blocks; 8176 free (9 chunks); 16 used\n>         ExprContext: 0 total in 0\n> blocks; 0 free (0 chunks); 0 used\n>         ExprContext: 8192 total in 1\n> blocks; 3880 free (4 chunks); 4312 used\n>   Relcache by OID: 8192 total in 1 blocks; 2856 free\n> (0 chunks); 5336 used\n>   CacheMemoryContext: 667472 total in 20 blocks;\n> 195408 free (3 chunks); 472064 used\n>     pg_toast_12241534_index: 1024 total in 1\n> blocks; 240 free (0 chunks); 784 used\n>     pg_shdepend_depender_index: 1024 total in 1\n> blocks; 152 free (0 chunks); 872 used\n>     pg_shdepend_reference_index: 1024 total in 1\n> blocks; 240 free (0 chunks); 784 used\n>     pg_depend_depender_index: 1024 total in 1\n> blocks; 152 free (0 chunks); 872 used\n>     pg_depend_reference_index: 1024 total in 1\n> blocks; 152 free (0 chunks); 872 used\n>     idx_atkis0809_forestland_the_geom_gist: 1024\n> total in 1 blocks; 136 free (0 chunks); 888 used\n>     atkis0809_forestland_pkey: 1024 total in 1\n> blocks; 344 free (0 chunks); 680 used\n>     btree_bwi_pkt_enr: 1024 total in 1 blocks;\n> 344 free (0 chunks); 680 used\n>     btree_bwi_pkt_tnr: 1024 total in 1 blocks;\n> 344 free (0 chunks); 680 used\n>     rtree_bwi_pkt: 1024 total in 1 blocks; 136\n> free (0 chunks); 888 used\n>     bwi_pkt_pkey: 1024 total in 1 blocks; 344\n> free (0 chunks); 680 used\n>     pg_attrdef_adrelid_adnum_index: 1024 total in\n> 1 blocks; 240 free (0 chunks); 784 used\n>     pg_constraint_conrelid_index: 1024 total in 1\n> blocks; 304 free (0 chunks); 720 used\n>     pg_database_datname_index: 1024 total in 1\n> blocks; 344 free (0 chunks); 680 used\n>     pg_index_indrelid_index: 1024 total in 1\n> blocks; 304 free (0 chunks); 720 used\n>     pg_ts_dict_oid_index: 1024 total in 1 blocks;\n> 344 free (0 chunks); 680 used\n>     pg_aggregate_fnoid_index: 1024 total in 1\n> blocks; 344 free (0 chunks); 680 used\n>     pg_language_name_index: 1024 total in 1\n> blocks; 344 free (0 chunks); 680 used\n>     pg_statistic_relid_att_index: 1024 total in 1\n> blocks; 240 free (0 chunks); 784 used\n>     pg_ts_dict_dictname_index: 1024 total in 1\n> blocks; 280 free (0 chunks); 744 used\n>     pg_namespace_nspname_index: 1024 total in 1\n> blocks; 304 free (0 chunks); 720 used\n>     pg_opfamily_oid_index: 1024 total in 1\n> blocks; 344 free (0 chunks); 680 used\n>     pg_opclass_oid_index: 1024 total in 1 blocks;\n> 304 free (0 chunks); 720 used\n>     pg_ts_parser_prsname_index: 1024 total in 1\n> blocks; 280 free (0 chunks); 744 used\n>     pg_amop_fam_strat_index: 1024 total in 1\n> blocks; 88 free (0 chunks); 936 used\n>     pg_opclass_am_name_nsp_index: 1024 total in 1\n> blocks; 192 free (0 chunks); 832 used\n>     pg_trigger_tgrelid_tgname_index: 1024 total\n> in 1 blocks; 240 free (0 chunks); 784 used\n>     pg_cast_source_target_index: 1024 total in 1\n> blocks; 240 free (0 chunks); 784 used\n>     pg_auth_members_role_member_index: 1024 total\n> in 1 blocks; 280 free (0 chunks); 744 used\n>     pg_attribute_relid_attnum_index: 1024 total\n> in 1 blocks; 240 free (0 chunks); 784 
used\n>     pg_ts_config_cfgname_index: 1024 total in 1\n> blocks; 280 free (0 chunks); 744 used\n>     pg_authid_oid_index: 1024 total in 1 blocks;\n> 304 free (0 chunks); 720 used\n>     pg_ts_config_oid_index: 1024 total in 1\n> blocks; 344 free (0 chunks); 680 used\n>     pg_conversion_default_index: 1024 total in 1\n> blocks; 128 free (0 chunks); 896 used\n>     pg_language_oid_index: 1024 total in 1\n> blocks; 344 free (0 chunks); 680 used\n>     pg_enum_oid_index: 1024 total in 1 blocks;\n> 344 free (0 chunks); 680 used\n>     pg_proc_proname_args_nsp_index: 1024 total in\n> 1 blocks; 152 free (0 chunks); 872 used\n>     pg_ts_parser_oid_index: 1024 total in 1\n> blocks; 344 free (0 chunks); 680 used\n>     pg_database_oid_index: 1024 total in 1\n> blocks; 304 free (0 chunks); 720 used\n>     pg_conversion_name_nsp_index: 1024 total in 1\n> blocks; 280 free (0 chunks); 744 used\n>     pg_class_relname_nsp_index: 1024 total in 1\n> blocks; 240 free (0 chunks); 784 used\n>     pg_attribute_relid_attnam_index: 1024 total\n> in 1 blocks; 240 free (0 chunks); 784 used\n>     pg_class_oid_index: 1024 total in 1 blocks;\n> 304 free (0 chunks); 720 used\n>     pg_amproc_fam_proc_index: 1024 total in 1\n> blocks; 88 free (0 chunks); 936 used\n>     pg_operator_oprname_l_r_n_index: 1024 total\n> in 1 blocks; 88 free (0 chunks); 936 used\n>     pg_index_indexrelid_index: 1024 total in 1\n> blocks; 304 free (0 chunks); 720 used\n>     pg_type_oid_index: 1024 total in 1 blocks;\n> 304 free (0 chunks); 720 used\n>     pg_rewrite_rel_rulename_index: 1024 total in\n> 1 blocks; 280 free (0 chunks); 744 used\n>     pg_authid_rolname_index: 1024 total in 1\n> blocks; 304 free (0 chunks); 720 used\n>     pg_auth_members_member_role_index: 1024 total\n> in 1 blocks; 280 free (0 chunks); 744 used\n>     pg_enum_typid_label_index: 1024 total in 1\n> blocks; 280 free (0 chunks); 744 used\n>     pg_constraint_oid_index: 1024 total in 1\n> blocks; 344 free (0 chunks); 680 used\n>     pg_conversion_oid_index: 1024 total in 1\n> blocks; 344 free (0 chunks); 680 used\n>     pg_ts_template_tmplname_index: 1024 total in\n> 1 blocks; 280 free (0 chunks); 744 used\n>     pg_ts_config_map_index: 1024 total in 1\n> blocks; 192 free (0 chunks); 832 used\n>     pg_namespace_oid_index: 1024 total in 1\n> blocks; 344 free (0 chunks); 680 used\n>     pg_type_typname_nsp_index: 1024 total in 1\n> blocks; 240 free (0 chunks); 784 used\n>     pg_operator_oid_index: 1024 total in 1\n> blocks; 304 free (0 chunks); 720 used\n>     pg_amop_opr_fam_index: 1024 total in 1\n> blocks; 240 free (0 chunks); 784 used\n>     pg_proc_oid_index: 1024 total in 1 blocks;\n> 304 free (0 chunks); 720 used\n>     pg_opfamily_am_name_nsp_index: 1024 total in\n> 1 blocks; 192 free (0 chunks); 832 used\n>     pg_ts_template_oid_index: 1024 total in 1\n> blocks; 344 free (0 chunks); 680 used\n>   MdSmgr: 8192 total in 1 blocks; 7312 free (0\n> chunks); 880 used\n>   LOCALLOCK hash: 8192 total in 1 blocks; 3912 free (0\n> chunks); 4280 used\n>   Timezones: 48616 total in 2 blocks; 5968 free (0\n> chunks); 42648 used\n>   ErrorContext: 8192 total in 1 blocks; 8176 free (4\n> chunks); 16 used\n> 2010-11-09 11:36:10 CET FEHLER:  Speicher\n> aufgebraucht\n> 2010-11-09 11:36:10 CET DETAIL:  Fehler bei Anfrage\n> mit Größe 32.\n> 2010-11-09 11:36:10 CET ANWEISUNG:  BEGIN;\n>     CREATE TABLE temp.bwi_atkis0809_forestland\n> AS\n>     SELECT\n>     b.gid AS bwi_gid,\n>     a.dlm0809id,\n>     a.objart_08,\n>     a.objart_09\n>     FROM\n>     bwi.bwi_pkt 
AS b,\n>     atkis.atkis0809_forestland AS a\n>     WHERE\n>     b.the_geom && a.the_geom AND\n> ST_Within(b.the_geom, a.the_geom)\n>     ;\n>     COMMIT;\n> \n> \n> -- Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\n \n", "msg_date": "Tue, 9 Nov 2010 08:02:42 -0800 (PST)", "msg_from": "Bob Lunney <[email protected]>", "msg_from_op": false, "msg_subject": "Re: out of memory problem" } ]
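One hedged way to test the per-row-leak theory above without risking the OOM killer again is to run the same spatial join over a small slice of the point table and watch the backend's resident memory while it runs; the table and column names below are the ones from the report, and the LIMIT is arbitrary:

SELECT count(*)
FROM (SELECT * FROM bwi.bwi_pkt LIMIT 10000) AS b
JOIN atkis.atkis0809_forestland AS a
  ON b.the_geom && a.the_geom
 AND ST_Within(b.the_geom, a.the_geom);

If memory grows roughly in proportion to the LIMIT, the leak is per-tuple and points at the PostGIS calls, as suggested; independently of that, work_mem = 256MB and wal_buffers = 256MB are unusually high for an 8.3 server on a 32-bit machine, where each backend only has a few GB of address space to work with.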
[ { "msg_contents": "The semi-join and anti-join have helped us quite a bit, but we have\nseen a situation where anti-join is chosen even though it is slower\nthan the \"old fashioned\" plan. I know there have been other reports\nof this, but I just wanted to go on record with my details.\n\nThe query:\n\ndelete from \"DbTranLogRecord\"\n where not exists\n (select * from \"DbTranRepository\" r\n where r.\"countyNo\" = \"DbTranLogRecord\".\"countyNo\"\n and r.\"tranImageSeqNo\"\n = \"DbTranLogRecord\".\"tranImageSeqNo\");\n\nOld plan on 8.3.7:\n\n Seq Scan on \"DbTranLogRecord\" (cost=0.00..1224227790.06\nrows=333387520 width=6)\n Filter: (NOT (subplan))\n SubPlan\n -> Index Scan using \"DbTranRepositoryPK\" on \"DbTranRepository\"\nr (cost=0.00..1.83 rows=1 width=974)\n Index Cond: (((\"countyNo\")::smallint = ($0)::smallint)\nAND ((\"tranImageSeqNo\")::numeric = ($1)::numeric))\n\nDeletes about 9.2 million rows in 7 hours and 20 minutes.\n\nNew plan on 9.0.1:\n\n Delete (cost=0.00..93918390.38 rows=1 width=12)\n -> Merge Anti Join (cost=0.00..93918390.38 rows=1 width=12)\n Merge Cond: (((\"DbTranLogRecord\".\"countyNo\")::smallint =\n(r.\"countyNo\")::smallint) AND\n((\"DbTranLogRecord\".\"tranImageSeqNo\")::numeric =\n(r.\"tranImageSeqNo\")::numeric))\n -> Index Scan using \"DbTranLogRecordPK\" on\n\"DbTranLogRecord\" (cost=0.00..73143615.91 rows=675405504 width=20)\n -> Index Scan using \"DbTranRepositoryPK\" on\n\"DbTranRepository\" r (cost=0.00..16328700.43 rows=152541168\nwidth=20)\n\nCancelled after 39 hours and 25 minutes.\n\nI know how to work around it by using OFFSET 0 or tweaking the\ncosting for that one query; just sharing the information.\n \nAlso, we know these tables might be good candidates for\npartitioning, but that's an issue for another day.\n \n Table \"public.DbTranLogRecord\"\n Column | Type | Modifiers\n----------------+-------------------+-----------\n countyNo | \"CountyNoT\" | not null\n tranImageSeqNo | \"TranImageSeqNoT\" | not null\n logRecordSeqNo | \"LogRecordSeqNoT\" | not null\n operation | \"OperationT\" | not null\n tableName | \"TableNameT\" | not null\nIndexes:\n \"DbTranLogRecordPK\" PRIMARY KEY, btree (\"countyNo\",\n\"tranImageSeqNo\", \"logRecordSeqNo\")\n \"DbTranLogRecord_TableNameSeqNo\" btree (\"countyNo\", \"tableName\",\n\"tranImageSeqNo\", operation)\n \n Table \"public.DbTranRepository\"\n Column | Type | Modifiers\n------------------+------------------------+-----------\n countyNo | \"CountyNoT\" | not null\n tranImageSeqNo | \"TranImageSeqNoT\" | not null\n timestampValue | \"TimestampT\" | not null\n transactionImage | \"ImageT\" |\n status | character(1) | not null\n queryName | \"QueryNameT\" |\n runDuration | numeric(15,0) |\n userId | \"UserIdT\" |\n functionalArea | \"FunctionalAreaT\" |\n sourceRef | character varying(255) |\n url | \"URLT\" |\n tranImageSize | numeric(15,0) |\nIndexes:\n \"DbTranRepositoryPK\" PRIMARY KEY, btree (\"countyNo\",\n\"tranImageSeqNo\") CLUSTER\n \"DbTranRepository_UserId\" btree (\"countyNo\", \"userId\",\n\"tranImageSeqNo\")\n \"DbTranRepository_timestamp\" btree (\"countyNo\", \"timestampValue\")\n \n relname | relpages | reltuples |\npg_relation_size\n--------------------------------+----------+-------------+------------------\n DbTranLogRecord | 5524411 | 6.75406e+08 | 42 GB\n DbTranLogRecordPK | 6581122 | 6.75406e+08 | 50 GB\n DbTranLogRecord_TableNameSeqNo | 6803441 | 6.75406e+08 | 52 GB\n DbTranRepository | 22695447 | 1.52376e+08 | 173 GB\n DbTranRepositoryPK | 1353643 | 1.52376e+08 | 10 GB\n 
DbTranRepository_UserId | 1753793 | 1.52376e+08 | 13 GB\n DbTranRepository_timestamp | 1353682 | 1.52376e+08 | 10 GB\n(7 rows)\n \noprofile while not much but this delete is running:\n \nsamples % symbol name\n2320174 33.7617 index_getnext\n367268 5.3443 LWLockAcquire\n299131 4.3528 hash_search_with_hash_value\n249459 3.6300 HeapTupleSatisfiesMVCC\n229558 3.3404 PinBuffer\n222673 3.2402 _bt_checkkeys\n204416 2.9745 LWLockRelease\n194336 2.8279 heap_page_prune_opt\n152353 2.2169 XidInMVCCSnapshot\n121131 1.7626 AllocSetAlloc\n91123 1.3260 SearchCatCache\n88394 1.2863 nocache_index_getattr\n85936 1.2505 pglz_compress\n76531 1.1136 heap_hot_search_buffer\n69532 1.0118 _mdfd_getseg\n68743 1.0003 FunctionCall2\n64720 0.9418 TransactionIdPrecedes\n45298 0.6591 texteq\n43183 0.6284 UnpinBuffer\n40666 0.5917 base_yyparse\n \nIf you want more details or the opannotate level, let me know.\n \n-Kevin\n", "msg_date": "Tue, 09 Nov 2010 15:18:13 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "anti-join chosen even when slower than old plan" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> wrote:\n \n> samples % symbol name\n> 2320174 33.7617 index_getnext\n \nI couldn't resist seeing where the time went within this function.\nOver 13.7% of the opannotate run time was on this bit of code:\n\n /*\n * The xmin should match the previous xmax value, else chain is\n * broken. (Note: this test is not optional because it protects\n * us against the case where the prior chain member's xmax aborted\n * since we looked at it.)\n */\n if (TransactionIdIsValid(scan->xs_prev_xmax) &&\n !TransactionIdEquals(scan->xs_prev_xmax,\n HeapTupleHeaderGetXmin(heapTuple->t_data)))\n break;\n \nI can't see why it would be such a hotspot, but it is.\n \n-Kevin\n", "msg_date": "Tue, 09 Nov 2010 17:07:42 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> \"Kevin Grittner\" <[email protected]> wrote:\n>> samples % symbol name\n>> 2320174 33.7617 index_getnext\n \n> I couldn't resist seeing where the time went within this function.\n> Over 13.7% of the opannotate run time was on this bit of code:\n\n> /*\n> * The xmin should match the previous xmax value, else chain is\n> * broken. (Note: this test is not optional because it protects\n> * us against the case where the prior chain member's xmax aborted\n> * since we looked at it.)\n> */\n> if (TransactionIdIsValid(scan->xs_prev_xmax) &&\n> !TransactionIdEquals(scan->xs_prev_xmax,\n> HeapTupleHeaderGetXmin(heapTuple->t_data)))\n> break;\n \n> I can't see why it would be such a hotspot, but it is.\n\nMain-memory access waits, maybe? 
If at_chain_start is false, that xmin\nfetch would be the first actual touch of a given heap tuple, and could\nbe expected to have to wait for a cache line to be pulled in from RAM.\nHowever, you'd have to be spending a lot of time chasing through long\nHOT chains before that would happen enough to make this a hotspot...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 09 Nov 2010 18:17:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan " }, { "msg_contents": "Tom Lane <[email protected]> wrote:\n \n> However, you'd have to be spending a lot of time chasing through\n> long HOT chains before that would happen enough to make this a\n> hotspot...\n \nThat makes it all the more mysterious, then. These tables are\ninsert-only except for a weekly delete of one week of the oldest\ndata. The parent table, with the date, is deleted first, then this\ntable deletes \"where not exists\" a corresponding parent. I can't\nsee how we'd ever have a HOT chain in these tables.\n \nIs there anything in particular you'd like me to check?\n \n-Kevin\n", "msg_date": "Tue, 09 Nov 2010 17:24:48 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> The semi-join and anti-join have helped us quite a bit, but we have\n> seen a situation where anti-join is chosen even though it is slower\n> than the \"old fashioned\" plan. I know there have been other reports\n> of this, but I just wanted to go on record with my details.\n\nIn principle, the old-style plan ought to be equivalent to a nestloop\nantijoin with a seqscan of DbTranLogRecord on the outside and an\nindexscan of DbTranRepository on the inside. Can you force it to choose\nsuch a plan by setting enable_mergejoin off (and maybe enable_hashjoin\ntoo)? If so, it'd be interesting to see the estimated costs and actual\nruntime on 9.0 for that plan.\n\nIt would also be interesting to check estimated and actual costs for the\nSELECT COUNT(*) versions of these queries, ie, no actual delete. I'm\nsuspicious that the cost differential has nothing to do with antijoin\nvs. subplan, and everything to do with whether the targeted tuples are\nbeing deleted in physical order (thus improving locality of access for\nthe deletions). If it's the latter, see previous discussions about\npossibly sorting update/delete targets by CTID before applying the\nactions.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 09 Nov 2010 19:08:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan " }, { "msg_contents": "you're joining on more than one key. That always hurts performance.\n", "msg_date": "Wed, 10 Nov 2010 05:36:59 +0000", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "Tom Lane <[email protected]> wrote:\n> \"Kevin Grittner\" <[email protected]> writes:\n>> The semi-join and anti-join have helped us quite a bit, but we\n>> have seen a situation where anti-join is chosen even though it is\n>> slower than the \"old fashioned\" plan. 
I know there have been\n>> other reports of this, but I just wanted to go on record with my\n>> details.\n> \n> In principle, the old-style plan ought to be equivalent to a\n> nestloop antijoin with a seqscan of DbTranLogRecord on the outside\n> and an indexscan of DbTranRepository on the inside. Can you force\n> it to choose such a plan by setting enable_mergejoin off (and\n> maybe enable_hashjoin too)?\n \nWell, I got what I think is the equivalent plan by adding OFFSET 0\nto the subquery:\n \n Delete (cost=0.00..1239005015.67 rows=337702752 width=6)\n -> Seq Scan on \"DbTranLogRecord\" (cost=0.00..1239005015.67\nrows=337702752 width=6)\n Filter: (NOT (SubPlan 1))\n SubPlan 1\n -> Limit (cost=0.00..1.82 rows=1 width=974)\n -> Index Scan using \"DbTranRepositoryPK\" on\n\"DbTranRepository\" r (cost=0.00..1.82 rows=1 width=974)\n Index Cond: (((\"countyNo\")::smallint =\n($0)::smallint) AND ((\"tranImageSeqNo\")::numeric = ($1)::numeric))\n \n> If so, it'd be interesting to see the estimated costs and actual\n> runtime on 9.0 for that plan.\n \nUnfortunately, based on the oprofile information I decided to check\nout the plan I would get by boosting cpu_index_tuple_cost by a\nfactor of 20. The resulting plan was:\n \n Delete (cost=132623778.83..139506491.18 rows=1 width=12)\n -> Merge Anti Join (cost=132623778.83..139506491.18 rows=1\nwidth=12)\n Merge Cond: (((\"DbTranLogRecord\".\"tranImageSeqNo\")::numeric\n= (r.\"tranImageSeqNo\")::numeric) AND\n((\"DbTranLogRecord\".\"countyNo\")::smallint =\n(r.\"countyNo\")::smallint))\n -> Sort (cost=107941675.79..109630189.55 rows=675405504\nwidth=20)\n Sort Key: \"DbTranLogRecord\".\"tranImageSeqNo\",\n\"DbTranLogRecord\".\"countyNo\"\n -> Seq Scan on \"DbTranLogRecord\" \n(cost=0.00..7306496.14 rows=675405504 width=20)\n -> Materialize (cost=24682103.04..25443983.12\nrows=152376016 width=20)\n -> Sort (cost=24682103.04..25063043.08\nrows=152376016 width=20)\n Sort Key: r.\"tranImageSeqNo\", r.\"countyNo\"\n -> Seq Scan on \"DbTranRepository\" r \n(cost=0.00..3793304.86 rows=152376016 width=20)\n \nThat looked like it had potential, so I started that off and went\nhome before I got your post. It finished in 3 hours and 31 minutes\n-- more than twice as fast as the nestloop plan used under 8.3.\n \nBut wait -- it turns out that this pain was self-inflicted. Based\non heavy testing of the interactive queries which users run against\nthis database we tuned the database for \"fully-cached\" settings,\nwith both random_page_cost and _seq_page_cost at 0.1. In a\npractical sense, the users are almost always running these queries\nagainst very recent data which is, in fact, heavily cached -- so\nit's no surprise that the queries they run perform best with plans\nbased on such costing. The problem is that these weekly maintenance\nruns need to pass the entire database, so caching effects are far\nless pronounced. If I set seq_page_cost = 1 and random_page_cost =\n2 I get exactly the same (fast) plan as above.\n \nI guess the lesson here is not to use the same costing for\ndatabase-wide off-hours maintenance queries as for ad hoc queries\nagainst a smaller set of recent data by users who expect quick\nresponse time. 
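For the archives, the tweak in the maintenance script amounts to nothing\nfancier than something like this (using the values above) ahead of the\nbig delete:\n \nset seq_page_cost = 1;\nset random_page_cost = 2;\n \ndelete from \"DbTranLogRecord\"\n where not exists\n (select * from \"DbTranRepository\" r\n where r.\"countyNo\" = \"DbTranLogRecord\".\"countyNo\"\n and r.\"tranImageSeqNo\"\n = \"DbTranLogRecord\".\"tranImageSeqNo\");\n \n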
I'm fine with tweaking the costs in our maintenance\nscripts, but it does tend to make me daydream about how the\noptimizer might possibly auto-tweak such things....\n \nI assume there's now no need to get timings for the old plan?\n \n-Kevin\n", "msg_date": "Wed, 10 Nov 2010 09:15:50 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Tom Lane <[email protected]> wrote:\n>> In principle, the old-style plan ought to be equivalent to a\n>> nestloop antijoin with a seqscan of DbTranLogRecord on the outside\n>> and an indexscan of DbTranRepository on the inside. Can you force\n>> it to choose such a plan by setting enable_mergejoin off (and\n>> maybe enable_hashjoin too)?\n \n> Well, I got what I think is the equivalent plan by adding OFFSET 0\n> to the subquery:\n\nNo, that *is* the old-style plan (plus a useless Limit node, which will\nsurely make it marginally slower). My point was that a nestloop\nantijoin plan should have the same access pattern and hence very similar\nperformance, maybe even a little better due to not having the SubPlan\nmachinery in there.\n \n> But wait -- it turns out that this pain was self-inflicted. Based\n> on heavy testing of the interactive queries which users run against\n> this database we tuned the database for \"fully-cached\" settings,\n> with both random_page_cost and _seq_page_cost at 0.1.\n\nAh. So it was underestimating the cost of the full-table indexscans,\nand my guess about nonsequential application of the delete actions\nwasn't the right guess. The merge antijoin does seem like it should be\nthe fastest way of doing such a large join, so I think the problem is\nsolved.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Nov 2010 10:33:22 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan " }, { "msg_contents": "On Wed, Nov 10, 2010 at 10:15 AM, Kevin Grittner\n<[email protected]> wrote:\n> But wait -- it turns out that this pain was self-inflicted.  Based\n> on heavy testing of the interactive queries which users run against\n> this database we tuned the database for \"fully-cached\" settings,\n> with both random_page_cost and _seq_page_cost at 0.1.  In a\n> practical sense, the users are almost always running these queries\n> against very recent data which is, in fact, heavily cached -- so\n> it's no surprise that the queries they run perform best with plans\n> based on such costing.  The problem is that these weekly maintenance\n> runs need to pass the entire database, so caching effects are far\n> less pronounced.  If I set seq_page_cost = 1 and random_page_cost =\n> 2 I get exactly the same (fast) plan as above.\n>\n> I guess the lesson here is not to use the same costing for\n> database-wide off-hours maintenance queries as for ad hoc queries\n> against a smaller set of recent data by users who expect quick\n> response time.  I'm fine with tweaking the costs in our maintenance\n> scripts, but it does tend to make me daydream about how the\n> optimizer might possibly auto-tweak such things....\n\nWow. 
That's fascinating, and if you don't mind, I might mention this\npotential problem in a future talk at some point.\n\nI've given some thought in the past to trying to maintain some model\nof which parts of the database are likely to be cached, and trying to\nadjust costing estimates based on that data. But it's a really hard\nproblem, because what is and is not in cache can change relatively\nquickly, and you don't want to have too much plan instability. Also,\nfor many workloads, you'd need to have pretty fine-grained statistics\nto figure out anything useful, which would be expensive and difficult\nto maintain.\n\nBut thinking over what you've written here, I'm reminded of something\nPeter said years ago, also about the optimizer. He was discussed the\nratio of the estimated cost to the actual cost and made an off-hand\nremark that efforts had been made over the years to make that ratio\nmore consistent (i.e. improve the quality of the cost estimates) but\nthat they'd been abandoned because they didn't necessarily produce\nbetter plans. Applying that line of thinking to this problem, maybe\nwe should give up on trying to make the estimates truly model reality,\nand focus more on assigning them values which work well in practice.\nFor example, in your case, it would be sufficient to estimate the\namount of data that a given query is going to grovel through and then\napplying some heuristic to choose values for random_page_cost and\nseq_page_cost based on the ratio of that value to, I don't know,\neffective_cache_size.\n\nUnfortunately, to know how much data we're going to grovel through, we\nneed to know the plan; and to decide on the right plan, we need to\nknow how much data we're going to grovel through.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Wed, 10 Nov 2010 17:07:52 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "Robert Haas <[email protected]> wrote:\n \n> Wow. That's fascinating, and if you don't mind, I might mention\n> this potential problem in a future talk at some point.\n \nI don't mind at all.\n \n> For example, in your case, it would be sufficient to estimate the\n> amount of data that a given query is going to grovel through and\n> then applying some heuristic to choose values for random_page_cost\n> and seq_page_cost based on the ratio of that value to, I don't\n> know, effective_cache_size.\n \nThat's where my day-dreams on the topic have been starting.\n \n> Unfortunately, to know how much data we're going to grovel\n> through, we need to know the plan; and to decide on the right\n> plan, we need to know how much data we're going to grovel through.\n \nAnd that's where they've been ending.\n \nThe only half-sane answer I've thought of is to apply a different\ncost to full-table or full-index scans based on the ratio with\neffective cache size. 
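Just to give the shape of the idea (the numbers and the curve are nothing\nmore than a straw man), I picture the per-page cost of such a scan sliding\nbetween the \"cached\" and \"uncached\" figures in proportion to how much of\nthe cache the relation would chew up -- something like:\n \ncreate function scan_page_cost\n (rel_pages bigint, cache_pages bigint,\n cached_cost float8, uncached_cost float8)\n returns float8 language sql immutable as $$\n select $3 + ($4 - $3) * least(1.0, $1::float8 / $2);\n$$;\n \n-- e.g. the 42 GB DbTranLogRecord (5524411 pages) against a cache of,\n-- say, 20 GB (2621440 pages), with our 0.1 \"cached\" setting and a\n-- \"cold\" cost of 1.0:\n-- select scan_page_cost(5524411, 2621440, 0.1, 1.0); -- => 1.0\n \n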
A higher cost for such scans is something\nwhich I've already postulated might be worthwhile for SSI, because\nof the increased risk of rw-conflicts which could ultimately\ncontribute to serialization failures -- to attempt to model, at\nleast in some crude way, the costs associated with transaction\nretry.\n \n-Kevin\n", "msg_date": "Wed, 10 Nov 2010 16:43:43 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Robert Haas <[email protected]> wrote:\n>> Unfortunately, to know how much data we're going to grovel\n>> through, we need to know the plan; and to decide on the right\n>> plan, we need to know how much data we're going to grovel through.\n \n> And that's where they've been ending.\n \n> The only half-sane answer I've thought of is to apply a different\n> cost to full-table or full-index scans based on the ratio with\n> effective cache size.\n\nThis might have some connection to some rather half-baked ideas I've\nbeen having in connection with the generalized-inner-indexscan problem.\nI don't have anything in the way of a coherent presentation to make yet,\nbut the thing I'm being forced to realize is that sane modeling of a\ncomplex subplan that's on the inside of a nestloop join requires\ntreating *every* scan type as having different costs \"the first time\"\nversus \"during rescan\". If the total amount of data touched in the\nquery is less than effective_cache_size, it's not unreasonable to\nsuppose that I/O costs during rescan might be zero, even for a seqscan or\na non-parameterized indexscan. In fact, only parameterized indexscans\nwould be able to touch pages they'd not touched the first time, and so\nthey ought to have higher not lower rescan costs in this environment.\nBut once the total data volume exceeds effective_cache_size, you have to\ndo something different since you shouldn't any longer assume the data is\nall cached from the first scan. (This isn't quite as hard as the case\nyou're talking about, since I think the relevant data volume is the sum\nof the sizes of the tables used in the query; which is easy to\nestimate at the start of planning, unlike the portion of the tables\nthat actually gets touched.)\n\nAn idea that isn't even half-baked yet is that once we had a cost model\nlike that, we might be able to produce plans that are well-tuned for a\nheavily cached environment by applying the \"rescan\" cost model even to\nthe first scan for a particular query. So that might lead to some sort\nof \"assume_cached\" GUC parameter, and perhaps Kevin could tune his\nreporting queries by turning that off instead of messing with individual\ncost constants. Or maybe we could be smarter if we could extract an\nestimate for the amount of data touched in the query ... 
but like you,\nI don't see a good way to get that number soon enough.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Nov 2010 18:07:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan " }, { "msg_contents": "On Wed, Nov 10, 2010 at 6:07 PM, Tom Lane <[email protected]> wrote:\n> \"Kevin Grittner\" <[email protected]> writes:\n>> Robert Haas <[email protected]> wrote:\n>>> Unfortunately, to know how much data we're going to grovel\n>>> through, we need to know the plan; and to decide on the right\n>>> plan, we need to know how much data we're going to grovel through.\n>\n>> And that's where they've been ending.\n>\n>> The only half-sane answer I've thought of is to apply a different\n>> cost to full-table or full-index scans based on the ratio with\n>> effective cache size.\n\nKevin, yes, good point. Bravo! Let's do that. Details TBD, but\nsuppose effective_cache_size = 1GB. What we know for sure is that a 4\nGB table is not going to be fully cached but a 4 MB table may well be.\n In fact, I think we should assume that the 4 MB table IS cached,\nbecause the point is that if it's used at all, it soon will be. It's\nalmost certainly a bad idea to build a plan around the idea of\nminimizing reads from that 4 MB table in favor of doing a substantial\namount of additional work somewhere else. I suppose this could break\ndown if you had hundreds and hundreds of 4 MB tables all of which were\naccessed regularly, but that's an unusual situation, and anyway it's\nnot clear that assuming them all uncached is going to be any better\nthan assuming them all cached.\n\n> This might have some connection to some rather half-baked ideas I've\n> been having in connection with the generalized-inner-indexscan problem.\n> I don't have anything in the way of a coherent presentation to make yet,\n> but the thing I'm being forced to realize is that sane modeling of a\n> complex subplan that's on the inside of a nestloop join requires\n> treating *every* scan type as having different costs \"the first time\"\n> versus \"during rescan\".  If the total amount of data touched in the\n> query is less than effective_cache_size, it's not unreasonable to\n> suppose that I/O costs during rescan might be zero, even for a seqscan or\n> a non-parameterized indexscan.  In fact, only parameterized indexscans\n> would be able to touch pages they'd not touched the first time, and so\n> they ought to have higher not lower rescan costs in this environment.\n> But once the total data volume exceeds effective_cache_size, you have to\n> do something different since you shouldn't any longer assume the data is\n> all cached from the first scan.  (This isn't quite as hard as the case\n> you're talking about, since I think the relevant data volume is the sum\n> of the sizes of the tables used in the query; which is easy to\n> estimate at the start of planning, unlike the portion of the tables\n> that actually gets touched.)\n\nWell, we don't want the costing model to have sharp edges.\neffective_cache_size can't be taken as much more than an educated\nguess, and what actually happens will depend a lot on what else is\ngoing on on the system. If only one query is running on a system at a\ntime and it is repeatedly seq-scanning a large table, the cost of\nreading pages in will be very small until the table grows large enough\nthat you can't fit the whole thing in memory at once, and then will\nabruptly go through the roof. 
But realistically you're not going to\nknow exactly where that edge is going to be, because you can't predict\nexactly how much concurrent activity there will be, for example, or\nhow much work_mem allocations will push out of the OS buffer cache.\nSo I'm thinking we should start the costs at something like 0.05/0.05\nfor tables that are much smaller than effective_cache_size and ramp up\nto 4/1 for tables that are larger than effective_cache_size. Maybe\njust by linearly ramping up, although that has a certain feeling of\nbeing without mathematical soundness.\n\n> An idea that isn't even half-baked yet is that once we had a cost model\n> like that, we might be able to produce plans that are well-tuned for a\n> heavily cached environment by applying the \"rescan\" cost model even to\n> the first scan for a particular query.  So that might lead to some sort\n> of \"assume_cached\" GUC parameter, and perhaps Kevin could tune his\n> reporting queries by turning that off instead of messing with individual\n> cost constants.\n\nI think the real goal here should be to try to avoid needing a GUC. A\nlot of people could benefit if the system could make some attempt to\nrecognize on its own which queries are likely to be cached. We\nalready have parameters you can hand-tune for each query as necessary.\n Being able to set some parameters system-wide and then get sensible\nbehavior automatically would be much nicer.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Wed, 10 Nov 2010 22:47:21 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "On 11/10/2010 5:43 PM, Kevin Grittner wrote:\n> The only half-sane answer I've thought of is to apply a different\n> cost to full-table or full-index scans based on the ratio with\n> effective cache size.\n\nThe \"effective_cache_size\" is, in my humble opinion, a wrong method. It \nwould be much easier to have a parameter, let's call it \n\"optimizer_index_caching\", which would give the assumption of the \npercentage of an index that is cached. In other words, if \n\"optimizer_index_caching\" was set to 80, the optimizer would assume that \n80% of any index is cached and would apply different cost estimate. It's \nnot exact but it's simple and modifiable. It would also be a great tool \nin the hands of the DBA which has to manage OLTP database or DW database \nand would be able to create a definitive bias toward one type of the \nexecution plan.\nI have to confess that the idea about such parameter is not entirely \nmine:*http://tinyurl.com/33gu4f6*\n\n-- \nMladen Gogala\nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com
", "msg_date": "Thu, 11 Nov 2010 00:14:18 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "2010/11/11 Robert Haas <[email protected]>\n\n>\n> But thinking over what you've written here, I'm reminded of something\n> Peter said years ago, also about the optimizer.  He was discussed the\n> ratio of the estimated cost to the actual cost and made an off-hand\n> remark that efforts had been made over the years to make that ratio\n> more consistent (i.e. improve the quality of the cost estimates) but\n> that they'd been abandoned because they didn't necessarily produce\n> better plans.  Applying that line of thinking to this problem, maybe\n> we should give up on trying to make the estimates truly model reality,\n> and focus more on assigning them values which work well in practice.\n> For example, in your case, it would be sufficient to estimate the\n> amount of data that a given query is going to grovel through and then\n> applying some heuristic to choose values for random_page_cost and\n> seq_page_cost based on the ratio of that value to, I don't know,\n> effective_cache_size.\n>\n\nAs for me, the simplest solution would be to allow setting costs on a\nper-relation basis. E.g. I know that this relation is most of the time in memory\nand other one (archive) is on the disk. This could work like a charm along\nwith buffer pools (portions of shared cache) - tables (or indexes) that are\nrequired to be cached can be assigned to bufferpool that has enough size to\nhold all the data, archive ones - to small bufferpool. This can guarantee\nthat after query on the archive data, cached tables are still cached.\nThis solution, however, does not help on tables where only some portion of\nthe table is actively used. The solution can be to allow setting costs via partial\nindexes - e.g. \"for any table access using this index, use these cost\nvalues\". This, BTW, will make table access via given index more preferable.\n\n-- \nBest regards,\n Vitalii Tymchyshyn
", "msg_date": "Thu, 11 Nov 2010 10:01:12 +0200", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "On Wed, Nov 10, 2010 at 10:47:21PM -0500, Robert Haas wrote:\n> On Wed, Nov 10, 2010 at 6:07 PM, Tom Lane <[email protected]> wrote:\n> > \"Kevin Grittner\" <[email protected]> writes:\n> >> Robert Haas <[email protected]> wrote:\n> >>> Unfortunately, to know how much data we're going to grovel\n> >>> through, we need to know the plan; and to decide on the right\n> >>> plan, we need to know how much data we're going to grovel through.\n> >\n> >> And that's where they've been ending.\n> >\n> >> The only half-sane answer I've thought of is to apply a different\n> >> cost to full-table or full-index scans based on the ratio with\n> >> effective cache size.\n> \n> Kevin, yes, good point.  Bravo!  Let's do that.  Details TBD, but\n> suppose effective_cache_size = 1GB.  What we know for sure is that a 4\n> GB table is not going to be fully cached but a 4 MB table may well be.\n> In fact, I think we should assume that the 4 MB table IS cached,\n> because the point is that if it's used at all, it soon will be.  It's\n> almost certainly a bad idea to build a plan around the idea of\n> minimizing reads from that 4 MB table in favor of doing a substantial\n> amount of additional work somewhere else.  I suppose this could break\n> down if you had hundreds and hundreds of 4 MB tables all of which were\n> accessed regularly, but that's an unusual situation, and anyway it's\n> not clear that assuming them all uncached is going to be any better\n> than assuming them all cached.\n> \n> > This might have some connection to some rather half-baked ideas I've\n> > been having in connection with the generalized-inner-indexscan problem.\n> > I don't have anything in the way of a coherent presentation to make yet,\n> > but the thing I'm being forced to realize is that sane modeling of a\n> > complex subplan that's on the inside of a nestloop join requires\n> > treating *every* scan type as having different costs \"the first time\"\n> > versus \"during rescan\".  If the total amount of data touched in the\n> > query is less than effective_cache_size, it's not unreasonable to\n> > suppose that I/O costs during rescan might be zero, even for a seqscan or\n> > a non-parameterized indexscan. 
?In fact, only parameterized indexscans\n> > would be able to touch pages they'd not touched the first time, and so\n> > they ought to have higher not lower rescan costs in this environment.\n> > But once the total data volume exceeds effective_cache_size, you have to\n> > do something different since you shouldn't any longer assume the data is\n> > all cached from the first scan. ?(This isn't quite as hard as the case\n> > you're talking about, since I think the relevant data volume is the sum\n> > of the sizes of the tables used in the query; which is easy to\n> > estimate at the start of planning, unlike the portion of the tables\n> > that actually gets touched.)\n> \n> Well, we don't want the costing model to have sharp edges.\n> effective_cache_size can't be taken as much more than an educated\n> guess, and what actually happens will depend a lot on what else is\n> going on on the system. If only one query is running on a system at a\n> time and it is repeatedly seq-scanning a large table, the cost of\n> reading pages in will be very small until the table grows large enough\n> that you can't fit the whole thing in memory at once, and then will\n> abruptly go through the roof. But realistically you're not going to\n> know exactly where that edge is going to be, because you can't predict\n> exactly how much concurrent activity there will be, for example, or\n> how much work_mem allocations will push out of the OS buffer cache.\n> So I'm thinking we should start the costs at something like 0.05/0.05\n> for tables that are much smaller than effective_cache_size and ramp up\n> to 4/1 for tables that are larger than effective_cache_size. Maybe\n> just by linearly ramping up, although that has a certain feeling of\n> being without mathemetical soundness.\n> \n> > An idea that isn't even half-baked yet is that once we had a cost model\n> > like that, we might be able to produce plans that are well-tuned for a\n> > heavily cached environment by applying the \"rescan\" cost model even to\n> > the first scan for a particular query. ?So that might lead to some sort\n> > of \"assume_cached\" GUC parameter, and perhaps Kevin could tune his\n> > reporting queries by turning that off instead of messing with individual\n> > cost constants.\n> \n> I think the real goal here should be to try to avoid needing a GUC. A\n> lot of people could benefit if the system could make some attempt to\n> recognize on its own which queries are likely to be cached. We\n> already have parameters you can hand-tune for each query as necessary.\n> Being able to set some parameters system-wide and then get sensible\n> behavior automatically would be much nicer.\n> \n> -- \n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n> \n\nI agree with the goal of avoiding the need for a GUC. This needs to\nbe as automatic as possible. One idea I had had was computing a value\nfor the amount of cache data in the system by keeping a sum or a\nweighted sum of the table usage in the system. Smaller tables and\nindexes would contribute a smaller amount to the total, while larger\nindexes and tables would contribute a larger amount. Then by comparing\nthis running total to the effective_cache_size, set the random and\nsequential costs for a query. This would allow the case of many 4MB\ntables to favor disk I/O more than memory I/O. The weighting could\nbe a function of simultaneous users of the table. 
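Very roughly, the flavor of it can be seen in a query a DBA could run\ntoday, although what I really mean is a total the planner/stats system\nwould keep for itself, weighted by how heavily each relation is used:\n\nselect pg_size_pretty(sum(pg_total_relation_size(relid))::bigint)\n as data_recently_in_play\n from pg_stat_user_tables\n where coalesce(seq_scan, 0) + coalesce(idx_scan, 0) > 0;\n\n-- compare that running total against effective_cache_size when\n-- picking the random/sequential page costs for a query\n\n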
I know this is a\nbit of hand-waving but some sort of dynamic feedback needs to be\nprovided to the planning process as system use increases.\n\nRegards,\nKen\n", "msg_date": "Thu, 11 Nov 2010 07:51:22 -0600", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "Kenneth Marshall wrote:\n> I agree with the goal of avoiding the need for a GUC. This needs to\n> be as automatic as possible. One idea I had had was computing a value\n> for the amount of cache data in the system by keeping a sum or a\n> weighted sum of the table usage in the system. Smaller tables and\n> indexes would contribute a smaller amount to the total, while larger\n> indexes and tables would contribute a larger amount. Then by comparing\n> this running total to the effective_cache_size, set the random and\n> sequential costs for a query. This would allow the case of many 4MB\n> tables to favor disk I/O more than memory I/O. The weighting could\n> be a function of simultaneous users of the table. I know this is a\n> bit of hand-waving but some sort of dynamic feedback needs to be\n> provided to the planning process as system use increases.\n>\n> Regards,\n> Ken\n>\n> \nKenneth, you seem to be only concerned with the accuracy of the planning \nprocess, not with the plan stability. As a DBA who has to monitor real \nworld applications, I find things like an execution plan changing with \nthe use of the system to be my worst nightmare. The part where you say \nthat \"this needs to be as automatic as possible\" probably means that I \nwill not be able to do anything about it, if the optimizer, by any \nchance, doesn't get it right. That looks to me like an entirely wrong \nway to go.\nWhen application developer tunes the SQL both him and me expect that SQL \nto always perform that way, not to change the execution plan because the \nsystem is utilized more than it was 1 hour ago. Nobody seems to have \ntaken my suggestion about having a parameter\nwhich would simply \"invent\" the percentage out of thin air seriously, \nbecause it's obviously not accurate.\nHowever, the planner accuracy is not the only concern. Running \napplications on the system usually requires plan stability. Means of\nexternal control of the execution plan, DBA knobs and buttons that can \nbe turned and pushed to produce the desired plan are also very much desired.\n\n-- \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com \n\n", "msg_date": "Thu, 11 Nov 2010 09:15:58 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "On Thu, Nov 11, 2010 at 09:15:58AM -0500, Mladen Gogala wrote:\n> Kenneth Marshall wrote:\n>> I agree with the goal of avoiding the need for a GUC. This needs to\n>> be as automatic as possible. One idea I had had was computing a value\n>> for the amount of cache data in the system by keeping a sum or a\n>> weighted sum of the table usage in the system. Smaller tables and\n>> indexes would contribute a smaller amount to the total, while larger\n>> indexes and tables would contribute a larger amount. Then by comparing\n>> this running total to the effective_cache_size, set the random and\n>> sequential costs for a query. This would allow the case of many 4MB\n>> tables to favor disk I/O more than memory I/O. 
The weighting could\n>> be a function of simultaneous users of the table. I know this is a\n>> bit of hand-waving but some sort of dynamic feedback needs to be\n>> provided to the planning process as system use increases.\n>>\n>> Regards,\n>> Ken\n>>\n>> \n> Kenneth, you seem to be only concerned with the accuracy of the planning \n> process, not with the plan stability. As a DBA who has to monitor real \n> world applications, I find things like an execution plan changing with the \n> use of the system to be my worst nightmare. The part where you say that \n> \"this needs to be as automatic as possible\" probably means that I will not \n> be able to do anything about it, if the optimizer, by any chance, doesn't \n> get it right. That looks to me like an entirely wrong way to go.\n> When application developer tunes the SQL both him and me expect that SQL to \n> always perform that way, not to change the execution plan because the \n> system is utilized more than it was 1 hour ago. Nobody seems to have taken \n> my suggestion about having a parameter\n> which would simply \"invent\" the percentage out of thin air seriously, \n> because it's obviously not accurate.\n> However, the planner accuracy is not the only concern. Running applications \n> on the system usually requires plan stability. Means of\n> external control of the execution plan, DBA knobs and buttons that can be \n> turned and pushed to produce the desired plan are also very much desired.\n>\n> -- \n> Mladen Gogala Sr. Oracle DBA\n> 1500 Broadway\n> New York, NY 10036\n> (212) 329-5251\n> www.vmsinfo.com \n>\nHi Mladen,\n\nI think in many ways, this is the same problem. Because we are not\ncorrectly modeling the system, the plan choices are not accurate\neither for some scenarios. This means that when plan costs are\ncompared, the evaluation is not accurate. This is what causes the\nterrible plans being right next to the good plans and is what\nimpacts the \"plan stability\". If the costs are correct, then in\nfact the plan stability will be much better with the better\ncosting, not worse. Plans with close costs should actually have\nclose performance.\n\nRegards,\nKen\n", "msg_date": "Thu, 11 Nov 2010 08:28:09 -0600", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "Mladen Gogala <[email protected]> wrote:\n \n> create a definitive bias toward one type of the execution plan.\n \nWe're talking about trying to support the exact opposite. This all\nstarted because a database which was tuned for good response time\nfor relatively small queries against a \"hot\" portion of some tables\nchose a bad plan for a weekend maintenance run against the full\ntables. We're talking about the possibility of adapting the cost\nfactors based on table sizes as compared to available cache, to more\naccurately model the impact of needing to do actual disk I/O for\nsuch queries.\n \nThis also is very different from trying to adapt queries to what\nhappens to be currently in cache. As already discussed on a recent\nthread, the instability in plans and the failure to get to an\neffective cache set make that a bad idea. 
The idea discussed here\nwould maintain a stable plan for a given query, it would just help\nchoose a good plan based on the likely level of caching.\n \n-Kevin\n", "msg_date": "Thu, 11 Nov 2010 09:00:20 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "\n--- On Thu, 11/11/10, Mladen Gogala <[email protected]> wrote:\n\n> From: Mladen Gogala <[email protected]>\n> Subject: Re: [PERFORM] anti-join chosen even when slower than old plan\n> To: \"Kenneth Marshall\" <[email protected]>\n> Cc: \"Robert Haas\" <[email protected]>, \"Tom Lane\" <[email protected]>, \"Kevin Grittner\" <[email protected]>, \"[email protected]\" <[email protected]>\n> Date: Thursday, November 11, 2010, 9:15 AM\n> Kenneth Marshall wrote:\n> > I agree with the goal of avoiding the need for a GUC.\n> This needs to\n> > be as automatic as possible. One idea I had had was\n> computing a value\n> > for the amount of cache data in the system by keeping\n> a sum or a\n> > weighted sum of the table usage in the system. Smaller\n> tables and\n> > indexes would contribute a smaller amount to the\n> total, while larger\n> > indexes and tables would contribute a larger amount.\n> Then by comparing\n> > this running total to the effective_cache_size, set\n> the random and\n> > sequential costs for a query. This would allow the\n> case of many 4MB\n> > tables to favor disk I/O more than memory I/O. The\n> weighting could\n> > be a function of simultaneous users of the table. I\n> know this is a\n> > bit of hand-waving but some sort of dynamic feedback\n> needs to be\n> > provided to the planning process as system use\n> increases.\n> > \n> > Regards,\n> > Ken\n> > \n> >   \n> Kenneth, you seem to be only concerned with the accuracy of\n> the planning process, not with the plan stability. As a DBA\n> who has to monitor real world applications, I find things\n> like an execution plan changing with the use of the system\n> to be my worst nightmare. The part where you say that \"this\n> needs to be as automatic as possible\" probably means that I\n> will not be able to do anything about it, if the optimizer,\n> by any chance, doesn't get it right. That looks to me like\n> an entirely wrong way to go.\n> When application developer tunes the SQL both him and me\n> expect that SQL to always perform that way, not to change\n> the execution plan because the system is utilized more than\n> it was 1 hour ago. Nobody seems to have taken my suggestion\n> about having a parameter\n> which would simply \"invent\" the percentage out of thin air\n> seriously, because it's obviously not accurate.\n> However, the planner accuracy is not the only concern.\n> Running applications on the system usually requires plan\n> stability. Means of\n> external control of the execution plan, DBA knobs and\n> buttons that can be turned and pushed to produce the desired\n> plan are also very much desired.\n> \n> -- Mladen Gogala Sr. Oracle DBA\n> 1500 Broadway\n> New York, NY 10036\n> (212) 329-5251\n> www.vmsinfo.com \n> \n\nMladen,\n\nBeen there, done that with Oracle for more years than I care to remember or admit. Having the necessary knobs was both daunting and a godsend, depending on if you could find the right one(s) to frob during production use, and you turned them the right way and amount. I personally find having less knobbage with PostgreSQL to be a huge benefit over Oracle. 
In that spirit, I offer the following suggestion: (Ken's original suggestion inspired me, so if I misunderstand it, Ken, please correct me.)\n\nWhat if the code that managed the shared buffer cache kept track of how many buffers were in the cache for each table and index? Then the optimizer could know the ratio of cached to non-cached table of index buffers (how many pages are in PG's buffer cache vs. the total number of pages required for the entire table, assuming autovacuum is working well) and plan accordingly. It would even be possible to skew the estimate based on the ratio of shared_buffers to effective_cache_size. The optimizer could then dynamically aadjust the random and sequential costs per query prior to planning, with (hopefully) plans optimized to the current condition of the server and host caches just prior to execution.\n\nThere are lots of assumptions here, the primary ones being the shared buffer cache's state doesn't change significantly between the start of planning and actual execution time, and the host is dedicated to running the database and nothing else that would trash the host's file system cache. I admit that I haven't looked at the code for this yet, so I don't know if I'm on to something or off in the weeds.\n\nRegards,\n\nBob Lunney\n\n\n\n\n \n", "msg_date": "Thu, 11 Nov 2010 09:05:56 -0800 (PST)", "msg_from": "Bob Lunney <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "Kevin Grittner wrote:\n> Mladen Gogala <[email protected]> wrote:\n> \n> \n>> create a definitive bias toward one type of the execution plan.\n>> \n> \n> We're talking about trying to support the exact opposite. \nI understand this, that is precisely the reason for my intervention into \nthe discussion of experts, which I am not.\n> This all\n> started because a database which was tuned for good response time\n> for relatively small queries against a \"hot\" portion of some tables\n> chose a bad plan for a weekend maintenance run against the full\n> tables. We're talking about the possibility of adapting the cost\n> factors based on table sizes as compared to available cache, to more\n> accurately model the impact of needing to do actual disk I/O for\n> such queries.\n> \nKevin, in my experience, the hardest thing to do is to tune so called \nmixed type databases. In practice, databases are usually separated: OLTP \ndatabase on one group of servers, reporting database and the data \nwarehouse on another group of servers. Postgres 9.0 has made a great \nstride toward such possibility with the new replication facilities. \nAgain, having an optimizer which will choose the plan completely \naccurately is, at least in my opinion, less important than having a \npossibility of manual control, the aforementioned \"knobs and buttons\" \nand produce the same plan for the same statement. Trying to make the \noptimizer smart enough for all types of loads is akin to looking for the \nHoly Grail. Inevitably, you will face some hard questions, like the one \nabout the airspeed velocity of an unladen swallow, and the whole search \nis likely to end in pretty funny way, not producing the desired \n\"optimizing genie in the CPU\".\n> \n> This also is very different from trying to adapt queries to what\n> happens to be currently in cache. As already discussed on a recent\n> thread, the instability in plans and the failure to get to an\n> effective cache set make that a bad idea. 
The idea discussed here\n> would maintain a stable plan for a given query, it would just help\n> choose a good plan based on the likely level of caching.\n> \nKevin, I am talking from the perspective of a DBA who is involved with a \nproduction databases on day-to-day basis. I am no expert but I do \nbelieve to speak from a perspective of users that Postgres has to win in \norder to make further inroads into the corporate server rooms. Without \nthe possibility of such control and the plan stability, it is hard for \nme to recommend more extensive use of PostgreSQL to my boss. Whatever \nsolution is chosen, it needs to have \"knobs and buttons\" and produce the \nplans that will not change when the CPU usage goes up.\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n", "msg_date": "Thu, 11 Nov 2010 12:13:08 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "Mladen Gogala <[email protected]> writes:\n> Again, having an optimizer which will choose the plan completely \n> accurately is, at least in my opinion, less important than having a \n> possibility of manual control, the aforementioned \"knobs and buttons\" \n> and produce the same plan for the same statement.\n\nMore knobs and buttons is the Oracle way, and the end result of that\nprocess is that you have something as hard to use as Oracle. That's\ngenerally not thought of as desirable in this community.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Nov 2010 12:45:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan " }, { "msg_contents": "On 11/11/10 9:13 AM, Mladen Gogala wrote:\n> Kevin Grittner wrote:\n>> Mladen Gogala <[email protected]> wrote:\n>>\n>>> create a definitive bias toward one type of the execution plan.\n>>\n>> We're talking about trying to support the exact opposite.\n> I understand this, that is precisely the reason for my intervention into the discussion of experts, which I am not.\n>> This all\n>> started because a database which was tuned for good response time\n>> for relatively small queries against a \"hot\" portion of some tables\n>> chose a bad plan for a weekend maintenance run against the full\n>> tables. We're talking about the possibility of adapting the cost\n>> factors based on table sizes as compared to available cache, to more\n>> accurately model the impact of needing to do actual disk I/O for\n>> such queries.\n> Kevin, in my experience, the hardest thing to do is to tune so called mixed type databases. In practice, databases are usually separated: OLTP database on one group of servers, reporting database and the data warehouse on another group of servers. Postgres 9.0 has made a great stride toward such possibility with the new replication facilities. Again, having an optimizer which will choose the plan completely accurately is, at least in my opinion, less important than having a possibility of manual control, the aforementioned \"knobs and buttons\" and produce the same plan for the same statement. Trying to make the optimizer smart enough for all types of loads is akin to looking for the Holy Grail. 
Inevitably, you will face some hard questions, like the one about the airspeed velocity of an unladen swallow, and the whole search is likely to end in pretty funny way, not producing the desired \"optimizing genie in the CPU\".\n\nWhat about rule-based configuration? You provide a default configuration (what Postgres does now), and then allow one or more alternate configurations that are triggered when certain rules match. The rules might be things like:\n\n- Per user or group of users. Set up a user or group for your\n maintenance task and you automatically get your own config.\n\n- A set of tables. If you do a query that uses tables X, Y, and\n Z, this configuration applies to you.\n\n- A regular expression applied to the SQL. If the regexp matches,\n the configuration applies.\n\n- System resource usage. If some other process is gobbling memory,\n switch to a configuration with lower memory requirements.\n\n- A time of day. Maybe on weekends, different rules apply.\n\n... and so on. I don't know what the right parameters might be, but surely the original poster's problem would be solved by this solution. It gives performance experts the tool they need for complex installations, without adding FUD to the lives of everyone else.\n\nCraig\n\n>>\n>> This also is very different from trying to adapt queries to what\n>> happens to be currently in cache. As already discussed on a recent\n>> thread, the instability in plans and the failure to get to an\n>> effective cache set make that a bad idea. The idea discussed here\n>> would maintain a stable plan for a given query, it would just help\n>> choose a good plan based on the likely level of caching.\n> Kevin, I am talking from the perspective of a DBA who is involved with a production databases on day-to-day basis. I am no expert but I do believe to speak from a perspective of users that Postgres has to win in order to make further inroads into the corporate server rooms. Without the possibility of such control and the plan stability, it is hard for me to recommend more extensive use of PostgreSQL to my boss. Whatever solution is chosen, it needs to have \"knobs and buttons\" and produce the plans that will not change when the CPU usage goes up.\n>\n\n", "msg_date": "Thu, 11 Nov 2010 09:55:39 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "On Thu, Nov 11, 2010 at 10:00 AM, Kevin Grittner\n<[email protected]> wrote:\n> Mladen Gogala <[email protected]> wrote:\n>\n>> create a definitive bias toward one type of the execution plan.\n>\n> We're talking about trying to support the exact opposite.  This all\n> started because a database which was tuned for good response time\n> for relatively small queries against a \"hot\" portion of some tables\n> chose a bad plan for a weekend maintenance run against the full\n> tables.  We're talking about the possibility of adapting the cost\n> factors based on table sizes as compared to available cache, to more\n> accurately model the impact of needing to do actual disk I/O for\n> such queries.\n>\n> This also is very different from trying to adapt queries to what\n> happens to be currently in cache.  As already discussed on a recent\n> thread, the instability in plans and the failure to get to an\n> effective cache set make that a bad idea.  
The idea discussed here\n> would maintain a stable plan for a given query, it would just help\n> choose a good plan based on the likely level of caching.\n\nLet's back up a moment and talk about what the overall goal is, here.\nIdeally, we would like PostgreSQL to have excellent performance at all\ntimes, under all circumstances, with minimal tuning. Therefore, we do\nNOT want to add variables that will, by design, need constant manual\nadjustment. That is why I suggested that Tom's idea of an\nassume_cached GUC is probably not what we really want to do. On the\nother hand, what I understand Mladen to be suggesting is something\ncompletely different. He's basically saying that, of course, he wants\nit to work out of the box most of the time, but since there are\nguaranteed to be cases where it doesn't, how about providing some\nknobs that aren't intended to be routinely twaddled but which are\navailable in case of emergency? Bravo, I say!\n\nConsider the case of whether a table is cached. Currently, we\nestimate that it isn't, and you can sort of twaddle that assumption\nglobally by setting seq_page_cost and random_page_cost. In 9.0, you\ncan twaddle it with a bit more granularity by adjusting seq_page_cost\nand random_page_cost on a per-tablespace basis. But that's really\nintended to handle the case where you have one tablespace on an SSD\nand another that isn't. It doesn't really model caching at all; we're\njust abusing it as if it does. If 90% of a table is cached, you can't\nsimply multiply the cost of reading it by 0.1, because now many of\nthose reads will be random I/O rather than sequential I/O. The right\nthing to do is to estimate what percentage of the table will be\ncached, then estimate how much random and sequential I/O it'll take to\nget the rest, and then compute the cost.\n\nTo do that, we can adopt the approach proposed upthread of comparing\nthe size of the table to effective_cache_size. We come up with some\nfunction f, such that f(effective_cache_size, table_size) =\nassumed_caching_percentage, and then from there we estimate random\nI/Os and sequential I/Os, and from there we estimate costs. This is a\ngood system, almost certainly better than what we have now. However,\nit's also guaranteed to not always work. The DBA may know, for\nexample, that one particular table that is quite large is always fully\ncached because it is very heavily access. So why not let them pin the\nassumed_caching_percentage for that table to 100%? I don't see any\nreason at all. Most people will never need to touch that setting, but\nit's there in case you really, really need it.\n\nWe've traditionally been reluctant to do this sort of thing (as the\nemail Tom just sent reflects) but I think we should soften up a bit.\nA product gets hard to use when it has knobs that MUST be tuned to\nmake it work at all, and certainly AFAICT Oracle falls into that\ncategory. My rollback segment is full? My table is out of extents?\nWell allocate some more space then; I certainly wouldn't have imposed\nan arbitrary cap on the table size if I'd known I was doing it.\nHowever, that's not the same thing as having knobs that are\n*available* when the shit really hits the fan. By failing to provide\nthat type of knob, we're not facilitating ease of use; we're just\nmaking it difficult for the small percentage of people who have\nproblems to fix them, which is another kind of non-ease-of-use.\n\nIn fact, we already have a few knobs of this type. 
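For instance, all of these work today (the per-tablespace costs and the\nn_distinct override only as of 9.0); the tablespace name, table, column,\nand values here are purely illustrative:\n\nalter tablespace fast_disks\n set (seq_page_cost = 0.5, random_page_cost = 1.0);\nalter table \"DbTranRepository\" alter column \"userId\" set statistics 1000;\nalter table \"DbTranRepository\" alter column \"userId\" set (n_distinct = -0.1);\n\n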
We have a\nstatistics target which can be overriden on a per-column basis, and\nbeginning in 9.0, you can override the planner's n_distinct estimates\nin the same way. Why? Because we know that it's not achievable to\nestimate n_distinct accurately in all cases without making ANALYZE\nunreasonably slow. I bet that very, VERY few people will ever use\nthat feature, so it costs nothing in terms of \"oh, another setting I\nhave to configure\". But for people who are getting bitten by\ninaccurate n_distinct estimates, it will be very nice to have that as\nan escape hatch. I see no harm, and much value, in providing similar\nescape hatches elsewhere.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Thu, 11 Nov 2010 12:57:54 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "Tom Lane wrote:\n> More knobs and buttons is the Oracle way, \n\nTrue. Very true.\n\n> and the end result of that\n> process is that you have something as hard to use as Oracle. \n\nAlso, you end up with something which is extremely reliable and \nadjustable to variety of conditions.\n\n> That's\n> generally not thought of as desirable in this community.\n>\n> \t\t\tregards, tom lane\n> \nAllow me to play the devil's advocate again. This community is still \nmuch, much smaller than even the MySQL community, much less Oracle's \ncommunity. If growth of the community is the goal, copying a page or two \nfrom the Oracle's book, looks like a good idea to me. The only thing I \ndislike about Oracle is its price, not its complexity.\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n", "msg_date": "Thu, 11 Nov 2010 13:11:01 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> Let's back up a moment and talk about what the overall goal is, here.\n> Ideally, we would like PostgreSQL to have excellent performance at all\n> times, under all circumstances, with minimal tuning. Therefore, we do\n> NOT want to add variables that will, by design, need constant manual\n> adjustment. That is why I suggested that Tom's idea of an\n> assume_cached GUC is probably not what we really want to do. On the\n> other hand, what I understand Mladen to be suggesting is something\n> completely different. He's basically saying that, of course, he wants\n> it to work out of the box most of the time, but since there are\n> guaranteed to be cases where it doesn't, how about providing some\n> knobs that aren't intended to be routinely twaddled but which are\n> available in case of emergency? Bravo, I say!\n\nUm ... those are exactly the same thing. You're just making different\nassumptions about how often you will need to twiddle the setting.\nNeither assumption is based on any visible evidence, unfortunately.\nI was thinking of assume_cached as something that could be\nset-and-forget most of the time, and you're entirely right to criticize\nit on the grounds that maybe it wouldn't. But to support a proposal\nthat doesn't even exist yet on the grounds that it *would* be\nset-and-forget seems a tad inconsistent. 
We can't make that judgment\nwithout a whole lot more details than have been provided yet for any\nidea in this thread.\n\nI do think that something based around a settable-per-table caching\npercentage might be a reasonable way to proceed. But the devil is in\nthe details, and we don't have those yet.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Nov 2010 13:23:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan " }, { "msg_contents": "I wrote:\n> I do think that something based around a settable-per-table caching\n> percentage might be a reasonable way to proceed.\n\nBTW ... on reflection it seems that this would *not* solve the use-case\nKevin described at the start of this thread. What he's got AIUI is some\nlarge tables whose recent entries are well-cached, and a lot of queries\nthat tend to hit that well-cached portion, plus a few queries that hit\nthe whole table and so see largely-not-cached behavior. We can't\nrepresent that very well with a caching knob at the table level. Either\na high or a low setting will be wrong for one set of queries or the\nother.\n\nIt might work all right if he were to partition the table and then have\na different caching value attached to the currently-latest partition,\nbut that doesn't sound exactly maintenance-free either. Also, that only\nworks with the current statically-planned approach to partitioned\ntables. I think where we're trying to go with partitioning is that\nthe planner doesn't consider the individual partitions, but the executor\njust hits the right one at runtime --- so cost modifiers attached to\nindividual partitions aren't going to work in that environment.\n\nThe most practical solution for his case still seems to be to twiddle\nsome GUC or other locally in the maintenance scripts that do the\nfull-table-scan queries. Unfortunately we don't have an equivalent\nof per-session SET (much less SET LOCAL) for per-relation attributes.\nNot sure if we want to go there.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Nov 2010 13:58:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan " }, { "msg_contents": "On Thu, Nov 11, 2010 at 1:23 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> Let's back up a moment and talk about what the overall goal is, here.\n>> Ideally, we would like PostgreSQL to have excellent performance at all\n>> times, under all circumstances, with minimal tuning.  Therefore, we do\n>> NOT want to add variables that will, by design, need constant manual\n>> adjustment.  That is why I suggested that Tom's idea of an\n>> assume_cached GUC is probably not what we really want to do.   On the\n>> other hand, what I understand Mladen to be suggesting is something\n>> completely different.  He's basically saying that, of course, he wants\n>> it to work out of the box most of the time, but since there are\n>> guaranteed to be cases where it doesn't, how about providing some\n>> knobs that aren't intended to be routinely twaddled but which are\n>> available in case of emergency?  Bravo, I say!\n>\n> Um ... those are exactly the same thing.  
You're just making different\n> assumptions about how often you will need to twiddle the setting.\n> Neither assumption is based on any visible evidence, unfortunately.\n>\n> I was thinking of assume_cached as something that could be\n> set-and-forget most of the time, and you're entirely right to criticize\n> it on the grounds that maybe it wouldn't.  But to support a proposal\n> that doesn't even exist yet on the grounds that it *would* be\n> set-and-forget seems a tad inconsistent.  We can't make that judgment\n> without a whole lot more details than have been provided yet for any\n> idea in this thread.\n\nWell, maybe I misunderstood what you were proposing. I had the\nimpression that you were proposing something that would *by design*\nrequire adjustment for each query, so evidently I missed the point.\nIt seems to me that random_page_cost and seq_page_cost are pretty\nclose to set-and-forget already. We don't have many reports of people\nneeding to tune these values on a per-query basis; most people seem to\njust guesstimate a cluster-wide value and call it good. Refining the\nalgorithm should only make things better.\n\n> I do think that something based around a settable-per-table caching\n> percentage might be a reasonable way to proceed.  But the devil is in\n> the details, and we don't have those yet.\n\nI think one of the larger devils in the details is deciding how to\nestimate the assumed caching percentage when the user hasn't specified\none. Frankly, I suspect that if we simply added a reloption called\nassumed_caching_percentage and made it default to zero, we would make\na bunch of DBAs happy; they'd knock down seq_page_cost and\nrandom_page_cost enough to account for the general level of caching\nand then bump assumed_caching_percentage up for hot tables/indexes (or\nones that they want to have become hot). I think we can do better\nthan that, but the right formula isn't exactly obvious. I feel safe\nsaying that if effective_cache_size=1GB and table_size=4MB, then we\nought to take the table as fully cached. But it's far from clear what\ncaching percentage we should assume when table_size=400MB, and it\nseems like the sort of thing that will lead to endless bikeshedding.\nThere's probably no perfect answer, but I feel we can likely come up\nwith something that is better than a constant (which would probably\nstill be better than what we have now).\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Thu, 11 Nov 2010 14:02:14 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "On Thu, Nov 11, 2010 at 1:58 PM, Tom Lane <[email protected]> wrote:\n> I wrote:\n>> I do think that something based around a settable-per-table caching\n>> percentage might be a reasonable way to proceed.\n>\n> BTW ... on reflection it seems that this would *not* solve the use-case\n> Kevin described at the start of this thread.  What he's got AIUI is some\n> large tables whose recent entries are well-cached, and a lot of queries\n> that tend to hit that well-cached portion, plus a few queries that hit\n> the whole table and so see largely-not-cached behavior.  We can't\n> represent that very well with a caching knob at the table level.  Either\n> a high or a low setting will be wrong for one set of queries or the\n> other.\n\nYeah. 
For Kevin's case, it seems like we want the caching percentage\nto vary not so much based on which table we're hitting at the moment\nbut on how much of it we're actually reading. However, the two\nproblems are related enough that I think it might be feasible to come\nup with one solution that answers both needs, or perhaps two\nsomewhat-intertwined solutions.\n\n> The most practical solution for his case still seems to be to twiddle\n> some GUC or other locally in the maintenance scripts that do the\n> full-table-scan queries.  Unfortunately we don't have an equivalent\n> of per-session SET (much less SET LOCAL) for per-relation attributes.\n> Not sure if we want to go there.\n\nI doubt it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Thu, 11 Nov 2010 14:05:41 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "Tom Lane <[email protected]> wrote:\n \n> BTW ... on reflection it seems that this would *not* solve the\n> use-case Kevin described at the start of this thread. What he's\n> got AIUI is some large tables whose recent entries are well-\n> cached, and a lot of queries that tend to hit that well-cached\n> portion, plus a few queries that hit the whole table and so see\n> largely-not-cached behavior. We can't represent that very well\n> with a caching knob at the table level. Either a high or a low\n> setting will be wrong for one set of queries or the other.\n \nExactly right.\n \n> The most practical solution for his case still seems to be to\n> twiddle some GUC or other locally in the maintenance scripts that\n> do the full-table-scan queries.\n \nYes, that works fine. The thread spun off in this speculative\ndirection because I started thinking about whether there was any\nreasonable way for PostgreSQL to automatically handle such things\nwithout someone having to notice the problem and do the per-script\ntuning. I don't know whether any of the ideas thus spawned are\nworth the effort -- it's not a situation I find myself in all that\noften. I guess it could be considered an \"ease of use\" feature.\n \n> Unfortunately we don't have an equivalent of per-session SET (much\n> less SET LOCAL) for per-relation attributes. Not sure if we want\n> to go there.\n \nBesides the \"fully-scanned object size relative to relation size\ncosting adjustment\" idea, the only one which seemed to be likely to\nbe useful for this sort of issue was the \"costing factors by user\nID\" idea -- the interactive queries hitting the well-cached portion\nof the tables are run through a read-only user ID, while the weekly\nmaintenance scripts (obviously) are not. 
With the settings I\ninitially had assigned to the cluster the maintenance scripts would\nnever have seen this issue; it was tuning to resolve end-user\ncomplaints of slowness in the interactive queries which set up the\nconditions for failure, and if I'd had per-user settings, I probably\nwould have (and definitely *should* have) used them.\n \nFWIW, I can certainly see the potential of some other ideas which\ncame up on the thread; what might have seemed like antipathy toward\nthem was more of an attempt to point out that they would not have\nhelped at all with the problem which started this thread.\n \n-Kevin\n", "msg_date": "Thu, 11 Nov 2010 13:15:55 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Thu, Nov 11, 2010 at 1:23 PM, Tom Lane <[email protected]> wrote:\n>> I do think that something based around a settable-per-table caching\n>> percentage might be a reasonable way to proceed. �But the devil is in\n>> the details, and we don't have those yet.\n\n> I think one of the larger devils in the details is deciding how to\n> estimate the assumed caching percentage when the user hasn't specified\n> one.\n\nI was imagining something very similar to the handling of seq_page_cost,\nie, there's a GUC controlling the system-wide default and then you can\noverride that per-table. But the real question is whether per-table\nis a useful granularity to control it at. See my later message.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Nov 2010 14:17:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan " }, { "msg_contents": "I wrote:\n \n> Besides the \"fully-scanned object size relative to relation size\n> costing adjustment\" idea,\n \ns/relation size/effective cache size/\n \n-Kevin\n", "msg_date": "Thu, 11 Nov 2010 13:22:59 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> Yeah. For Kevin's case, it seems like we want the caching percentage\n> to vary not so much based on which table we're hitting at the moment\n> but on how much of it we're actually reading.\n\nWell, we could certainly take the expected number of pages to read and\ncompare that to effective_cache_size. The thing that's missing in that\nequation is how much other stuff is competing for cache space. I've\ntried to avoid having the planner need to know the total size of the\ndatabase cluster, but it's kind of hard to avoid that if you want to\nmodel this honestly.\n\nWould it be at all workable to have an estimate that so many megs of a\ntable are in cache (independently of any other table), and then we could\nscale the cost based on the expected number of pages to read versus that\nnumber? The trick here is that DBAs really aren't going to want to set\nsuch a per-table number (at least, most of the time) so we need a\nformula to get to a default estimate for that number based on some simple\nsystem-wide parameters. I'm not sure if that's any easier.\n\nBTW, it seems that all these variants have an implicit assumption that\nif you're reading a small part of the table it's probably part of the\nworking set; which is an assumption that could be 100% wrong. 
I don't\nsee a way around it without trying to characterize the data access at\nan unworkably fine level, though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Nov 2010 14:35:56 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan " }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Besides the \"fully-scanned object size relative to relation size\n> costing adjustment\" idea, the only one which seemed to be likely to\n> be useful for this sort of issue was the \"costing factors by user\n> ID\" idea -- the interactive queries hitting the well-cached portion\n> of the tables are run through a read-only user ID, while the weekly\n> maintenance scripts (obviously) are not. With the settings I\n> initially had assigned to the cluster the maintenance scripts would\n> never have seen this issue; it was tuning to resolve end-user\n> complaints of slowness in the interactive queries which set up the\n> conditions for failure, and if I'd had per-user settings, I probably\n> would have (and definitely *should* have) used them.\n\nErm ... you can in fact do \"ALTER USER SET random_page_cost\" today.\nAs long as the settings are GUC parameters we have quite a lot of\nflexibility about how to control them. This gets back to my earlier\npoint that our current form of per-relation properties (reloptions) is\nconsiderably less flexible than a GUC. I think that if we create any\nstrong planner dependence on such properties, we're going to end up\nneeding to be able to set them in all the same ways you can set a GUC.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Nov 2010 14:41:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan " }, { "msg_contents": "Tom Lane <[email protected]> wrote:\n \n> I've tried to avoid having the planner need to know the total size\n> of the database cluster, but it's kind of hard to avoid that if\n> you want to model this honestly.\n \nAgreed. Perhaps the cost could start escalating when the pages to\naccess hit (effective_cache_size * relation_size / database_size),\nand escalate to the defaults (or some new GUCs) in a linear fashion\nuntil you hit effective_cache_size?\n \n> BTW, it seems that all these variants have an implicit assumption\n> that if you're reading a small part of the table it's probably\n> part of the working set\n \nI would say that the assumption should be that seq_page_cost and\nrandom_page_cost model the costs for less extreme (and presumably\nmore common) queries, and that we're providing a way of handling the\nexceptional, extreme queries.\n \n-Kevin\n", "msg_date": "Thu, 11 Nov 2010 13:47:53 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "Tom Lane <[email protected]> wrote:\n \n> Erm ... you can in fact do \"ALTER USER SET random_page_cost\"\n> today.\n \nOuch. I'm embarrassed to have missed that. 
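Something along these lines for the role the maintenance scripts connect as\nought to cover it (role name and values here are only placeholders):\n\n    ALTER USER maintenance SET random_page_cost = 4;\n    ALTER USER maintenance SET seq_page_cost = 1;\n\n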
I'll do that instead of\nadding those settings to the scripts, then.\n \nThanks for pointing that out.\n \n-Kevin\n", "msg_date": "Thu, 11 Nov 2010 13:50:28 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "On Thu, Nov 11, 2010 at 1:41 PM, Tom Lane <[email protected]> wrote:\n> \"Kevin Grittner\" <[email protected]> writes:\n>> Besides the \"fully-scanned object size relative to relation size\n>> costing adjustment\" idea, the only one which seemed to be likely to\n>> be useful for this sort of issue was the \"costing factors by user\n>> ID\" idea -- the interactive queries hitting the well-cached portion\n>> of the tables are run through a read-only user ID, while the weekly\n>> maintenance scripts (obviously) are not.  With the settings I\n>> initially had assigned to the cluster the maintenance scripts would\n>> never have seen this issue; it was tuning to resolve end-user\n>> complaints of slowness in the interactive queries which set up the\n>> conditions for failure, and if I'd had per-user settings, I probably\n>> would have (and definitely *should* have) used them.\n>\n> Erm ... you can in fact do \"ALTER USER SET random_page_cost\" today.\n> As long as the settings are GUC parameters we have quite a lot of\n> flexibility about how to control them.  This gets back to my earlier\n> point that our current form of per-relation properties (reloptions) is\n> considerably less flexible than a GUC.  I think that if we create any\n> strong planner dependence on such properties, we're going to end up\n> needing to be able to set them in all the same ways you can set a GUC.\n\nIn Kevin's particular case, would this mechanism not help? By that I\nmean he could have two users: one user for the daily, the\ntables-ought-to-be-in-hot-cache use case. The other use could make use\nof the ALTER USER SET ... mechanism to drive the weekly reporting\n(tables are probably not hot) use case.\n\n\n-- \nJon\n", "msg_date": "Thu, 11 Nov 2010 13:52:12 -0600", "msg_from": "Jon Nelson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "On Thursday 11 November 2010 19:58:49 Tom Lane wrote:\n> I wrote:\n> > I do think that something based around a settable-per-table caching\n> > percentage might be a reasonable way to proceed.\n> \n> BTW ... on reflection it seems that this would *not* solve the use-case\n> Kevin described at the start of this thread. What he's got AIUI is some\n> large tables whose recent entries are well-cached, and a lot of queries\n> that tend to hit that well-cached portion, plus a few queries that hit\n> the whole table and so see largely-not-cached behavior. We can't\n> represent that very well with a caching knob at the table level. Either\n> a high or a low setting will be wrong for one set of queries or the\n> other.\n> \n> It might work all right if he were to partition the table and then have\n> a different caching value attached to the currently-latest partition,\n> but that doesn't sound exactly maintenance-free either. Also, that only\n> works with the current statically-planned approach to partitioned\n> tables. 
I think where we're trying to go with partitioning is that\n> the planner doesn't consider the individual partitions, but the executor\n> just hits the right one at runtime --- so cost modifiers attached to\n> individual partitions aren't going to work in that environment.\n> \n> The most practical solution for his case still seems to be to twiddle\n> some GUC or other locally in the maintenance scripts that do the\n> full-table-scan queries. Unfortunately we don't have an equivalent\n> of per-session SET (much less SET LOCAL) for per-relation attributes.\n> Not sure if we want to go there.\nAs dicussed in another thread some time ago another possibility is to probe \nhow well the data i cached using mincore() or similar...\nWhile it presents problem with cache ramp-up it quite cool for other use-cases \n(like this one).\n\nAndre\n", "msg_date": "Thu, 11 Nov 2010 21:11:04 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "On Thu, Nov 11, 2010 at 2:35 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> Yeah.  For Kevin's case, it seems like we want the caching percentage\n>> to vary not so much based on which table we're hitting at the moment\n>> but on how much of it we're actually reading.\n>\n> Well, we could certainly take the expected number of pages to read and\n> compare that to effective_cache_size.  The thing that's missing in that\n> equation is how much other stuff is competing for cache space.  I've\n> tried to avoid having the planner need to know the total size of the\n> database cluster, but it's kind of hard to avoid that if you want to\n> model this honestly.\n\nI'm not sure I agree with that. I mean, you could easily have a\ndatabase that is much larger than effective_cache_size, but only that\nmuch of it is hot. Or, the hot portion could move around over time.\nAnd for reasons of both technical complexity and plan stability, I\ndon't think we want to try to model that. It seems perfectly\nreasonable to say that reading 25% of effective_cache_size will be\nmore expensive *per-page* than reading 5% of effective_cache_size,\nindependently of what the total cluster size is.\n\n> Would it be at all workable to have an estimate that so many megs of a\n> table are in cache (independently of any other table), and then we could\n> scale the cost based on the expected number of pages to read versus that\n> number?  The trick here is that DBAs really aren't going to want to set\n> such a per-table number (at least, most of the time) so we need a\n> formula to get to a default estimate for that number based on some simple\n> system-wide parameters.  I'm not sure if that's any easier.\n\nThat's an interesting idea. For the sake of argument, suppose we\nassume that a relation which is less than 5% of effective_cache_size\nwill be fully cached; and anything larger we'll assume that much of it\nis cached. Consider a 4GB machine with effective_cache_size set to\n3GB. Then we'll assume that any relation less than 153MB table is\n100% cached, a 1 GB table is 15% cached, and a 3 GB table is 5%\ncached. That doesn't seem quite right, though: the caching percentage\ndrops off very quickly after you exceed the threshold.\n\n*thinks*\n\nI wondering if we could do something with a formula like 3 *\namount_of_data_to_read / (3 * amount_of_data_to_read +\neffective_cache_size) = percentage NOT cached. 
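A quick bit of scratch-pad arithmetic to see the shape of that curve, taking\neffective_cache_size as the 3GB from the example above (this query is only an\nillustration of the formula, not a proposal):\n\n    SELECT mb_to_read,\n           round(100 * ecs / (3 * mb_to_read + ecs), 1) AS assumed_pct_cached\n    FROM (VALUES (256.0), (1536.0), (3072.0), (6144.0)) AS d(mb_to_read),\n         (VALUES (3072.0)) AS e(ecs);\n    -- i.e. 100 minus the \"percentage NOT cached\" above; gives roughly\n    -- 80.0, 40.0, 25.0 and 14.3 percent assumed cached\n\n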
That is, if we're\nreading an amount of data equal to effective_cache_size, we assume 25%\ncaching, and plot a smooth curve through that point. In the examples\nabove, we would assume that a 150MB read is 87% cached, a 1GB read is\n50% cached, and a 3GB read is 25% cached.\n\n> BTW, it seems that all these variants have an implicit assumption that\n> if you're reading a small part of the table it's probably part of the\n> working set; which is an assumption that could be 100% wrong.  I don't\n> see a way around it without trying to characterize the data access at\n> an unworkably fine level, though.\n\nMe neither, but I think it will frequently be true, and I'm not sure\nit will hurt very much when it isn't. I mean, if you execute the same\nquery repeatedly, that data will become hot soon enough. If you\nexecute a lot of different queries that each touch a small portion of\na big, cold table, we might underestimate the costs of the index\nprobes, but so what? There's probably no better strategy for\naccessing that table anyway. Perhaps you can construct an example\nwhere this underestimate affects the join order in an undesirable\nfashion, but I'm having a hard time getting worked up about that as a\npotential problem case. Our current system - where we essentially\nassume that the caching percentage is uniform across the board - can\nhave the same problem in less artificial cases.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Thu, 11 Nov 2010 15:29:40 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "---- Original message ----\n>Date: Thu, 11 Nov 2010 15:29:40 -0500\n>From: [email protected] (on behalf of Robert Haas <[email protected]>)\n>Subject: Re: [PERFORM] anti-join chosen even when slower than old plan \n>To: Tom Lane <[email protected]>\n>Cc: Kevin Grittner <[email protected]>,Mladen Gogala <[email protected]>,\"[email protected]\" <[email protected]>\n>\n>On Thu, Nov 11, 2010 at 2:35 PM, Tom Lane <[email protected]> wrote:\n>> Robert Haas <[email protected]> writes:\n>>> Yeah.  For Kevin's case, it seems like we want the caching percentage\n>>> to vary not so much based on which table we're hitting at the moment\n>>> but on how much of it we're actually reading.\n>>\n>> Well, we could certainly take the expected number of pages to read and\n>> compare that to effective_cache_size.  The thing that's missing in that\n>> equation is how much other stuff is competing for cache space.  I've\n>> tried to avoid having the planner need to know the total size of the\n>> database cluster, but it's kind of hard to avoid that if you want to\n>> model this honestly.\n>\n>I'm not sure I agree with that. I mean, you could easily have a\n>database that is much larger than effective_cache_size, but only that\n>much of it is hot. Or, the hot portion could move around over time.\n>And for reasons of both technical complexity and plan stability, I\n>don't think we want to try to model that. 
It seems perfectly\n>reasonable to say that reading 25% of effective_cache_size will be\n>more expensive *per-page* than reading 5% of effective_cache_size,\n>independently of what the total cluster size is.\n>\n>> Would it be at all workable to have an estimate that so many megs of a\n>> table are in cache (independently of any other table), and then we could\n>> scale the cost based on the expected number of pages to read versus that\n>> number?  The trick here is that DBAs really aren't going to want to set\n>> such a per-table number (at least, most of the time) so we need a\n>> formula to get to a default estimate for that number based on some simple\n>> system-wide parameters.  I'm not sure if that's any easier.\n>\n>That's an interesting idea. For the sake of argument, suppose we\n>assume that a relation which is less than 5% of effective_cache_size\n>will be fully cached; and anything larger we'll assume that much of it\n>is cached. Consider a 4GB machine with effective_cache_size set to\n>3GB. Then we'll assume that any relation less than 153MB table is\n>100% cached, a 1 GB table is 15% cached, and a 3 GB table is 5%\n>cached. That doesn't seem quite right, though: the caching percentage\n>drops off very quickly after you exceed the threshold.\n>\n>*thinks*\n>\n>I wondering if we could do something with a formula like 3 *\n>amount_of_data_to_read / (3 * amount_of_data_to_read +\n>effective_cache_size) = percentage NOT cached. That is, if we're\n>reading an amount of data equal to effective_cache_size, we assume 25%\n>caching, and plot a smooth curve through that point. In the examples\n>above, we would assume that a 150MB read is 87% cached, a 1GB read is\n>50% cached, and a 3GB read is 25% cached.\n>\n>> BTW, it seems that all these variants have an implicit assumption that\n>> if you're reading a small part of the table it's probably part of the\n>> working set; which is an assumption that could be 100% wrong.  I don't\n>> see a way around it without trying to characterize the data access at\n>> an unworkably fine level, though.\n>\n>Me neither, but I think it will frequently be true, and I'm not sure\n>it will hurt very much when it isn't. I mean, if you execute the same\n>query repeatedly, that data will become hot soon enough. If you\n>execute a lot of different queries that each touch a small portion of\n>a big, cold table, we might underestimate the costs of the index\n>probes, but so what? There's probably no better strategy for\n>accessing that table anyway. Perhaps you can construct an example\n>where this underestimate affects the join order in an undesirable\n>fashion, but I'm having a hard time getting worked up about that as a\n>potential problem case. Our current system - where we essentially\n>assume that the caching percentage is uniform across the board - can\n>have the same problem in less artificial cases.\n>\n>-- \n>Robert Haas\n>EnterpriseDB: http://www.enterprisedb.com\n>The Enterprise PostgreSQL Company\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n\nOn a thread some time ago, on a similar subject, I opined that I missed the ability to assign tables to tablespaces and buffers to tablespaces, thus having the ability to isolate needed tables (perhaps a One True Lookup Table, for example; or a Customer table) to memory without fear of eviction.\n\nI was sounding beaten about the face and breast. 
It really is an \"Enterprise\" way of handling the situation.\n\nregards,\nRobert\n", "msg_date": "Thu, 11 Nov 2010 15:56:25 -0500 (EST)", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when\n slower than old plan" }, { "msg_contents": "On Thu, Nov 11, 2010 at 03:56:25PM -0500, [email protected] wrote:\n> On a thread some time ago, on a similar subject, I opined that I missed the ability to assign tables to tablespaces and buffers to tablespaces, thus having the ability to isolate needed tables (perhaps a One True Lookup Table, for example; or a Customer table) to memory without fear of eviction.\n> \n> I was sounding beaten about the face and breast. It really is an \"Enterprise\" way of handling the situation.\n> \n> regards,\n> Robert\n> \n\nALTER TABLE can be used to change the tablespace of a table\nand/or index.\n\nCheers,\nKen\n", "msg_date": "Thu, 11 Nov 2010 15:28:19 -0600", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "2010/11/11 Tom Lane <[email protected]>:\n> Robert Haas <[email protected]> writes:\n>> Yeah.  For Kevin's case, it seems like we want the caching percentage\n>> to vary not so much based on which table we're hitting at the moment\n>> but on how much of it we're actually reading.\n>\n> Well, we could certainly take the expected number of pages to read and\n> compare that to effective_cache_size.  The thing that's missing in that\n> equation is how much other stuff is competing for cache space.  I've\n> tried to avoid having the planner need to know the total size of the\n> database cluster, but it's kind of hard to avoid that if you want to\n> model this honestly.\n>\n> Would it be at all workable to have an estimate that so many megs of a\n> table are in cache\n\nYes, with Linux ... at least.\n\n> (independently of any other table), and then we could\n> scale the cost based on the expected number of pages to read versus that\n> number?  The trick here is that DBAs really aren't going to want to set\n> such a per-table number (at least, most of the time) so we need a\n> formula to get to a default estimate for that number based on some simple\n> system-wide parameters.  I'm not sure if that's any easier.\n\nMy current ideas for future POC with pgfincore are around what is said\ncurrently in this thread.\n\nI'd like to have some maintenance stuff like auto-ANALYZE which report\ntable and index usage of the OS cache, it might be % of data in cache\nand distribution of data in cache (perhaps only my last 15% of the\ntable are in cache, or perhaps 15% of blocks with a more\nregular-random?- distribution)\nMy current stats around OS cache illustrate that the OS page cache\nremain stable : number of blocks in memory per object does not change\na lot once application have run long enough.\n\nThose are good stats to automaticaly adjust random_page_cost and\nseq_page_cost per per table or index. DBA provide accurate (with the\nhardware) random_page_cost and seq_page_cost , perhaps we may want a\nmem_page_cost (?). 
Or we just adjust rand_page_cost and seq_page_cost\nbased on the average data in cache.\nActually I think that updating *_page_cost and keeping the current\ndesign of effective_cache_size (in costsize.c) may rock enough.\n\n\n>\n> BTW, it seems that all these variants have an implicit assumption that\n> if you're reading a small part of the table it's probably part of the\n> working set; which is an assumption that could be 100% wrong.  I don't\n> see a way around it without trying to characterize the data access at\n> an unworkably fine level, though.\n\nExactly.\n\n>\n>                        regards, tom lane\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCédric Villemain               2ndQuadrant\nhttp://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Fri, 12 Nov 2010 10:07:31 +0100", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "2010/11/11 Robert Haas <[email protected]>:\n> On Thu, Nov 11, 2010 at 2:35 PM, Tom Lane <[email protected]> wrote:\n>> Robert Haas <[email protected]> writes:\n>>> Yeah.  For Kevin's case, it seems like we want the caching percentage\n>>> to vary not so much based on which table we're hitting at the moment\n>>> but on how much of it we're actually reading.\n>>\n>> Well, we could certainly take the expected number of pages to read and\n>> compare that to effective_cache_size.  The thing that's missing in that\n>> equation is how much other stuff is competing for cache space.  I've\n>> tried to avoid having the planner need to know the total size of the\n>> database cluster, but it's kind of hard to avoid that if you want to\n>> model this honestly.\n>\n> I'm not sure I agree with that.  I mean, you could easily have a\n> database that is much larger than effective_cache_size, but only that\n> much of it is hot.  Or, the hot portion could move around over time.\n> And for reasons of both technical complexity and plan stability, I\n> don't think we want to try to model that.  It seems perfectly\n> reasonable to say that reading 25% of effective_cache_size will be\n> more expensive *per-page* than reading 5% of effective_cache_size,\n> independently of what the total cluster size is.\n>\n>> Would it be at all workable to have an estimate that so many megs of a\n>> table are in cache (independently of any other table), and then we could\n>> scale the cost based on the expected number of pages to read versus that\n>> number?  The trick here is that DBAs really aren't going to want to set\n>> such a per-table number (at least, most of the time) so we need a\n>> formula to get to a default estimate for that number based on some simple\n>> system-wide parameters.  I'm not sure if that's any easier.\n>\n> That's an interesting idea.  For the sake of argument, suppose we\n> assume that a relation which is less than 5% of effective_cache_size\n> will be fully cached; and anything larger we'll assume that much of it\n> is cached.  Consider a 4GB machine with effective_cache_size set to\n> 3GB.  Then we'll assume that any relation less than 153MB table is\n> 100% cached, a 1 GB table is 15% cached, and a 3 GB table is 5%\n> cached.  
That doesn't seem quite right, though: the caching percentage\n> drops off very quickly after you exceed the threshold.\n>\n> *thinks*\n>\n> I wondering if we could do something with a formula like 3 *\n> amount_of_data_to_read / (3 * amount_of_data_to_read +\n> effective_cache_size) = percentage NOT cached.  That is, if we're\n> reading an amount of data equal to effective_cache_size, we assume 25%\n> caching, and plot a smooth curve through that point.  In the examples\n> above, we would assume that a 150MB read is 87% cached, a 1GB read is\n> 50% cached, and a 3GB read is 25% cached.\n\n\nBut isn't it already the behavior of effective_cache_size usage ?\n\nSee index_pages_fetched() in costsize.c\n\n\n>\n>> BTW, it seems that all these variants have an implicit assumption that\n>> if you're reading a small part of the table it's probably part of the\n>> working set; which is an assumption that could be 100% wrong.  I don't\n>> see a way around it without trying to characterize the data access at\n>> an unworkably fine level, though.\n>\n> Me neither, but I think it will frequently be true, and I'm not sure\n> it will hurt very much when it isn't.  I mean, if you execute the same\n> query repeatedly, that data will become hot soon enough.  If you\n> execute a lot of different queries that each touch a small portion of\n> a big, cold table, we might underestimate the costs of the index\n> probes, but so what?  There's probably no better strategy for\n> accessing that table anyway.  Perhaps you can construct an example\n> where this underestimate affects the join order in an undesirable\n> fashion, but I'm having a hard time getting worked up about that as a\n> potential problem case.  Our current system - where we essentially\n> assume that the caching percentage is uniform across the board - can\n> have the same problem in less artificial cases.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCédric Villemain               2ndQuadrant\nhttp://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Fri, 12 Nov 2010 10:15:17 +0100", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "I'd say there are two Qs here:\n\n1) Modify costs based on information on how much of the table is in \ncache. It would be great if this can be done, but I'd prefer to have it \nas admin knobs (because of plan stability). May be both admin and \nautomatic ways can be followed with some parallel (disableable) process \nmodify knobs on admin behalf. In this case different strategies to \nautomatically modify knobs can be applied.\n\n2) Modify costs for part of table retrieval. Then you need to define \n\"part\". Current ways are partitioning and partial indexes. Some similar \nto partial index thing may be created, that has only \"where\" clause and \nno data. But has statistics and knobs (and may be personal bufferspace \nif they are introduced). 
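For comparison, today's partial-index way of naming \"a part\" of a table is\njust the predicate, something like (table and condition invented for\nillustration):\n\n    CREATE INDEX orders_recent_idx ON orders (order_ts)\n        WHERE order_ts > DATE '2010-11-11';\n\nThe thing proposed above would keep the WHERE clause plus its statistics and\nknobs, but not maintain any index data.\n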
I don't like to gather data about \"last X \npercents\" or like, because it works only in clustering and it's hard for \noptimizer to decide if it will be enough to scan only this percents for \ngiven query.\n\nBest regards, Vitalii Tymchyshyn\n", "msg_date": "Fri, 12 Nov 2010 12:10:01 +0200", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "I supposed it was an answer to my mail but not sure... please keep\nCC'ed people, it is easier to follow threads (at least for me)\n\n2010/11/12 Vitalii Tymchyshyn <[email protected]>:\n> I'd say there are two Qs here:\n>\n> 1) Modify costs based on information on how much of the table is in cache.\n> It would be great  if this can be done, but I'd prefer to have it as admin\n> knobs (because of plan stability). May be both admin and automatic ways can\n> be followed with some parallel (disableable) process modify knobs on admin\n> behalf. In this case different strategies to automatically modify knobs can\n> be applied.\n\nOS cache is usualy stable enough to keep your plans stable too, I think.\n\n>\n> 2) Modify costs for part of table retrieval. Then you need to define \"part\".\n> Current ways are partitioning and partial indexes. Some similar to partial\n> index thing may be created, that has only \"where\" clause and no data. But\n> has statistics and knobs (and may be personal bufferspace if they are\n> introduced). I don't like to gather data about \"last X percents\" or like,\n> because it works only in clustering and it's hard for optimizer to decide if\n> it will be enough to scan only this percents for given query.\n\nModifying random_page_cost and sequential_page_cost thanks to\nstatistics about cached blocks can be improved if we know the\ndistribution.\n\nIt does not mean : we know we have last 15% in cache, and we are goign\nto request those 15%.\n\n>\n> Best regards, Vitalii Tymchyshyn\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCédric Villemain               2ndQuadrant\nhttp://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Fri, 12 Nov 2010 11:56:34 +0100", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "12.11.10 12:56, Cédric Villemain написав(ла):\n> I supposed it was an answer to my mail but not sure... please keep\n> CC'ed people, it is easier to follow threads (at least for me)\n> \nOK\n> 2010/11/12 Vitalii Tymchyshyn<[email protected]>:\n> \n>> I'd say there are two Qs here:\n>>\n>> 1) Modify costs based on information on how much of the table is in cache.\n>> It would be great if this can be done, but I'd prefer to have it as admin\n>> knobs (because of plan stability). May be both admin and automatic ways can\n>> be followed with some parallel (disableable) process modify knobs on admin\n>> behalf. In this case different strategies to automatically modify knobs can\n>> be applied.\n>> \n> OS cache is usualy stable enough to keep your plans stable too, I think.\n> \nNot if it is on edge. There are always edge cases where data fluctuates \nnear some threshold.\n> \n>> 2) Modify costs for part of table retrieval. Then you need to define \"part\".\n>> Current ways are partitioning and partial indexes. 
Some similar to partial\n>> index thing may be created, that has only \"where\" clause and no data. But\n>> has statistics and knobs (and may be personal bufferspace if they are\n>> introduced). I don't like to gather data about \"last X percents\" or like,\n>> because it works only in clustering and it's hard for optimizer to decide if\n>> it will be enough to scan only this percents for given query.\n>> \n> Modifying random_page_cost and sequential_page_cost thanks to\n> statistics about cached blocks can be improved if we know the\n> distribution.\n>\n> It does not mean : we know we have last 15% in cache, and we are goign\n> to request those 15%.\n> \n\nYou mean *_cost for the whole table, don't you? That is case (1) for me.\nCase (2) is when different cost values are selected based on what \nportion of table is requested in the query. E.g. when we have data for \nthe whole day in one table, data for the last hour is cached and all the \nother data is not. Optimizer then may use different *_cost for query \nthat requires all the data and for query that requires only last hour \ndata. But, as I've said, that is much more complex task then (1).\n\nBest regards, Vitalii Tymchyshyn\n\n", "msg_date": "Fri, 12 Nov 2010 14:06:25 +0200", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "2010/11/12 Vitalii Tymchyshyn <[email protected]>:\n> 12.11.10 12:56, Cédric Villemain написав(ла):\n>>\n>> I supposed it was an answer to my mail but not sure... please keep\n>> CC'ed people, it is easier to follow threads (at least for me)\n>>\n>\n> OK\n>>\n>> 2010/11/12 Vitalii Tymchyshyn<[email protected]>:\n>>\n>>>\n>>> I'd say there are two Qs here:\n>>>\n>>> 1) Modify costs based on information on how much of the table is in\n>>> cache.\n>>> It would be great  if this can be done, but I'd prefer to have it as\n>>> admin\n>>> knobs (because of plan stability). May be both admin and automatic ways\n>>> can\n>>> be followed with some parallel (disableable) process modify knobs on\n>>> admin\n>>> behalf. In this case different strategies to automatically modify knobs\n>>> can\n>>> be applied.\n>>>\n>>\n>> OS cache is usualy stable enough to keep your plans stable too, I think.\n>>\n>\n> Not if it is on edge. There are always edge cases where data fluctuates near\n> some threshold.\n\nSo far I did some analysis on the topic with pgfincore. Tables and\nindex first have peak and holes if you graph the % of blocks in cache\nat the server start, but after a while, it is more stable.\n\nMaybe there are applications where linux faill to find a 'stable' page cache.\n\nIf people are able to graph the pgfincore results for all or part of\nthe objects of their database it will give us more robust analysis.\nEspecially when corner case with the planner exists (like here).\n\n>>\n>>\n>>>\n>>> 2) Modify costs for part of table retrieval. Then you need to define\n>>> \"part\".\n>>> Current ways are partitioning and partial indexes. Some similar to\n>>> partial\n>>> index thing may be created, that has only \"where\" clause and no data. But\n>>> has statistics and knobs (and may be personal bufferspace if they are\n>>> introduced). 
I don't like to gather data about \"last X percents\" or like,\n>>> because it works only in clustering and it's hard for optimizer to decide\n>>> if\n>>> it will be enough to scan only this percents for given query.\n>>>\n>>\n>> Modifying random_page_cost and sequential_page_cost thanks to\n>> statistics about cached blocks can be improved if we know the\n>> distribution.\n>>\n>> It does not mean : we know we have last 15% in cache, and we are goign\n>> to request those 15%.\n>>\n>\n> You mean *_cost for the whole table, don't you? That is case (1) for me.\n\nYes.\n\n> Case (2) is when different cost values are selected based on what portion of\n> table is requested in the query. E.g. when we have data for the whole day in\n> one table, data for the last hour is cached and all the other data is not.\n> Optimizer then may use different *_cost for query that requires all the data\n> and for query that requires only last hour data. But, as I've said, that is\n> much more complex task then (1).\n\nI need to think some more time of that.\n\n-- \nCédric Villemain               2ndQuadrant\nhttp://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Fri, 12 Nov 2010 13:40:52 +0100", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "On Fri, Nov 12, 2010 at 4:15 AM, Cédric Villemain\n<[email protected]> wrote:\n>> I wondering if we could do something with a formula like 3 *\n>> amount_of_data_to_read / (3 * amount_of_data_to_read +\n>> effective_cache_size) = percentage NOT cached.  That is, if we're\n>> reading an amount of data equal to effective_cache_size, we assume 25%\n>> caching, and plot a smooth curve through that point.  In the examples\n>> above, we would assume that a 150MB read is 87% cached, a 1GB read is\n>> 50% cached, and a 3GB read is 25% cached.\n>\n> But isn't it already the behavior of effective_cache_size usage ?\n\nNo.\n\nThe ideal of trying to know what is actually in cache strikes me as an\nalmost certain non-starter. It can change very quickly, even as a\nresult of the query you're actually running. And getting the\ninformation we'd need in order to do it that way would be very\nexpensive, when it can be done at all.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Fri, 12 Nov 2010 11:30:24 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Fri, Nov 12, 2010 at 4:15 AM, C�dric Villemain\n> <[email protected]> wrote:\n>>> I wondering if we could do something with a formula like 3 *\n>>> amount_of_data_to_read / (3 * amount_of_data_to_read +\n>>> effective_cache_size) = percentage NOT cached. �That is, if we're\n>>> reading an amount of data equal to effective_cache_size, we assume 25%\n>>> caching, and plot a smooth curve through that point. 
�In the examples\n>>> above, we would assume that a 150MB read is 87% cached, a 1GB read is\n>>> 50% cached, and a 3GB read is 25% cached.\n\n>> But isn't it already the behavior of effective_cache_size usage ?\n\n> No.\n\nI think his point is that we already have a proven formula\n(Mackert-Lohmann) and shouldn't be inventing a new one out of thin air.\nThe problem is to figure out what numbers to apply the M-L formula to.\n\nI've been thinking that we ought to try to use it in the context of the\nquery as a whole rather than for individual table scans; the current\nusage already has some of that flavor but we haven't taken it to the\nlogical conclusion.\n\n> The ideal of trying to know what is actually in cache strikes me as an\n> almost certain non-starter.\n\nAgreed on that point. Plan stability would go out the window.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Nov 2010 11:43:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan " }, { "msg_contents": "On Fri, Nov 12, 2010 at 11:43 AM, Tom Lane <[email protected]> wrote:\n> I think his point is that we already have a proven formula\n> (Mackert-Lohmann) and shouldn't be inventing a new one out of thin air.\n> The problem is to figure out what numbers to apply the M-L formula to.\n\nI'm not sure that's really measuring the same thing, although I'm not\nopposed to using it if it produces reasonable answers.\n\n> I've been thinking that we ought to try to use it in the context of the\n> query as a whole rather than for individual table scans; the current\n> usage already has some of that flavor but we haven't taken it to the\n> logical conclusion.\n\nThat's got a pretty severe chicken-and-egg problem though, doesn't it?\n You're going to need to know how much data you're touching to\nestimate the costs so you can pick the best plan, but you can't know\nhow much data will ultimately be touched until you've got the whole\nplan.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Fri, 12 Nov 2010 13:57:38 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "2010/11/12 Tom Lane <[email protected]>:\n> Robert Haas <[email protected]> writes:\n>> On Fri, Nov 12, 2010 at 4:15 AM, Cédric Villemain\n>> <[email protected]> wrote:\n>>>> I wondering if we could do something with a formula like 3 *\n>>>> amount_of_data_to_read / (3 * amount_of_data_to_read +\n>>>> effective_cache_size) = percentage NOT cached.  That is, if we're\n>>>> reading an amount of data equal to effective_cache_size, we assume 25%\n>>>> caching, and plot a smooth curve through that point.  
In the examples\n>>>> above, we would assume that a 150MB read is 87% cached, a 1GB read is\n>>>> 50% cached, and a 3GB read is 25% cached.\n>\n>>> But isn't it already the behavior of effective_cache_size usage ?\n>\n>> No.\n>\n> I think his point is that we already have a proven formula\n> (Mackert-Lohmann) and shouldn't be inventing a new one out of thin air.\n> The problem is to figure out what numbers to apply the M-L formula to.\n>\n> I've been thinking that we ought to try to use it in the context of the\n> query as a whole rather than for individual table scans; the current\n> usage already has some of that flavor but we haven't taken it to the\n> logical conclusion.\n>\n>> The ideal of trying to know what is actually in cache strikes me as an\n>> almost certain non-starter.\n>\n> Agreed on that point.  Plan stability would go out the window.\n\nPoint is not to now the current cache, but like for ANALYZE on a\nregular basis (probably something around number of page read/hit) run\na cache_analyze which report stats like ANALYZE do, and may be\nadjusted per table like auto_analyze is.\n\n-- \nCédric Villemain               2ndQuadrant\nhttp://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Sat, 13 Nov 2010 06:44:25 +0100", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "Hello,\n\nJust a short though:\n\nIs it imaginable to compare the prognoses of the plans with the actual\nresults \nand somehow log the worst cases ? \n\na) to help the DBA locate bad statistics and queries\nb) as additional information source for the planner\n\nThis could possibly affect parameters of your formula on the fly.\n\nbest regards,\n\nMarc Mamin\n", "msg_date": "Sat, 13 Nov 2010 10:32:12 +0100", "msg_from": "\"Marc Mamin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "On Sat, Nov 13, 2010 at 1:32 AM, Marc Mamin <[email protected]> wrote:\n> Hello,\n>\n> Just a short though:\n>\n> Is it imaginable to compare the prognoses of the plans with the actual\n> results\n> and somehow log the worst cases ?\n>\n> a) to help the DBA locate bad statistics and queries\n> b) as additional information source for the planner\n>\n> This could possibly affect parameters of your formula on the fly.\n>\n> best regards,\n>\n> Marc Mamin\n\nThe contrib module auto_explain might help out here if you wanted to\nroll your own solution for plan comparison.\n", "msg_date": "Sat, 13 Nov 2010 08:15:06 -0800", "msg_from": "bricklen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "On Sat, Nov 13, 2010 at 4:32 AM, Marc Mamin <[email protected]> wrote:\n> Hello,\n>\n> Just a short though:\n>\n> Is it imaginable to compare the prognoses of the plans with the actual\n> results\n> and somehow log the worst cases ?\n>\n> a) to help the DBA locate bad statistics and queries\n> b) as additional information source for the planner\n>\n> This could possibly affect parameters of your formula on the fly.\n\nYeah, I've thought about this, but it's not exactly clear what would\nbe most useful.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Sat, 13 Nov 2010 19:20:22 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, 
"msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "Tom Lane wrote:\n> Mladen Gogala <[email protected]> writes:\n> > Again, having an optimizer which will choose the plan completely \n> > accurately is, at least in my opinion, less important than having a \n> > possibility of manual control, the aforementioned \"knobs and buttons\" \n> > and produce the same plan for the same statement.\n> \n> More knobs and buttons is the Oracle way, and the end result of that\n> process is that you have something as hard to use as Oracle. That's\n> generally not thought of as desirable in this community.\n\nLet reply, but Mladen, you might want to look at my blog entry\nexplaining why knobs are often not useful because they are only used by\na small percentage of users (and confuse the rest):\n\n\thttp://momjian.us/main/blogs/pgblog/2009.html#January_10_2009\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n", "msg_date": "Wed, 19 Jan 2011 14:47:03 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "Robert Haas wrote:\n> On Thu, Nov 11, 2010 at 2:35 PM, Tom Lane <[email protected]> wrote:\n> > Robert Haas <[email protected]> writes:\n> >> Yeah. ?For Kevin's case, it seems like we want the caching percentage\n> >> to vary not so much based on which table we're hitting at the moment\n> >> but on how much of it we're actually reading.\n> >\n> > Well, we could certainly take the expected number of pages to read and\n> > compare that to effective_cache_size. ?The thing that's missing in that\n> > equation is how much other stuff is competing for cache space. ?I've\n> > tried to avoid having the planner need to know the total size of the\n> > database cluster, but it's kind of hard to avoid that if you want to\n> > model this honestly.\n> \n> I'm not sure I agree with that. I mean, you could easily have a\n> database that is much larger than effective_cache_size, but only that\n> much of it is hot. Or, the hot portion could move around over time.\n> And for reasons of both technical complexity and plan stability, I\n> don't think we want to try to model that. It seems perfectly\n> reasonable to say that reading 25% of effective_cache_size will be\n> more expensive *per-page* than reading 5% of effective_cache_size,\n> independently of what the total cluster size is.\n\nLate reply, but one idea is to have the executor store hit counts for\nlater use by the optimizer. Only the executor knows how many pages it\nhad to request from the kernel for a query. Perhaps getrusage could\ntell us how often we hit the disk.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n", "msg_date": "Wed, 19 Jan 2011 15:03:19 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "Tom Lane wrote:\n> Robert Haas <[email protected]> writes:\n> > On Fri, Nov 12, 2010 at 4:15 AM, C�dric Villemain\n> > <[email protected]> wrote:\n> >>> I wondering if we could do something with a formula like 3 *\n> >>> amount_of_data_to_read / (3 * amount_of_data_to_read +\n> >>> effective_cache_size) = percentage NOT cached. 
�That is, if we're\n> >>> reading an amount of data equal to effective_cache_size, we assume 25%\n> >>> caching, and plot a smooth curve through that point. �In the examples\n> >>> above, we would assume that a 150MB read is 87% cached, a 1GB read is\n> >>> 50% cached, and a 3GB read is 25% cached.\n> \n> >> But isn't it already the behavior of effective_cache_size usage ?\n> \n> > No.\n> \n> I think his point is that we already have a proven formula\n> (Mackert-Lohmann) and shouldn't be inventing a new one out of thin air.\n> The problem is to figure out what numbers to apply the M-L formula to.\n> \n> I've been thinking that we ought to try to use it in the context of the\n> query as a whole rather than for individual table scans; the current\n> usage already has some of that flavor but we haven't taken it to the\n> logical conclusion.\n\nIs there a TODO here?\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n", "msg_date": "Wed, 19 Jan 2011 15:04:22 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "2011/1/19 Bruce Momjian <[email protected]>:\n> Tom Lane wrote:\n>> Robert Haas <[email protected]> writes:\n>> > On Fri, Nov 12, 2010 at 4:15 AM, Cédric Villemain\n>> > <[email protected]> wrote:\n>> >>> I wondering if we could do something with a formula like 3 *\n>> >>> amount_of_data_to_read / (3 * amount_of_data_to_read +\n>> >>> effective_cache_size) = percentage NOT cached.  That is, if we're\n>> >>> reading an amount of data equal to effective_cache_size, we assume 25%\n>> >>> caching, and plot a smooth curve through that point.  In the examples\n>> >>> above, we would assume that a 150MB read is 87% cached, a 1GB read is\n>> >>> 50% cached, and a 3GB read is 25% cached.\n>>\n>> >> But isn't it already the behavior of effective_cache_size usage ?\n>>\n>> > No.\n>>\n>> I think his point is that we already have a proven formula\n>> (Mackert-Lohmann) and shouldn't be inventing a new one out of thin air.\n>> The problem is to figure out what numbers to apply the M-L formula to.\n>>\n>> I've been thinking that we ought to try to use it in the context of the\n>> query as a whole rather than for individual table scans; the current\n>> usage already has some of that flavor but we haven't taken it to the\n>> logical conclusion.\n>\n> Is there a TODO here?\n\nit looks like, yes.\n\n>\n> --\n>  Bruce Momjian  <[email protected]>        http://momjian.us\n>  EnterpriseDB                             http://enterprisedb.com\n>\n>  + It's impossible for everything to be true. 
+\n>\n\n\n\n-- \nCédric Villemain               2ndQuadrant\nhttp://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Thu, 20 Jan 2011 10:17:08 +0100", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "On Thu, Jan 20, 2011 at 4:17 AM, Cédric Villemain\n<[email protected]> wrote:\n>>> I think his point is that we already have a proven formula\n>>> (Mackert-Lohmann) and shouldn't be inventing a new one out of thin air.\n>>> The problem is to figure out what numbers to apply the M-L formula to.\n>>>\n>>> I've been thinking that we ought to try to use it in the context of the\n>>> query as a whole rather than for individual table scans; the current\n>>> usage already has some of that flavor but we haven't taken it to the\n>>> logical conclusion.\n>>\n>> Is there a TODO here?\n>\n> it looks like, yes.\n\n\"Modify the planner to better estimate caching effects\"?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Thu, 20 Jan 2011 09:19:00 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "2011/1/20 Robert Haas <[email protected]>:\n> On Thu, Jan 20, 2011 at 4:17 AM, Cédric Villemain\n> <[email protected]> wrote:\n>>>> I think his point is that we already have a proven formula\n>>>> (Mackert-Lohmann) and shouldn't be inventing a new one out of thin air.\n>>>> The problem is to figure out what numbers to apply the M-L formula to.\n>>>>\n>>>> I've been thinking that we ought to try to use it in the context of the\n>>>> query as a whole rather than for individual table scans; the current\n>>>> usage already has some of that flavor but we haven't taken it to the\n>>>> logical conclusion.\n>>>\n>>> Is there a TODO here?\n>>\n>> it looks like, yes.\n>\n> \"Modify the planner to better estimate caching effects\"?\n\nor \"Estimate caching effect in the query context instead of per\nobject\" (the point above)\nand \"Improve the estimate of the caching effects\" (more or less M-L\nreview, fine control of cache estimate)\n\n?\n\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\n\n\n-- \nCédric Villemain               2ndQuadrant\nhttp://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Thu, 20 Jan 2011 17:16:05 +0100", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "2011/1/19 Bruce Momjian <[email protected]>:\n> Robert Haas wrote:\n>> On Thu, Nov 11, 2010 at 2:35 PM, Tom Lane <[email protected]> wrote:\n>> > Robert Haas <[email protected]> writes:\n>> >> Yeah. ?For Kevin's case, it seems like we want the caching percentage\n>> >> to vary not so much based on which table we're hitting at the moment\n>> >> but on how much of it we're actually reading.\n>> >\n>> > Well, we could certainly take the expected number of pages to read and\n>> > compare that to effective_cache_size. ?The thing that's missing in that\n>> > equation is how much other stuff is competing for cache space. 
?I've\n>> > tried to avoid having the planner need to know the total size of the\n>> > database cluster, but it's kind of hard to avoid that if you want to\n>> > model this honestly.\n>>\n>> I'm not sure I agree with that.  I mean, you could easily have a\n>> database that is much larger than effective_cache_size, but only that\n>> much of it is hot.  Or, the hot portion could move around over time.\n>> And for reasons of both technical complexity and plan stability, I\n>> don't think we want to try to model that.  It seems perfectly\n>> reasonable to say that reading 25% of effective_cache_size will be\n>> more expensive *per-page* than reading 5% of effective_cache_size,\n>> independently of what the total cluster size is.\n>\n> Late reply, but one idea is to have the executor store hit counts for\n> later use by the optimizer.  Only the executor knows how many pages it\n> had to request from the kernel for a query.  Perhaps getrusage could\n> tell us how often we hit the disk.\n\nAFAIK getrusage does not provide access to real IO counters but\nfilesystem's ones. :-(\n\n-- \nCédric Villemain               2ndQuadrant\nhttp://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Thu, 20 Jan 2011 17:36:05 +0100", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" }, { "msg_contents": "Robert Haas wrote:\n> On Thu, Jan 20, 2011 at 4:17 AM, C?dric Villemain\n> <[email protected]> wrote:\n> >>> I think his point is that we already have a proven formula\n> >>> (Mackert-Lohmann) and shouldn't be inventing a new one out of thin air.\n> >>> The problem is to figure out what numbers to apply the M-L formula to.\n> >>>\n> >>> I've been thinking that we ought to try to use it in the context of the\n> >>> query as a whole rather than for individual table scans; the current\n> >>> usage already has some of that flavor but we haven't taken it to the\n> >>> logical conclusion.\n> >>\n> >> Is there a TODO here?\n> >\n> > it looks like, yes.\n> \n> \"Modify the planner to better estimate caching effects\"?\n\nAdded to TODO.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n", "msg_date": "Wed, 26 Jan 2011 20:40:50 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: anti-join chosen even when slower than old plan" } ]
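The heuristic proposed upthread (percentage not cached = 3 * amount_of_data_to_read / (3 * amount_of_data_to_read + effective_cache_size)) is easy to tabulate directly in SQL to see the curve it implies. The query below is only an illustration of that proposed formula, not an existing planner setting; the 3GB effective_cache_size is simply the value implied by the 87%/50%/25% examples quoted in the thread.

-- tabulate the proposed caching heuristic for 150MB, 1GB and 3GB reads
-- against effective_cache_size = 3GB (all sizes expressed in bytes)
SELECT pg_size_pretty(read_bytes) AS read_size,
       round(100 * (1 - 3.0 * read_bytes / (3.0 * read_bytes + ecs_bytes))) AS pct_assumed_cached
FROM (VALUES (157286400::bigint), (1073741824::bigint), (3221225472::bigint)) AS r(read_bytes)
CROSS JOIN (VALUES (3221225472::bigint)) AS e(ecs_bytes);

This returns roughly 87, 50 and 25 percent, matching the worked examples above; Tom Lane's point is that the proven Mackert-Lohmann formula should take the place of an ad-hoc curve like this one.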
[ { "msg_contents": "Hi,\n\nI have a query that is getting a pretty bad plan due to a massively\nincorrect count of expected rows. All tables in the query were vacuum\nanalyzed right before the query was tested. Disabling nested loops\ngives a significantly faster result (4s vs 292s).\nAny thoughts on what I can change to make the planner generate a better plan?\n\n\n32GB ram\neffective_cache_size = 16GB\nshared_buffers = 4GB\nrandom_page_cost = 1.5\ndefault_statistics_target = 100\nNote: for the tables in question, I tested default_statistics_target\nat 100, then also at 5000 to see if there was an improvement (none\nnoted).\n\n\n\nselect version();\n version\n------------------------------------------------------------------------------------------------------------------\n PostgreSQL 8.4.2 on x86_64-redhat-linux-gnu, compiled by GCC gcc\n(GCC) 4.1.2 20071124 (Red Hat 4.1.2-42), 64-bit\n\n\nexplain analyze\nselect c.id, c.transactionid, c.clickgenerated, c.confirmed,\nc.rejected, cr.rejectedreason\nfrom conversion c\ninner join conversionrejected cr on cr.idconversion = c.id\nwhere date = '2010-11-06'\nand idaction = 12906\nand idaffiliate = 198338\norder by transactionid;\n\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=2318120.52..2345652.23 rows=11012683 width=78) (actual\ntime=292668.896..292668.903 rows=70 loops=1)\n Sort Key: c.transactionid\n Sort Method: quicksort Memory: 43kB\n -> Nested Loop (cost=1234.69..715468.13 rows=11012683 width=78)\n(actual time=8687.314..292668.159 rows=70 loops=1)\n Join Filter: ((cr.idconversion = c.id) OR (c.id = 38441828354::bigint))\n -> Append (cost=1234.69..1244.03 rows=2 width=56) (actual\ntime=15.292..15.888 rows=72 loops=1)\n -> Bitmap Heap Scan on conversion c\n(cost=1234.69..1240.76 rows=1 width=31) (actual time=15.291..15.840\nrows=72 loops=1)\n Recheck Cond: ((idaffiliate = 198338) AND (date =\n'2010-11-06'::date))\n Filter: (idaction = 12906)\n -> BitmapAnd (cost=1234.69..1234.69 rows=4\nwidth=0) (actual time=15.152..15.152 rows=0 loops=1)\n -> Bitmap Index Scan on\nconversion_idaffiliate_idx (cost=0.00..49.16 rows=3492 width=0)\n(actual time=4.071..4.071 rows=28844 loops=1)\n Index Cond: (idaffiliate = 198338)\n -> Bitmap Index Scan on\nconversion_date_idx (cost=0.00..1185.28 rows=79282 width=0) (actual\ntime=10.343..10.343 rows=82400 loops=1)\n Index Cond: (date = '2010-11-06'::date)\n -> Index Scan using conversionlate_date_idx on\nconversionlate c (cost=0.00..3.27 rows=1 width=80) (actual\ntime=0.005..0.005 rows=0 loops=1)\n Index Cond: (date = '2010-11-06'::date)\n Filter: ((idaction = 12906) AND (idaffiliate = 198338))\n -> Seq Scan on conversionrejected cr (cost=0.00..191921.82\nrows=11012682 width=31) (actual time=0.003..1515.816 rows=11012682\nloops=72)\n Total runtime: 292668.992 ms\n\n\nselect count(*) from conversionrejected ;\n count\n----------\n 11013488\n\nTime: 3649.647 ms\n\nselect count(*) from conversion where date = '2010-11-06';\n count\n-------\n 82400\n\nTime: 507.985 ms\n\n\nselect count(*) from conversion;\n count\n----------\n 73419376(1 row)\n\nTime: 7100.619 ms\n\n\n\n-- with enable_nestloop to off;\n-- much faster!\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=234463.54..234463.54 rows=2 width=78) 
(actual\ntime=4035.340..4035.347 rows=70 loops=1)\n Sort Key: c.transactionid\n Sort Method: quicksort Memory: 43kB\n -> Hash Join (cost=1244.13..234463.53 rows=2 width=78) (actual\ntime=4024.816..4034.715 rows=70 loops=1)\n Hash Cond: (cr.idconversion = c.id)\n -> Seq Scan on conversionrejected cr (cost=0.00..191921.82\nrows=11012682 width=31) (actual time=0.003..1949.597 rows=11013576\nloops=1)\n -> Hash (cost=1244.11..1244.11 rows=2 width=56) (actual\ntime=19.312..19.312 rows=72 loops=1)\n -> Append (cost=1234.77..1244.11 rows=2 width=56)\n(actual time=18.539..19.261 rows=72 loops=1)\n -> Bitmap Heap Scan on conversion c\n(cost=1234.77..1240.83 rows=1 width=31) (actual time=18.538..19.235\nrows=72 loops=1)\n Recheck Cond: ((idaffiliate = 198338) AND\n(date = '2010-11-06'::date))\n Filter: (idaction = 12906)\n -> BitmapAnd (cost=1234.77..1234.77\nrows=4 width=0) (actual time=18.237..18.237 rows=0 loops=1)\n -> Bitmap Index Scan on\nconversion_idaffiliate_idx (cost=0.00..49.16 rows=3492 width=0)\n(actual time=4.932..4.932 rows=28844 loops=1)\n Index Cond: (idaffiliate = 198338)\n -> Bitmap Index Scan on\nconversion_date_idx (cost=0.00..1185.36 rows=79292 width=0) (actual\ntime=12.473..12.473 rows=82400 loops=1)\n Index Cond: (date = '2010-11-06'::date)\n -> Index Scan using conversionlate_date_idx on\nconversionlate c (cost=0.00..3.27 rows=1 width=80) (actual\ntime=0.006..0.006 rows=0 loops=1)\n Index Cond: (date = '2010-11-06'::date)\n Filter: ((idaction = 12906) AND\n(idaffiliate = 198338))\n Total runtime: 4035.439 ms\n\n\n\n\n\n\n-- for completeness,\n-- same query, on 9.0.0, underpowered server, 2 disks mirrored.\nApproximately the same table sizes/counts.\n\nselect version();\n version\n-------------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.0.0 on x86_64-unknown-linux-gnu, compiled by GCC gcc\n(GCC) 4.1.2 20080704 (Red Hat 4.1.2-48), 64-bit\n\n\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=120727.25..120727.25 rows=2 width=78) (actual\ntime=3214.827..3214.867 rows=70 loops=1)\n Sort Key: c.transactionid\n Sort Method: quicksort Memory: 43kB\n -> Nested Loop (cost=697.95..120727.24 rows=2 width=78) (actual\ntime=2955.321..3214.208 rows=70 loops=1)\n -> Append (cost=697.95..120712.87 rows=2 width=56) (actual\ntime=2931.584..3173.402 rows=72 loops=1)\n -> Bitmap Heap Scan on conversion c\n(cost=697.95..120706.59 rows=1 width=31) (actual\ntime=2931.582..3150.231 rows=72 loops=1)\n Recheck Cond: (date = '2010-11-06'::date)\n Filter: ((idaction = 12906) AND (idaffiliate = 198338))\n -> Bitmap Index Scan on conversion_date_idx\n(cost=0.00..697.95 rows=44365 width=0) (actual time=51.692..51.692\nrows=82400 loops=1)\n Index Cond: (date = '2010-11-06'::date)\n -> Index Scan using conversionlate_idaffiliate_idx on\nconversionlate c (cost=0.00..6.27 rows=1 width=80) (actual\ntime=23.091..23.091 rows=0 loops=1)\n Index Cond: (idaffiliate = 198338)\n Filter: ((date = '2010-11-06'::date) AND\n(idaction = 12906))\n -> Index Scan using conversionrejected_pk on\nconversionrejected cr (cost=0.00..7.17 rows=1 width=31) (actual\ntime=0.563..0.564 rows=1 loops=72)\n Index Cond: (cr.idconversion = c.id)\n Total runtime: 3214.972 ms\n\n\n\nThanks,\n\nBricklen\n", "msg_date": "Tue, 9 Nov 2010 13:26:47 -0800", "msg_from": "bricklen <[email protected]>", "msg_from_op": true, 
"msg_subject": "Huge overestimation in rows expected results in bad plan" }, { "msg_contents": "On 11/9/2010 3:26 PM, bricklen wrote:\n> Hi,\n>\n> I have a query that is getting a pretty bad plan due to a massively\n> incorrect count of expected rows. All tables in the query were vacuum\n> analyzed right before the query was tested. Disabling nested loops\n> gives a significantly faster result (4s vs 292s).\n> Any thoughts on what I can change to make the planner generate a better plan?\n>\n>\n> explain analyze\n> select c.id, c.transactionid, c.clickgenerated, c.confirmed,\n> c.rejected, cr.rejectedreason\n> from conversion c\n> inner join conversionrejected cr on cr.idconversion = c.id\n> where date = '2010-11-06'\n> and idaction = 12906\n> and idaffiliate = 198338\n> order by transactionid;\n>\n>\n\n> -> Seq Scan on conversionrejected cr (cost=0.00..191921.82\n> rows=11012682 width=31) (actual time=0.003..1515.816 rows=11012682\n> loops=72)\n> Total runtime: 292668.992 ms\n>\n>\n>\n>\n\nLooks like the table stats are ok. But its doing a sequential scan. \nAre you missing an index?\n\nAlso:\n\nhttp://explain.depesz.com/\n\nis magic.\n\n-Andy\n", "msg_date": "Tue, 09 Nov 2010 16:48:04 -0600", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge overestimation in rows expected results in bad\n plan" }, { "msg_contents": "On Tue, Nov 9, 2010 at 2:48 PM, Andy Colson <[email protected]> wrote:\n> On 11/9/2010 3:26 PM, bricklen wrote:\n>\n>>          ->   Seq Scan on conversionrejected cr  (cost=0.00..191921.82\n>> rows=11012682 width=31) (actual time=0.003..1515.816 rows=11012682\n>> loops=72)\n>>  Total runtime: 292668.992 ms\n>>\n>\n> Looks like the table stats are ok.  But its doing a sequential scan. Are you\n> missing an index?\n>\n> Also:\n>\n> http://explain.depesz.com/\n>\n> is magic.\n>\n> -Andy\n>\n\nThe PK is on the conversionrejected table in all three databases I\ntested (I also tested our Greenplum datawarehouse). The \"idconversion\"\nattribute is a bigint in both tables, so it's not a type mismatch.\n\n\\d conversionrejected\n Table \"public.conversionrejected\"\n Column | Type | Modifiers\n----------------+--------+-----------\n idconversion | bigint | not null\n rejectedreason | text | not null\nIndexes:\n \"conversionrejected_pk\" PRIMARY KEY, btree (idconversion)\n\nYeah, that explain visualizer from depesz is a handy tool, I use frequently.\n", "msg_date": "Tue, 9 Nov 2010 14:59:02 -0800", "msg_from": "bricklen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Huge overestimation in rows expected results in bad plan" }, { "msg_contents": "bricklen <[email protected]> writes:\n> I have a query that is getting a pretty bad plan due to a massively\n> incorrect count of expected rows.\n\nThe query doesn't seem to match the plan. Where is that OR (c.id =\n38441828354::bigint) condition coming from?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 09 Nov 2010 18:29:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge overestimation in rows expected results in bad plan " }, { "msg_contents": "On Tue, Nov 9, 2010 at 3:29 PM, Tom Lane <[email protected]> wrote:\n> bricklen <[email protected]> writes:\n>> I have a query that is getting a pretty bad plan due to a massively\n>> incorrect count of expected rows.\n>\n> The query doesn't seem to match the plan.  
Where is that OR (c.id =\n> 38441828354::bigint) condition coming from?\n>\n>                        regards, tom lane\n>\n\nAh sorry, I was testing it with and without that part. Here is the\ncorrected query, with that as part of the join condition:\n\nexplain analyze\nselect c.id, c.transactionid, c.clickgenerated, c.confirmed,\nc.rejected, cr.rejectedreason\nfrom conversion c\ninner join conversionrejected cr on cr.idconversion = c.id or c.id = 38441828354\nwhere date = '2010-11-06'\nand idaction = 12906\nand idaffiliate = 198338\norder by transactionid;\n", "msg_date": "Tue, 9 Nov 2010 15:39:24 -0800", "msg_from": "bricklen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Huge overestimation in rows expected results in bad plan" }, { "msg_contents": "bricklen <[email protected]> writes:\n> On Tue, Nov 9, 2010 at 3:29 PM, Tom Lane <[email protected]> wrote:\n>> The query doesn't seem to match the plan.  Where is that OR (c.id =\n>> 38441828354::bigint) condition coming from?\n\n> Ah sorry, I was testing it with and without that part. Here is the\n> corrected query, with that as part of the join condition:\n\n> explain analyze\n> select c.id, c.transactionid, c.clickgenerated, c.confirmed,\n> c.rejected, cr.rejectedreason\n> from conversion c\n> inner join conversionrejected cr on cr.idconversion = c.id or c.id = 38441828354\n> where date = '2010-11-06'\n> and idaction = 12906\n> and idaffiliate = 198338\n> order by transactionid;\n\nHm. Well, the trouble with that query is that if there is any\nconversion row with c.id = 38441828354, it will join to *every* row of\nconversionrejected. The planner not unreasonably assumes there will be\nat least one such row, so it comes up with a join size estimate that's\n>= size of conversionrejected; and it also tends to favor a seqscan\nsince it thinks it's going to have to visit every row of\nconversionrejected anyway.\n\nIf you have reason to think the c.id = 38441828354 test is usually dead\ncode, you might see if you can get rid of it, or at least rearrange the\nquery as a UNION of two independent joins.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 09 Nov 2010 18:55:12 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge overestimation in rows expected results in bad plan " }, { "msg_contents": "On Tue, Nov 9, 2010 at 3:55 PM, Tom Lane <[email protected]> wrote:\n> bricklen <[email protected]> writes:\n>> On Tue, Nov 9, 2010 at 3:29 PM, Tom Lane <[email protected]> wrote:\n>>> The query doesn't seem to match the plan.  Where is that OR (c.id =\n>>> 38441828354::bigint) condition coming from?\n>\n>> Ah sorry, I was testing it with and without that part. Here is the\n>> corrected query, with that as part of the join condition:\n>\n>> explain analyze\n>> select c.id, c.transactionid, c.clickgenerated, c.confirmed,\n>> c.rejected, cr.rejectedreason\n>> from conversion c\n>> inner join conversionrejected cr on cr.idconversion = c.id or c.id = 38441828354\n>> where date = '2010-11-06'\n>> and idaction = 12906\n>> and idaffiliate = 198338\n>> order by transactionid;\n>\n> Hm.  Well, the trouble with that query is that if there is any\n> conversion row with c.id = 38441828354, it will join to *every* row of\n> conversionrejected.  
The planner not unreasonably assumes there will be\n> at least one such row, so it comes up with a join size estimate that's\n>>= size of conversionrejected; and it also tends to favor a seqscan\n> since it thinks it's going to have to visit every row of\n> conversionrejected anyway.\n>\n> If you have reason to think the c.id = 38441828354 test is usually dead\n> code, you might see if you can get rid of it, or at least rearrange the\n> query as a UNION of two independent joins.\n>\n>                        regards, tom lane\n>\n\nOkay, thanks. I'll talk to the developer that wrote that query and see\nwhat he has to say about it.\n\n\nCheers,\n\nBricklen\n", "msg_date": "Tue, 9 Nov 2010 16:10:33 -0800", "msg_from": "bricklen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Huge overestimation in rows expected results in bad plan" } ]
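Tom Lane's suggested rewrite, turning the OR'd join condition into a UNION of two independent joins, could look roughly like the sketch below. It is only a sketch: it assumes the duplicate-removal behaviour of UNION is acceptable for this result set (otherwise UNION ALL plus explicit de-duplication is needed), but it lets each branch be estimated on its own, so the special-cased id no longer drags the row estimate up to the full size of conversionrejected.

-- branch 1: the normal join key
select c.id, c.transactionid, c.clickgenerated, c.confirmed, c.rejected, cr.rejectedreason
from conversion c
join conversionrejected cr on cr.idconversion = c.id
where date = '2010-11-06' and idaction = 12906 and idaffiliate = 198338
union
-- branch 2: the special-cased id, which pairs with every conversionrejected row
select c.id, c.transactionid, c.clickgenerated, c.confirmed, c.rejected, cr.rejectedreason
from conversion c
join conversionrejected cr on c.id = 38441828354
where date = '2010-11-06' and idaction = 12906 and idaffiliate = 198338
order by transactionid;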
[ { "msg_contents": "I use the postgresql in default configuration and use inheritance way to create table.\r\n \r\nMy postgresql version is:\r\n \r\nSELECT version();\r\n \r\n version \r\n \r\n--------------------------------------------------------------------------------\r\n \r\n PostgreSQL 9.0.1 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.5, 32-bit\r\n \r\n(1 row)\r\n \r\n \r\n \r\nReboot the computer to avoid memory cache. And then get the following explain:\r\n \r\nEXPLAIN ANALYZE SELECT authdomain,authuser,count(*),sum(SIZE) FROM tbltrafficlog WHERE (PROTOCOL in ('HTTP','HTTPS','FTP')) and (TIME >= '2010-10-01 00:00:00' AND TIME < '2010-11-01 00:00:00') GROUP BY authdomain,authuser order by count(*) DESC LIMIT 10 OFFSET 0;\r\n \r\nQUERY PLAN\r\n \r\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\n \r\n Limit (cost=600830.83..600830.86 rows=10 width=19) (actual time=225034.470..225034.483 rows=10 loops=1)\r\n \r\n -> Sort (cost=600830.83..600833.25 rows=968 width=19) (actual time=225034.469..225034.473 rows=10 loops=1)\r\n \r\n Sort Key: (count(*))\r\n \r\n Sort Method: top-N heapsort Memory: 17kB\r\n \r\n -> HashAggregate (cost=600795.40..600809.92 rows=968 width=19) (actual time=225018.666..225019.522 rows=904 loops=1)\r\n \r\n -> Append (cost=0.00..535281.08 rows=6551432 width=19) (actual time=4734.441..205514.878 rows=7776000 loops=1)\r\n \r\n -> Seq Scan on tbltrafficlog (cost=0.00..11.50 rows=1 width=298) (actual time=0.001..0.001 rows=0 loops=1)\r\n \r\n Filter: ((\"time\" >= '2010-10-01 00:00:00'::timestamp without time zone) AND (\"time\" < '2010-11-01 00:00:00'::timestamp without time zone) AND ((protocol)::text = ANY ('{HTTP,HTTPS,FTP}'::text[])))\r\n \r\n -> Bitmap Heap Scan on tbltrafficlog_20101001 tbltrafficlog (cost=4471.33..17819.25 rows=218129 width=19) (actual time=4734.437..6096.206 rows=259200 loops=1)\r\n \r\n Recheck Cond: ((protocol)::text = ANY ('{HTTP,HTTPS,FTP}'::text[]))\r\n \r\n Filter: ((\"time\" >= '2010-10-01 00:00:00'::timestamp without time zone) AND (\"time\" < '2010-11-01 00:00:00'::timestamp without time zone))\r\n \r\n -> Bitmap Index Scan on tbltrafficlog_20101001_protocol_idx (cost=0.00..4416.80 rows=218129 width=0) (actual time=4731.860..4731.860 rows=259200 loops=1)\r\n \r\n Index Cond: ((protocol)::text = ANY ('{HTTP,HTTPS,FTP}'::text[]))\r\n \r\n…\r\n \r\n -> Bitmap Heap Scan on tbltrafficlog_20101030 tbltrafficlog (cost=4472.75..17824.12 rows=218313 width=19) (actual time=4685.536..6090.222 rows=259200 loops=1)\r\n \r\n Recheck Cond: ((protocol)::text = ANY ('{HTTP,HTTPS,FTP}'::text[]))\r\n \r\n Filter: ((\"time\" >= '2010-10-01 00:00:00'::timestamp without time zone) AND (\"time\" < '2010-11-01 00:00:00'::timestamp without time zone))\r\n \r\n -> Bitmap Index Scan on tbltrafficlog_20101030_protocol_idx (cost=0.00..4418.17 rows=218313 width=0) (actual time=4677.147..4677.147 rows=259200 loops=1)\r\n \r\n Index Cond: ((protocol)::text = ANY ('{HTTP,HTTPS,FTP}'::text[]))\r\n \r\n Total runtime: 225044.255 ms\r\n \r\n \r\n \r\nReboot the computer again. 
And then I close bitmap scan manually and get the following explain:\r\n \r\nSET SET enable_bitmapscan TO off;\r\n \r\nEXPLAIN ANALYZE SELECT authdomain,authuser,count(*),sum(SIZE) FROM tbltrafficlog WHERE (PROTOCOL in ('HTTP','HTTPS','FTP')) and (TIME >= '2010-10-01 00:00:00' AND TIME < '2010-11-01 00:00:00') GROUP BY authdomain,authuser order by count(*) DESC LIMIT 10 OFFSET 0;\r\n \r\nQUERY PLAN\r\n \r\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit (cost=634901.26..634901.28 rows=10 width=19) (actual time=83805.465..83805.477 rows=10 loops=1)\r\n \r\n -> Sort (cost=634901.26..634903.68 rows=968 width=19) (actual time=83805.463..83805.467 rows=10 loops=1)\r\n \r\n Sort Key: (count(*))\r\n \r\n Sort Method: top-N heapsort Memory: 17kB\r\n \r\n -> HashAggregate (cost=634865.82..634880.34 rows=968 width=19) (actual time=83789.686..83790.540 rows=904 loops=1)\r\n \r\n -> Append (cost=0.00..569351.50 rows=6551432 width=19) (actual time=0.010..64393.284 rows=7776000 loops=1)\r\n \r\n -> Seq Scan on tbltrafficlog (cost=0.00..11.50 rows=1 width=298) (actual time=0.001..0.001 rows=0 loops=1)\r\n \r\n Filter: ((\"time\" >= '2010-10-01 00:00:00'::timestamp without time zone) AND (\"time\" < '2010-11-01 00:00:00'::timestamp without time zone) AND ((protocol)::text = ANY ('{HTTP,HTTPS,FTP}'::text[])))\r\n \r\n -> Seq Scan on tbltrafficlog_20101001 tbltrafficlog (cost=0.00..18978.00 rows=218129 width=19) (actual time=0.008..1454.757 rows=259200 loops=1)\r\n \r\n Filter: ((\"time\" >= '2010-10-01 00:00:00'::timestamp without time zone) AND (\"time\" < '2010-11-01 00:00:00'::timestamp without time zone) AND ((protocol)::text = ANY ('{HTTP,HTTPS,FTP}'::text[])))\r\n \r\n…\r\n \r\n-> Seq Scan on tbltrafficlog_20101030 tbltrafficlog (cost=0.00..18978.00 rows=218313 width=19) (actual time=0.025..1483.817 rows=259200 loops=1)\r\n \r\n Filter: ((\"time\" >= '2010-10-01 00:00:00'::timestamp without time zone) AND (\"time\" < '2010-11-01 00:00:00'::timestamp without time zone) AND ((protocol)::text = ANY ('{HTTP,HTTPS,FTP}'::text[])))\r\n \r\n Total runtime: 83813.808 ms\r\n \r\n \r\n \r\nOkay, 225044.255ms VS 83813.808 ms, it obviously seems that the planner select one bad scan plan by default.\n\nI use the postgresql in default configuration\r\nand use inheritance way to create table.\nMy postgresql version is:\nSELECT version();\n\nversion \n--------------------------------------------------------------------------------\n PostgreSQL 9.0.1 on i686-pc-linux-gnu,\r\ncompiled by GCC gcc (GCC) 3.3.5, 32-bit\n(1 row)\n \nReboot the computer to avoid memory cache. 
And\r\nthen get the following explain:\nEXPLAIN ANALYZE SELECT\r\nauthdomain,authuser,count(*),sum(SIZE) FROM tbltrafficlog WHERE (PROTOCOL in\r\n('HTTP','HTTPS','FTP')) and (TIME >= '2010-10-01 00:00:00' AND TIME <\r\n'2010-11-01 00:00:00') GROUP BY authdomain,authuser order by count(*) DESC\r\nLIMIT 10 OFFSET 0;\nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit\n(cost=600830.83..600830.86 rows=10 width=19) (actual\r\ntime=225034.470..225034.483 rows=10 loops=1)\n ->\nSort (cost=600830.83..600833.25\r\nrows=968 width=19) (actual time=225034.469..225034.473 rows=10 loops=1)\n Sort Key: (count(*))\n Sort Method: top-N heapsort Memory: 17kB\n ->\nHashAggregate\n(cost=600795.40..600809.92 rows=968 width=19) (actual\r\ntime=225018.666..225019.522 rows=904 loops=1)\n -> Append\n(cost=0.00..535281.08 rows=6551432 width=19) (actual\r\ntime=4734.441..205514.878 rows=7776000 loops=1)\n -> Seq Scan on tbltrafficlog (cost=0.00..11.50 rows=1 width=298) (actual\r\ntime=0.001..0.001 rows=0 loops=1)\n Filter: ((\"time\" >=\r\n'2010-10-01 00:00:00'::timestamp without time zone) AND (\"time\" <\r\n'2010-11-01 00:00:00'::timestamp without time zone) AND ((protocol)::text = ANY\r\n('{HTTP,HTTPS,FTP}'::text[])))\n -> Bitmap Heap Scan on tbltrafficlog_20101001\r\ntbltrafficlog (cost=4471.33..17819.25\r\nrows=218129 width=19) (actual time=4734.437..6096.206 rows=259200 loops=1)\n Recheck Cond:\r\n((protocol)::text = ANY ('{HTTP,HTTPS,FTP}'::text[]))\n Filter:\r\n((\"time\" >= '2010-10-01 00:00:00'::timestamp without time zone)\r\nAND (\"time\" < '2010-11-01 00:00:00'::timestamp without time zone))\n -> Bitmap Index Scan on\r\ntbltrafficlog_20101001_protocol_idx\n(cost=0.00..4416.80 rows=218129 width=0) (actual time=4731.860..4731.860\r\nrows=259200 loops=1)\n Index Cond:\r\n((protocol)::text = ANY ('{HTTP,HTTPS,FTP}'::text[]))\n…\n -> Bitmap Heap Scan on tbltrafficlog_20101030\r\ntbltrafficlog (cost=4472.75..17824.12\r\nrows=218313 width=19) (actual time=4685.536..6090.222 rows=259200 loops=1)\n Recheck Cond:\r\n((protocol)::text = ANY ('{HTTP,HTTPS,FTP}'::text[]))\n Filter:\r\n((\"time\" >= '2010-10-01 00:00:00'::timestamp without time zone)\r\nAND (\"time\" < '2010-11-01 00:00:00'::timestamp without time zone))\n -> Bitmap Index Scan on\r\ntbltrafficlog_20101030_protocol_idx\n(cost=0.00..4418.17 rows=218313 width=0) (actual time=4677.147..4677.147\r\nrows=259200 loops=1)\n Index Cond:\r\n((protocol)::text = ANY ('{HTTP,HTTPS,FTP}'::text[]))\n Total runtime: 225044.255 ms\n \nReboot the computer again. 
And then I close\r\nbitmap scan manually and get the following explain:\nSET SET\r\nenable_bitmapscan TO off;\nEXPLAIN ANALYZE SELECT\r\nauthdomain,authuser,count(*),sum(SIZE) FROM tbltrafficlog WHERE (PROTOCOL in\r\n('HTTP','HTTPS','FTP')) and (TIME >= '2010-10-01 00:00:00' AND TIME <\r\n'2010-11-01 00:00:00') GROUP BY authdomain,authuser order by count(*) DESC\r\nLIMIT 10 OFFSET 0;\nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit\n(cost=634901.26..634901.28 rows=10 width=19) (actual\r\ntime=83805.465..83805.477 rows=10 loops=1)\n ->\nSort (cost=634901.26..634903.68\r\nrows=968 width=19) (actual time=83805.463..83805.467 rows=10 loops=1)\n Sort Key: (count(*))\n Sort Method: top-N heapsort Memory: 17kB\n ->\nHashAggregate\n(cost=634865.82..634880.34 rows=968 width=19) (actual\r\ntime=83789.686..83790.540 rows=904 loops=1)\n -> Append\n(cost=0.00..569351.50 rows=6551432 width=19) (actual\r\ntime=0.010..64393.284 rows=7776000 loops=1)\n -> Seq Scan on tbltrafficlog (cost=0.00..11.50 rows=1 width=298) (actual\r\ntime=0.001..0.001 rows=0 loops=1)\n Filter:\r\n((\"time\" >= '2010-10-01 00:00:00'::timestamp without time zone)\r\nAND (\"time\" < '2010-11-01 00:00:00'::timestamp without time zone)\r\nAND ((protocol)::text = ANY ('{HTTP,HTTPS,FTP}'::text[])))\n -> Seq Scan on tbltrafficlog_20101001\r\ntbltrafficlog (cost=0.00..18978.00\r\nrows=218129 width=19) (actual time=0.008..1454.757 rows=259200 loops=1)\n Filter:\r\n((\"time\" >= '2010-10-01 00:00:00'::timestamp without time zone)\r\nAND (\"time\" < '2010-11-01 00:00:00'::timestamp without time zone)\r\nAND ((protocol)::text = ANY ('{HTTP,HTTPS,FTP}'::text[])))\n…\n-> Seq Scan on tbltrafficlog_20101030\r\ntbltrafficlog (cost=0.00..18978.00\r\nrows=218313 width=19) (actual time=0.025..1483.817 rows=259200 loops=1)\n Filter:\r\n((\"time\" >= '2010-10-01 00:00:00'::timestamp without time zone)\r\nAND (\"time\" < '2010-11-01 00:00:00'::timestamp without time zone)\r\nAND ((protocol)::text = ANY ('{HTTP,HTTPS,FTP}'::text[])))\n Total runtime: 83813.808 ms\n \nOkay, 225044.255ms VS 83813.808 ms, it obviously seems that the\r\nplanner select one bad scan plan by default.", "msg_date": "Wed, 10 Nov 2010 17:37:29 +0800", "msg_from": "\"=?gbk?B?vrKwssvC?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Why dose the planner select one bad scan plan." }, { "msg_contents": "> Okay, 225044.255ms VS 83813.808 ms, it obviously seems that the planner\n> select one bad scan plan by default.\n\nActually no, the planner chose the cheapest plan (more precisely a plan\nwith the lowest computed cost). The first plan has a cost 600830.86 while\nthe second one has a cost 634901.28, so the first one is chosen.\n\nTo fix this, you'll have to tweak the cost variables, and maybe work_mem.\nSee this -\nhttp://www.postgresql.org/docs/9.0/interactive/runtime-config-query.html#RUNTIME-CONFIG-QUERY-CONSTANTS\n(but I'm not sure which of those influence the Bitmap Heap Scan /\nHashAggregate plans).\n\nSo you'll have to modify these values until the hash aggregate plan is\ncheaper. And you don't need to reboot the machine between EXPLAIN\nexecutions. 
And even if you do EXPLAIN ANALYZE it's not necessary - there\nare better ways to clear the filesystem cache.\n\nBTW this is not a bug, so it's pointless to send it to 'bugs' mailinglist.\n\nregards\nTomas\n\n", "msg_date": "Wed, 10 Nov 2010 13:48:47 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Why dose the planner select one bad scan plan." } ]
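The tuning Tomas describes can be tried per session, so the two plans can be compared without editing postgresql.conf or restarting anything. The values below are illustrative starting points only, not recommendations:

SET effective_cache_size = '2GB';  -- roughly the RAM the OS can realistically use for caching
SET random_page_cost = 2.0;        -- lower it when most reads are expected to come from cache
SET work_mem = '64MB';             -- gives the HashAggregate more room
-- now re-run the EXPLAIN ANALYZE from the thread above and compare the estimated costs

A plain RESET ALL afterwards puts the session back to the server defaults.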
[ { "msg_contents": "Grzegorz Jaśkiewicz wrote:\n \n> you're joining on more than one key. That always hurts performance.\n \nThat's very clearly *not* the problem, as there is a plan which runs\nin acceptable time but the optimizer is not choosing without being\ncoerced.\n \n(1) Virtually every query we run joins on multi-column keys, yet we\nhave good performance except for this one query.\n \n(2) We're talking about a performance regression due to a new release\npicking a newly available plan which it wrongly estimates to be an\norder of magnitude faster, when it's actually more than five times\nslower.\n \n-Kevin\n\n", "msg_date": "Wed, 10 Nov 2010 06:38:43 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: anti-join chosen even when slower than old plan" } ]
[ { "msg_contents": "Thanks for your answer! And I am sorry for trading the question as a bug, and send it to 'bugs' mailing-list. \r\n \r\nBut I doubt your answer. I think the essence of the problem is when the planner selects 'Bitmap Index Scan' and how the planner computes the cost of 'Bitmap Index Scan'. \r\n \r\nTom Lane said “In principle a bitmap index scan should be significantly faster if the index can return the bitmap more or less \"natively\" rather than having to construct it. My recollection though is that a significant amount of work is needed to make that happen, and that there is no existing patch that tackled the problem. So I'm not sure that this report should be taken as indicating that there's no chance of a SELECT performance improvement. What it does say is that we have to do that work if we want to make bitmap indexes useful.”\r\n \r\nOkay, I want to know how the planner computes the cost of constructing bitmap. And when the planner computes the cost of 'Bitmap Index Scan', if it considers the influence of memory cache? As when I do not clear the memory cache, I find the 'Bitmap Index Scan' is real fast than 'Seq Scan'.\r\n \r\n Best Regards!\r\n \r\nAsen\n\nThanks for your answer! And I am sorry for trading the\r\nquestion as a bug, and send it to 'bugs' mailing-list. \nBut I doubt your answer. I think the essence of the problem\r\nis when the planner selects 'Bitmap Index Scan' and how the planner computes the\r\ncost of 'Bitmap Index Scan'. \nTom Lane said “In principle a bitmap index scan should be significantly faster if the index can return the bitmap more or less\r\n\"natively\" rather than having to construct it. My recollection though\r\nis that a significant amount of work is needed to make that happen, and that\r\nthere is no existing patch that tackled the problem. So I'm not sure that this\r\nreport should be taken as indicating that there's no chance of a SELECT\r\nperformance improvement. What it does say is that we have to do that work if we\r\nwant to make bitmap indexes useful.”\nOkay, I want to know how the planner computes the cost of\r\nconstructing bitmap. And when the planner computes the cost of 'Bitmap Index Scan',\r\nif it considers the influence of memory cache? As when I do not clear the\r\nmemory cache, I find the 'Bitmap Index Scan' is real fast than 'Seq Scan'.\n Best Regards!\nAsen", "msg_date": "Thu, 11 Nov 2010 15:03:45 +0800", "msg_from": "\"=?gbk?B?vrKwssvC?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why dose the planner select one bad scan plan." }, { "msg_contents": "> But I doubt your answer. I think the essence of the problem is when the\n> planner selects 'Bitmap Index Scan' and how the planner computes the cost\n> of 'Bitmap Index Scan'.\n\nThe essence of the problem obviously is a bad estimate of the cost. The\nplanner considers the two plans, computes the costs and then chooses the\none with the lower cost. But obviously the cost does not reflect the\nreality (first time when the query is executed and the filesystem cache is\nempty).\n\n> Tom Lane said ��In principle a bitmap index scan should be significantly\n> faster if the index can return the bitmap more or less \"natively\" rather\n> than having to construct it. My recollection though is that a significant\n> amount of work is needed to make that happen, and that there is no\n> existing patch that tackled the problem. So I'm not sure that this report\n> should be taken as indicating that there's no chance of a SELECT\n> performance improvement. 
What it does say is that we have to do that work\n> if we want to make bitmap indexes useful.��\n\nTom Lane is right (as usual). The point is that when computing the cost,\nplanner does not know whether the data are already in the filesystem cache\nor if it has to fetch them from the disk (which is much slower).\n\n> Okay, I want to know how the planner computes the cost of constructing\n> bitmap. And when the planner computes the cost of 'Bitmap Index Scan', if\n> it considers the influence of memory cache? As when I do not clear the\n> memory cache, I find the 'Bitmap Index Scan' is real fast than 'Seq\n> Scan'.\n\nThere are two things here - loading the data from a disk into a cache\n(filesystem cache at the OS level / shared buffers at the PG level), and\nthen the execution itself.\n\nPostgreSQL estimates the first part using an effective_cache_size hint,\nand uses that to estimate the probability that the data are already in the\nfilesystem cache. But you're confusing him by the 'reboot' which results\nin an empty cache.\n\nThe plan itself seems fine to me - you might play with the cost variables,\nbut I think it won't improve the overall perfomance.\n\nActually what you see is a worst case scenario - the plan is not bad if\nthe data are in a cache (filesystem or shared buffers), but when Pg has to\nread the data from the disk, performance sucks. But is this reflecting\nreality? How often is the query executed? What other queries are executed\non the box? What is the size of shared_buffers?\n\nIf the query is executed often (compared to other queries) and the shared\nbuffers is set high enough, most of the table will remain in the shared\nbuffers and everything will work fine.\n\nTomas\n\n", "msg_date": "Thu, 11 Nov 2010 09:43:35 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Why dose the planner select one bad scan plan." }, { "msg_contents": "On Thu, Nov 11, 2010 at 3:43 AM, <[email protected]> wrote:\n>> Okay, I want to know how the planner computes the cost of constructing\n>> bitmap. And when the planner computes the cost of 'Bitmap Index Scan', if\n>> it considers the influence of memory cache? As when I do not clear the\n>> memory cache, I find the 'Bitmap Index Scan' is real fast than 'Seq\n>> Scan'.\n>\n> There are two things here - loading the data from a disk into a cache\n> (filesystem cache at the OS level / shared buffers at the PG level), and\n> then the execution itself.\n>\n> PostgreSQL estimates the first part using an effective_cache_size hint,\n> and uses that to estimate the probability that the data are already in the\n> filesystem cache.\n\nNo, it does not do that.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Sun, 14 Nov 2010 18:00:58 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why dose the planner select one bad scan plan." } ]
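Tomas's point earlier in the thread, that a frequently executed query will keep most of the table in shared_buffers, can be checked rather than assumed with the pg_buffercache contrib module. The query below is a rough sketch: it shows only PostgreSQL's own buffer cache, not the operating system's page cache, and it assumes the default 8kB block size.

-- requires the pg_buffercache contrib module to be installed in this database
SELECT c.relname,
       count(*) AS buffers,
       pg_size_pretty(count(*) * 8192) AS buffered
FROM pg_buffercache b
JOIN pg_class c ON c.relfilenode = b.relfilenode
WHERE b.reldatabase = (SELECT oid FROM pg_database WHERE datname = current_database())
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;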
[ { "msg_contents": "Hello,\n\nin the last years, we have successfully manage to cope with our data\ngrowth \nusing partitioning and splitting large aggregation tasks on multiple\nthreads.\nThe partitioning is done logically by our applicationn server, thus\navoiding trigger overhead.\n\nThere are a few places in our data flow where we have to wait for index\ncreation before being able to distribute the process on multiple threads\nagain.\n\nWith the expected growth, create index will probably become a severe\nbottleneck for us.\n\nIs there any chance to see major improvement on it in a middle future ?\nI guess the question is naive, but why can't posgres use multiple\nthreads for large sort operation ?\n\n\nbest regards,\n\nMarc Mamin\n", "msg_date": "Thu, 11 Nov 2010 14:41:12 +0100", "msg_from": "\"Marc Mamin\" <[email protected]>", "msg_from_op": true, "msg_subject": "CREATE INDEX as bottleneck" }, { "msg_contents": "On Thu, Nov 11, 2010 at 02:41:12PM +0100, Marc Mamin wrote:\n> Hello,\n> \n> in the last years, we have successfully manage to cope with our data\n> growth \n> using partitioning and splitting large aggregation tasks on multiple\n> threads.\n> The partitioning is done logically by our applicationn server, thus\n> avoiding trigger overhead.\n> \n> There are a few places in our data flow where we have to wait for index\n> creation before being able to distribute the process on multiple threads\n> again.\n> \n> With the expected growth, create index will probably become a severe\n> bottleneck for us.\n> \n> Is there any chance to see major improvement on it in a middle future ?\n> I guess the question is naive, but why can't posgres use multiple\n> threads for large sort operation ?\n> \n> \n> best regards,\n> \n> Marc Mamin\n> \n\nThere has been a recent discussion on the hackers mailing list on\nusing the infrastructure that is already in place to lauch autovacuum\nprocesses to launch other helper processes. Something like this could\nbe used to offload the sort process to a much more parallelize version\nthat could take advantage of multiple I/O streams and CPU cores. Many\nthings are possible given the appropriate resources: funding, coding\nand development cycles...\n\nRegards,\nKen\n", "msg_date": "Thu, 11 Nov 2010 07:58:00 -0600", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE INDEX as bottleneck" }, { "msg_contents": "On Thu, Nov 11, 2010 at 06:41, Marc Mamin <[email protected]> wrote:\n> There are a few places in our data flow where we have to wait for index\n> creation before being able to distribute the process on multiple threads\n> again.\n\nWould CREATE INDEX CONCURRENTLY help here?\n", "msg_date": "Thu, 11 Nov 2010 11:54:37 -0700", "msg_from": "Alex Hunsaker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE INDEX as bottleneck" }, { "msg_contents": "No, CONCURRENTLY is to improve table availability during index creation, but it degrades the performances.\r\n\r\nbest regards,\r\n\r\nMarc Mamin\r\n\r\n\r\n-----Original Message-----\r\nFrom: Alex Hunsaker [mailto:[email protected]] \r\nSent: Donnerstag, 11. 
November 2010 19:55\r\nTo: Marc Mamin\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] CREATE INDEX as bottleneck\r\n\r\nOn Thu, Nov 11, 2010 at 06:41, Marc Mamin <[email protected]> wrote:\r\n> There are a few places in our data flow where we have to wait for index\r\n> creation before being able to distribute the process on multiple threads\r\n> again.\r\n\r\nWould CREATE INDEX CONCURRENTLY help here?\r\n", "msg_date": "Thu, 11 Nov 2010 21:05:27 +0100", "msg_from": "\"Marc Mamin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CREATE INDEX as bottleneck" } ]
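Until a single CREATE INDEX can use more than one core, the usual workarounds are to give each build a generous sort budget and to run the independent index builds from separate connections, since plain CREATE INDEX takes a lock that does not conflict with another CREATE INDEX on the same table. The sketch below uses hypothetical table and column names:

-- session 1
SET maintenance_work_mem = '1GB';   -- index build sorts use this setting, not work_mem
CREATE INDEX part_20101111_user_idx ON part_20101111 (iduser);

-- session 2, started at the same time from a second connection
SET maintenance_work_mem = '1GB';
CREATE INDEX part_20101111_start_idx ON part_20101111 (starttime);

Raising maintenance_work_mem only helps while the sorts still fit in RAM, and the parallel builds will of course compete for the same disks.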
[ { "msg_contents": "This is my first post in this mailing list and I would like to raise an\nissue that in my opinion is causing performance issues of PostgreSQL\nespecially in a transaction processing environment. In my company we are\nusing PostgreSQL for the last 8 year for our in-house developed billing\nsystem (telecom). The last few months we started considering moving to\nanother RDBMS just because of this issue. \n\nAfter all these years, I believe that the biggest improvement that could\nbe done and will boost overall performance especially for enterprise\napplication will be to improve Multiversion Concurrency Control (MVCC)\nmechanism. In theory this seems to be improving performance for SELECT\nqueries but on tables with very intensive and frequent updates, even\nthat is not fully true because of the fragmentation of data caused by\nMVCC. I saw cases were a SELECT COUNT(*) on an empty (!!!) table (used\nas a buffer) took more than 40min to return a result! VACUUM is not a\nsolution in my opinion even though after the introduction of autovacuum\ndaemon situation got much better.\n\nPROBLEM DECRIPTION\n------------------\nBy definition of MVCC, when an UPDATE is performed, PostgreSQL creates a\nnew copy of the row in a new location. Any SELECT queries within the\nsame session are accessing the new version of the raw and all other\nqueries from other users are still accessing the old version. When\ntransaction is COMMIT PostgreSQL makes the a new version of the row as\nthe \"active\" row and expires the old row that remains \"dead\" and then is\nup to VACUUM procedure to recover the \"dead\" rows space and make it\navailable to the database engine. In case that transaction is ROLLBACK\nthen space reserved for the new version of the row is released. The\nresult is to have huge fragmentation on table space, unnecessary updates\nin all affected indexes, unnecessary costly I/O operations, poor\nperformance on SELECT that retrieves big record sets (i.e. reports etc)\nand slower updates. As an example, consider updating the \"live\" balance\nof a customer for each phone call where the entire customer record has\nto be duplicated again and again upon each call just for modifying a\nnumeric value! \n\nSUGGESTION\n--------------\n1) When a raw UPDATE is performed, store all \"new raw versions\" either\nin separate temporary table space \n or in a reserved space at the end of each table (can be allocated\ndynamically) etc \n2) Any SELECT queries within the same session will be again accessing\nthe new version of the row\n3) Any SELECT queries from other users will still be accessing the old\nversion\n4) When UPDATE transaction is ROLLBACK just release the space used in\nnew temporary location \n5) When UPDATE transaction is COMMIT then try to LOCK the old version\nand overwrite it at the same physical location (NO FRAGMENTATION).\n6) Similar mechanism can be applied on INSERTS and DELETES \n7) In case that transaction was COMMIT, the temporary location can be\neither released or archived/cleaned on a pre-scheduled basis. This will\npossibly allow the introduction of a TRANSACTION LOG backup mechanism as\na next step. \n8) After that VACUUM will have to deal only with deletions!!! \n\n\nI understand that my suggestion seems to be too simplified and also that\nthere are many implementation details and difficulties that I am not\naware. \n\nI strongly believe that the outcome of the discussion regarding this\nissue will be helpful. 
\n\nBest Regards, \n\nKyriacos Kyriacou\nSenior Developer/DBA\n\n\n", "msg_date": "Thu, 11 Nov 2010 20:25:52 +0200", "msg_from": "\"Kyriacos Kyriacou\" <[email protected]>", "msg_from_op": true, "msg_subject": "MVCC performance issue" }, { "msg_contents": "On 11/12/2010 02:25 AM, Kyriacos Kyriacou wrote:\n\n> The\n> result is to have huge fragmentation on table space, unnecessary updates\n> in all affected indexes, unnecessary costly I/O operations, poor\n> performance on SELECT that retrieves big record sets (i.e. reports etc)\n> and slower updates.\n\nYep. It's all about trade-offs. For some workloads the in-table MVCC \nstorage setup works pretty darn poorly, but for most it seems to work \nquite well.\n\nThere are various other methods of implementing relational storage with \nACID properties. You can exclude all other transactions while making a \nchange to a table, ensuring that nobody else can see \"old\" or \"new\" rows \nso there's no need to keep them around. You can use an out-of-line redo \nlog (a-la Oracle). Many other methods exist, too.\n\nThey all have advantages and disadvantages for different workloads. It's \nfar from trivial to mix multiple schemes within a single database, so \nmixing and matching schemes for different parts of your DB isn't \ngenerally practical.\n\n> 1) When a raw UPDATE is performed, store all \"new raw versions\" either\n> in separate temporary table space\n> or in a reserved space at the end of each table (can be allocated\n> dynamically) etc\n\nOK, so you want a redo log a-la Oracle?\n\n> 2) Any SELECT queries within the same session will be again accessing\n> the new version of the row\n> 3) Any SELECT queries from other users will still be accessing the old\n> version\n\n... and incurring horrible random I/O penalties if the redo log doesn't \nfit in RAM. Again, a-la Oracle.\n\nEven read-only transactions have to hit the undo log if there's an \nupdate in progress, because rows they need may have been moved out to \nthe undo log as they're updated in the main table storage.\n\n[snip description]\n\n> I understand that my suggestion seems to be too simplified and also that\n> there are many implementation details and difficulties that I am not\n> aware.\n\nIt sounds like you're describing Oracle-style MVCC, using redo logs.\n\nhttp://blogs.sybase.com/database/2009/04/mvcc-dispelling-some-oracle-fudunderstanding-the-cost/\n\nhttp://en.wikipedia.org/wiki/Multiversion_concurrency_control\n\nOracle's MVCC approach has its own costs. Like Pg's, those costs \nincrease with update/delete frequency. Instead of table bloat, Oracle \nsuffers from redo log growth (or redo log size management issues). \nInstead of increased table scan costs from dead rows, Oracle suffers \nfrom random I/O costs as it looks up the out-of-line redo log for old \nrows. Instead of long-running writer transactions causing table bloat, \nOracle can have problems with long-running reader transactions aborting \nwhen the redo log runs out of space.\n\nPersonally, I don't know enough to know which is \"better\". I suspect \nthey're just different, with different trade-offs. 
If redo logs allow \nyou to do without write-ahead logging, that'd be interesting - but \nthen, the WAL is useful for all sorts of replication options, and the \nuse of linear WALs means that write ordering in the tables doesn't need \nto be as strict, which has performance advantages.\n\n--\nCraig Ringer\n", "msg_date": "Sat, 13 Nov 2010 13:53:27 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MVCC performance issue" }, { "msg_contents": "In reading what you are describing, don't you think PG 9 goes a long way to\nhelping you out?\n\nOn Sat, Nov 13, 2010 at 12:53 AM, Craig Ringer\n<[email protected]>wrote:\n\n> On 11/12/2010 02:25 AM, Kyriacos Kyriacou wrote:\n>\n> The\n>> result is to have huge fragmentation on table space, unnecessary updates\n>> in all affected indexes, unnecessary costly I/O operations, poor\n>> performance on SELECT that retrieves big record sets (i.e. reports etc)\n>> and slower updates.\n>>\n>\n> Yep. It's all about trade-offs. For some workloads the in-table MVCC\n> storage setup works pretty darn poorly, but for most it seems to work quite\n> well.\n>\n> There are various other methods of implementing relational storage with\n> ACID properties. You can exclude all other transactions while making a\n> change to a table, ensuring that nobody else can see \"old\" or \"new\" rows so\n> there's no need to keep them around. You can use an out-of-line redo log\n> (a-la Oracle). Many other methods exist, too.\n>\n> They all have advantages and disadvantages for different workloads. It's\n> far from trivial to mix multiple schemes within a single database, so mixing\n> and matching schemes for different parts of your DB isn't generally\n> practical.\n>\n>\n> 1) When a raw UPDATE is performed, store all \"new raw versions\" either\n>> in separate temporary table space\n>> or in a reserved space at the end of each table (can be allocated\n>> dynamically) etc\n>>\n>\n> OK, so you want a redo log a-la Oracle?\n>\n>\n> 2) Any SELECT queries within the same session will be again accessing\n>> the new version of the row\n>> 3) Any SELECT queries from other users will still be accessing the old\n>> version\n>>\n>\n> ... and incurring horrible random I/O penalties if the redo log doesn't fit\n> in RAM. Again, a-la Oracle.\n>\n> Even read-only transactions have to hit the undo log if there's an update\n> in progress, because rows they need may have been moved out to the undo log\n> as they're updated in the main table storage.\n>\n> [snip description]\n>\n>\n> I understand that my suggestion seems to be too simplified and also that\n>> there are many implementation details and difficulties that I am not\n>> aware.\n>>\n>\n> It sounds like you're describing Oracle-style MVCC, using redo logs.\n>\n>\n> http://blogs.sybase.com/database/2009/04/mvcc-dispelling-some-oracle-fudunderstanding-the-cost/\n>\n> http://en.wikipedia.org/wiki/Multiversion_concurrency_control\n>\n> Oracle's MVCC approach has its own costs. Like Pg's, those costs increase\n> with update/delete frequency. Instead of table bloat, Oracle suffers from\n> redo log growth (or redo log size management issues). Instead of increased\n> table scan costs from dead rows, Oracle suffers from random I/O costs as it\n> looks up the out-of-line redo log for old rows. 
Instead of long-running\n> writer transactions causing table bloat, Oracle can have problems with\n> long-running reader transactions aborting when the redo log runs out of\n> space.\n>\n> Personally, I don't know enough to know which is \"better\". I suspect\n> they're just different, with different trade-offs. If redo logs allow you\n> to do without write-ahead logging, that'd be interesting - but then, the\n> WAL is useful for all sorts of replication options, and the use of linear\n> WALs means that write ordering in the tables doesn't need to be as strict,\n> which has performance advantages.\n>\n> --\n> Craig Ringer\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Sat, 13 Nov 2010 02:05:45 -0500", "msg_from": "Rich <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MVCC performance issue" },
{ "msg_contents": "Craig Ringer wrote:\n> It sounds like you're describing Oracle-style MVCC, using redo logs.\n>\n> http://blogs.sybase.com/database/2009/04/mvcc-dispelling-some-oracle-fudunderstanding-the-cost/\n>\n> \nCraig, this is an interesting blog page, making some valid points about \nthe multiversioning vs. locking. The ATM example, however, is \nunrealistic and couldn't have happened the way the author describes. \nOracle has the same write consistency mechanism as Postgres and it \nrestarts the transaction if the transaction blocks were updated while \nthe transaction was waiting. In other words, the wife's transaction \nwould have been restarted before committing, the transaction would get \nthe balance accurately and there wouldn't be a loss of $250.\nSuch an example is naive, sheer FUD. If that was the case, no bank in \nthe whole wide world would be using Oracle, and many of them do, I dare \nsay many more are using Oracle than Sybase. That means that they're not \nlosing money if 2 spouses decide to withdraw money from the joint \naccount simultaneously. Given the number of people in the world, I \nimagine that to be a rather common and ordinary situation for the banks. \nThe example is plain silly. Here is what I have in mind as \"write \nconsistency\":\nhttp://www.postgresql.org/docs/9.0/static/transaction-iso.html#XACT-READ-COMMITTED:\n\" If the first updater rolls back, then its effects are negated and the \nsecond updater can proceed with updating the originally found row. If \nthe first updater commits, the second updater will ignore the row if the \nfirst updater deleted it, otherwise it will attempt to apply its \noperation to the updated version of the row. The search condition of the \ncommand (the WHERE clause) is re-evaluated to see if the updated version \nof the row still matches the search condition.\"\n\nEssentially the same behavior is described here, for Oracle:\nhttp://tkyte.blogspot.com/2005/08/something-different-part-i-of-iii.html\n\"Obviously, we cannot modify an old version of a block--when we go to \nmodify a row, we must modify the current version of that block. \nAdditionally, Oracle cannot just simply skip this row, as that would be \nan inconsistent read and unpredictable. What we'll discover is that in \nsuch cases, Oracle will restart the write modification from scratch.\"\n\nPostgres re-evaluates the where condition, Oracle restarts the entire \ntransaction, but neither MVCC mechanism would allow for the silly ATM \nexample described in the blog. 
Both databases would have noticed change \nin the balance, both databases would have ended with the proper balance \nin the account.\n\n-- \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com \n\n", "msg_date": "Sat, 13 Nov 2010 13:38:50 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MVCC performance issue" }, { "msg_contents": "On 11/14/2010 02:38 AM, Mladen Gogala wrote:\n> Craig Ringer wrote:\n>> It sounds like you're describing Oracle-style MVCC, using redo logs.\n>>\n>> http://blogs.sybase.com/database/2009/04/mvcc-dispelling-some-oracle-fudunderstanding-the-cost/\n>>\n\n> Craig, this is an interesting blog page, making some valid points about\n> the multiversioning vs. locking. The ATM example, however, is\n> unrealistic and couldn't have happened the way the author describes.\n\nYep, you're quite right. I was using it for its explanation of some of \nthe costs of MVCC as Oracle implements it, because it's surprisingly \nhard to find explanations/analysis of that with some quick Google \nsearching. I hadn't read beyond that part.\n\nI'd be really interested in some *good* writeups of the costs/benefits \nof the various common mvcc and locking based rdbms implementations.\n\nThanks for posting a breakdown of the issues with that article, lest \nothers be mislead. Appreciated.\n\n--\nCraig Ringer\n", "msg_date": "Sun, 14 Nov 2010 08:10:56 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MVCC performance issue" }, { "msg_contents": "On Sat, Nov 13, 2010 at 07:53, Craig Ringer <[email protected]> wrote:\n> Oracle's MVCC approach has its own costs. Like Pg's, those costs increase\n> with update/delete frequency. Instead of table bloat, Oracle suffers from\n> redo log growth (or redo log size management issues). Instead of increased\n> table scan costs from dead rows, Oracle suffers from random I/O costs as it\n> looks up the out-of-line redo log for old rows. Instead of long-running\n> writer transactions causing table bloat, Oracle can have problems with\n> long-running reader transactions aborting when the redo log runs out of\n> space.\n\nAnother advantage of Oracle's approach seems that they need much less\ntuple-level overhead. IMO the 23-byte tuple overhead is a much bigger\ndrawback in Postgres than table fragmentation.\n\nRegards,\nMarti\n", "msg_date": "Sun, 14 Nov 2010 10:30:37 +0200", "msg_from": "Marti Raudsepp <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MVCC performance issue" }, { "msg_contents": "On Thu, Nov 11, 2010 at 20:25, Kyriacos Kyriacou\n<[email protected]> wrote:\n> By definition of MVCC, when an UPDATE is performed, PostgreSQL creates a\n> new copy of the row in a new location.\n\n> result is to have huge fragmentation on table space, unnecessary updates\n> in all affected indexes, unnecessary costly I/O operations, poor\n> performance on SELECT that retrieves big record sets (i.e. 
reports etc)\n> and slower updates.\n\nHave you tried reducing the table fillfactor and seeing if HOT update\nratio increases?\n\nPostgreSQL 8.3 introduced HOT updates as kind of a middle ground -- if\nthe update doesn't affect indexed columns and there's enough space in\nthe same page that is being updated, then the new version will be\nwritten in the same page and indexes don't need to be touched at all.\n\nRegards,\nMarti\n", "msg_date": "Sun, 14 Nov 2010 10:46:14 +0200", "msg_from": "Marti Raudsepp <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MVCC performance issue" }, { "msg_contents": "Marti Raudsepp wrote:\n>\n>\n> Another advantage of Oracle's approach seems that they need much less\n> tuple-level overhead. IMO the 23-byte tuple overhead is a much bigger\n> drawback in Postgres than table fragmentation.\n>\n> Regards,\n> Marti\n>\n> \nOracle, however, does have a problem with \"ORA-1555 Snapshot too old\", \nprecisely because of their implementation of MVCC. In other words, if \nyour query is running long and Oracle is not able to reconstruct the old \nrows from the UNDO segments, you're out of luck and your query will die. \nThe greatest burden of the Postgres implementation is the fact that \nthere is no row id, so that the table header and the indexes need to be \nupdated much more frequently than is the case with Oracle.\n\n-- \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com \n\n", "msg_date": "Sun, 14 Nov 2010 19:32:54 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MVCC performance issue" }, { "msg_contents": "On Thu, Nov 11, 2010 at 1:25 PM, Kyriacos Kyriacou\n<[email protected]> wrote:\n> This is my first post in this mailing list and I would like to raise an\n> issue that in my opinion is causing performance issues of PostgreSQL\n> especially in a transaction processing environment. In my company we are\n> using PostgreSQL for the last 8 year for our in-house developed billing\n> system (telecom). The last few months we started considering moving to\n> another RDBMS just because of this issue.\n>\n> After all these years, I believe that the biggest improvement that could\n> be done and will boost overall performance especially for enterprise\n> application will be to improve Multiversion Concurrency Control (MVCC)\n> mechanism. In theory this seems to be improving performance for SELECT\n> queries but on tables with very intensive and frequent updates, even\n> that is not fully true because of the fragmentation of data caused by\n> MVCC. I saw cases were a SELECT COUNT(*) on an empty (!!!) table (used\n> as a buffer) took more than 40min to return a result! VACUUM is not a\n> solution in my opinion even though after the introduction of autovacuum\n> daemon situation got much better.\n\nThere are probably a number of ways that the behavior you're seeing\ncould be improved without switching databases or rewriting PostgreSQL,\nbut you haven't provided enough information here for anyone to help\nyou in a meaningful way - such as the version of PostgreSQL you're\nrunning. 
One obvious suggestion would be to empty your table using\nTRUNCATE rather than DELETE, which will avoid the particular problem\nyou're describing here.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Mon, 29 Nov 2010 19:21:59 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MVCC performance issue" } ]
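To make the advice in this thread concrete, here is a minimal SQL sketch. The table names (customer_balance, buffer_table) and the fillfactor value are only illustrative placeholders, not objects from the original poster's schema; the idea is to leave free space in each heap page so that updates to non-indexed columns can stay HOT (PostgreSQL 8.3 or later), to watch the HOT ratio in the statistics views, and to empty a buffer table with TRUNCATE rather than DELETE:

    -- leave room in each page so updates can be written into the same page (HOT)
    ALTER TABLE customer_balance SET (fillfactor = 70);

    -- check how many of the updates were HOT updates
    SELECT relname, n_tup_upd, n_tup_hot_upd
      FROM pg_stat_user_tables
     WHERE relname = 'customer_balance';

    -- emptying a buffer table: TRUNCATE releases the space immediately,
    -- while DELETE leaves dead rows behind for VACUUM to clean up
    TRUNCATE TABLE buffer_table;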
[ { "msg_contents": "dear pgers --\n\nconsider the following toy example (using pg 8.4.3) :\n\ncreate temporary table foo (\n ts timestamp not null,\n id integer not null,\n val double precision not null,\n primary key (ts, id)\n);\n\ni might want to return the vals, minus the averages at each timestamp. the obvious self-join results in a sequential scan over foo -- we aggregate the average val for EVERY timestamp, then join against the timestamps we want.\n\nus_quotedb=# explain select ts, id, val - aval from foo join (select ts, avg(val) as aval from foo group by ts) as a using (ts) where ts > '2010-11-11' and ts < '2010-11-13'; \n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------- \n Hash Join (cost=49.06..54.41 rows=8 width=28) \n Hash Cond: (pg_temp_2.foo.ts = pg_temp_2.foo.ts) \n -> HashAggregate (cost=34.45..36.95 rows=200 width=16) \n -> Seq Scan on foo (cost=0.00..26.30 rows=1630 width=16) \n -> Hash (cost=14.51..14.51 rows=8 width=20) \n -> Bitmap Heap Scan on foo (cost=4.33..14.51 rows=8 width=20) \n Recheck Cond: ((ts > '2010-11-11 00:00:00'::timestamp without time zone) AND (ts < '2010-11-13 00:00:00'::timestamp without time zone)) \n -> Bitmap Index Scan on foo_pkey (cost=0.00..4.33 rows=8 width=0) \n Index Cond: ((ts > '2010-11-11 00:00:00'::timestamp without time zone) AND (ts < '2010-11-13 00:00:00'::timestamp without time zone)) \n\non the other hand, if i specify \"which\" timestamp i'm restricting, it appears to do the right thing:\n\nus_quotedb=# explain select ts, id, val - aval from foo join (select ts, avg(val) as aval from foo group by ts) as a using (ts) where a.ts > '2010-11-11' and a.ts < '2010-11-13'; \n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------- \n Nested Loop (cost=18.86..29.14 rows=8 width=28) \n -> HashAggregate (cost=14.55..14.56 rows=1 width=16) \n -> Bitmap Heap Scan on foo (cost=4.33..14.51 rows=8 width=16) \n Recheck Cond: ((ts > '2010-11-11 00:00:00'::timestamp without time zone) AND (ts < '2010-11-13 00:00:00'::timestamp without time zone)) \n -> Bitmap Index Scan on foo_pkey (cost=0.00..4.33 rows=8 width=0) \n Index Cond: ((ts > '2010-11-11 00:00:00'::timestamp without time zone) AND (ts < '2010-11-13 00:00:00'::timestamp without time zone)) \n -> Bitmap Heap Scan on foo (cost=4.31..14.45 rows=8 width=20) \n Recheck Cond: (pg_temp_2.foo.ts = pg_temp_2.foo.ts) \n -> Bitmap Index Scan on foo_pkey (cost=0.00..4.31 rows=8 width=0) \n Index Cond: (pg_temp_2.foo.ts = pg_temp_2.foo.ts) \n\ni find this behavior curious. my understanding is that both queries are equivalent, and i would expect that the query planner would be able to choose either of those plans. this is important -- with the real data i'm working with, the table is very large, and the sequential scan is a killer. \n\nare these queries equivalent, or am i mistaken? if the planner distinguishes between these plans, how do i ensure that where clause restrictions propagate (correctly) to subqueries?\n\nbest regards, ben\n\n", "msg_date": "Thu, 11 Nov 2010 13:52:57 -0800", "msg_from": "Ben <[email protected]>", "msg_from_op": true, "msg_subject": "equivalent queries lead to different query plans for self-joins with\n\tgroup by?" 
}, { "msg_contents": "Ben <[email protected]> writes:\n> us_quotedb=# explain select ts, id, val - aval from foo join (select ts, avg(val) as aval from foo group by ts) as a using (ts) where ts > '2010-11-11' and ts < '2010-11-13'; \n> on the other hand, if i specify \"which\" timestamp i'm restricting, it appears to do the right thing:\n\n> us_quotedb=# explain select ts, id, val - aval from foo join (select ts, avg(val) as aval from foo group by ts) as a using (ts) where a.ts > '2010-11-11' and a.ts < '2010-11-13'; \nWell, arguably it's not doing the right thing either way --- you'd sort\nof like the inequalities to get pushed down into both of the join\ninputs, not just one of them. PG doesn't make that deduction though;\nit can make such inferences for equalities, but inequalities are not\noptimized as much.\n\nThe case where you have \"using (ts) where ts > ...\" is, I believe,\ninterpreted as though you'd specified the left-hand join input, ie\n\"using (ts) where foo.ts > ...\". So the range condition is pushed\ninto the foo scan, which doesn't help the avg() subquery. When you\nspecify the restriction against \"a\", it's pushed into the subquery\nwhere it's more useful.\n\nYou could try \"where foo.ts > ... and a.ts > ...\" but not sure if it's\nreally worth the trouble here, at least not if those rowcount estimates\nare anywhere near accurate. If you were doing the join across quite a\nlot of rows, having the constraints in both subqueries would be useful.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Nov 2010 17:37:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: equivalent queries lead to different query plans for self-joins\n\twith group by?" }, { "msg_contents": "appreciate the instant response.\n\n> Well, arguably it's not doing the right thing either way --- you'd sort\n> of like the inequalities to get pushed down into both of the join\n> inputs, not just one of them. PG doesn't make that deduction though;\n> it can make such inferences for equalities, but inequalities are not\n> optimized as much.\n\nin my work i have replaced the query with a sql function + window :\n\ncreate or replace function bar(timestamp, timestamp) returns setof foo\nlanguage 'sql' as $$\n select ts,\n id,\n val -\n (avg(val) over (partition by ts)) as val\n from foo\n where ts > $1\n and ts < $2\n$$;\n\ni was forced to use a sql function as opposed to a view because the query planner was unable to push down restrictions on ts inside the view subquery, which i've manually done in the function. indeed,\n\nexplain select ts, id, val - (avg(val) over (partition by ts)) as val from foo where ts > '2009-10-20' and ts < '2009-10-21';\n\nand\n\nexplain select * from (select ts, id, val - (avg(val) over (partition by ts)) as val from foo) as f where ts > '2009-10-20' and ts < '2009-10-21';\n\ngive different answers, despite being equivalent, but i understand it is hard to push things into subqueries in general. in this case it is only legal because we partition by ts.\n\nthanks again for the explanations!\n\nbest, ben\n\n", "msg_date": "Thu, 11 Nov 2010 15:56:31 -0800", "msg_from": "Ben <[email protected]>", "msg_from_op": true, "msg_subject": "Re: equivalent queries lead to different query plans for self-joins\n\twith group by?" } ]
[ { "msg_contents": "Hi,\n\nI have a question about the behavior of autovacuum. When I have a big\ntable A which is being processed by autovacuum, I also manually use\n(full) vacuum to clean another table B. Then I found that I always got\nsomething like “found 0 removable, 14283 nonremovable row”. However,\nif I stop the autovacuum functionality and use vacuum on that big\ntable A manually, I can clean table B (ex. found 22615 removable, 2049\nnonremovable row).\n\nIs this correct? Why do vacuum and autovacuum have different actions?\n\nPs. My postgreSQL is 8.4.\n", "msg_date": "Fri, 12 Nov 2010 16:01:24 +0800", "msg_from": "kuopo <[email protected]>", "msg_from_op": true, "msg_subject": "autovacuum blocks the operations of other manual vacuum" }, { "msg_contents": "Excerpts from kuopo's message of vie nov 12 05:01:24 -0300 2010:\n> Hi,\n> \n> I have a question about the behavior of autovacuum. When I have a big\n> table A which is being processed by autovacuum, I also manually use\n> (full) vacuum to clean another table B. Then I found that I always got\n> something like “found 0 removable, 14283 nonremovable row”. However,\n> if I stop the autovacuum functionality and use vacuum on that big\n> table A manually, I can clean table B (ex. found 22615 removable, 2049\n> nonremovable row).\n> \n> Is this correct? Why do vacuum and autovacuum have different actions?\n\nVacuum full does not assume that it can clean up tuples while other\ntransactions are running, and that includes the (non full, or \"lazy\")\nvacuum that autovacuum is running. Autovacuum only runs lazy vacuum;\nand that one is aware that other concurrent vacuums can be ignored.\n\nJust don't use vacuum full unless strictly necessary. It has other\ndrawbacks.\n\n-- \nÁlvaro Herrera <[email protected]>\nThe PostgreSQL Company - Command Prompt, Inc.\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Tue, 16 Nov 2010 12:26:44 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum blocks the operations of other manual vacuum" }, { "msg_contents": "Hi,\n\nThanks for your response. I've checked it again and found that the\nmain cause is the execution of ANALYZE. As I have mentioned, I have\ntwo tables: table A is a big one (around 10M~100M records) for log\ndata and table B is a small one (around 1k records) for keeping some\ncurrent status. There are a lot of update operations and some search\noperations on the table B. For the performance issue, I would like to\nkeep table B as compact as possible. According your suggestion, I try\nto invoke standard vacuum (not full) more frequently (e.g., once per\nmin).\n\nHowever, when I analyze the table A, the autovacuum or vacuum on the\ntable B cannot find any removable row version (the number of\nnonremoveable row versions and pages keeps increasing). After the\nanalysis finishes, the search operations on the table B is still\ninefficient. If I call full vacuum right now, then I can have quick\nresponse time of the search operations on the table B again.\n\nAny suggestions for this situation?\n\n\nOn Tue, Nov 16, 2010 at 11:26 PM, Alvaro Herrera\n<[email protected]> wrote:\n> Excerpts from kuopo's message of vie nov 12 05:01:24 -0300 2010:\n>> Hi,\n>>\n>> I have a question about the behavior of autovacuum. When I have a big\n>> table A which is being processed by autovacuum, I also manually use\n>> (full) vacuum to clean another table B. 
Then I found that I always got\n>> something like “found 0 removable, 14283 nonremovable row”. However,\n>> if I stop the autovacuum functionality and use vacuum on that big\n>> table A manually, I can clean table B (ex. found 22615 removable, 2049\n>> nonremovable row).\n>>\n>> Is this correct? Why do vacuum and autovacuum have different actions?\n>\n> Vacuum full does not assume that it can clean up tuples while other\n> transactions are running, and that includes the (non full, or \"lazy\")\n> vacuum that autovacuum is running.  Autovacuum only runs lazy vacuum;\n> and that one is aware that other concurrent vacuums can be ignored.\n>\n> Just don't use vacuum full unless strictly necessary.  It has other\n> drawbacks.\n>\n> --\n> Álvaro Herrera <[email protected]>\n> The PostgreSQL Company - Command Prompt, Inc.\n> PostgreSQL Replication, Consulting, Custom Development, 24x7 support\n>\n", "msg_date": "Thu, 18 Nov 2010 15:10:36 +0800", "msg_from": "kuopo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: autovacuum blocks the operations of other manual vacuum" }, { "msg_contents": "Excerpts from kuopo's message of jue nov 18 04:10:36 -0300 2010:\n> Hi,\n> \n> Thanks for your response. I've checked it again and found that the\n> main cause is the execution of ANALYZE. As I have mentioned, I have\n> two tables: table A is a big one (around 10M~100M records) for log\n> data and table B is a small one (around 1k records) for keeping some\n> current status. There are a lot of update operations and some search\n> operations on the table B. For the performance issue, I would like to\n> keep table B as compact as possible. According your suggestion, I try\n> to invoke standard vacuum (not full) more frequently (e.g., once per\n> min).\n> \n> However, when I analyze the table A, the autovacuum or vacuum on the\n> table B cannot find any removable row version (the number of\n> nonremoveable row versions and pages keeps increasing). After the\n> analysis finishes, the search operations on the table B is still\n> inefficient. If I call full vacuum right now, then I can have quick\n> response time of the search operations on the table B again.\n\nHmm, I don't think we can optimize the analyze-only operation the same\nway we optimize vacuum (i.e. allow vacuum to proceed while it's in\nprogress). Normally analyze shouldn't take all that long anyway -- why\nis it that slow? Are you calling it in a transaction that also does\nother stuff? Are you analyzing more than one table in a single\ntransaction, perhaps even the whole database?\n\nPerhaps you could speed it up by lowering vacuum_cost_delay, if it's set\nto a nonzero value.\n\n-- \nÁlvaro Herrera <[email protected]>\nThe PostgreSQL Company - Command Prompt, Inc.\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Fri, 19 Nov 2010 22:49:25 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum blocks the operations of other manual vacuum" }, { "msg_contents": "> Excerpts from kuopo's message of jue nov 18 04:10:36 -0300 2010:\n>> However, when I analyze the table A, the autovacuum or vacuum on the\n>> table B cannot find any removable row version (the number of\n>> nonremoveable row versions and pages keeps increasing). After the\n>> analysis finishes, the search operations on the table B is still\n>> inefficient. 
If I call full vacuum right now, then I can have quick\n>> response time of the search operations on the table B again.\n\nHi, I don't know how to fix the long VACUUM/ANALYZE, but have you tried to\nminimize the growth using HOT?\n\nHOT means that if you update only columns that are not indexed, and if the\nupdate can fit into the same page (into an update chain), this would not\ncreate a dead row.\n\nAre there any indexes on the small table? How large is it? You've\nmentioned there are about 2049 rows - that might be just a few pages so\nthe indexes would not be very efficient anyway.\n\nTry to remove the indexes, and maybe create the table with a smaller\nfillfactor (so that there is more space for the updates).\n\nThat should be much more efficient and the table should not grow.\n\nYou can see if HOT works through pg_stat_all_tables view (columns\nn_tup_upd and n_tup_hot_upd).\n\nregards\nTomas\n\n", "msg_date": "Sat, 20 Nov 2010 05:43:18 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: autovacuum blocks the operations of other manual\n vacuum" }, { "msg_contents": "In my experiment, I need about 1~3 min to finish the analyze operation\non the big table (which depends on the value of vacuum_cost_delay). I\nam not surprised because this table is a really big one (now, it has\nover 200M records).\n\nHowever, the most of my concerns is the behavior of analyze/vacuum.\nYou mentioned that the analyze-only operation cannot be optimized as\nthe same way on optimizing vacuum. Does that mean the analyze\noperation on a table would unavoidably affect the vacuum proceeded on\nanother one? If this is a normal reaction for an analyze operation,\nmaybe I should try to lower vacuum_cost_delay or use more powerful\nhardware to minimize the interfered period. So, the pages for the\nsmall table would not increase quickly.\n\nDo you have any suggestion? Thanks!!\n\n\nOn Sat, Nov 20, 2010 at 9:49 AM, Alvaro Herrera\n<[email protected]> wrote:\n> Excerpts from kuopo's message of jue nov 18 04:10:36 -0300 2010:\n>> Hi,\n>>\n>> Thanks for your response. I've checked it again and found that the\n>> main cause is the execution of ANALYZE. As I have mentioned, I have\n>> two tables: table A is a big one (around 10M~100M records) for log\n>> data and table B is a small one (around 1k records) for keeping some\n>> current status. There are a lot of update operations and some search\n>> operations on the table B. For the performance issue, I would like to\n>> keep table B as compact as possible. According your suggestion, I try\n>> to invoke standard vacuum (not full) more frequently (e.g., once per\n>> min).\n>>\n>> However, when I analyze the table A, the autovacuum or vacuum on the\n>> table B cannot find any removable row version (the number of\n>> nonremoveable row versions and pages keeps increasing). After the\n>> analysis finishes, the search operations on the table B is still\n>> inefficient. If I call full vacuum right now, then I can have quick\n>> response time of the search operations on the table B again.\n>\n> Hmm, I don't think we can optimize the analyze-only operation the same\n> way we optimize vacuum (i.e. allow vacuum to proceed while it's in\n> progress).  Normally analyze shouldn't take all that long anyway -- why\n> is it that slow?  Are you calling it in a transaction that also does\n> other stuff?  
Are you analyzing more than one table in a single\n> transaction, perhaps even the whole database?\n>\n> Perhaps you could speed it up by lowering vacuum_cost_delay, if it's set\n> to a nonzero value.\n>\n> --\n> Álvaro Herrera <[email protected]>\n> The PostgreSQL Company - Command Prompt, Inc.\n> PostgreSQL Replication, Consulting, Custom Development, 24x7 support\n>\n", "msg_date": "Sun, 21 Nov 2010 22:15:52 +0800", "msg_from": "kuopo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: autovacuum blocks the operations of other manual vacuum" }, { "msg_contents": "Thanks for your information. I am using postgresql 8.4 and this\nversion should have already supported HOT. The frequently updated\ncolumns are not indexed columns. So, the frequent updates should not\ncreate many dead records. I also did a small test. If I don't execute\nvacuum, the number of pages of the small table does not increase.\n\nHowever, analyzing the big table still bothers me. According current\nresults, if the analyze operation is triggered, vacuum or HOT would\nnot function as I expect.\n\n\nOn Sat, Nov 20, 2010 at 12:43 PM, <[email protected]> wrote:\n>> Excerpts from kuopo's message of jue nov 18 04:10:36 -0300 2010:\n>>> However, when I analyze the table A, the autovacuum or vacuum on the\n>>> table B cannot find any removable row version (the number of\n>>> nonremoveable row versions and pages keeps increasing). After the\n>>> analysis finishes, the search operations on the table B is still\n>>> inefficient. If I call full vacuum right now, then I can have quick\n>>> response time of the search operations on the table B again.\n>\n> Hi, I don't know how to fix the long VACUUM/ANALYZE, but have you tried to\n> minimize the growth using HOT?\n>\n> HOT means that if you update only columns that are not indexed, and if the\n> update can fit into the same page (into an update chain), this would not\n> create a dead row.\n>\n> Are there any indexes on the small table? How large is it? You've\n> mentioned there are about 2049 rows - that might be just a few pages so\n> the indexes would not be very efficient anyway.\n>\n> Try to remove the indexes, and maybe create the table with a smaller\n> fillfactor (so that there is more space for the updates).\n>\n> That should be much more efficient and the table should not grow.\n>\n> You can see if HOT works through pg_stat_all_tables view (columns\n> n_tup_upd and n_tup_hot_upd).\n>\n> regards\n> Tomas\n>\n>\n", "msg_date": "Sun, 21 Nov 2010 22:55:44 +0800", "msg_from": "kuopo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: autovacuum blocks the operations of other manual vacuum" }, { "msg_contents": "Excerpts from kuopo's message of dom nov 21 11:15:52 -0300 2010:\n> In my experiment, I need about 1~3 min to finish the analyze operation\n> on the big table (which depends on the value of vacuum_cost_delay). I\n> am not surprised because this table is a really big one (now, it has\n> over 200M records).\n\nOkay. You may want to consider lowering the statistics size for all the\ncolumn in that table; that would reduce analyze time, at the cost of\npossibly worsening the plans for that table, depending on how irregular\nthe distribution is. See ALTER TABLE / SET STATISTICS in the\ndocumentation, and the default_statistics_target parameter in\npostgresql.conf.\n\n> However, the most of my concerns is the behavior of analyze/vacuum.\n> You mentioned that the analyze-only operation cannot be optimized as\n> the same way on optimizing vacuum. 
Does that mean the analyze\n> operation on a table would unavoidably affect the vacuum proceeded on\n> another one?\n\nThat's correct. I think you can run VACUUM ANALYZE, and it would do\nboth things at once; AFAIK this is also optimized like VACUUM is, but I\nadmit I'm not 100% sure (and I can't check right now).\n\n> If this is a normal reaction for an analyze operation,\n> maybe I should try to lower vacuum_cost_delay or use more powerful\n> hardware to minimize the interfered period. So, the pages for the\n> small table would not increase quickly.\n\nI think it would make sense to have as low a cost_delay as possible for\nthis ANALYZE. (Note you can change it locally with a SET command; no\nneed to touch postgresql.conf. So you can change it when you analyze\njust this large table).\n\n-- \nÁlvaro Herrera <[email protected]>\nThe PostgreSQL Company - Command Prompt, Inc.\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Sun, 21 Nov 2010 13:25:37 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum blocks the operations of other manual vacuum" } ]
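Alvaro's suggestions from this thread can be combined into a short maintenance snippet; big_log_table and some_column are placeholders for the large table being analyzed, and the statistics target of 10 is only an example value to be tuned per column:

    -- smaller per-column statistics sample => faster ANALYZE,
    -- at the risk of somewhat worse plans on that table
    ALTER TABLE big_log_table ALTER COLUMN some_column SET STATISTICS 10;

    -- run the ANALYZE of the big table without cost-based delay;
    -- SET only changes the value for this session, postgresql.conf is untouched
    SET vacuum_cost_delay = 0;
    ANALYZE big_log_table;
    RESET vacuum_cost_delay;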
[ { "msg_contents": "This is my first post in this mailing list and I would like to raise an\nissue that in my opinion is causing performance issues of PostgreSQL\nespecially in a transaction processing environment. In my company we are\nusing PostgreSQL for the last 8 year for our in-house developed billing\nsystem (telecom). The last few months we started considering moving to\nanother RDBMS just because of this issue. \n\nAfter all these years, I believe that the biggest improvement that could\nbe done and will boost overall performance especially for enterprise\napplication will be to improve Multiversion Concurrency Control (MVCC)\nmechanism. In theory this seems to be improving performance for SELECT\nqueries but on tables with very intensive and frequent updates, even\nthat is not fully true because of the fragmentation of data caused by\nMVCC. I saw cases were a SELECT COUNT(*) on an empty (!!!) table (used\nas a buffer) took more than 40min to return a result! VACUUM is not a\nsolution in my opinion even though after the introduction of autovacuum\ndaemon situation got much better.\n\nPROBLEM DECRIPTION\n------------------\nBy definition of MVCC, when an UPDATE is performed, PostgreSQL creates a\nnew copy of the row in a new location. Any SELECT queries within the\nsame session are accessing the new version of the raw and all other\nqueries from other users are still accessing the old version. When\ntransaction is COMMIT PostgreSQL makes the a new version of the row as\nthe \"active\" row and expires the old row that remains \"dead\" and then is\nup to VACUUM procedure to recover the \"dead\" rows space and make it\navailable to the database engine. In case that transaction is ROLLBACK\nthen space reserved for the new version of the row is released. The\nresult is to have huge fragmentation on table space, unnecessary updates\nin all affected indexes, unnecessary costly I/O operations, poor\nperformance on SELECT that retrieves big record sets (i.e. reports etc)\nand slower updates. As an example, consider updating the \"live\" balance\nof a customer for each phone call where the entire customer record has\nto be duplicated again and again upon each call just for modifying a\nnumeric value! \n\nSUGGESTION\n--------------\n1) When a raw UPDATE is performed, store all \"new raw versions\" either\nin separate temporary table space \n or in a reserved space at the end of each table (can be allocated\ndynamically) etc \n2) Any SELECT queries within the same session will be again accessing\nthe new version of the row\n3) Any SELECT queries from other users will still be accessing the old\nversion\n4) When UPDATE transaction is ROLLBACK just release the space used in\nnew temporary location \n5) When UPDATE transaction is COMMIT then try to LOCK the old version\nand overwrite it at the same physical location (NO FRAGMENTATION).\n6) Similar mechanism can be applied on INSERTS and DELETES \n7) In case that transaction was COMMIT, the temporary location can be\neither released or archived/cleaned on a pre-scheduled basis. This will\npossibly allow the introduction of a TRANSACTION LOG backup mechanism as\na next step. \n8) After that VACUUM will have to deal only with deletions!!! \n\n\nI understand that my suggestion seems to be too simplified and also that\nthere are many implementation details and difficulties that I am not\naware. \n\nI strongly believe that the outcome of the discussion regarding this\nissue will be helpful. 
\n\nBest Regards, \n\nKyriacos Kyriacou\nSenior Developer/DBA\n\n\n", "msg_date": "Fri, 12 Nov 2010 15:47:30 +0200", "msg_from": "\"Kyriacos Kyriacou\" <[email protected]>", "msg_from_op": true, "msg_subject": "MVCC performance issue" }, { "msg_contents": "On Fri, Nov 12, 2010 at 03:47:30PM +0200, Kyriacos Kyriacou wrote:\n> This is my first post in this mailing list and I would like to raise an\n> issue that in my opinion is causing performance issues of PostgreSQL\n> especially in a transaction processing environment. In my company we are\n> using PostgreSQL for the last 8 year for our in-house developed billing\n> system (telecom). The last few months we started considering moving to\n> another RDBMS just because of this issue. \n> \n> After all these years, I believe that the biggest improvement that could\n> be done and will boost overall performance especially for enterprise\n> application will be to improve Multiversion Concurrency Control (MVCC)\n> mechanism. In theory this seems to be improving performance for SELECT\n> queries but on tables with very intensive and frequent updates, even\n> that is not fully true because of the fragmentation of data caused by\n> MVCC. I saw cases were a SELECT COUNT(*) on an empty (!!!) table (used\n> as a buffer) took more than 40min to return a result! VACUUM is not a\n> solution in my opinion even though after the introduction of autovacuum\n> daemon situation got much better.\n> \n> PROBLEM DECRIPTION\n> ------------------\n> By definition of MVCC, when an UPDATE is performed, PostgreSQL creates a\n> new copy of the row in a new location. Any SELECT queries within the\n> same session are accessing the new version of the raw and all other\n> queries from other users are still accessing the old version. When\n> transaction is COMMIT PostgreSQL makes the a new version of the row as\n> the \"active\" row and expires the old row that remains \"dead\" and then is\n> up to VACUUM procedure to recover the \"dead\" rows space and make it\n> available to the database engine. In case that transaction is ROLLBACK\n> then space reserved for the new version of the row is released. The\n> result is to have huge fragmentation on table space, unnecessary updates\n> in all affected indexes, unnecessary costly I/O operations, poor\n> performance on SELECT that retrieves big record sets (i.e. reports etc)\n> and slower updates. As an example, consider updating the \"live\" balance\n> of a customer for each phone call where the entire customer record has\n> to be duplicated again and again upon each call just for modifying a\n> numeric value! \n> \n> SUGGESTION\n> --------------\n> 1) When a raw UPDATE is performed, store all \"new raw versions\" either\n> in separate temporary table space \n> or in a reserved space at the end of each table (can be allocated\n> dynamically) etc \n> 2) Any SELECT queries within the same session will be again accessing\n> the new version of the row\n> 3) Any SELECT queries from other users will still be accessing the old\n> version\n> 4) When UPDATE transaction is ROLLBACK just release the space used in\n> new temporary location \n> 5) When UPDATE transaction is COMMIT then try to LOCK the old version\n> and overwrite it at the same physical location (NO FRAGMENTATION).\n> 6) Similar mechanism can be applied on INSERTS and DELETES \n> 7) In case that transaction was COMMIT, the temporary location can be\n> either released or archived/cleaned on a pre-scheduled basis. 
This will\n> possibly allow the introduction of a TRANSACTION LOG backup mechanism as\n> a next step. \n> 8) After that VACUUM will have to deal only with deletions!!! \n> \n> \n> I understand that my suggestion seems to be too simplified and also that\n> there are many implementation details and difficulties that I am not\n> aware. \n> \n> I strongly believe that the outcome of the discussion regarding this\n> issue will be helpful. \n> \n> Best Regards, \n> \n> Kyriacos Kyriacou\n> Senior Developer/DBA\n> \n\nI cannot speak to your suggestion, but it sounds like you are not\nvacuuming enough and a lot of the bloat/randomization would be helped\nby making use of HOT updates in which the updates are all in the same\npage and are reclaimed almost immediately.\n\nRegards,\nKen\n", "msg_date": "Fri, 12 Nov 2010 07:52:35 -0600", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MVCC performance issue" }, { "msg_contents": "On 12 November 2010 13:47, Kyriacos Kyriacou <[email protected]>wrote:\n\n> This is my first post in this mailing list and I would like to raise an\n> issue that in my opinion is causing performance issues of PostgreSQL\n> especially in a transaction processing environment. In my company we are\n> using PostgreSQL for the last 8 year for our in-house developed billing\n> system (telecom). The last few months we started considering moving to\n> another RDBMS just because of this issue.\n>\n> After all these years, I believe that the biggest improvement that could\n> be done and will boost overall performance especially for enterprise\n> application will be to improve Multiversion Concurrency Control (MVCC)\n> mechanism. In theory this seems to be improving performance for SELECT\n> queries but on tables with very intensive and frequent updates, even\n> that is not fully true because of the fragmentation of data caused by\n> MVCC. I saw cases were a SELECT COUNT(*) on an empty (!!!) table (used\n> as a buffer) took more than 40min to return a result! VACUUM is not a\n> solution in my opinion even though after the introduction of autovacuum\n> daemon situation got much better.\n>\n> PROBLEM DECRIPTION\n> ------------------\n> By definition of MVCC, when an UPDATE is performed, PostgreSQL creates a\n> new copy of the row in a new location. Any SELECT queries within the\n> same session are accessing the new version of the raw and all other\n> queries from other users are still accessing the old version. When\n> transaction is COMMIT PostgreSQL makes the a new version of the row as\n> the \"active\" row and expires the old row that remains \"dead\" and then is\n> up to VACUUM procedure to recover the \"dead\" rows space and make it\n> available to the database engine. In case that transaction is ROLLBACK\n> then space reserved for the new version of the row is released. The\n> result is to have huge fragmentation on table space, unnecessary updates\n> in all affected indexes, unnecessary costly I/O operations, poor\n> performance on SELECT that retrieves big record sets (i.e. reports etc)\n> and slower updates. 
As an example, consider updating the \"live\" balance\n> of a customer for each phone call where the entire customer record has\n> to be duplicated again and again upon each call just for modifying a\n> numeric value!\n>\n> SUGGESTION\n> --------------\n> 1) When a raw UPDATE is performed, store all \"new raw versions\" either\n> in separate temporary table space\n> or in a reserved space at the end of each table (can be allocated\n> dynamically) etc\n> 2) Any SELECT queries within the same session will be again accessing\n> the new version of the row\n> 3) Any SELECT queries from other users will still be accessing the old\n> version\n> 4) When UPDATE transaction is ROLLBACK just release the space used in\n> new temporary location\n> 5) When UPDATE transaction is COMMIT then try to LOCK the old version\n> and overwrite it at the same physical location (NO FRAGMENTATION).\n> 6) Similar mechanism can be applied on INSERTS and DELETES\n> 7) In case that transaction was COMMIT, the temporary location can be\n> either released or archived/cleaned on a pre-scheduled basis. This will\n> possibly allow the introduction of a TRANSACTION LOG backup mechanism as\n> a next step.\n> 8) After that VACUUM will have to deal only with deletions!!!\n>\n>\n> I understand that my suggestion seems to be too simplified and also that\n> there are many implementation details and difficulties that I am not\n> aware.\n>\n> I strongly believe that the outcome of the discussion regarding this\n> issue will be helpful.\n>\n>\nWhich version of PostgreSQL are you basing this on?\n\n-- \nThom Brown\nTwitter: @darkixion\nIRC (freenode): dark_ixion\nRegistered Linux user: #516935\n\nOn 12 November 2010 13:47, Kyriacos Kyriacou <[email protected]> wrote:\n\nThis is my first post in this mailing list and I would like to raise an\nissue that in my opinion is causing performance issues of PostgreSQL\nespecially in a transaction processing environment. In my company we are\nusing PostgreSQL for the last 8 year for our in-house developed billing\nsystem (telecom). The last few months we started considering moving to\nanother RDBMS just because of this issue.\n\nAfter all these years, I believe that the biggest improvement that could\nbe done and will boost overall performance especially for enterprise\napplication will be to improve Multiversion Concurrency Control (MVCC)\nmechanism. In theory this seems to be improving performance for SELECT\nqueries but on tables with very intensive and frequent updates, even\nthat is not fully true because of the fragmentation of data caused by\nMVCC. I saw cases were a SELECT COUNT(*) on an empty (!!!) table (used\nas a buffer) took more than 40min to return a result! VACUUM is not a\nsolution in my opinion even though after the introduction of autovacuum\ndaemon situation got much better.\n\nPROBLEM DECRIPTION\n------------------\nBy definition of MVCC, when an UPDATE is performed, PostgreSQL creates a\nnew copy of the row in a new location. Any SELECT queries within the\nsame session are accessing the new version of the raw and all other\nqueries from other users are still accessing the old version. When\ntransaction is COMMIT PostgreSQL makes the a new version of the row as\nthe \"active\" row and expires the old row that remains \"dead\" and then is\nup to VACUUM procedure to recover the \"dead\" rows space and make it\navailable to the database engine. In case that transaction is ROLLBACK\nthen space reserved for the new version of the row is released. 
The\nresult is to have huge fragmentation on table space, unnecessary updates\nin all affected indexes, unnecessary costly I/O operations, poor\nperformance on SELECT that retrieves big record sets (i.e. reports etc)\nand slower updates. As an example, consider updating the \"live\" balance\nof a customer for each phone call where the entire customer record has\nto be duplicated again and again upon each call just for modifying a\nnumeric value!\n\nSUGGESTION\n--------------\n1) When a raw UPDATE is performed, store all \"new raw versions\" either\nin separate temporary table space\n   or in a reserved space at the end of each table (can be allocated\ndynamically) etc\n2) Any SELECT queries within the same session will be again accessing\nthe new version of the row\n3) Any SELECT queries from other users will still be accessing the old\nversion\n4) When UPDATE transaction is ROLLBACK just release the space used in\nnew temporary location\n5) When UPDATE transaction is COMMIT then try to LOCK the old version\nand overwrite it at the same physical location (NO FRAGMENTATION).\n6) Similar mechanism can be applied on INSERTS and DELETES\n7) In case that transaction was COMMIT, the temporary location can be\neither released or archived/cleaned on a pre-scheduled basis. This will\npossibly allow the introduction of a TRANSACTION LOG backup mechanism as\na next step.\n8) After that VACUUM will have to deal only with deletions!!!\n\n\nI understand that my suggestion seems to be too simplified and also that\nthere are many implementation details and difficulties that I am not\naware.\n\nI strongly believe that the outcome of the discussion regarding this\nissue will be helpful.\nWhich version of PostgreSQL are you basing this on?-- Thom BrownTwitter: @darkixionIRC (freenode): dark_ixionRegistered Linux user: #516935", "msg_date": "Fri, 12 Nov 2010 13:54:57 +0000", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MVCC performance issue" }, { "msg_contents": "On Fri, Nov 12, 2010 at 5:52 AM, Kenneth Marshall <[email protected]> wrote:\n>\n> I cannot speak to your suggestion, but it sounds like you are not\n> vacuuming enough and a lot of the bloat/randomization would be helped\n> by making use of HOT updates in which the updates are all in the same\n> page and are reclaimed almost immediately.\n>\n> Regards,\n> Ken\n\nIIRC, HOT only operates on non-indexed columns, so if you the tables\nare heavily indexed you won't get the full benefit of HOT. I could be\nwrong though.\n", "msg_date": "Fri, 12 Nov 2010 07:34:36 -0800", "msg_from": "bricklen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MVCC performance issue" }, { "msg_contents": "On Fri, Nov 12, 2010 at 07:34:36AM -0800, bricklen wrote:\n> On Fri, Nov 12, 2010 at 5:52 AM, Kenneth Marshall <[email protected]> wrote:\n> >\n> > I cannot speak to your suggestion, but it sounds like you are not\n> > vacuuming enough and a lot of the bloat/randomization would be helped\n> > by making use of HOT updates in which the updates are all in the same\n> > page and are reclaimed almost immediately.\n> >\n> > Regards,\n> > Ken\n> \n> IIRC, HOT only operates on non-indexed columns, so if you the tables\n> are heavily indexed you won't get the full benefit of HOT. 
I could be\n> wrong though.\n> \n\nThat is true, but if they are truly having as big a bloat problem\nas the message indicated, it would be worth designing the schema\nto leverage HOT for the very frequent updates.\n\nCheers,\nKen\n", "msg_date": "Fri, 12 Nov 2010 09:37:08 -0600", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MVCC performance issue" },
{ "msg_contents": "12.11.10 15:47, Kyriacos Kyriacou wrote:\n> PROBLEM DECRIPTION\n> ------------------\n> As an example, consider updating the \"live\" balance\n> of a customer for each phone call where the entire customer record has\n> to be duplicated again and again upon each call just for modifying a\n> numeric value!\n> \nHave you considered splitting customer record into two tables with \nmostly read-only data and with data that is updated often? Such 1-1 \nrelationship can make a huge difference to performance in your case. You \ncan even try to simulate old schema by using an updateable view.\n\nBest regards, Vitalii Tymchyshyn\n", "msg_date": "Fri, 12 Nov 2010 17:53:35 +0200", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MVCC performance issue" },
{ "msg_contents": "\n\nWe are still using PostgreSQL 8.2.4. We are running a 24x7 system and\ndatabase size is over 200Gb so upgrade is not an easy decision! \n\nI have it in my plans so in next few months I will setup new servers and\nupgrade to version 9. \n\n\n>> Which version of PostgreSQL are you basing this on?\n\n>>\n>>-- \n>>Thom Brown\n>>Twitter: @darkixion\n>>IRC (freenode): dark_ixion\n>>Registered Linux user: #516935\n", "msg_date": "Fri, 12 Nov 2010 18:14:00 +0200", "msg_from": "\"Kyriacos Kyriacou\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: MVCC performance issue" },
{ "msg_contents": "This was done already as a workaround after identifying this problem. \r\nI just gave it as an example.\r\n\r\n-----Original Message-----\r\nFrom: Vitalii Tymchyshyn [mailto:[email protected]] \r\nSent: Friday, November 12, 2010 5:54 PM\r\nTo: Kyriacos Kyriacou\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] MVCC performance issue\r\n\r\n12.11.10 15:47, Kyriacos Kyriacou wrote:\r\n> PROBLEM DECRIPTION\r\n> ------------------\r\n> As an example, consider updating the \"live\" balance\r\n> of a customer for each phone call where the entire customer record has\r\n> to be duplicated again and again upon each call just for modifying a\r\n> numeric value!\r\n> \r\nHave you considered splitting customer record into two tables with \r\nmostly read-only data and with data that is updated often? Such 1-1 \r\nrelationship can make a huge difference to performance in your case. You \r\ncan even try to simulate old schema by using an updateable view.\r\n\r\nBest regards, Vitalii Tymchyshyn\r\n\r\n\r\n", "msg_date": "Fri, 12 Nov 2010 18:18:18 +0200", "msg_from": "\"Kyriacos Kyriacou\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: MVCC performance issue" },
{ "msg_contents": "On Nov 12, 2010, at 8:14 AM, Kyriacos Kyriacou wrote:\n\n> We are still using PostgreSQL 8.2.4. 
We are running a 24x7 system and database size is over 200Gb so upgrade is not an easy decision!\n\nThis is why we have slony, so you can slowly upgrade your 200Gb while you're live and then only suffer a minute or so of downtime while you switchover. Even if you only install slony for the point of the upgrade and then uninstall it after you're done, that seems well worth it to me rather than running on 8.2.4 for a while.\n\nNote there were some changes between 8.2 and 8.3 in regards to casting that might make you revisit your application.\nOn Nov 12, 2010, at 8:14 AM, Kyriacos Kyriacou wrote:We are still using PostgreSQL 8.2.4. We are running a 24x7 system and database size is over 200Gb so upgrade is not an easy decision!This is why we have slony, so you can slowly upgrade your 200Gb while you're live and then only suffer a minute or so of downtime while you switchover. Even if you only install slony for the point of the upgrade and then uninstall it after you're done, that seems well worth it to me rather than running on 8.2.4 for a while.Note there were some changes between 8.2 and 8.3 in regards to casting that might make you revisit your application.", "msg_date": "Fri, 12 Nov 2010 08:19:03 -0800", "msg_from": "Ben Chobot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MVCC performance issue" }, { "msg_contents": "On 11/12/2010 7:47 AM, Kyriacos Kyriacou wrote:\n>\n> SUGGESTION\n> --------------\n> 1) When a raw UPDATE is performed, store all \"new raw versions\" either\n> in separate temporary table space\n> or in a reserved space at the end of each table (can be allocated\n> dynamically) etc\n\nYour use of \"raw\" is confusing. I'll just ignore the word. New row \nversions are already stored in a dynamically allocated spot, right along \nwith the other versions of the table. You are assuming that getting to \nthe \"correct\" version of the row is very slow? That's only going to be \nthe case if you have lots and lots of versions. And your solution will \nnot actually help if there are lots of versions. While one person who \nis hitting the most recent version might be ok, everyone else will still \nhave to search for theirs. Just as they do now.\n\n> 2) Any SELECT queries within the same session will be again accessing\n> the new version of the row\n\nI don't see how this is different from what we currently have. \"same \nsession\" could have been dropped from your separate table space, and \nthen you'd have to go search through previous versions of the row... \nexactly like you do now.\n\nAnd worse, if you dont want to drop your version of the row from the \nseparate table space until you commit/rollback, then no other user can \nstart a transaction on that table until your done! oh no! You have \nreads and writes blocking each other.\n\n> 3) Any SELECT queries from other users will still be accessing the old\n> version\n\nAgain.. the same.\n\n> 4) When UPDATE transaction is ROLLBACK just release the space used in\n> new temporary location\n\ncurrent layout makes rollback very very fast.\n\n> 5) When UPDATE transaction is COMMIT then try to LOCK the old version\n> and overwrite it at the same physical location (NO FRAGMENTATION).\n\nNot sure what you mean by lock, but lock requires single user access and \nslow's things down. Right now we just bump the \"most active transaction \nnumber\", which is very efficient, and requires no locks. 
As soon as you \nlock anything, somebody, by definition, has to wait.\n\n\n> 6) Similar mechanism can be applied on INSERTS and DELETES\n> 7) In case that transaction was COMMIT, the temporary location can be\n> either released or archived/cleaned on a pre-scheduled basis. This will\n> possibly allow the introduction of a TRANSACTION LOG backup mechanism as\n> a next step.\n\nYou are kind of assuming there will only ever be one new transaction, \nand one old transaction. What about a case where 10 people start a \ntransaction, and there are 10 versions of the row?\n\n\nIt seems to me like you are using very long transactions, which is \ncausing lots of row versions to show up. Have you run explain analyze \non your slow querys to find out the problems?\n\nHave you checked to see if you are cpu bound or io bound? If you are \ndealing with lots of row versions, I'd assume you are cpu bound. If you \ncheck your system though, and see you are io bound, I think that might \ninvalidate your assumptions above.\n\nMVCC makes multi user access very nice because readers and writers dont \nblock each other, and there are very few locks. It does come with some \nkinks (gotta vacuum, keep transactions short, you must commit, etc).\n\nselect count(*) for example is always going to be slow... just expect \nit, lets not destroy what works well about the database just to make it \nfast. Instead, find a better alternative so you dont have to run it.\n\nJust like any database, you have to work within MVCC's good points and \ntry to avoid the bad spots.\n\n-Andy\n", "msg_date": "Fri, 12 Nov 2010 10:21:55 -0600", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MVCC performance issue" }, { "msg_contents": "Ah, this is a very old version. If you can take advantage of\na version with HOT support, you should be much, much happier.\n\nCheers,\nKen\n\nOn Fri, Nov 12, 2010 at 06:14:00PM +0200, Kyriacos Kyriacou wrote:\n> \n> \n> We are still using PostgreSQL 8.2.4. We are running a 24x7 system and\n> database size is over 200Gb so upgrade is not an easy decision! \n> \n> I have it in my plans so in next few months I will setup new servers and\n> upgrade to version 9. \n> \n> \n> >> Which version of PostgreSQL are you basing this on?\n> \n> >>\n> >>-- \n> >>Thom Brown\n> >>Twitter: @darkixion\n> >>IRC (freenode): dark_ixion\n> >>Registered Linux user: #516935\n> \n", "msg_date": "Fri, 12 Nov 2010 10:22:03 -0600", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MVCC performance issue" }, { "msg_contents": "On 12 November 2010 16:14, Kyriacos Kyriacou <[email protected]>wrote:\n\n>\n>\n> We are still using PostgreSQL 8.2.4. We are running a 24x7 system and\n> database size is over 200Gb so upgrade is not an easy decision!\n>\n> I have it in my plans so in next few months I will setup new servers and\n> upgrade to version 9.\n>\n>\nEverything changed, performance-wise, in 8.3, and there have also been\nimprovements since then too. So rather than completely changing your\ndatabase platform, at least take a look at what work has gone into Postgres\nsince the version you're using.\nhttp://www.postgresql.org/docs/8.3/static/release-8-3.html#AEN87319\n\n-- \nThom Brown\nTwitter: @darkixion\nIRC (freenode): dark_ixion\nRegistered Linux user: #516935\n\nOn 12 November 2010 16:14, Kyriacos Kyriacou <[email protected]> wrote:\n We are still using PostgreSQL 8.2.4. 
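A rough way to check whether bloat really is the problem and whether (auto)vacuum keeps up, sketched below; note that the n_live_tup, n_dead_tup and n_tup_hot_upd counters used here only exist from 8.3 onwards, so this assumes the upgrade being discussed:

    -- per-table update/bloat counters (8.3+)
    SELECT relname,
           n_live_tup,
           n_dead_tup,        -- dead row versions waiting to be reclaimed
           n_tup_upd,
           n_tup_hot_upd,     -- updates that stayed on the same heap page (HOT)
           last_vacuum,
           last_autovacuum
      FROM pg_stat_user_tables
     ORDER BY n_dead_tup DESC
     LIMIT 20;

A large and growing n_dead_tup next to an old last_autovacuum usually means vacuum is not keeping up; a high n_tup_hot_upd relative to n_tup_upd means the frequent updates are already being absorbed as HOT updates.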
We are running a 24x7 system and database size is over 200Gb so upgrade is not an easy decision! \nI have it in my plans so in next few months I will setup new servers and upgrade to version 9. \nEverything changed, performance-wise, in 8.3, and there have also been improvements since then too.  So rather than completely changing your database platform, at least take a look at what work has gone into Postgres since the version you're using.  http://www.postgresql.org/docs/8.3/static/release-8-3.html#AEN87319\n-- Thom BrownTwitter: @darkixionIRC (freenode): dark_ixionRegistered Linux user: #516935", "msg_date": "Fri, 12 Nov 2010 16:22:08 +0000", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MVCC performance issue" }, { "msg_contents": "\"Kyriacos Kyriacou\" <[email protected]> writes:\n> We are still using PostgreSQL 8.2.4.\n\nIn that case you don't have HOT updates, so it seems to me to be a\nlittle premature to be proposing a 100% rewrite of the system to fix\nyour problems.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Nov 2010 11:28:53 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MVCC performance issue " }, { "msg_contents": "To be honest I just now read about HOT (Heap Overflow Tuple) and it\nseems that will help a lot. Thanks for your point.\n\nKyriacos\n\n-----Original Message-----\nFrom: Kenneth Marshall [mailto:[email protected]] \nSent: Friday, November 12, 2010 6:22 PM\nTo: Kyriacos Kyriacou\nCc: Thom Brown; [email protected]\nSubject: Re: [PERFORM] MVCC performance issue\n\nAh, this is a very old version. If you can take advantage of\na version with HOT support, you should be much, much happier.\n\nCheers,\nKen\n\nOn Fri, Nov 12, 2010 at 06:14:00PM +0200, Kyriacos Kyriacou wrote:\n> \n> \n> We are still using PostgreSQL 8.2.4. We are running a 24x7 system and\n> database size is over 200Gb so upgrade is not an easy decision! \n> \n> I have it in my plans so in next few months I will setup new servers\nand\n> upgrade to version 9. \n> \n> \n> >> Which version of PostgreSQL are you basing this on?\n> \n> >>\n> >>-- \n> >>Thom Brown\n> >>Twitter: @darkixion\n> >>IRC (freenode): dark_ixion\n> >>Registered Linux user: #516935\n> \n\n\n", "msg_date": "Fri, 12 Nov 2010 18:33:22 +0200", "msg_from": "\"Kyriacos Kyriacou\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: MVCC performance issue" }, { "msg_contents": "My suggestion had just a single difference from what currently MVCC is\ndoing (btw I never said that MVCC is bad). \n\nNOW ===> on COMMIT previous version record is expired and the \n new version record (created in new dynamically allocated \n spot, as you said) is set as \"active\"\n \nMY ===> on COMMIT, to update new version data over the same physical\nlocation that initial version was \n and release the space used to keep the new version (that was\ndynamically allocated).\n\nThe rest are all the same! 
I do not think that this is breaking anything\nand I still believe that this might help.\n\nI will try to plan upgrade the soonest possible to the newest version.\nReading few words about HOT updates \nit seems that more or less is similar to what I have described and will\nbe very helpful.\n\nKyriacos\n\n> -----Original Message-----\n> From: Andy Colson [mailto:[email protected]]\n> Sent: Friday, November 12, 2010 6:22 PM\n> To: Kyriacos Kyriacou\n> Cc: [email protected]\n> Subject: Re: [PERFORM] MVCC performance issue\n> \n> On 11/12/2010 7:47 AM, Kyriacos Kyriacou wrote:\n> >\n> > SUGGESTION\n> > --------------\n> > 1) When a raw UPDATE is performed, store all \"new raw versions\"\neither\n> > in separate temporary table space\n> > or in a reserved space at the end of each table (can be\nallocated\n> > dynamically) etc\n> \n> Your use of \"raw\" is confusing. I'll just ignore the word. New row\n> versions are already stored in a dynamically allocated spot, right\nalong\n> with the other versions of the table. You are assuming that getting\nto\n> the \"correct\" version of the row is very slow? That's only going to\nbe\n> the case if you have lots and lots of versions. And your solution\nwill\n> not actually help if there are lots of versions. While one person who\n> is hitting the most recent version might be ok, everyone else will\nstill\n> have to search for theirs. Just as they do now.\n> \n> > 2) Any SELECT queries within the same session will be again\naccessing\n> > the new version of the row\n> \n> I don't see how this is different from what we currently have. \"same\n> session\" could have been dropped from your separate table space, and\n> then you'd have to go search through previous versions of the row...\n> exactly like you do now.\n> \n> And worse, if you dont want to drop your version of the row from the\n> separate table space until you commit/rollback, then no other user can\n> start a transaction on that table until your done! oh no! You have\n> reads and writes blocking each other.\n> \n> > 3) Any SELECT queries from other users will still be accessing the\nold\n> > version\n> \n> Again.. the same.\n> \n> > 4) When UPDATE transaction is ROLLBACK just release the space used\nin\n> > new temporary location\n> \n> current layout makes rollback very very fast.\n> \n> > 5) When UPDATE transaction is COMMIT then try to LOCK the old\nversion\n> > and overwrite it at the same physical location (NO FRAGMENTATION).\n> \n> Not sure what you mean by lock, but lock requires single user access\nand\n> slow's things down. Right now we just bump the \"most active\ntransaction\n> number\", which is very efficient, and requires no locks. As soon as\nyou\n> lock anything, somebody, by definition, has to wait.\n> \n> \n> > 6) Similar mechanism can be applied on INSERTS and DELETES\n> > 7) In case that transaction was COMMIT, the temporary location can\nbe\n> > either released or archived/cleaned on a pre-scheduled basis. This\nwill\n> > possibly allow the introduction of a TRANSACTION LOG backup\nmechanism as\n> > a next step.\n> \n> You are kind of assuming there will only ever be one new transaction,\n> and one old transaction. What about a case where 10 people start a\n> transaction, and there are 10 versions of the row?\n> \n> \n> It seems to me like you are using very long transactions, which is\n> causing lots of row versions to show up. Have you run explain analyze\n> on your slow querys to find out the problems?\n> \n> Have you checked to see if you are cpu bound or io bound? 
If you are\n> dealing with lots of row versions, I'd assume you are cpu bound. If\nyou\n> check your system though, and see you are io bound, I think that might\n> invalidate your assumptions above.\n> \n> MVCC makes multi user access very nice because readers and writers\ndont\n> block each other, and there are very few locks. It does come with\nsome\n> kinks (gotta vacuum, keep transactions short, you must commit, etc).\n> \n> select count(*) for example is always going to be slow... just expect\n> it, lets not destroy what works well about the database just to make\nit\n> fast. Instead, find a better alternative so you dont have to run it.\n> \n> Just like any database, you have to work within MVCC's good points and\n> try to avoid the bad spots.\n> \n> -Andy\n\n\n", "msg_date": "Fri, 12 Nov 2010 19:13:14 +0200", "msg_from": "\"Kyriacos Kyriacou\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: MVCC performance issue" }, { "msg_contents": "On Fri, Nov 12, 2010 at 9:22 AM, Thom Brown <[email protected]> wrote:\n> On 12 November 2010 16:14, Kyriacos Kyriacou <[email protected]>\n> wrote:\n>>\n>>\n>>\n>> We are still using PostgreSQL 8.2.4. We are running a 24x7 system and\n>> database size is over 200Gb so upgrade is not an easy decision!\n>>\n>> I have it in my plans so in next few months I will setup new servers and\n>> upgrade to version 9.\n>\n> Everything changed, performance-wise, in 8.3, and there have also been\n> improvements since then too.  So rather than completely changing your\n> database platform, at least take a look at what work has gone into Postgres\n> since the version you're using.\n\nAgreed. 8.3 was a colossal step forward for pg performance. 8.4 was\na huge step ahead in maintenance with on disk fsm. If I was upgrading\nfrom 8.2 today I would go straight to 8.4 and skip 8.3 since it's a\nmuch bigger pain in the butt to configure for fsm stuff.\n", "msg_date": "Fri, 12 Nov 2010 10:27:12 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MVCC performance issue" }, { "msg_contents": "OK, in general you have to pay for MVCC one way or another. Many\ndatabases make you pay as you go, so to speak, by storing all the MVCC\ninfo in a log to be applied at some future date. Other databases you\ncan pay later, by storing all the MVCC in the table itself. Both have\nsimilar costs, but one can punish you harshly if you let the MVCC data\nstored in the database get out of hand.\n\n8.3 and above are much more aggresive about autovacuuming, and on\nbigger hardware you can make it VERY aggressive and keep the bloat out\nwhile keeping up good throughput. On some servers I set up 4 or 6 or\n8 autovacuum threads to keep up. If you were on another db you\nmight be adding more drives to make some other part faster.\n\nFor batch processing storing all MVCC data in the data store can be\nproblematic, but for more normal work where you're changing <1% of a\ntable all the time it can be very fast.\n\nSome other databases will just run out of space to store transactions\nand roll back everything you've done. PostgreSQL will gladly let you\nshoot yourself in the foot with bloating the data store by running\nsuccessive whole table updates without vacuuming in between.\n\nBottom line, if your hardware can't keep up, it can't keep up. If\nvacuum capsizes your IO and still can't keep up then you need more\ndisks and / or better storage subsystems. 
A 32 disk array with single\ncontroller goes for ~$7 to $10k, and you can sustain some pretty\namazing thgouhput on that kind of IO subsystem.\n\nIf you're doing batch processing you can get a lot return by just\nmaking sure you vacuum after each mass update. Especially if you are\non a single use machine with no cost delays for vacuum, running a\nvacuum on a freshly worked table should be pretty fast.\n", "msg_date": "Fri, 12 Nov 2010 10:39:55 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MVCC performance issue" }, { "msg_contents": "On Fri, Nov 12, 2010 at 9:19 AM, Ben Chobot <[email protected]> wrote:\n> On Nov 12, 2010, at 8:14 AM, Kyriacos Kyriacou wrote:\n>\n> We are still using PostgreSQL 8.2.4. We are running a 24x7 system and\n> database size is over 200Gb so upgrade is not an easy decision!\n>\n> This is why we have slony, so you can slowly upgrade your 200Gb while you're\n> live and then only suffer a minute or so of downtime while you switchover.\n> Even if you only install slony for the point of the upgrade and then\n> uninstall it after you're done, that seems well worth it to me rather than\n> running on 8.2.4 for a while.\n> Note there were some changes between 8.2 and 8.3 in regards to casting that\n> might make you revisit your application.\n\nI work in a slony shop and we used slony to upgrade from 8.2 to 8.3\nand it was a breeze. Course we practiced on some test machines first,\nbut it went really smoothly. Our total downtime, due to necessary\ntesting before going live again, was less than 20 mintues.\n", "msg_date": "Fri, 12 Nov 2010 10:48:39 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MVCC performance issue" }, { "msg_contents": "HOT also usually requires setting FILLFACTOR to something other than the default for your table, so that there is guaranteed room in the page to modify data without allocating a new page.\n\nIf you have fillfactor=75, then basically this proposal is already done -- each page has 25% temp space for updates in it. With the caveat that that is only true if the updates are to columns without indexes.\nOn Nov 12, 2010, at 7:37 AM, Kenneth Marshall wrote:\n\n> On Fri, Nov 12, 2010 at 07:34:36AM -0800, bricklen wrote:\n>> On Fri, Nov 12, 2010 at 5:52 AM, Kenneth Marshall <[email protected]> wrote:\n>>> \n>>> I cannot speak to your suggestion, but it sounds like you are not\n>>> vacuuming enough and a lot of the bloat/randomization would be helped\n>>> by making use of HOT updates in which the updates are all in the same\n>>> page and are reclaimed almost immediately.\n>>> \n>>> Regards,\n>>> Ken\n>> \n>> IIRC, HOT only operates on non-indexed columns, so if you the tables\n>> are heavily indexed you won't get the full benefit of HOT. 
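A sketch of the kind of postgresql.conf settings being described (8.3 or later; the numbers are only illustrative and have to be sized to the hardware, not a recommendation):

    autovacuum = on
    autovacuum_max_workers = 6              # several concurrent workers (8.3+)
    autovacuum_naptime = 15s
    autovacuum_vacuum_scale_factor = 0.05   # vacuum after ~5% of a table changes
    autovacuum_vacuum_cost_delay = 0        # no throttling, if the I/O can absorb it

And for the batch-processing case mentioned above, an explicit pass right after each mass update:

    VACUUM ANALYZE some_big_table;   -- illustrative table name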
I could be\n>> wrong though.\n>> \n> \n> That is true, but if they are truly having as big a bloat problem\n> as the message indicated, it would be worth designing the schema\n> to leverage HOT for the very frequent updates.\n> \n> Cheers,\n> Ken\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Fri, 12 Nov 2010 10:13:23 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MVCC performance issue" }, { "msg_contents": "\nOn Nov 12, 2010, at 9:13 AM, Kyriacos Kyriacou wrote:\n\n> My suggestion had just a single difference from what currently MVCC is\n> doing (btw I never said that MVCC is bad). \n> \n> NOW ===> on COMMIT previous version record is expired and the \n> new version record (created in new dynamically allocated \n> spot, as you said) is set as \"active\"\n> \n> MY ===> on COMMIT, to update new version data over the same physical\n> location that initial version was \n> and release the space used to keep the new version (that was\n> dynamically allocated).\n\nBut what about other transactions that can still see the old version?\n\nYou can't overwrite the old data if there are any other transactions open in the system at all. You have to have a mechanism to keep the old copy around for a while.\n\n> \n> The rest are all the same! I do not think that this is breaking anything\n> and I still believe that this might help.\n> \n> I will try to plan upgrade the soonest possible to the newest version.\n> Reading few words about HOT updates \n> it seems that more or less is similar to what I have described and will\n> be very helpful.\n> \n> Kyriacos\n\n", "msg_date": "Fri, 12 Nov 2010 10:19:59 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MVCC performance issue" } ]
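For example, a sketch reusing the illustrative balance table from the earlier sketch (8.3 or later, since that is where HOT arrived):

    -- leave ~25% of every heap page free so updated rows can stay on the page
    ALTER TABLE customer_balance SET (fillfactor = 75);

    -- the new fillfactor only affects newly written pages; existing rows are
    -- repacked only by a table rewrite (CLUSTER, or VACUUM FULL on 9.0+)

    -- with free space on the page and no index on balance, this update can be
    -- a HOT update: no new index entries, and the old version is reclaimed
    -- quickly within the page
    UPDATE customer_balance
       SET balance = balance - 0.10
     WHERE customer_id = 12345;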
[ { "msg_contents": "I was doing some testing with temporary tables using this sql:\n\nbegin;\nselect pg_sleep(30);\ncreate temporary TABLE foo (a int, b int, c int, d text);\ninsert into foo SELECT (x%1000) AS a,(x%1001) AS b, (x % 650) as c, ''\nas d FROM generate_series( 1, 1000000 ) AS x;\n-- create temporary TABLE foo AS SELECT (x%1000) AS a,(x%1001) AS b,\n(x % 650) as c, '' as d FROM generate_series( 1, 1000000 ) AS x;\nselect count(1) from foo;\n\n\nWhile it was in pg_sleep, I would attach to the backend process with strace.\nI observed a few things that I don't yet understand, but one thing I\ndid notice was an I/O pattern (following the count(1)) that seemed to\nsuggest that the table was getting its hint bits set. I thought hint\nbits were just for the mvcc side of things? If this is a temporary\ntable, is there any need or benefit to setting hint bits?\n\n-- \nJon\n", "msg_date": "Sat, 13 Nov 2010 07:53:27 -0600", "msg_from": "Jon Nelson <[email protected]>", "msg_from_op": true, "msg_subject": "do temporary tables have hint bits?" }, { "msg_contents": "Yes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 13 Nov 2010 09:57:29 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: do temporary tables have hint bits? " } ]
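One practical consequence, sketched against the same kind of test table: the first pass over freshly loaded rows, whether it is a SELECT or a VACUUM, has to dirty the pages again just to record the hint bits, so that cost can be paid up front at a convenient moment instead of on the first real query.

    BEGIN;
    CREATE TEMPORARY TABLE foo (a int, b int, c int, d text);
    INSERT INTO foo
        SELECT x % 1000, x % 1001, x % 650, ''
          FROM generate_series(1, 1000000) AS x;
    COMMIT;

    -- scans the freshly loaded pages once and sets their hint bits,
    -- so later reads do not have to rewrite them
    VACUUM foo;

    SELECT count(1) FROM foo;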
[ { "msg_contents": "I have some simple query (executed with time command):\n\n \n\n \n\ntime psql -c 'explain analyze SELECT te.idt FROM t_positions AS te JOIN\nt_st AS stm ON (te.idt=stm.idt AND 4=stm.idm) WHERE te.idtr IN (347186)'\n\n \n\n \n\n QUERY\nPLAN\n\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n\nNested Loop (cost=0.00..33.33 rows=2 width=4) (actual time=0.297..0.418\nrows=3 loops=1)\n\n -> Index Scan using t_positions_index1 on t_positions te\n(cost=0.00..8.43 rows=3 width=4) (actual time=0.140..0.148 rows=3 loops=1)\n\n Index Cond: (idtr = 347186)\n\n -> Index Scan using t_st_index4 on t_st stm (cost=0.00..8.29 rows=1\nwidth=4) (actual time=0.078..0.079 rows=1 loops=3)\n\n Index Cond: ((stm.idt = te.idt) AND (4 = stm.idm))\n\nTotal runtime: 0.710 ms\n\n(6 rows)\n\n \n\n \n\nreal 0m3.309s\n\nuser 0m0.002s\n\nsys 0m0.002s\n\n \n\nWhy there is so big difference between explain analyze (0.710 ms) and real\nexecution time (3309 ms)? Any suggestions?\n\n \n\nPsql only execution time:\n\n \n\ntime psql -c 'explain analyze SELECT blabla()'\n\nERROR: function blabla() does not exist\n\n \n\nreal 0m0.011s\n\nuser 0m0.001s\n\nsys 0m0.004s\n\n \n\nSELECT version();\n\n \n\nPostgreSQL 9.0.1 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.1.2\n20070502 (Red Hat 4.1.2-12), 32-bit\n\n \n\n-------------------------------------------\n\nArtur Zajac\n\n \n\n \n\n\nI have some simple query (executed with time command):  time psql  -c 'explain analyze SELECT te.idt FROM t_positions AS te  JOIN t_st AS stm ON (te.idt=stm.idt AND 4=stm.idm)   WHERE te.idtr IN (347186)'                                                                         QUERY PLAN-------------------------------------------------------------------------------------------------------------------------------------------------------- Nested Loop  (cost=0.00..33.33 rows=2 width=4) (actual time=0.297..0.418 rows=3 loops=1)   ->  Index Scan using t_positions_index1 on t_positions te  (cost=0.00..8.43 rows=3 width=4) (actual time=0.140..0.148 rows=3 loops=1)         Index Cond: (idtr = 347186)   ->  Index Scan using t_st_index4 on t_st stm  (cost=0.00..8.29 rows=1 width=4) (actual time=0.078..0.079 rows=1 loops=3)         Index Cond: ((stm.idt = te.idt) AND (4 = stm.idm)) Total runtime: 0.710 ms(6 rows)  real    0m3.309suser    0m0.002ssys     0m0.002s Why there is so big difference between explain analyze (0.710 ms) and real execution time (3309 ms)? Any suggestions? Psql only execution time: time psql -c 'explain analyze SELECT blabla()'ERROR:  function blabla() does not exist real    0m0.011suser    0m0.001ssys     0m0.004s SELECT version(); PostgreSQL 9.0.1 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20070502 (Red Hat 4.1.2-12), 32-bit -------------------------------------------Artur Zajac", "msg_date": "Mon, 15 Nov 2010 09:21:34 +0100", "msg_from": "=?iso-8859-2?Q?Artur_Zaj=B1c?= <[email protected]>", "msg_from_op": true, "msg_subject": "Difference between explain analyze and real execution time" }, { "msg_contents": "=?iso-8859-2?Q?Artur_Zaj=B1c?= <[email protected]> writes:\n> Why there is so big difference between explain analyze (0.710 ms) and real\n> execution time (3309 ms)?\n\nEXPLAIN ANALYZE doesn't account for all of the runtime involved. 
In\nthis case, I'd bet that session startup/shutdown is a big part of the\ndifference.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Nov 2010 10:12:59 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Difference between explain analyze and real execution time " }, { "msg_contents": "\n> EXPLAIN ANALYZE doesn't account for all of the runtime involved. In this\ncase, I'd bet that session startup/shutdown is a big part of the difference.\n> \n>\t\t\tregards, tom lane\n\nDoes session startup/shutdown depend on tables used in query? Some simpler\nquery:\n\ntime psql -c 'explain analyze SELECT te.idt FROM t_positions AS te WHERE\nte.idtr IN (347186)'\n\n QUERY PLAN\n----------------------------------------------------------------------------\n---------------------------------------------------------\n Index Scan using tg_transelem_index1 on tg_transelem te (cost=0.00..8.43\nrows=3 width=4) (actual time=0.130..0.134 rows=3 loops=1)\n Index Cond: (tr_idtrans = 347186)\n Total runtime: 0.211 ms\n\n\nreal 0m0.017s\nuser 0m0.002s\nsys 0m0.004s\n\n\n\n-------------------------------------------\nArtur Zajac\n\n\n\n", "msg_date": "Mon, 15 Nov 2010 16:24:30 +0100", "msg_from": "=?iso-8859-2?Q?Artur_Zaj=B1c?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Difference between explain analyze and real execution time" }, { "msg_contents": "[Tom Lane]\n> EXPLAIN ANALYZE doesn't account for all of the runtime involved.  In\n> this case, I'd bet that session startup/shutdown is a big part of the\n> difference.\n\nThe session startup/shutdown should be the same for the real SQL and\nthe broken SQL, shouldn't it?\n\n[Artur Zając]\n> time psql -c 'explain analyze SELECT te.idt FROM t_positions AS te\n> JOIN t_st AS stm ON (te.idt=stm.idt AND 4=stm.idm) WHERE te.idtr IN\n> (347186)'\n\nIs this weidness only observed for this query? What happens with\nother queries? \"explain analyze select 1\"? \"explain analyze select *\nfrom t_positions where idtr=347816\"? plain select without \"explain\nanalyze\"? etc?\n", "msg_date": "Mon, 15 Nov 2010 16:25:18 +0100", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Difference between explain analyze and real execution time" }, { "msg_contents": "2010/11/15 Artur Zając <[email protected]>:\n> Why there is so big difference between explain analyze (0.710 ms) and real\n> execution time (3309 ms)? Any suggestions?\n\nCould it be that it takes a long time to plan for some reason? How\nfast is a plain EXPLAIN?\n\nWhat happens if you start up psql, turn on \\timing, and then run\nEXPLAIN ANALYZE from within an interactive session? That's usually a\nbetter way to test, as it avoids counting the session-startup\noverhead.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Mon, 15 Nov 2010 14:39:03 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Difference between explain analyze and real execution time" }, { "msg_contents": "\n\n2010/11/15 Artur Zając <[email protected]>:\n> Why there is so big difference between explain analyze (0.710 ms) and \n> real execution time (3309 ms)? Any suggestions?\n\n> Could it be that it takes a long time to plan for some reason? How fast\nis a plain EXPLAIN?\n\nYes! That is it :) Planning is painful. I'm so stupid. 
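A sketch of that kind of interactive session, reusing the query from the original post; comparing plain EXPLAIN with EXPLAIN ANALYZE from inside one psql session separates planning time from execution time and keeps connection startup out of both measurements:

    \timing on

    -- planning only
    EXPLAIN
    SELECT te.idt FROM t_positions AS te WHERE te.idtr IN (347186);

    -- planning plus execution
    EXPLAIN ANALYZE
    SELECT te.idt FROM t_positions AS te WHERE te.idtr IN (347186);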
I didn't check VACUUM\nwithout ANALYZE :)\n\ntime psql -c 'explain SELECT te.idt FROM t_positions AS te WHERE te.idtr\nIN (347186)'\n\nreal 0m1.087s\nuser 0m0.004s\nsys 0m0.001s\n\nI've changed default_statistics_target to 10000 and I think that is a\nreason. When I changed it to 1000 everything seems to be ok:\n\ntime psql -c 'explain analyze SELECT te.idt FROM t_positions AS te WHERE\nte.idtr IN (347186)'\n\nreal 0m0.062s\nuser 0m0.003s\nsys 0m0.004s\n\nThanks.\n\n-------------------------------------------\nArtur Zajac\n\n\n", "msg_date": "Mon, 15 Nov 2010 21:43:48 +0100", "msg_from": "=?iso-8859-2?Q?Artur_Zaj=B1c?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Difference between explain analyze and real execution time" }, { "msg_contents": "=?iso-8859-2?Q?Artur_Zaj=B1c?= <[email protected]> writes:\n> I've changed default_statistics_target to 10000 and I think that is a\n> reason.\n\nThat's certainly going to cost you something, but this seems like a\nmighty large slowdown, especially for a non-join query. What datatype\nis te.idtr, anyway?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 15 Nov 2010 16:03:55 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Difference between explain analyze and real execution time " }, { "msg_contents": "\n\n> I've changed default_statistics_target to 10000 and I think that is a \n> reason.\n\n> That's certainly going to cost you something, but this seems like a mighty\nlarge slowdown, especially for a non-join query. What datatype is te.idtr,\nanyway?\n\nInteger not null and primary key of t_positions table.\n\nTable t_positions has about 1500000 records, table t_st about 130000\nrecords.\n\n\n-------------------------------------------\nArtur Zajac\n\n\n", "msg_date": "Mon, 15 Nov 2010 22:30:32 +0100", "msg_from": "=?iso-8859-2?Q?Artur_Zaj=B1c?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Difference between explain analyze and real execution time" } ]
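A middle ground worth noting here, sketched with the column from the query above: instead of a global default_statistics_target of 10000, which makes every ANALYZE slower and gives the planner much larger statistics to read for every query, the target can be raised only on the columns that actually need it.

    -- keep a modest global default in postgresql.conf, e.g.
    --   default_statistics_target = 100

    -- and raise it only where finer statistics are needed
    ALTER TABLE t_positions ALTER COLUMN idtr SET STATISTICS 1000;
    ANALYZE t_positions;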
[ { "msg_contents": "I have 2 tables with a 200,000 rows of data 3 character/string columns ID, Question and Response. The query below compares the data between the 2 tables based on ID and Question and if the Response does not match between the left table and the right table it identifies the ID's where there is a mismatch. Running the query in SQL Server 2008 using the ISNULL function take a few milliseconds. Running the same query in Postgresql takes over 70 seconds. The 2 queries are below:\nSQL Server 2008 R2 Queryselect t1.id from table1 t1 inner join table2 t2 on t1.id = t2.id and t1.question = t2.question and isnull(t1.response,'ISNULL') <> isnull(t2.response,'ISNULL')\nPostgres 9.1 Queryselect t1.id from table1 t1 inner join table2 t2 on t1.id = t2.id and t1.question = t2.question and coalesce(t1.response,'ISNULL') <> coalesce(t2.response,'ISNULL')\nWhat gives? \t\t \t \t\t \n\n\n\n\n\nI have 2 tables with a 200,000 rows of data 3 character/string columns ID, Question and Response. The query below compares the data between the 2 tables based on ID and Question and if the Response does not match between the left table and the right table it identifies the ID's where there is a mismatch. Running the query in SQL Server 2008 using the ISNULL function take a few milliseconds. Running the same query in Postgresql takes over 70 seconds. The 2 queries are below:SQL Server 2008 R2 Queryselect t1.id from table1 t1 inner join table2 t2 on t1.id = t2.id and t1.question = t2.question and isnull(t1.response,'ISNULL') <> isnull(t2.response,'ISNULL')Postgres 9.1 Queryselect t1.id from table1 t1 inner join table2 t2 on t1.id = t2.id and t1.question = t2.question and coalesce(t1.response,'ISNULL') <> coalesce(t2.response,'ISNULL')What gives?", "msg_date": "Mon, 15 Nov 2010 14:14:26 -0600", "msg_from": "Humair Mohammed <[email protected]>", "msg_from_op": true, "msg_subject": "" }, { "msg_contents": "Hi\n> SQL Server 2008 R2 Query\n> select t1.id from table1 t1 inner join table2 t2 on t1.id = t2.id \n> and t1.question = t2.question and isnull(t1.response,'ISNULL') <> \n> isnull(t2.response,'ISNULL')\n> \n> Postgres 9.1 Query\n> select t1.id from table1 t1 inner join table2 t2 on t1.id = t2.id \n> and t1.question = t2.question and coalesce(t1.response,'ISNULL') <> \n> coalesce(t2.response,'ISNULL')\n> \n> What gives?\nThey have same indexes/PK etc?\nRegards,\nJayadevan\n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only for \nthe person to whom it is addressed and may contain confidential and/or \nprivileged material. If you have received this e-mail in error, kindly \ncontact the sender and destroy all copies of the original communication. \nIBS makes no warranty, express or implied, nor guarantees the accuracy, \nadequacy or completeness of the information contained in this email or any \nattachment and is not liable for any errors, defects, omissions, viruses \nor for resultant loss or damage, if any, direct or indirect.\"\n\n\n\n\n\n", "msg_date": "Tue, 16 Nov 2010 12:30:54 +0530", "msg_from": "Jayadevan M <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " }, { "msg_contents": "On 16/11/10 09:14, Humair Mohammed wrote:\n> I have 2 tables with a 200,000 rows of data 3 character/string columns ID, Question and Response. The query below compares the data between the 2 tables based on ID and Question and if the Response does not match between the left table and the right table it identifies the ID's where there is a mismatch. 
Running the query in SQL Server 2008 using the ISNULL function take a few milliseconds. Running the same query in Postgresql takes over 70 seconds. The 2 queries are below:\n> SQL Server 2008 R2 Queryselect t1.id from table1 t1 inner join table2 t2 on t1.id = t2.id and t1.question = t2.question and isnull(t1.response,'ISNULL')<> isnull(t2.response,'ISNULL')\n> Postgres 9.1 Queryselect t1.id from table1 t1 inner join table2 t2 on t1.id = t2.id and t1.question = t2.question and coalesce(t1.response,'ISNULL')<> coalesce(t2.response,'ISNULL')\n> What gives? \t\t \t \t\t\n> \n\nCan we see the execution plans: (EXPLAIN <the query text here>) for \nPostgres and (however you get text based query plan from Sql Server), so \nwe can see if there is any obvious differences in how things are done.\n\nAlso probably worthwhile is telling us the table definitions of the \ntables concerned.\n\nFor Postgres - did you run ANALYZE on the database concerned before \nrunning the queries? (optimizer stats are usually updated automatically, \nbut if you were quick to run the queries after loading the data they \nmight not have been).\n\nregards\n\nMark\n\n\n\n\n\n\nOn 16/11/10 09:14, Humair Mohammed wrote:\n\n\nI have 2 tables with a 200,000 rows of data 3 character/string columns ID, Question and Response. The query below compares the data between the 2 tables based on ID and Question and if the Response does not match between the left table and the right table it identifies the ID's where there is a mismatch. Running the query in SQL Server 2008 using the ISNULL function take a few milliseconds. Running the same query in Postgresql takes over 70 seconds. The 2 queries are below:\nSQL Server 2008 R2 Queryselect t1.id from table1 t1 inner join table2 t2 on t1.id = t2.id and t1.question = t2.question and isnull(t1.response,'ISNULL') <> isnull(t2.response,'ISNULL')\nPostgres 9.1 Queryselect t1.id from table1 t1 inner join table2 t2 on t1.id = t2.id and t1.question = t2.question and coalesce(t1.response,'ISNULL') <> coalesce(t2.response,'ISNULL')\nWhat gives? \t\t \t \t\t \n \n\n\nCan we see the execution plans: (EXPLAIN <the query text here>)\nfor Postgres and (however you get text based query plan from Sql\nServer), so we can see if there is any obvious differences in how\nthings are done.\n\nAlso probably worthwhile is telling us the table definitions of the\ntables concerned.\n\nFor Postgres - did you run ANALYZE on the database concerned before\nrunning the queries? (optimizer stats are usually updated\nautomatically, but if you were quick to run the queries after loading\nthe data they might not have been).\n\nregards\n\nMark", "msg_date": "Tue, 16 Nov 2010 20:08:30 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " }, { "msg_contents": "2010/11/15 Humair Mohammed <[email protected]>:\n> I have 2 tables with a 200,000 rows of data 3 character/string columns ID,\n> Question and Response. The query below compares the data between the 2\n> tables based on ID and Question and if the Response does not match between\n> the left table and the right table it identifies the ID's where there is a\n> mismatch. Running the query in SQL Server 2008 using the ISNULL function\n> take a few milliseconds. Running the same query in Postgresql takes over 70\n> seconds. 
The 2 queries are below:\n> SQL Server 2008 R2 Query\n> select t1.id from table1 t1 inner join table2 t2 on t1.id = t2.id and\n> t1.question = t2.question and isnull(t1.response,'ISNULL') <>\n> isnull(t2.response,'ISNULL')\n\n> Postgres 9.1 Query\n> select t1.id from table1 t1 inner join table2 t2 on t1.id = t2.id and\n> t1.question = t2.question and coalesce(t1.response,'ISNULL') <>\n> coalesce(t2.response,'ISNULL')\n> What gives?\n\nI think, so must problem can be in ugly predicate\ncoalesce(t1.response,'ISNULL') <>\n> coalesce(t2.response,'ISNULL')\n\ntry use a IS DISTINCT OF operator\n\n... AND t1.response IS DISTINCT t2.response\n\nRegards\n\nPavel Stehule\n\np.s. don't use a coalesce in WHERE clause if it is possible.\n", "msg_date": "Tue, 16 Nov 2010 08:12:03 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " }, { "msg_contents": "There are no indexes on the tables either in SQL Server or Postgresql - I am comparing apples to apples here. I ran ANALYZE on the postgresql tables, after that query performance times are still high 42 seconds with COALESCE and 35 seconds with IS DISTINCT FROM.\nHere is the execution plan from Postgresql for qurey - select pb.id from pivotbad pb inner join pivotgood pg on pb.id = pg.id and pb.question = pg.question and coalesce(pb.response,'MISSING') <> coalesce(pg.response,'MISSING')\nExecution Time: 42 seconds\n\"Hash Join (cost=16212.30..48854.24 rows=93477 width=17)\"\" Hash Cond: (((pb.id)::text = (pg.id)::text) AND ((pb.question)::text = (pg.question)::text))\"\" Join Filter: ((COALESCE(pb.response, 'MISSING'::character varying))::text <> (COALESCE(pg.response, 'MISSING'::character varying))::text)\"\" -> Seq Scan on pivotbad pb (cost=0.00..2804.96 rows=93496 width=134)\"\" -> Hash (cost=7537.12..7537.12 rows=251212 width=134)\"\" -> Seq Scan on pivotgood pg (cost=0.00..7537.12 rows=251212 width=134)\"\n\nAnd here is the execution plan from SQL Server for query - select pb.id from pivotbad pb inner join pivotgood pg on pb.id = pg.id and pb.question = pg.question and isnull(pb.response,'ISNULL')<> isnull(pg.response,'ISNULL')\nExecution Time: < 1 second\nCost: 1% |--Parallelism(Gather Streams)Cost: 31% |--Hash Match(Inner Join, HASH:([pb].[ID], [pb].[Question])=([pg].[ID], [pg].[Question]), RESIDUAL:([master].[dbo].[pivotbad].[ID] as [pb].[ID]=[master].[dbo].[pivotgood].[ID] as [pg].[ID] AND [master].[dbo].[pivotbad].[Question] as [pb].[Question]=[master].[dbo].[pivotgood].[Question] as [pg].[Question] AND [Expr1006]<>[Expr1007])) Cost: 0% |--Bitmap(HASH:([pb].[ID], [pb].[Question]), DEFINE:([Bitmap1008])) Cost: 0% |--Compute Scalar(DEFINE:([Expr1006]=isnull([master].[dbo].[pivotbad].[Response] as [pb].[Response],'ISNULL'))) Cost: 6% |--Parallelism(Repartition Streams, Hash Partitioning, PARTITION COLUMNS:([pb].[ID], [pb].[Question])) Cost: 12% |--Table Scan(OBJECT:([master].[dbo].[pivotbad] AS [pb])) Cost: 0% |--Compute Scalar(DEFINE:([Expr1007]=isnull([master].[dbo].[pivotgood].[Response] as [pg].[Response],'ISNULL'))) Cost: 17% |--Parallelism(Repartition Streams, Hash Partitioning, PARTITION COLUMNS:([pg].[ID], [pg].[Question])) Cost: 33% |--Table Scan(OBJECT:([master].[dbo].[pivotgood] AS [pg]), WHERE:(PROBE([Bitmap1008],[master].[dbo].[pivotgood].[ID] as [pg].[ID],[master].[dbo].[pivotgood].[Question] as [pg].[Question])))\n\n\n\n> From: [email protected]\n> Date: Tue, 16 Nov 2010 08:12:03 +0100\n> Subject: Re: [PERFORM]\n> To: [email protected]\n> CC: [email protected]\n> \n> 2010/11/15 Humair 
Mohammed <[email protected]>:\n> > I have 2 tables with a 200,000 rows of data 3 character/string columns ID,\n> > Question and Response. The query below compares the data between the 2\n> > tables based on ID and Question and if the Response does not match between\n> > the left table and the right table it identifies the ID's where there is a\n> > mismatch. Running the query in SQL Server 2008 using the ISNULL function\n> > take a few milliseconds. Running the same query in Postgresql takes over 70\n> > seconds. The 2 queries are below:\n> > SQL Server 2008 R2 Query\n> > select t1.id from table1 t1 inner join table2 t2 on t1.id = t2.id and\n> > t1.question = t2.question and isnull(t1.response,'ISNULL') <>\n> > isnull(t2.response,'ISNULL')\n> \n> > Postgres 9.1 Query\n> > select t1.id from table1 t1 inner join table2 t2 on t1.id = t2.id and\n> > t1.question = t2.question and coalesce(t1.response,'ISNULL') <>\n> > coalesce(t2.response,'ISNULL')\n> > What gives?\n> \n> I think, so must problem can be in ugly predicate\n> coalesce(t1.response,'ISNULL') <>\n> > coalesce(t2.response,'ISNULL')\n> \n> try use a IS DISTINCT OF operator\n> \n> ... AND t1.response IS DISTINCT t2.response\n> \n> Regards\n> \n> Pavel Stehule\n> \n> p.s. don't use a coalesce in WHERE clause if it is possible.\n \t\t \t \t\t \n\n\n\n\n\nThere are no indexes on the tables either in SQL Server or Postgresql - I am comparing apples to apples here. I ran ANALYZE on the postgresql tables, after that query performance times are still high 42 seconds with COALESCE and 35 seconds with IS DISTINCT FROM.Here is the execution plan from Postgresql for qurey - select pb.id from pivotbad pb inner join pivotgood pg on pb.id = pg.id and pb.question = pg.question and coalesce(pb.response,'MISSING') <> coalesce(pg.response,'MISSING')Execution Time: 42 seconds\"Hash Join  (cost=16212.30..48854.24 rows=93477 width=17)\"\"  Hash Cond: (((pb.id)::text = (pg.id)::text) AND ((pb.question)::text = (pg.question)::text))\"\"  Join Filter: ((COALESCE(pb.response, 'MISSING'::character varying))::text <> (COALESCE(pg.response, 'MISSING'::character varying))::text)\"\"  ->  Seq Scan on pivotbad pb  (cost=0.00..2804.96 rows=93496 width=134)\"\"  ->  Hash  (cost=7537.12..7537.12 rows=251212 width=134)\"\"        ->  Seq Scan on pivotgood pg  (cost=0.00..7537.12 rows=251212 width=134)\"And here is the execution plan from SQL Server for query - select pb.id from pivotbad pb inner join pivotgood pg on pb.id = pg.id and pb.question = pg.question and isnull(pb.response,'ISNULL')<>  isnull(pg.response,'ISNULL')Execution Time: < 1 secondCost: 1%  |--Parallelism(Gather Streams)Cost: 31%       |--Hash Match(Inner Join, HASH:([pb].[ID], [pb].[Question])=([pg].[ID], [pg].[Question]), RESIDUAL:([master].[dbo].[pivotbad].[ID] as [pb].[ID]=[master].[dbo].[pivotgood].[ID] as [pg].[ID] AND [master].[dbo].[pivotbad].[Question] as [pb].[Question]=[master].[dbo].[pivotgood].[Question] as [pg].[Question] AND [Expr1006]<>[Expr1007]))    Cost: 0%  |--Bitmap(HASH:([pb].[ID], [pb].[Question]), DEFINE:([Bitmap1008]))            Cost: 0%    |--Compute Scalar(DEFINE:([Expr1006]=isnull([master].[dbo].[pivotbad].[Response] as [pb].[Response],'ISNULL')))            Cost:  6%   |--Parallelism(Repartition Streams, Hash Partitioning, PARTITION COLUMNS:([pb].[ID], [pb].[Question]))            Cost: 12%  |--Table Scan(OBJECT:([master].[dbo].[pivotbad] AS [pb]))            Cost: 0% |--Compute Scalar(DEFINE:([Expr1007]=isnull([master].[dbo].[pivotgood].[Response] as 
[pg].[Response],'ISNULL')))                Cost: 17% |--Parallelism(Repartition Streams, Hash Partitioning, PARTITION COLUMNS:([pg].[ID], [pg].[Question]))                    Cost: 33% |--Table Scan(OBJECT:([master].[dbo].[pivotgood] AS [pg]), WHERE:(PROBE([Bitmap1008],[master].[dbo].[pivotgood].[ID] as [pg].[ID],[master].[dbo].[pivotgood].[Question] as [pg].[Question])))> From: [email protected]> Date: Tue, 16 Nov 2010 08:12:03 +0100> Subject: Re: [PERFORM]> To: [email protected]> CC: [email protected]> > 2010/11/15 Humair Mohammed <[email protected]>:> > I have 2 tables with a 200,000 rows of data 3 character/string columns ID,> > Question and Response. The query below compares the data between the 2> > tables based on ID and Question and if the Response does not match between> > the left table and the right table it identifies the ID's where there is a> > mismatch. Running the query in SQL Server 2008 using the ISNULL function> > take a few milliseconds. Running the same query in Postgresql takes over 70> > seconds. The 2 queries are below:> > SQL Server 2008 R2 Query> > select t1.id from table1 t1 inner join table2 t2 on t1.id = t2.id and> > t1.question = t2.question and isnull(t1.response,'ISNULL') <>> > isnull(t2.response,'ISNULL')> > > Postgres 9.1 Query> > select t1.id from table1 t1 inner join table2 t2 on t1.id = t2.id and> > t1.question = t2.question and coalesce(t1.response,'ISNULL') <>> > coalesce(t2.response,'ISNULL')> > What gives?> > I think, so must problem can be in ugly predicate> coalesce(t1.response,'ISNULL') <>> > coalesce(t2.response,'ISNULL')> > try use a IS DISTINCT OF operator> > ... AND t1.response IS DISTINCT t2.response> > Regards> > Pavel Stehule> > p.s. don't use a coalesce in WHERE clause if it is possible.", "msg_date": "Tue, 16 Nov 2010 21:53:50 -0600", "msg_from": "Humair Mohammed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query Performance SQL Server vs. Postgresql" }, { "msg_contents": "2010/11/17 Humair Mohammed <[email protected]>:\n>\n> There are no indexes on the tables either in SQL Server or Postgresql - I am\n> comparing apples to apples here. 
I ran ANALYZE on the postgresql tables,\n> after that query performance times are still high 42 seconds with COALESCE\n> and 35 seconds with IS DISTINCT FROM.\n> Here is the execution plan from Postgresql for qurey - select pb.id from\n> pivotbad pb inner join pivotgood pg on pb.id = pg.id and pb.question =\n> pg.question and coalesce(pb.response,'MISSING') <>\n> coalesce(pg.response,'MISSING')\n> Execution Time: 42 seconds\n> \"Hash Join  (cost=16212.30..48854.24 rows=93477 width=17)\"\n> \"  Hash Cond: (((pb.id)::text = (pg.id)::text) AND ((pb.question)::text =\n> (pg.question)::text))\"\n> \"  Join Filter: ((COALESCE(pb.response, 'MISSING'::character varying))::text\n> <> (COALESCE(pg.response, 'MISSING'::character varying))::text)\"\n> \"  ->  Seq Scan on pivotbad pb  (cost=0.00..2804.96 rows=93496 width=134)\"\n> \"  ->  Hash  (cost=7537.12..7537.12 rows=251212 width=134)\"\n> \"        ->  Seq Scan on pivotgood pg  (cost=0.00..7537.12 rows=251212\n> width=134)\"\n\nthis is little bit strange - did you ANALYZE and VACUUM?\n\nplease send result of EXPLAIN ANALYZE\n\nPavel\n\n>\n> And here is the execution plan from SQL Server for query - select pb.id from\n> pivotbad pb inner join pivotgood pg on pb.id = pg.id and pb.question =\n> pg.question and isnull(pb.response,'ISNULL')<>  isnull(pg.response,'ISNULL')\n> Execution Time: < 1 second\n> Cost: 1%  |--Parallelism(Gather Streams)\n> Cost: 31%       |--Hash Match(Inner Join, HASH:([pb].[ID],\n> [pb].[Question])=([pg].[ID], [pg].[Question]),\n> RESIDUAL:([master].[dbo].[pivotbad].[ID] as\n> [pb].[ID]=[master].[dbo].[pivotgood].[ID] as [pg].[ID] AND\n> [master].[dbo].[pivotbad].[Question] as\n> [pb].[Question]=[master].[dbo].[pivotgood].[Question] as [pg].[Question] AND\n> [Expr1006]<>[Expr1007]))\n>     Cost: 0%  |--Bitmap(HASH:([pb].[ID], [pb].[Question]),\n> DEFINE:([Bitmap1008]))\n>             Cost: 0%    |--Compute\n> Scalar(DEFINE:([Expr1006]=isnull([master].[dbo].[pivotbad].[Response] as\n> [pb].[Response],'ISNULL')))\n>             Cost:  6%   |--Parallelism(Repartition Streams, Hash\n> Partitioning, PARTITION COLUMNS:([pb].[ID], [pb].[Question]))\n>             Cost: 12%  |--Table Scan(OBJECT:([master].[dbo].[pivotbad] AS\n> [pb]))\n>             Cost: 0% |--Compute\n> Scalar(DEFINE:([Expr1007]=isnull([master].[dbo].[pivotgood].[Response] as\n> [pg].[Response],'ISNULL')))\n>                 Cost: 17% |--Parallelism(Repartition Streams, Hash\n> Partitioning, PARTITION COLUMNS:([pg].[ID], [pg].[Question]))\n>                     Cost: 33% |--Table\n> Scan(OBJECT:([master].[dbo].[pivotgood] AS [pg]),\n> WHERE:(PROBE([Bitmap1008],[master].[dbo].[pivotgood].[ID] as\n> [pg].[ID],[master].[dbo].[pivotgood].[Question] as [pg].[Question])))\n>\n>\n>\n>> From: [email protected]\n>> Date: Tue, 16 Nov 2010 08:12:03 +0100\n>> Subject: Re: [PERFORM]\n>> To: [email protected]\n>> CC: [email protected]\n>>\n>> 2010/11/15 Humair Mohammed <[email protected]>:\n>> > I have 2 tables with a 200,000 rows of data 3 character/string columns\n>> > ID,\n>> > Question and Response. The query below compares the data between the 2\n>> > tables based on ID and Question and if the Response does not match\n>> > between\n>> > the left table and the right table it identifies the ID's where there is\n>> > a\n>> > mismatch. Running the query in SQL Server 2008 using the ISNULL function\n>> > take a few milliseconds. Running the same query in Postgresql takes over\n>> > 70\n>> > seconds. 
The 2 queries are below:\n>> > SQL Server 2008 R2 Query\n>> > select t1.id from table1 t1 inner join table2 t2 on t1.id = t2.id and\n>> > t1.question = t2.question and isnull(t1.response,'ISNULL') <>\n>> > isnull(t2.response,'ISNULL')\n>>\n>> > Postgres 9.1 Query\n>> > select t1.id from table1 t1 inner join table2 t2 on t1.id = t2.id and\n>> > t1.question = t2.question and coalesce(t1.response,'ISNULL') <>\n>> > coalesce(t2.response,'ISNULL')\n>> > What gives?\n>>\n>> I think, so must problem can be in ugly predicate\n>> coalesce(t1.response,'ISNULL') <>\n>> > coalesce(t2.response,'ISNULL')\n>>\n>> try use a IS DISTINCT OF operator\n>>\n>> ... AND t1.response IS DISTINCT t2.response\n>>\n>> Regards\n>>\n>> Pavel Stehule\n>>\n>> p.s. don't use a coalesce in WHERE clause if it is possible.\n>\n", "msg_date": "Wed, 17 Nov 2010 05:47:51 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance SQL Server vs. Postgresql" }, { "msg_contents": "Dne 17.11.2010 05:47, Pavel Stehule napsal(a):\n> 2010/11/17 Humair Mohammed <[email protected]>:\n>>\n>> There are no indexes on the tables either in SQL Server or Postgresql - I am\n>> comparing apples to apples here. I ran ANALYZE on the postgresql tables,\n\nActually no, you're not comparing apples to apples. You've provided so\nlittle information that you may be comparing apples to cucumbers or\nmaybe some strange animals.\n\n1) info about the install\n\nWhat OS is this running on? I guess it's Windows in both cases, right?\n\nHow nuch memory is there? What is the size of shared_buffers? The\ndefault PostgreSQL settings is very very very limited, you have to bump\nit to a much larger value.\n\nWhat are the other inportant settings (e.g. the work_mem)?\n\n2) info about the dataset\n\nHow large are the tables? I don't mean number of rows, I mean number of\nblocks / occupied disk space. Run this query\n\nSELECT relname, relpages, reltuples, pg_size_pretty(pg_table_size(oid))\nFROM pg_class WHERE relname IN ('table1', 'table2');\n\n3) info about the plan\n\nPlease, provide EXPLAIN ANALYZE output, maybe with info about buffers,\ne.g. something like\n\nEXPLAIN (ANALYZE ON, BUFFERS ON) SELECT ...\n\n4) no indexes ?\n\nWhy have you decided not to use any indexes? If you want a decent\nperformance, you will have to use indexes. Obviously there is some\noverhead associated with them, but it's premature optimization unless\nyou prove the opposite.\n\nBTW I'm not a MSSQL expert, but it seems like it's building a bitmap\nindex on the fly, to synchronize parallelized query - PostgreSQL does\nnot support that.\n\nregards\nTomas\n", "msg_date": "Wed, 17 Nov 2010 21:47:31 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance SQL Server vs. Postgresql" }, { "msg_contents": "Yes strange indeed, I did rerun ANALYZE and VACCUM. Took 70 seconds to rerun the query. 
Results from EXPLAIN ANALYZE below:\n\"Hash Join (cost=16212.30..52586.43 rows=92869 width=17) (actual time=43200.223..49502.874 rows=3163 loops=1)\"\" Hash Cond: (((pb.id)::text = (pg.id)::text) AND ((pb.question)::text = (pg.question)::text))\"\" Join Filter: ((COALESCE(pb.response, 'MISSING'::character varying))::text <> (COALESCE(pg.response, 'MISSING'::character varying))::text)\"\" -> Seq Scan on pivotbad pb (cost=0.00..2804.96 rows=93496 width=134) (actual time=0.009..48.200 rows=93496 loops=1)\"\" -> Hash (cost=7537.12..7537.12 rows=251212 width=134) (actual time=42919.453..42919.453 rows=251212 loops=1)\"\" Buckets: 1024 Batches: 64 Memory Usage: 650kB\"\" -> Seq Scan on pivotgood pg (cost=0.00..7537.12 rows=251212 width=134) (actual time=0.119..173.019 rows=251212 loops=1)\"\"Total runtime: 49503.450 ms\"\n\n> From: [email protected]\n> Date: Wed, 17 Nov 2010 05:47:51 +0100\n> Subject: Re: Query Performance SQL Server vs. Postgresql\n> To: [email protected]\n> CC: [email protected]\n> \n> 2010/11/17 Humair Mohammed <[email protected]>:\n> >\n> > There are no indexes on the tables either in SQL Server or Postgresql - I am\n> > comparing apples to apples here. I ran ANALYZE on the postgresql tables,\n> > after that query performance times are still high 42 seconds with COALESCE\n> > and 35 seconds with IS DISTINCT FROM.\n> > Here is the execution plan from Postgresql for qurey - select pb.id from\n> > pivotbad pb inner join pivotgood pg on pb.id = pg.id and pb.question =\n> > pg.question and coalesce(pb.response,'MISSING') <>\n> > coalesce(pg.response,'MISSING')\n> > Execution Time: 42 seconds\n> > \"Hash Join (cost=16212.30..48854.24 rows=93477 width=17)\"\n> > \" Hash Cond: (((pb.id)::text = (pg.id)::text) AND ((pb.question)::text =\n> > (pg.question)::text))\"\n> > \" Join Filter: ((COALESCE(pb.response, 'MISSING'::character varying))::text\n> > <> (COALESCE(pg.response, 'MISSING'::character varying))::text)\"\n> > \" -> Seq Scan on pivotbad pb (cost=0.00..2804.96 rows=93496 width=134)\"\n> > \" -> Hash (cost=7537.12..7537.12 rows=251212 width=134)\"\n> > \" -> Seq Scan on pivotgood pg (cost=0.00..7537.12 rows=251212\n> > width=134)\"\n> \n> this is little bit strange - did you ANALYZE and VACUUM?\n> \n> please send result of EXPLAIN ANALYZE\n> \n> Pavel\n> \n> >\n> > And here is the execution plan from SQL Server for query - select pb.id from\n> > pivotbad pb inner join pivotgood pg on pb.id = pg.id and pb.question =\n> > pg.question and isnull(pb.response,'ISNULL')<> isnull(pg.response,'ISNULL')\n> > Execution Time: < 1 second\n> > Cost: 1% |--Parallelism(Gather Streams)\n> > Cost: 31% |--Hash Match(Inner Join, HASH:([pb].[ID],\n> > [pb].[Question])=([pg].[ID], [pg].[Question]),\n> > RESIDUAL:([master].[dbo].[pivotbad].[ID] as\n> > [pb].[ID]=[master].[dbo].[pivotgood].[ID] as [pg].[ID] AND\n> > [master].[dbo].[pivotbad].[Question] as\n> > [pb].[Question]=[master].[dbo].[pivotgood].[Question] as [pg].[Question] AND\n> > [Expr1006]<>[Expr1007]))\n> > Cost: 0% |--Bitmap(HASH:([pb].[ID], [pb].[Question]),\n> > DEFINE:([Bitmap1008]))\n> > Cost: 0% |--Compute\n> > Scalar(DEFINE:([Expr1006]=isnull([master].[dbo].[pivotbad].[Response] as\n> > [pb].[Response],'ISNULL')))\n> > Cost: 6% |--Parallelism(Repartition Streams, Hash\n> > Partitioning, PARTITION COLUMNS:([pb].[ID], [pb].[Question]))\n> > Cost: 12% |--Table Scan(OBJECT:([master].[dbo].[pivotbad] AS\n> > [pb]))\n> > Cost: 0% |--Compute\n> > Scalar(DEFINE:([Expr1007]=isnull([master].[dbo].[pivotgood].[Response] as\n> > 
[pg].[Response],'ISNULL')))\n> > Cost: 17% |--Parallelism(Repartition Streams, Hash\n> > Partitioning, PARTITION COLUMNS:([pg].[ID], [pg].[Question]))\n> > Cost: 33% |--Table\n> > Scan(OBJECT:([master].[dbo].[pivotgood] AS [pg]),\n> > WHERE:(PROBE([Bitmap1008],[master].[dbo].[pivotgood].[ID] as\n> > [pg].[ID],[master].[dbo].[pivotgood].[Question] as [pg].[Question])))\n> >\n> >\n> >\n> >> From: [email protected]\n> >> Date: Tue, 16 Nov 2010 08:12:03 +0100\n> >> Subject: Re: [PERFORM]\n> >> To: [email protected]\n> >> CC: [email protected]\n> >>\n> >> 2010/11/15 Humair Mohammed <[email protected]>:\n> >> > I have 2 tables with a 200,000 rows of data 3 character/string columns\n> >> > ID,\n> >> > Question and Response. The query below compares the data between the 2\n> >> > tables based on ID and Question and if the Response does not match\n> >> > between\n> >> > the left table and the right table it identifies the ID's where there is\n> >> > a\n> >> > mismatch. Running the query in SQL Server 2008 using the ISNULL function\n> >> > take a few milliseconds. Running the same query in Postgresql takes over\n> >> > 70\n> >> > seconds. The 2 queries are below:\n> >> > SQL Server 2008 R2 Query\n> >> > select t1.id from table1 t1 inner join table2 t2 on t1.id = t2.id and\n> >> > t1.question = t2.question and isnull(t1.response,'ISNULL') <>\n> >> > isnull(t2.response,'ISNULL')\n> >>\n> >> > Postgres 9.1 Query\n> >> > select t1.id from table1 t1 inner join table2 t2 on t1.id = t2.id and\n> >> > t1.question = t2.question and coalesce(t1.response,'ISNULL') <>\n> >> > coalesce(t2.response,'ISNULL')\n> >> > What gives?\n> >>\n> >> I think, so must problem can be in ugly predicate\n> >> coalesce(t1.response,'ISNULL') <>\n> >> > coalesce(t2.response,'ISNULL')\n> >>\n> >> try use a IS DISTINCT OF operator\n> >>\n> >> ... AND t1.response IS DISTINCT t2.response\n> >>\n> >> Regards\n> >>\n> >> Pavel Stehule\n> >>\n> >> p.s. don't use a coalesce in WHERE clause if it is possible.\n> >\n \t\t \t \t\t \n\n\n\n\n\nYes strange indeed, I did rerun ANALYZE and VACCUM. Took 70 seconds to rerun the query. Results from EXPLAIN ANALYZE below:\"Hash Join  (cost=16212.30..52586.43 rows=92869 width=17) (actual time=43200.223..49502.874 rows=3163 loops=1)\"\"  Hash Cond: (((pb.id)::text = (pg.id)::text) AND ((pb.question)::text = (pg.question)::text))\"\"  Join Filter: ((COALESCE(pb.response, 'MISSING'::character varying))::text <> (COALESCE(pg.response, 'MISSING'::character varying))::text)\"\"  ->  Seq Scan on pivotbad pb  (cost=0.00..2804.96 rows=93496 width=134) (actual time=0.009..48.200 rows=93496 loops=1)\"\"  ->  Hash  (cost=7537.12..7537.12 rows=251212 width=134) (actual time=42919.453..42919.453 rows=251212 loops=1)\"\"        Buckets: 1024  Batches: 64  Memory Usage: 650kB\"\"        ->  Seq Scan on pivotgood pg  (cost=0.00..7537.12 rows=251212 width=134) (actual time=0.119..173.019 rows=251212 loops=1)\"\"Total runtime: 49503.450 ms\"> From: [email protected]> Date: Wed, 17 Nov 2010 05:47:51 +0100> Subject: Re: Query Performance SQL Server vs. Postgresql> To: [email protected]> CC: [email protected]> > 2010/11/17 Humair Mohammed <[email protected]>:> >> > There are no indexes on the tables either in SQL Server or Postgresql - I am> > comparing apples to apples here. 
I ran ANALYZE on the postgresql tables,> > after that query performance times are still high 42 seconds with COALESCE> > and 35 seconds with IS DISTINCT FROM.> > Here is the execution plan from Postgresql for qurey - select pb.id from> > pivotbad pb inner join pivotgood pg on pb.id = pg.id and pb.question => > pg.question and coalesce(pb.response,'MISSING') <>> > coalesce(pg.response,'MISSING')> > Execution Time: 42 seconds> > \"Hash Join  (cost=16212.30..48854.24 rows=93477 width=17)\"> > \"  Hash Cond: (((pb.id)::text = (pg.id)::text) AND ((pb.question)::text => > (pg.question)::text))\"> > \"  Join Filter: ((COALESCE(pb.response, 'MISSING'::character varying))::text> > <> (COALESCE(pg.response, 'MISSING'::character varying))::text)\"> > \"  ->  Seq Scan on pivotbad pb  (cost=0.00..2804.96 rows=93496 width=134)\"> > \"  ->  Hash  (cost=7537.12..7537.12 rows=251212 width=134)\"> > \"        ->  Seq Scan on pivotgood pg  (cost=0.00..7537.12 rows=251212> > width=134)\"> > this is little bit strange - did you ANALYZE and VACUUM?> > please send result of EXPLAIN ANALYZE> > Pavel> > >> > And here is the execution plan from SQL Server for query - select pb.id from> > pivotbad pb inner join pivotgood pg on pb.id = pg.id and pb.question => > pg.question and isnull(pb.response,'ISNULL')<>  isnull(pg.response,'ISNULL')> > Execution Time: < 1 second> > Cost: 1%  |--Parallelism(Gather Streams)> > Cost: 31%       |--Hash Match(Inner Join, HASH:([pb].[ID],> > [pb].[Question])=([pg].[ID], [pg].[Question]),> > RESIDUAL:([master].[dbo].[pivotbad].[ID] as> > [pb].[ID]=[master].[dbo].[pivotgood].[ID] as [pg].[ID] AND> > [master].[dbo].[pivotbad].[Question] as> > [pb].[Question]=[master].[dbo].[pivotgood].[Question] as [pg].[Question] AND> > [Expr1006]<>[Expr1007]))> >     Cost: 0%  |--Bitmap(HASH:([pb].[ID], [pb].[Question]),> > DEFINE:([Bitmap1008]))> >             Cost: 0%    |--Compute> > Scalar(DEFINE:([Expr1006]=isnull([master].[dbo].[pivotbad].[Response] as> > [pb].[Response],'ISNULL')))> >             Cost:  6%   |--Parallelism(Repartition Streams, Hash> > Partitioning, PARTITION COLUMNS:([pb].[ID], [pb].[Question]))> >             Cost: 12%  |--Table Scan(OBJECT:([master].[dbo].[pivotbad] AS> > [pb]))> >             Cost: 0% |--Compute> > Scalar(DEFINE:([Expr1007]=isnull([master].[dbo].[pivotgood].[Response] as> > [pg].[Response],'ISNULL')))> >                 Cost: 17% |--Parallelism(Repartition Streams, Hash> > Partitioning, PARTITION COLUMNS:([pg].[ID], [pg].[Question]))> >                     Cost: 33% |--Table> > Scan(OBJECT:([master].[dbo].[pivotgood] AS [pg]),> > WHERE:(PROBE([Bitmap1008],[master].[dbo].[pivotgood].[ID] as> > [pg].[ID],[master].[dbo].[pivotgood].[Question] as [pg].[Question])))> >> >> >> >> From: [email protected]> >> Date: Tue, 16 Nov 2010 08:12:03 +0100> >> Subject: Re: [PERFORM]> >> To: [email protected]> >> CC: [email protected]> >>> >> 2010/11/15 Humair Mohammed <[email protected]>:> >> > I have 2 tables with a 200,000 rows of data 3 character/string columns> >> > ID,> >> > Question and Response. The query below compares the data between the 2> >> > tables based on ID and Question and if the Response does not match> >> > between> >> > the left table and the right table it identifies the ID's where there is> >> > a> >> > mismatch. Running the query in SQL Server 2008 using the ISNULL function> >> > take a few milliseconds. Running the same query in Postgresql takes over> >> > 70> >> > seconds. 
The 2 queries are below:> >> > SQL Server 2008 R2 Query> >> > select t1.id from table1 t1 inner join table2 t2 on t1.id = t2.id and> >> > t1.question = t2.question and isnull(t1.response,'ISNULL') <>> >> > isnull(t2.response,'ISNULL')> >>> >> > Postgres 9.1 Query> >> > select t1.id from table1 t1 inner join table2 t2 on t1.id = t2.id and> >> > t1.question = t2.question and coalesce(t1.response,'ISNULL') <>> >> > coalesce(t2.response,'ISNULL')> >> > What gives?> >>> >> I think, so must problem can be in ugly predicate> >> coalesce(t1.response,'ISNULL') <>> >> > coalesce(t2.response,'ISNULL')> >>> >> try use a IS DISTINCT OF operator> >>> >> ... AND t1.response IS DISTINCT t2.response> >>> >> Regards> >>> >> Pavel Stehule> >>> >> p.s. don't use a coalesce in WHERE clause if it is possible.> >", "msg_date": "Wed, 17 Nov 2010 15:50:25 -0600", "msg_from": "Humair Mohammed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query Performance SQL Server vs. Postgresql" }, { "msg_contents": "I have to concur. Sql is written specifially and only for Windows. It is\noptimized for windows. Postgreal is writeen for just about everything\ntrying to use common code so there isn't much optimization because it has to\nbe optimized based on the OS that is running it. Check out your config and\nsend it to us. That would include the OS and hardware configs for both\nmachines.\n\nOn Wed, Nov 17, 2010 at 3:47 PM, Tomas Vondra <[email protected]> wrote:\n\n> Dne 17.11.2010 05:47, Pavel Stehule napsal(a):\n> > 2010/11/17 Humair Mohammed <[email protected]>:\n> >>\n> >> There are no indexes on the tables either in SQL Server or Postgresql -\n> I am\n> >> comparing apples to apples here. I ran ANALYZE on the postgresql tables,\n>\n> Actually no, you're not comparing apples to apples. You've provided so\n> little information that you may be comparing apples to cucumbers or\n> maybe some strange animals.\n>\n> 1) info about the install\n>\n> What OS is this running on? I guess it's Windows in both cases, right?\n>\n> How nuch memory is there? What is the size of shared_buffers? The\n> default PostgreSQL settings is very very very limited, you have to bump\n> it to a much larger value.\n>\n> What are the other inportant settings (e.g. the work_mem)?\n>\n> 2) info about the dataset\n>\n> How large are the tables? I don't mean number of rows, I mean number of\n> blocks / occupied disk space. Run this query\n>\n> SELECT relname, relpages, reltuples, pg_size_pretty(pg_table_size(oid))\n> FROM pg_class WHERE relname IN ('table1', 'table2');\n>\n> 3) info about the plan\n>\n> Please, provide EXPLAIN ANALYZE output, maybe with info about buffers,\n> e.g. something like\n>\n> EXPLAIN (ANALYZE ON, BUFFERS ON) SELECT ...\n>\n> 4) no indexes ?\n>\n> Why have you decided not to use any indexes? If you want a decent\n> performance, you will have to use indexes. Obviously there is some\n> overhead associated with them, but it's premature optimization unless\n> you prove the opposite.\n>\n> BTW I'm not a MSSQL expert, but it seems like it's building a bitmap\n> index on the fly, to synchronize parallelized query - PostgreSQL does\n> not support that.\n>\n> regards\n> Tomas\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nI have to concur.  Sql is written specifially and only for Windows. It is optimized for windows.  
Postgreal is writeen for just about everything trying to use common code so there isn't much optimization because it has to be optimized based on the OS that is running it.  Check out your config and send it to us.  That would include the OS and hardware configs for both machines.\nOn Wed, Nov 17, 2010 at 3:47 PM, Tomas Vondra <[email protected]> wrote:\nDne 17.11.2010 05:47, Pavel Stehule napsal(a):\n> 2010/11/17 Humair Mohammed <[email protected]>:\n>>\n>> There are no indexes on the tables either in SQL Server or Postgresql - I am\n>> comparing apples to apples here. I ran ANALYZE on the postgresql tables,\n\nActually no, you're not comparing apples to apples. You've provided so\nlittle information that you may be comparing apples to cucumbers or\nmaybe some strange animals.\n\n1) info about the install\n\nWhat OS is this running on? I guess it's Windows in both cases, right?\n\nHow nuch memory is there? What is the size of shared_buffers? The\ndefault PostgreSQL settings is very very very limited, you have to bump\nit to a much larger value.\n\nWhat are the other inportant settings (e.g. the work_mem)?\n\n2) info about the dataset\n\nHow large are the tables? I don't mean number of rows, I mean number of\nblocks / occupied disk space. Run this query\n\nSELECT relname, relpages, reltuples, pg_size_pretty(pg_table_size(oid))\nFROM pg_class WHERE relname IN ('table1', 'table2');\n\n3) info about the plan\n\nPlease, provide EXPLAIN ANALYZE output, maybe with info about buffers,\ne.g. something like\n\nEXPLAIN (ANALYZE ON, BUFFERS ON) SELECT ...\n\n4) no indexes ?\n\nWhy have you decided not to use any indexes? If you want a decent\nperformance, you will have to use indexes. Obviously there is some\noverhead associated with them, but it's premature optimization unless\nyou prove the opposite.\n\nBTW I'm not a MSSQL expert, but it seems like it's building a bitmap\nindex on the fly, to synchronize parallelized query - PostgreSQL does\nnot support that.\n\nregards\nTomas\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 17 Nov 2010 16:51:55 -0500", "msg_from": "Rich <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance SQL Server vs. Postgresql" }, { "msg_contents": "Humair Mohammed <[email protected]> writes:\n> Yes strange indeed, I did rerun ANALYZE and VACCUM. Took 70 seconds to rerun the query. Results from EXPLAIN ANALYZE below:\n> \"Hash Join (cost=16212.30..52586.43 rows=92869 width=17) (actual time=43200.223..49502.874 rows=3163 loops=1)\"\" Hash Cond: (((pb.id)::text = (pg.id)::text) AND ((pb.question)::text = (pg.question)::text))\"\" Join Filter: ((COALESCE(pb.response, 'MISSING'::character varying))::text <> (COALESCE(pg.response, 'MISSING'::character varying))::text)\"\" -> Seq Scan on pivotbad pb (cost=0.00..2804.96 rows=93496 width=134) (actual time=0.009..48.200 rows=93496 loops=1)\"\" -> Hash (cost=7537.12..7537.12 rows=251212 width=134) (actual time=42919.453..42919.453 rows=251212 loops=1)\"\" Buckets: 1024 Batches: 64 Memory Usage: 650kB\"\" -> Seq Scan on pivotgood pg (cost=0.00..7537.12 rows=251212 width=134) (actual time=0.119..173.019 rows=251212 loops=1)\"\"Total runtime: 49503.450 ms\"\n\nI have no idea how much memory SQL Server thinks it can use, but\nPostgres is limiting itself to work_mem which you've apparently left at\nthe default 1MB. 
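A quick way to confirm what the backend is actually using (generic psql,
shown only as an illustration, not output from this thread):

    SHOW work_mem;
    -- a stock install reports 1MB, which lines up with the
    -- Batches: 64 spilling visible in the Hash node above
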
You might get a fairer comparison by bumping that up\nsome --- try 32MB or so. You want it high enough so that the Hash\noutput doesn't say there are multiple batches.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Nov 2010 18:21:18 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance SQL Server vs. Postgresql " }, { "msg_contents": "Hello,\n\nthere should be a problem in a statistic, they are out of reality.\nPlease, try to use a DISTINCT OF operator now - maybe a statistic will\nbe better. Next - try to increase a work_mem. Hash join is\nuntypically slow in your comp.\n\nRegards\n\nPavel Stehule\n\n2010/11/17 Humair Mohammed <[email protected]>:\n> Yes strange indeed, I did rerun ANALYZE and VACCUM. Took 70 seconds to rerun\n> the query. Results from EXPLAIN ANALYZE below:\n> \"Hash Join  (cost=16212.30..52586.43 rows=92869 width=17) (actual\n> time=43200.223..49502.874 rows=3163 loops=1)\"\n> \"  Hash Cond: (((pb.id)::text = (pg.id)::text) AND ((pb.question)::text =\n> (pg.question)::text))\"\n> \"  Join Filter: ((COALESCE(pb.response, 'MISSING'::character varying))::text\n> <> (COALESCE(pg.response, 'MISSING'::character varying))::text)\"\n> \"  ->  Seq Scan on pivotbad pb  (cost=0.00..2804.96 rows=93496 width=134)\n> (actual time=0.009..48.200 rows=93496 loops=1)\"\n> \"  ->  Hash  (cost=7537.12..7537.12 rows=251212 width=134) (actual\n> time=42919.453..42919.453 rows=251212 loops=1)\"\n> \"        Buckets: 1024  Batches: 64  Memory Usage: 650kB\"\n> \"        ->  Seq Scan on pivotgood pg  (cost=0.00..7537.12 rows=251212\n> width=134) (actual time=0.119..173.019 rows=251212 loops=1)\"\n> \"Total runtime: 49503.450 ms\"\n>\n>> From: [email protected]\n>> Date: Wed, 17 Nov 2010 05:47:51 +0100\n>> Subject: Re: Query Performance SQL Server vs. Postgresql\n>> To: [email protected]\n>> CC: [email protected]\n>>\n>> 2010/11/17 Humair Mohammed <[email protected]>:\n>> >\n>> > There are no indexes on the tables either in SQL Server or Postgresql -\n>> > I am\n>> > comparing apples to apples here. 
I ran ANALYZE on the postgresql tables,\n>> > after that query performance times are still high 42 seconds with\n>> > COALESCE\n>> > and 35 seconds with IS DISTINCT FROM.\n>> > Here is the execution plan from Postgresql for qurey - select pb.id from\n>> > pivotbad pb inner join pivotgood pg on pb.id = pg.id and pb.question =\n>> > pg.question and coalesce(pb.response,'MISSING') <>\n>> > coalesce(pg.response,'MISSING')\n>> > Execution Time: 42 seconds\n>> > \"Hash Join  (cost=16212.30..48854.24 rows=93477 width=17)\"\n>> > \"  Hash Cond: (((pb.id)::text = (pg.id)::text) AND ((pb.question)::text\n>> > =\n>> > (pg.question)::text))\"\n>> > \"  Join Filter: ((COALESCE(pb.response, 'MISSING'::character\n>> > varying))::text\n>> > <> (COALESCE(pg.response, 'MISSING'::character varying))::text)\"\n>> > \"  ->  Seq Scan on pivotbad pb  (cost=0.00..2804.96 rows=93496\n>> > width=134)\"\n>> > \"  ->  Hash  (cost=7537.12..7537.12 rows=251212 width=134)\"\n>> > \"        ->  Seq Scan on pivotgood pg  (cost=0.00..7537.12 rows=251212\n>> > width=134)\"\n>>\n>> this is little bit strange - did you ANALYZE and VACUUM?\n>>\n>> please send result of EXPLAIN ANALYZE\n>>\n>> Pavel\n>>\n>> >\n>> > And here is the execution plan from SQL Server for query - select pb.id\n>> > from\n>> > pivotbad pb inner join pivotgood pg on pb.id = pg.id and pb.question =\n>> > pg.question and isnull(pb.response,'ISNULL')<>\n>> >  isnull(pg.response,'ISNULL')\n>> > Execution Time: < 1 second\n>> > Cost: 1%  |--Parallelism(Gather Streams)\n>> > Cost: 31%       |--Hash Match(Inner Join, HASH:([pb].[ID],\n>> > [pb].[Question])=([pg].[ID], [pg].[Question]),\n>> > RESIDUAL:([master].[dbo].[pivotbad].[ID] as\n>> > [pb].[ID]=[master].[dbo].[pivotgood].[ID] as [pg].[ID] AND\n>> > [master].[dbo].[pivotbad].[Question] as\n>> > [pb].[Question]=[master].[dbo].[pivotgood].[Question] as [pg].[Question]\n>> > AND\n>> > [Expr1006]<>[Expr1007]))\n>> >     Cost: 0%  |--Bitmap(HASH:([pb].[ID], [pb].[Question]),\n>> > DEFINE:([Bitmap1008]))\n>> >             Cost: 0%    |--Compute\n>> > Scalar(DEFINE:([Expr1006]=isnull([master].[dbo].[pivotbad].[Response] as\n>> > [pb].[Response],'ISNULL')))\n>> >             Cost:  6%   |--Parallelism(Repartition Streams, Hash\n>> > Partitioning, PARTITION COLUMNS:([pb].[ID], [pb].[Question]))\n>> >             Cost: 12%  |--Table Scan(OBJECT:([master].[dbo].[pivotbad]\n>> > AS\n>> > [pb]))\n>> >             Cost: 0% |--Compute\n>> > Scalar(DEFINE:([Expr1007]=isnull([master].[dbo].[pivotgood].[Response]\n>> > as\n>> > [pg].[Response],'ISNULL')))\n>> >                 Cost: 17% |--Parallelism(Repartition Streams, Hash\n>> > Partitioning, PARTITION COLUMNS:([pg].[ID], [pg].[Question]))\n>> >                     Cost: 33% |--Table\n>> > Scan(OBJECT:([master].[dbo].[pivotgood] AS [pg]),\n>> > WHERE:(PROBE([Bitmap1008],[master].[dbo].[pivotgood].[ID] as\n>> > [pg].[ID],[master].[dbo].[pivotgood].[Question] as [pg].[Question])))\n>> >\n>> >\n>> >\n>> >> From: [email protected]\n>> >> Date: Tue, 16 Nov 2010 08:12:03 +0100\n>> >> Subject: Re: [PERFORM]\n>> >> To: [email protected]\n>> >> CC: [email protected]\n>> >>\n>> >> 2010/11/15 Humair Mohammed <[email protected]>:\n>> >> > I have 2 tables with a 200,000 rows of data 3 character/string\n>> >> > columns\n>> >> > ID,\n>> >> > Question and Response. 
The query below compares the data between the\n>> >> > 2\n>> >> > tables based on ID and Question and if the Response does not match\n>> >> > between\n>> >> > the left table and the right table it identifies the ID's where there\n>> >> > is\n>> >> > a\n>> >> > mismatch. Running the query in SQL Server 2008 using the ISNULL\n>> >> > function\n>> >> > take a few milliseconds. Running the same query in Postgresql takes\n>> >> > over\n>> >> > 70\n>> >> > seconds. The 2 queries are below:\n>> >> > SQL Server 2008 R2 Query\n>> >> > select t1.id from table1 t1 inner join table2 t2 on t1.id = t2.id and\n>> >> > t1.question = t2.question and isnull(t1.response,'ISNULL') <>\n>> >> > isnull(t2.response,'ISNULL')\n>> >>\n>> >> > Postgres 9.1 Query\n>> >> > select t1.id from table1 t1 inner join table2 t2 on t1.id = t2.id and\n>> >> > t1.question = t2.question and coalesce(t1.response,'ISNULL') <>\n>> >> > coalesce(t2.response,'ISNULL')\n>> >> > What gives?\n>> >>\n>> >> I think, so must problem can be in ugly predicate\n>> >> coalesce(t1.response,'ISNULL') <>\n>> >> > coalesce(t2.response,'ISNULL')\n>> >>\n>> >> try use a IS DISTINCT OF operator\n>> >>\n>> >> ... AND t1.response IS DISTINCT t2.response\n>> >>\n>> >> Regards\n>> >>\n>> >> Pavel Stehule\n>> >>\n>> >> p.s. don't use a coalesce in WHERE clause if it is possible.\n>> >\n>\n", "msg_date": "Thu, 18 Nov 2010 07:14:24 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance SQL Server vs. Postgresql" }, { "msg_contents": "Tom Lane <[email protected]> wrote:\n \n> I have no idea how much memory SQL Server thinks it can use\n \nHmmm... That triggered an old memory -- when we were running SQL\nServer on Windows there was some registry setting which we tweaked\nto prevent the OS from trying to cache disk I/O. (Sorry I don't\nremember the name of it.) That helped SQL Server perform better,\nbut would cripple PostgreSQL -- it counts on OS caching.\n \nOf course, once we found that PostgreSQL was 70% faster on identical\nhardware with identical load, and switching the OS to Linux brought\nit to twice as fast, I haven't had to worry about SQL Server or\nWindows configurations. ;-) Don't panic if PostgreSQL seems slower\nat first, it's probably a configuration or maintenance schedule\nissue that can be sorted out.\n \nBesides the specific advice Tom gave you, you might want to browse\nthis page for configuration in general:\n \nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n \nAnd if you continue to experience performance issues, this page can\nhelp you get to a resolution quickly:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \nWe've been very happy with the switch to PostgreSQL. We've had\nbetter performance, better reliability, less staff time needed to\nbabysit backups, and we've been gradually using more of the advance\nfeatures not available in other products. It's well worth the\neffort to get over those initial bumps resulting from product\ndifferences.\n \n-Kevin\n", "msg_date": "Thu, 18 Nov 2010 15:00:07 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance SQL Server vs. Postgresql" }, { "msg_contents": "I am running 64-bit Windows 7 Enterprise with 8G RAM and a Dual Core 2.67 Ghz Intel CPU. Both the SQL 2008 R2 and Postgresql are installed on the same machine. The DISTINCT FROM instead of the COALESCE does not help much. 
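For reference, the IS DISTINCT FROM form of the query would look roughly
like this (note the operator is spelled IS DISTINCT FROM, not the IS
DISTINCT OF wording that appears in the quoted suggestion below):

    select pb.id
    from pivotbad pb
    inner join pivotgood pg
      on pb.id = pg.id
     and pb.question = pg.question
     and pb.response IS DISTINCT FROM pg.response;
    -- IS DISTINCT FROM treats two NULLs as equal, so the COALESCE
    -- sentinel value is not needed
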
I ran 2 further tests with work_mem modifications (please note memory usage is quite low 650kb, so I am not sure if the work_mem is a factor):\nFirst, I modified the work_mem setting to 1GB (reloaded config) from the default 1MB and I see a response time of 33 seconds. Results below from EXPLAIN ANALYZE:\n\"Hash Join (cost=16212.30..52586.43 rows=92869 width=17) (actual time=26742.343..33274.317 rows=3163 loops=1)\"\" Hash Cond: (((pb.id)::text = (pg.id)::text) AND ((pb.question)::text = (pg.question)::text))\"\" Join Filter: ((COALESCE(pb.response, 'MISSING'::character varying))::text <> (COALESCE(pg.response, 'MISSING'::character varying))::text)\"\" -> Seq Scan on pivotbad pb (cost=0.00..2804.96 rows=93496 width=134) (actual time=0.055..40.710 rows=93496 loops=1)\"\" -> Hash (cost=7537.12..7537.12 rows=251212 width=134) (actual time=25603.460..25603.460 rows=251212 loops=1)\"\" Buckets: 1024 Batches: 64 Memory Usage: 650kB\"\" -> Seq Scan on pivotgood pg (cost=0.00..7537.12 rows=251212 width=134) (actual time=0.050..120.269 rows=251212 loops=1)\"\"Total runtime: 33275.028 ms\"\n\nSecond, I modified the work_mem setting to 2GB (reloaded config) and I see a response time of 38 seconds. Results below from EXPLAIN ANALYZE:\n\"Hash Join (cost=16212.30..52586.43 rows=92869 width=17) (actual time=26574.459..38406.422 rows=3163 loops=1)\"\" Hash Cond: (((pb.id)::text = (pg.id)::text) AND ((pb.question)::text = (pg.question)::text))\"\" Join Filter: ((COALESCE(pb.response, 'MISSING'::character varying))::text <> (COALESCE(pg.response, 'MISSING'::character varying))::text)\"\" -> Seq Scan on pivotbad pb (cost=0.00..2804.96 rows=93496 width=134) (actual time=0.067..37.938 rows=93496 loops=1)\"\" -> Hash (cost=7537.12..7537.12 rows=251212 width=134) (actual time=26426.127..26426.127 rows=251212 loops=1)\"\" Buckets: 1024 Batches: 64 Memory Usage: 650kB\"\" -> Seq Scan on pivotgood pg (cost=0.00..7537.12 rows=251212 width=134) (actual time=0.038..115.319 rows=251212 loops=1)\"\"Total runtime: 38406.927 ms\"\n\nBy no means I am trying to compare the 2 products. When I noticed the slow behavior of COALESCE I tried it on SQL Server. And since they are running on the same machine my comment regarding apples to apples. It is possible that this is not an apples to apples comparison other than the fact that it is running on the same machine.\n\n> From: [email protected]\n> Date: Thu, 18 Nov 2010 07:14:24 +0100\n> Subject: Re: Query Performance SQL Server vs. Postgresql\n> To: [email protected]\n> CC: [email protected]\n> \n> Hello,\n> \n> there should be a problem in a statistic, they are out of reality.\n> Please, try to use a DISTINCT OF operator now - maybe a statistic will\n> be better. Next - try to increase a work_mem. Hash join is\n> untypically slow in your comp.\n> \n> Regards\n> \n> Pavel Stehule\n> \n> 2010/11/17 Humair Mohammed <[email protected]>:\n> > Yes strange indeed, I did rerun ANALYZE and VACCUM. Took 70 seconds to rerun\n> > the query. 
Results from EXPLAIN ANALYZE below:\n> > \"Hash Join (cost=16212.30..52586.43 rows=92869 width=17) (actual\n> > time=43200.223..49502.874 rows=3163 loops=1)\"\n> > \" Hash Cond: (((pb.id)::text = (pg.id)::text) AND ((pb.question)::text =\n> > (pg.question)::text))\"\n> > \" Join Filter: ((COALESCE(pb.response, 'MISSING'::character varying))::text\n> > <> (COALESCE(pg.response, 'MISSING'::character varying))::text)\"\n> > \" -> Seq Scan on pivotbad pb (cost=0.00..2804.96 rows=93496 width=134)\n> > (actual time=0.009..48.200 rows=93496 loops=1)\"\n> > \" -> Hash (cost=7537.12..7537.12 rows=251212 width=134) (actual\n> > time=42919.453..42919.453 rows=251212 loops=1)\"\n> > \" Buckets: 1024 Batches: 64 Memory Usage: 650kB\"\n> > \" -> Seq Scan on pivotgood pg (cost=0.00..7537.12 rows=251212\n> > width=134) (actual time=0.119..173.019 rows=251212 loops=1)\"\n> > \"Total runtime: 49503.450 ms\"\n> >\n> >> From: [email protected]\n> >> Date: Wed, 17 Nov 2010 05:47:51 +0100\n> >> Subject: Re: Query Performance SQL Server vs. Postgresql\n> >> To: [email protected]\n> >> CC: [email protected]\n> >>\n> >> 2010/11/17 Humair Mohammed <[email protected]>:\n> >> >\n> >> > There are no indexes on the tables either in SQL Server or Postgresql -\n> >> > I am\n> >> > comparing apples to apples here. I ran ANALYZE on the postgresql tables,\n> >> > after that query performance times are still high 42 seconds with\n> >> > COALESCE\n> >> > and 35 seconds with IS DISTINCT FROM.\n> >> > Here is the execution plan from Postgresql for qurey - select pb.id from\n> >> > pivotbad pb inner join pivotgood pg on pb.id = pg.id and pb.question =\n> >> > pg.question and coalesce(pb.response,'MISSING') <>\n> >> > coalesce(pg.response,'MISSING')\n> >> > Execution Time: 42 seconds\n> >> > \"Hash Join (cost=16212.30..48854.24 rows=93477 width=17)\"\n> >> > \" Hash Cond: (((pb.id)::text = (pg.id)::text) AND ((pb.question)::text\n> >> > =\n> >> > (pg.question)::text))\"\n> >> > \" Join Filter: ((COALESCE(pb.response, 'MISSING'::character\n> >> > varying))::text\n> >> > <> (COALESCE(pg.response, 'MISSING'::character varying))::text)\"\n> >> > \" -> Seq Scan on pivotbad pb (cost=0.00..2804.96 rows=93496\n> >> > width=134)\"\n> >> > \" -> Hash (cost=7537.12..7537.12 rows=251212 width=134)\"\n> >> > \" -> Seq Scan on pivotgood pg (cost=0.00..7537.12 rows=251212\n> >> > width=134)\"\n> >>\n> >> this is little bit strange - did you ANALYZE and VACUUM?\n> >>\n> >> please send result of EXPLAIN ANALYZE\n> >>\n> >> Pavel\n> >>\n> >> >\n> >> > And here is the execution plan from SQL Server for query - select pb.id\n> >> > from\n> >> > pivotbad pb inner join pivotgood pg on pb.id = pg.id and pb.question =\n> >> > pg.question and isnull(pb.response,'ISNULL')<>\n> >> > isnull(pg.response,'ISNULL')\n> >> > Execution Time: < 1 second\n> >> > Cost: 1% |--Parallelism(Gather Streams)\n> >> > Cost: 31% |--Hash Match(Inner Join, HASH:([pb].[ID],\n> >> > [pb].[Question])=([pg].[ID], [pg].[Question]),\n> >> > RESIDUAL:([master].[dbo].[pivotbad].[ID] as\n> >> > [pb].[ID]=[master].[dbo].[pivotgood].[ID] as [pg].[ID] AND\n> >> > [master].[dbo].[pivotbad].[Question] as\n> >> > [pb].[Question]=[master].[dbo].[pivotgood].[Question] as [pg].[Question]\n> >> > AND\n> >> > [Expr1006]<>[Expr1007]))\n> >> > Cost: 0% |--Bitmap(HASH:([pb].[ID], [pb].[Question]),\n> >> > DEFINE:([Bitmap1008]))\n> >> > Cost: 0% |--Compute\n> >> > Scalar(DEFINE:([Expr1006]=isnull([master].[dbo].[pivotbad].[Response] as\n> >> > [pb].[Response],'ISNULL')))\n> >> > Cost: 6% 
|--Parallelism(Repartition Streams, Hash\n> >> > Partitioning, PARTITION COLUMNS:([pb].[ID], [pb].[Question]))\n> >> > Cost: 12% |--Table Scan(OBJECT:([master].[dbo].[pivotbad]\n> >> > AS\n> >> > [pb]))\n> >> > Cost: 0% |--Compute\n> >> > Scalar(DEFINE:([Expr1007]=isnull([master].[dbo].[pivotgood].[Response]\n> >> > as\n> >> > [pg].[Response],'ISNULL')))\n> >> > Cost: 17% |--Parallelism(Repartition Streams, Hash\n> >> > Partitioning, PARTITION COLUMNS:([pg].[ID], [pg].[Question]))\n> >> > Cost: 33% |--Table\n> >> > Scan(OBJECT:([master].[dbo].[pivotgood] AS [pg]),\n> >> > WHERE:(PROBE([Bitmap1008],[master].[dbo].[pivotgood].[ID] as\n> >> > [pg].[ID],[master].[dbo].[pivotgood].[Question] as [pg].[Question])))\n> >> >\n> >> >\n> >> >\n> >> >> From: [email protected]\n> >> >> Date: Tue, 16 Nov 2010 08:12:03 +0100\n> >> >> Subject: Re: [PERFORM]\n> >> >> To: [email protected]\n> >> >> CC: [email protected]\n> >> >>\n> >> >> 2010/11/15 Humair Mohammed <[email protected]>:\n> >> >> > I have 2 tables with a 200,000 rows of data 3 character/string\n> >> >> > columns\n> >> >> > ID,\n> >> >> > Question and Response. The query below compares the data between the\n> >> >> > 2\n> >> >> > tables based on ID and Question and if the Response does not match\n> >> >> > between\n> >> >> > the left table and the right table it identifies the ID's where there\n> >> >> > is\n> >> >> > a\n> >> >> > mismatch. Running the query in SQL Server 2008 using the ISNULL\n> >> >> > function\n> >> >> > take a few milliseconds. Running the same query in Postgresql takes\n> >> >> > over\n> >> >> > 70\n> >> >> > seconds. The 2 queries are below:\n> >> >> > SQL Server 2008 R2 Query\n> >> >> > select t1.id from table1 t1 inner join table2 t2 on t1.id = t2.id and\n> >> >> > t1.question = t2.question and isnull(t1.response,'ISNULL') <>\n> >> >> > isnull(t2.response,'ISNULL')\n> >> >>\n> >> >> > Postgres 9.1 Query\n> >> >> > select t1.id from table1 t1 inner join table2 t2 on t1.id = t2.id and\n> >> >> > t1.question = t2.question and coalesce(t1.response,'ISNULL') <>\n> >> >> > coalesce(t2.response,'ISNULL')\n> >> >> > What gives?\n> >> >>\n> >> >> I think, so must problem can be in ugly predicate\n> >> >> coalesce(t1.response,'ISNULL') <>\n> >> >> > coalesce(t2.response,'ISNULL')\n> >> >>\n> >> >> try use a IS DISTINCT OF operator\n> >> >>\n> >> >> ... AND t1.response IS DISTINCT t2.response\n> >> >>\n> >> >> Regards\n> >> >>\n> >> >> Pavel Stehule\n> >> >>\n> >> >> p.s. don't use a coalesce in WHERE clause if it is possible.\n> >> >\n> >\n \t\t \t \t\t \n\n\n\n\n\nI am running 64-bit Windows 7 Enterprise with 8G RAM and a Dual Core 2.67 Ghz Intel CPU. Both the SQL 2008 R2 and Postgresql are installed on the same machine. The DISTINCT FROM instead of the COALESCE does not help much. I ran 2 further tests with work_mem modifications (please note memory usage is quite low 650kb, so I am not sure if the work_mem is a factor):First, I modified the work_mem setting to 1GB (reloaded config) from the default 1MB and I see a response time of 33 seconds. 
Results below from EXPLAIN ANALYZE:\"Hash Join  (cost=16212.30..52586.43 rows=92869 width=17) (actual time=26742.343..33274.317 rows=3163 loops=1)\"\"  Hash Cond: (((pb.id)::text = (pg.id)::text) AND ((pb.question)::text = (pg.question)::text))\"\"  Join Filter: ((COALESCE(pb.response, 'MISSING'::character varying))::text <> (COALESCE(pg.response, 'MISSING'::character varying))::text)\"\"  ->  Seq Scan on pivotbad pb  (cost=0.00..2804.96 rows=93496 width=134) (actual time=0.055..40.710 rows=93496 loops=1)\"\"  ->  Hash  (cost=7537.12..7537.12 rows=251212 width=134) (actual time=25603.460..25603.460 rows=251212 loops=1)\"\"        Buckets: 1024  Batches: 64  Memory Usage: 650kB\"\"        ->  Seq Scan on pivotgood pg  (cost=0.00..7537.12 rows=251212 width=134) (actual time=0.050..120.269 rows=251212 loops=1)\"\"Total runtime: 33275.028 ms\"Second, I modified the work_mem setting to 2GB (reloaded config) and I see a response time of 38 seconds. Results below from EXPLAIN ANALYZE:\"Hash Join  (cost=16212.30..52586.43 rows=92869 width=17) (actual time=26574.459..38406.422 rows=3163 loops=1)\"\"  Hash Cond: (((pb.id)::text = (pg.id)::text) AND ((pb.question)::text = (pg.question)::text))\"\"  Join Filter: ((COALESCE(pb.response, 'MISSING'::character varying))::text <> (COALESCE(pg.response, 'MISSING'::character varying))::text)\"\"  ->  Seq Scan on pivotbad pb  (cost=0.00..2804.96 rows=93496 width=134) (actual time=0.067..37.938 rows=93496 loops=1)\"\"  ->  Hash  (cost=7537.12..7537.12 rows=251212 width=134) (actual time=26426.127..26426.127 rows=251212 loops=1)\"\"        Buckets: 1024  Batches: 64  Memory Usage: 650kB\"\"        ->  Seq Scan on pivotgood pg  (cost=0.00..7537.12 rows=251212 width=134) (actual time=0.038..115.319 rows=251212 loops=1)\"\"Total runtime: 38406.927 ms\"By no means I am trying to compare the 2 products. When I noticed the slow behavior of COALESCE I tried it on SQL Server. And since they are running on the same machine my comment regarding apples to apples. It is possible that this is not an apples to apples comparison other than the fact that it is running on the same machine.> From: [email protected]> Date: Thu, 18 Nov 2010 07:14:24 +0100> Subject: Re: Query Performance SQL Server vs. Postgresql> To: [email protected]> CC: [email protected]> > Hello,> > there should be a problem in a statistic, they are out of reality.> Please, try to use a DISTINCT OF operator now - maybe a statistic will> be better. Next - try to increase a work_mem. Hash join is> untypically slow in your comp.> > Regards> > Pavel Stehule> > 2010/11/17 Humair Mohammed <[email protected]>:> > Yes strange indeed, I did rerun ANALYZE and VACCUM. Took 70 seconds to rerun> > the query. 
Results from EXPLAIN ANALYZE below:> > \"Hash Join  (cost=16212.30..52586.43 rows=92869 width=17) (actual> > time=43200.223..49502.874 rows=3163 loops=1)\"> > \"  Hash Cond: (((pb.id)::text = (pg.id)::text) AND ((pb.question)::text => > (pg.question)::text))\"> > \"  Join Filter: ((COALESCE(pb.response, 'MISSING'::character varying))::text> > <> (COALESCE(pg.response, 'MISSING'::character varying))::text)\"> > \"  ->  Seq Scan on pivotbad pb  (cost=0.00..2804.96 rows=93496 width=134)> > (actual time=0.009..48.200 rows=93496 loops=1)\"> > \"  ->  Hash  (cost=7537.12..7537.12 rows=251212 width=134) (actual> > time=42919.453..42919.453 rows=251212 loops=1)\"> > \"        Buckets: 1024  Batches: 64  Memory Usage: 650kB\"> > \"        ->  Seq Scan on pivotgood pg  (cost=0.00..7537.12 rows=251212> > width=134) (actual time=0.119..173.019 rows=251212 loops=1)\"> > \"Total runtime: 49503.450 ms\"> >> >> From: [email protected]> >> Date: Wed, 17 Nov 2010 05:47:51 +0100> >> Subject: Re: Query Performance SQL Server vs. Postgresql> >> To: [email protected]> >> CC: [email protected]> >>> >> 2010/11/17 Humair Mohammed <[email protected]>:> >> >> >> > There are no indexes on the tables either in SQL Server or Postgresql -> >> > I am> >> > comparing apples to apples here. I ran ANALYZE on the postgresql tables,> >> > after that query performance times are still high 42 seconds with> >> > COALESCE> >> > and 35 seconds with IS DISTINCT FROM.> >> > Here is the execution plan from Postgresql for qurey - select pb.id from> >> > pivotbad pb inner join pivotgood pg on pb.id = pg.id and pb.question => >> > pg.question and coalesce(pb.response,'MISSING') <>> >> > coalesce(pg.response,'MISSING')> >> > Execution Time: 42 seconds> >> > \"Hash Join  (cost=16212.30..48854.24 rows=93477 width=17)\"> >> > \"  Hash Cond: (((pb.id)::text = (pg.id)::text) AND ((pb.question)::text> >> > => >> > (pg.question)::text))\"> >> > \"  Join Filter: ((COALESCE(pb.response, 'MISSING'::character> >> > varying))::text> >> > <> (COALESCE(pg.response, 'MISSING'::character varying))::text)\"> >> > \"  ->  Seq Scan on pivotbad pb  (cost=0.00..2804.96 rows=93496> >> > width=134)\"> >> > \"  ->  Hash  (cost=7537.12..7537.12 rows=251212 width=134)\"> >> > \"        ->  Seq Scan on pivotgood pg  (cost=0.00..7537.12 rows=251212> >> > width=134)\"> >>> >> this is little bit strange - did you ANALYZE and VACUUM?> >>> >> please send result of EXPLAIN ANALYZE> >>> >> Pavel> >>> >> >> >> > And here is the execution plan from SQL Server for query - select pb.id> >> > from> >> > pivotbad pb inner join pivotgood pg on pb.id = pg.id and pb.question => >> > pg.question and isnull(pb.response,'ISNULL')<>> >> >  isnull(pg.response,'ISNULL')> >> > Execution Time: < 1 second> >> > Cost: 1%  |--Parallelism(Gather Streams)> >> > Cost: 31%       |--Hash Match(Inner Join, HASH:([pb].[ID],> >> > [pb].[Question])=([pg].[ID], [pg].[Question]),> >> > RESIDUAL:([master].[dbo].[pivotbad].[ID] as> >> > [pb].[ID]=[master].[dbo].[pivotgood].[ID] as [pg].[ID] AND> >> > [master].[dbo].[pivotbad].[Question] as> >> > [pb].[Question]=[master].[dbo].[pivotgood].[Question] as [pg].[Question]> >> > AND> >> > [Expr1006]<>[Expr1007]))> >> >     Cost: 0%  |--Bitmap(HASH:([pb].[ID], [pb].[Question]),> >> > DEFINE:([Bitmap1008]))> >> >             Cost: 0%    |--Compute> >> > Scalar(DEFINE:([Expr1006]=isnull([master].[dbo].[pivotbad].[Response] as> >> > [pb].[Response],'ISNULL')))> >> >             Cost:  6%   |--Parallelism(Repartition Streams, Hash> >> > Partitioning, PARTITION 
COLUMNS:([pb].[ID], [pb].[Question]))> >> >             Cost: 12%  |--Table Scan(OBJECT:([master].[dbo].[pivotbad]> >> > AS> >> > [pb]))> >> >             Cost: 0% |--Compute> >> > Scalar(DEFINE:([Expr1007]=isnull([master].[dbo].[pivotgood].[Response]> >> > as> >> > [pg].[Response],'ISNULL')))> >> >                 Cost: 17% |--Parallelism(Repartition Streams, Hash> >> > Partitioning, PARTITION COLUMNS:([pg].[ID], [pg].[Question]))> >> >                     Cost: 33% |--Table> >> > Scan(OBJECT:([master].[dbo].[pivotgood] AS [pg]),> >> > WHERE:(PROBE([Bitmap1008],[master].[dbo].[pivotgood].[ID] as> >> > [pg].[ID],[master].[dbo].[pivotgood].[Question] as [pg].[Question])))> >> >> >> >> >> >> >> >> From: [email protected]> >> >> Date: Tue, 16 Nov 2010 08:12:03 +0100> >> >> Subject: Re: [PERFORM]> >> >> To: [email protected]> >> >> CC: [email protected]> >> >>> >> >> 2010/11/15 Humair Mohammed <[email protected]>:> >> >> > I have 2 tables with a 200,000 rows of data 3 character/string> >> >> > columns> >> >> > ID,> >> >> > Question and Response. The query below compares the data between the> >> >> > 2> >> >> > tables based on ID and Question and if the Response does not match> >> >> > between> >> >> > the left table and the right table it identifies the ID's where there> >> >> > is> >> >> > a> >> >> > mismatch. Running the query in SQL Server 2008 using the ISNULL> >> >> > function> >> >> > take a few milliseconds. Running the same query in Postgresql takes> >> >> > over> >> >> > 70> >> >> > seconds. The 2 queries are below:> >> >> > SQL Server 2008 R2 Query> >> >> > select t1.id from table1 t1 inner join table2 t2 on t1.id = t2.id and> >> >> > t1.question = t2.question and isnull(t1.response,'ISNULL') <>> >> >> > isnull(t2.response,'ISNULL')> >> >>> >> >> > Postgres 9.1 Query> >> >> > select t1.id from table1 t1 inner join table2 t2 on t1.id = t2.id and> >> >> > t1.question = t2.question and coalesce(t1.response,'ISNULL') <>> >> >> > coalesce(t2.response,'ISNULL')> >> >> > What gives?> >> >>> >> >> I think, so must problem can be in ugly predicate> >> >> coalesce(t1.response,'ISNULL') <>> >> >> > coalesce(t2.response,'ISNULL')> >> >>> >> >> try use a IS DISTINCT OF operator> >> >>> >> >> ... AND t1.response IS DISTINCT t2.response> >> >>> >> >> Regards> >> >>> >> >> Pavel Stehule> >> >>> >> >> p.s. don't use a coalesce in WHERE clause if it is possible.> >> >> >", "msg_date": "Sun, 21 Nov 2010 00:00:39 -0600", "msg_from": "Humair Mohammed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query Performance SQL Server vs. Postgresql" }, { "msg_contents": "Hello\n\nI don't know. I checked similar query on similar dataset (size), and I\nhave a times about 3 sec on much more worse notebook. So problem can\nbe in disk IO operation speed - maybe in access to TOASTed value.\n\n2010/11/21 Humair Mohammed <[email protected]>:\n> I am running 64-bit Windows 7 Enterprise with 8G RAM and a Dual Core 2.67\n> Ghz Intel CPU. Both the SQL 2008 R2 and Postgresql are installed on the same\n> machine. The DISTINCT FROM instead of the COALESCE does not help much. I ran\n> 2 further tests with work_mem modifications (please note memory usage is\n> quite low 650kb, so I am not sure if the work_mem is a factor):\n\nit's has a little bit different meaning. work_mem is just limit, so\n\"memory usage\" must not be great than work_mem ever. if then pg\nincrease \"butches\" number - store data to blocks on disk. Higher\nwork_mem ~ less butches. 
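A rough back-of-the-envelope check, using the row count and width from
the plan above (the exact figure is only an estimate):

    -- hash input: ~251212 rows x ~134 bytes is about 32 MB of tuple
    -- data, plus per-tuple hash overhead, so something on the order
    -- of 64MB should keep the whole inner table in memory
    SET work_mem = '64MB';   -- session-local, no restart needed
    -- rerunning EXPLAIN ANALYZE should then show the Hash node with
    -- Batches: 1 instead of Batches: 64
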
So ideal is 1 butches.\n\nRegards\n\nPavel Stehule\n\n> First, I modified the work_mem setting to 1GB (reloaded config) from the\n> default 1MB and I see a response time of 33 seconds. Results below from\n> EXPLAIN ANALYZE:\n> \"Hash Join  (cost=16212.30..52586.43 rows=92869 width=17) (actual\n> time=26742.343..33274.317 rows=3163 loops=1)\"\n> \"  Hash Cond: (((pb.id)::text = (pg.id)::text) AND ((pb.question)::text =\n> (pg.question)::text))\"\n> \"  Join Filter: ((COALESCE(pb.response, 'MISSING'::character varying))::text\n> <> (COALESCE(pg.response, 'MISSING'::character varying))::text)\"\n> \"  ->  Seq Scan on pivotbad pb  (cost=0.00..2804.96 rows=93496 width=134)\n> (actual time=0.055..40.710 rows=93496 loops=1)\"\n> \"  ->  Hash  (cost=7537.12..7537.12 rows=251212 width=134) (actual\n> time=25603.460..25603.460 rows=251212 loops=1)\"\n> \"        Buckets: 1024  Batches: 64  Memory Usage: 650kB\"\n> \"        ->  Seq Scan on pivotgood pg  (cost=0.00..7537.12 rows=251212\n> width=134) (actual time=0.050..120.269 rows=251212 loops=1)\"\n> \"Total runtime: 33275.028 ms\"\n>\n> Second, I modified the work_mem setting to 2GB (reloaded config) and I see a\n> response time of 38 seconds. Results below from EXPLAIN ANALYZE:\n> \"Hash Join  (cost=16212.30..52586.43 rows=92869 width=17) (actual\n> time=26574.459..38406.422 rows=3163 loops=1)\"\n> \"  Hash Cond: (((pb.id)::text = (pg.id)::text) AND ((pb.question)::text =\n> (pg.question)::text))\"\n> \"  Join Filter: ((COALESCE(pb.response, 'MISSING'::character varying))::text\n> <> (COALESCE(pg.response, 'MISSING'::character varying))::text)\"\n> \"  ->  Seq Scan on pivotbad pb  (cost=0.00..2804.96 rows=93496 width=134)\n> (actual time=0.067..37.938 rows=93496 loops=1)\"\n> \"  ->  Hash  (cost=7537.12..7537.12 rows=251212 width=134) (actual\n> time=26426.127..26426.127 rows=251212 loops=1)\"\n> \"        Buckets: 1024  Batches: 64  Memory Usage: 650kB\"\n> \"        ->  Seq Scan on pivotgood pg  (cost=0.00..7537.12 rows=251212\n> width=134) (actual time=0.038..115.319 rows=251212 loops=1)\"\n> \"Total runtime: 38406.927 ms\"\n>\n> By no means I am trying to compare the 2 products. When I noticed the slow\n> behavior of COALESCE I tried it on SQL Server. And since they are running on\n> the same machine my comment regarding apples to apples. It is possible that\n> this is not an apples to apples comparison other than the fact that it is\n> running on the same machine.\n>\n>> From: [email protected]\n>> Date: Thu, 18 Nov 2010 07:14:24 +0100\n>> Subject: Re: Query Performance SQL Server vs. Postgresql\n>> To: [email protected]\n>> CC: [email protected]\n>>\n>> Hello,\n>>\n>> there should be a problem in a statistic, they are out of reality.\n>> Please, try to use a DISTINCT OF operator now - maybe a statistic will\n>> be better. Next - try to increase a work_mem. Hash join is\n>> untypically slow in your comp.\n>>\n>> Regards\n>>\n>> Pavel Stehule\n>>\n>> 2010/11/17 Humair Mohammed <[email protected]>:\n>> > Yes strange indeed, I did rerun ANALYZE and VACCUM. Took 70 seconds to\n>> > rerun\n>> > the query. 
Results from EXPLAIN ANALYZE below:\n>> > \"Hash Join  (cost=16212.30..52586.43 rows=92869 width=17) (actual\n>> > time=43200.223..49502.874 rows=3163 loops=1)\"\n>> > \"  Hash Cond: (((pb.id)::text = (pg.id)::text) AND ((pb.question)::text\n>> > =\n>> > (pg.question)::text))\"\n>> > \"  Join Filter: ((COALESCE(pb.response, 'MISSING'::character\n>> > varying))::text\n>> > <> (COALESCE(pg.response, 'MISSING'::character varying))::text)\"\n>> > \"  ->  Seq Scan on pivotbad pb  (cost=0.00..2804.96 rows=93496\n>> > width=134)\n>> > (actual time=0.009..48.200 rows=93496 loops=1)\"\n>> > \"  ->  Hash  (cost=7537.12..7537.12 rows=251212 width=134) (actual\n>> > time=42919.453..42919.453 rows=251212 loops=1)\"\n>> > \"        Buckets: 1024  Batches: 64  Memory Usage: 650kB\"\n>> > \"        ->  Seq Scan on pivotgood pg  (cost=0.00..7537.12 rows=251212\n>> > width=134) (actual time=0.119..173.019 rows=251212 loops=1)\"\n>> > \"Total runtime: 49503.450 ms\"\n>> >\n>> >> From: [email protected]\n>> >> Date: Wed, 17 Nov 2010 05:47:51 +0100\n>> >> Subject: Re: Query Performance SQL Server vs. Postgresql\n>> >> To: [email protected]\n>> >> CC: [email protected]\n>> >>\n>> >> 2010/11/17 Humair Mohammed <[email protected]>:\n>> >> >\n>> >> > There are no indexes on the tables either in SQL Server or Postgresql\n>> >> > -\n>> >> > I am\n>> >> > comparing apples to apples here. I ran ANALYZE on the postgresql\n>> >> > tables,\n>> >> > after that query performance times are still high 42 seconds with\n>> >> > COALESCE\n>> >> > and 35 seconds with IS DISTINCT FROM.\n>> >> > Here is the execution plan from Postgresql for qurey - select pb.id\n>> >> > from\n>> >> > pivotbad pb inner join pivotgood pg on pb.id = pg.id and pb.question\n>> >> > =\n>> >> > pg.question and coalesce(pb.response,'MISSING') <>\n>> >> > coalesce(pg.response,'MISSING')\n>> >> > Execution Time: 42 seconds\n>> >> > \"Hash Join  (cost=16212.30..48854.24 rows=93477 width=17)\"\n>> >> > \"  Hash Cond: (((pb.id)::text = (pg.id)::text) AND\n>> >> > ((pb.question)::text\n>> >> > =\n>> >> > (pg.question)::text))\"\n>> >> > \"  Join Filter: ((COALESCE(pb.response, 'MISSING'::character\n>> >> > varying))::text\n>> >> > <> (COALESCE(pg.response, 'MISSING'::character varying))::text)\"\n>> >> > \"  ->  Seq Scan on pivotbad pb  (cost=0.00..2804.96 rows=93496\n>> >> > width=134)\"\n>> >> > \"  ->  Hash  (cost=7537.12..7537.12 rows=251212 width=134)\"\n>> >> > \"        ->  Seq Scan on pivotgood pg  (cost=0.00..7537.12\n>> >> > rows=251212\n>> >> > width=134)\"\n>> >>\n>> >> this is little bit strange - did you ANALYZE and VACUUM?\n>> >>\n>> >> please send result of EXPLAIN ANALYZE\n>> >>\n>> >> Pavel\n>> >>\n>> >> >\n>> >> > And here is the execution plan from SQL Server for query - select\n>> >> > pb.id\n>> >> > from\n>> >> > pivotbad pb inner join pivotgood pg on pb.id = pg.id and pb.question\n>> >> > =\n>> >> > pg.question and isnull(pb.response,'ISNULL')<>\n>> >> >  isnull(pg.response,'ISNULL')\n>> >> > Execution Time: < 1 second\n>> >> > Cost: 1%  |--Parallelism(Gather Streams)\n>> >> > Cost: 31%       |--Hash Match(Inner Join, HASH:([pb].[ID],\n>> >> > [pb].[Question])=([pg].[ID], [pg].[Question]),\n>> >> > RESIDUAL:([master].[dbo].[pivotbad].[ID] as\n>> >> > [pb].[ID]=[master].[dbo].[pivotgood].[ID] as [pg].[ID] AND\n>> >> > [master].[dbo].[pivotbad].[Question] as\n>> >> > [pb].[Question]=[master].[dbo].[pivotgood].[Question] as\n>> >> > [pg].[Question]\n>> >> > AND\n>> >> > [Expr1006]<>[Expr1007]))\n>> >> >     Cost: 0%  
|--Bitmap(HASH:([pb].[ID], [pb].[Question]),\n>> >> > DEFINE:([Bitmap1008]))\n>> >> >             Cost: 0%    |--Compute\n>> >> > Scalar(DEFINE:([Expr1006]=isnull([master].[dbo].[pivotbad].[Response]\n>> >> > as\n>> >> > [pb].[Response],'ISNULL')))\n>> >> >             Cost:  6%   |--Parallelism(Repartition Streams, Hash\n>> >> > Partitioning, PARTITION COLUMNS:([pb].[ID], [pb].[Question]))\n>> >> >             Cost: 12%  |--Table\n>> >> > Scan(OBJECT:([master].[dbo].[pivotbad]\n>> >> > AS\n>> >> > [pb]))\n>> >> >             Cost: 0% |--Compute\n>> >> >\n>> >> > Scalar(DEFINE:([Expr1007]=isnull([master].[dbo].[pivotgood].[Response]\n>> >> > as\n>> >> > [pg].[Response],'ISNULL')))\n>> >> >                 Cost: 17% |--Parallelism(Repartition Streams, Hash\n>> >> > Partitioning, PARTITION COLUMNS:([pg].[ID], [pg].[Question]))\n>> >> >                     Cost: 33% |--Table\n>> >> > Scan(OBJECT:([master].[dbo].[pivotgood] AS [pg]),\n>> >> > WHERE:(PROBE([Bitmap1008],[master].[dbo].[pivotgood].[ID] as\n>> >> > [pg].[ID],[master].[dbo].[pivotgood].[Question] as [pg].[Question])))\n>> >> >\n>> >> >\n>> >> >\n>> >> >> From: [email protected]\n>> >> >> Date: Tue, 16 Nov 2010 08:12:03 +0100\n>> >> >> Subject: Re: [PERFORM]\n>> >> >> To: [email protected]\n>> >> >> CC: [email protected]\n>> >> >>\n>> >> >> 2010/11/15 Humair Mohammed <[email protected]>:\n>> >> >> > I have 2 tables with a 200,000 rows of data 3 character/string\n>> >> >> > columns\n>> >> >> > ID,\n>> >> >> > Question and Response. The query below compares the data between\n>> >> >> > the\n>> >> >> > 2\n>> >> >> > tables based on ID and Question and if the Response does not match\n>> >> >> > between\n>> >> >> > the left table and the right table it identifies the ID's where\n>> >> >> > there\n>> >> >> > is\n>> >> >> > a\n>> >> >> > mismatch. Running the query in SQL Server 2008 using the ISNULL\n>> >> >> > function\n>> >> >> > take a few milliseconds. Running the same query in Postgresql\n>> >> >> > takes\n>> >> >> > over\n>> >> >> > 70\n>> >> >> > seconds. The 2 queries are below:\n>> >> >> > SQL Server 2008 R2 Query\n>> >> >> > select t1.id from table1 t1 inner join table2 t2 on t1.id = t2.id\n>> >> >> > and\n>> >> >> > t1.question = t2.question and isnull(t1.response,'ISNULL') <>\n>> >> >> > isnull(t2.response,'ISNULL')\n>> >> >>\n>> >> >> > Postgres 9.1 Query\n>> >> >> > select t1.id from table1 t1 inner join table2 t2 on t1.id = t2.id\n>> >> >> > and\n>> >> >> > t1.question = t2.question and coalesce(t1.response,'ISNULL') <>\n>> >> >> > coalesce(t2.response,'ISNULL')\n>> >> >> > What gives?\n>> >> >>\n>> >> >> I think, so must problem can be in ugly predicate\n>> >> >> coalesce(t1.response,'ISNULL') <>\n>> >> >> > coalesce(t2.response,'ISNULL')\n>> >> >>\n>> >> >> try use a IS DISTINCT OF operator\n>> >> >>\n>> >> >> ... AND t1.response IS DISTINCT t2.response\n>> >> >>\n>> >> >> Regards\n>> >> >>\n>> >> >> Pavel Stehule\n>> >> >>\n>> >> >> p.s. don't use a coalesce in WHERE clause if it is possible.\n>> >> >\n>> >\n>\n", "msg_date": "Sun, 21 Nov 2010 07:20:12 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance SQL Server vs. 
Postgresql" }, { "msg_contents": "1) OS/Configuration64-bit Windows 7 Enterprise with 8G RAM and a Dual Core 2.67 Ghz Intel CPUpostgresql-x64-9.0 (PostgreSQL 9.0.1, compiled by Visual C++ build 1500, 64-bit)work_mem 2GBshared_buffers = 22) Datasetname,pages,tuples,pg_size_pretty\"pivotbad\";1870;93496;\"15 MB\"\"pivotgood\";5025;251212;\"39 MB\"\n3) EXPLAIN (ANALYZE ON, BUFFERS ON)\"Hash Join (cost=16212.30..52586.43 rows=92869 width=17) (actual time=25814.222..32296.765 rows=3163 loops=1)\"\" Hash Cond: (((pb.id)::text = (pg.id)::text) AND ((pb.question)::text = (pg.question)::text))\"\" Join Filter: ((COALESCE(pb.response, 'MISSING'::character varying))::text <> (COALESCE(pg.response, 'MISSING'::character varying))::text)\"\" Buffers: shared hit=384 read=6511, temp read=6444 written=6318\"\" -> Seq Scan on pivotbad pb (cost=0.00..2804.96 rows=93496 width=134) (actual time=0.069..37.143 rows=93496 loops=1)\"\" Buffers: shared hit=192 read=1678\"\" -> Hash (cost=7537.12..7537.12 rows=251212 width=134) (actual time=24621.752..24621.752 rows=251212 loops=1)\"\" Buckets: 1024 Batches: 64 Memory Usage: 650kB\"\" Buffers: shared hit=192 read=4833, temp written=4524\"\" -> Seq Scan on pivotgood pg (cost=0.00..7537.12 rows=251212 width=134) (actual time=0.038..117.780 rows=251212 loops=1)\"\" Buffers: shared hit=192 read=4833\"\"Total runtime: 32297.305 ms\"\n4) INDEXESI can certainly add an index but given the table sizes I am not sure if that is a factor. This by no means is a large dataset less than 350,000 rows in total and 3 columns. Also this was just a quick dump of data for comparison purpose. When I saw the poor performance on the COALESCE, I pointed the data load to SQL Server and ran the same query except with the TSQL specific ISNULL function.\n \t\t \t \t\t \n\n\n\n\n\n1) OS/Configuration64-bit Windows 7 Enterprise with 8G RAM and a Dual Core 2.67 Ghz Intel CPUpostgresql-x64-9.0 (PostgreSQL 9.0.1, compiled by Visual C++ build 1500, 64-bit)work_mem  2GBshared_buffers = 22) Datasetname,pages,tuples,pg_size_pretty\"pivotbad\";1870;93496;\"15 MB\"\"pivotgood\";5025;251212;\"39 MB\"3) EXPLAIN (ANALYZE ON, BUFFERS ON)\"Hash Join  (cost=16212.30..52586.43 rows=92869 width=17) (actual time=25814.222..32296.765 rows=3163 loops=1)\"\"  Hash Cond: (((pb.id)::text = (pg.id)::text) AND ((pb.question)::text = (pg.question)::text))\"\"  Join Filter: ((COALESCE(pb.response, 'MISSING'::character varying))::text <> (COALESCE(pg.response, 'MISSING'::character varying))::text)\"\"  Buffers: shared hit=384 read=6511, temp read=6444 written=6318\"\"  ->  Seq Scan on pivotbad pb  (cost=0.00..2804.96 rows=93496 width=134) (actual time=0.069..37.143 rows=93496 loops=1)\"\"        Buffers: shared hit=192 read=1678\"\"  ->  Hash  (cost=7537.12..7537.12 rows=251212 width=134) (actual time=24621.752..24621.752 rows=251212 loops=1)\"\"        Buckets: 1024  Batches: 64  Memory Usage: 650kB\"\"        Buffers: shared hit=192 read=4833, temp written=4524\"\"        ->  Seq Scan on pivotgood pg  (cost=0.00..7537.12 rows=251212 width=134) (actual time=0.038..117.780 rows=251212 loops=1)\"\"              Buffers: shared hit=192 read=4833\"\"Total runtime: 32297.305 ms\"4) INDEXESI can certainly add an index but given the table sizes I am not sure if that is a factor. This by no means is a large dataset less than 350,000 rows in total and 3 columns. Also this was just a quick dump of data for comparison purpose. 
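If indexes were added, something along these lines would be the natural
candidate for this join (index names here are only illustrative):

    CREATE INDEX pivotgood_id_question_idx ON pivotgood (id, question);
    CREATE INDEX pivotbad_id_question_idx  ON pivotbad  (id, question);
    ANALYZE pivotgood;
    ANALYZE pivotbad;
    -- gives the planner the option of an index-based join instead of
    -- hashing all of pivotgood for every run
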
When I saw the poor performance on the COALESCE, I pointed the data load to SQL Server and ran the same query except with the TSQL specific ISNULL function.", "msg_date": "Sun, 21 Nov 2010 00:25:03 -0600", "msg_from": "Humair Mohammed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query Performance SQL Server vs. Postgresql" }, { "msg_contents": "2010/11/21 Humair Mohammed <[email protected]>:\n>\n> 1) OS/Configuration\n> 64-bit Windows 7 Enterprise with 8G RAM and a Dual Core 2.67 Ghz Intel CPU\n> postgresql-x64-9.0 (PostgreSQL 9.0.1, compiled by Visual C++ build 1500,\n> 64-bit)\n> work_mem  2GB\n> shared_buffers = 2\n\nshared_buffers = 2 ???\n\nRegards\n\nPavel Stehule\n\n\n> 2) Dataset\n> name,pages,tuples,pg_size_pretty\n> \"pivotbad\";1870;93496;\"15 MB\"\n> \"pivotgood\";5025;251212;\"39 MB\"\n> 3) EXPLAIN (ANALYZE ON, BUFFERS ON)\n> \"Hash Join  (cost=16212.30..52586.43 rows=92869 width=17) (actual\n> time=25814.222..32296.765 rows=3163 loops=1)\"\n> \"  Hash Cond: (((pb.id)::text = (pg.id)::text) AND ((pb.question)::text =\n> (pg.question)::text))\"\n> \"  Join Filter: ((COALESCE(pb.response, 'MISSING'::character varying))::text\n> <> (COALESCE(pg.response, 'MISSING'::character varying))::text)\"\n> \"  Buffers: shared hit=384 read=6511, temp read=6444 written=6318\"\n> \"  ->  Seq Scan on pivotbad pb  (cost=0.00..2804.96 rows=93496 width=134)\n> (actual time=0.069..37.143 rows=93496 loops=1)\"\n> \"        Buffers: shared hit=192 read=1678\"\n> \"  ->  Hash  (cost=7537.12..7537.12 rows=251212 width=134) (actual\n> time=24621.752..24621.752 rows=251212 loops=1)\"\n> \"        Buckets: 1024  Batches: 64  Memory Usage: 650kB\"\n> \"        Buffers: shared hit=192 read=4833, temp written=4524\"\n> \"        ->  Seq Scan on pivotgood pg  (cost=0.00..7537.12 rows=251212\n> width=134) (actual time=0.038..117.780 rows=251212 loops=1)\"\n> \"              Buffers: shared hit=192 read=4833\"\n> \"Total runtime: 32297.305 ms\"\n> 4) INDEXES\n> I can certainly add an index but given the table sizes I am not sure if that\n> is a factor. This by no means is a large dataset less than 350,000 rows in\n> total and 3 columns. Also this was just a quick dump of data for comparison\n> purpose. When I saw the poor performance on the COALESCE, I pointed the data\n> load to SQL Server and ran the same query except with the TSQL specific\n> ISNULL function.\n>\n", "msg_date": "Sun, 21 Nov 2010 12:38:43 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance SQL Server vs. Postgresql" }, { "msg_contents": "> 4) INDEXESI can certainly add an index but given the table sizes I am not\n> sure if that is a factor. This by no means is a large dataset less than\n> 350,000 rows in total and 3 columns. Also this was just a quick dump of\n> data for comparison purpose. When I saw the poor performance on the\n> COALESCE, I pointed the data load to SQL Server and ran the same query\n> except with the TSQL specific ISNULL function.\n\n350000 rows definitely is a lot of rows, although with 3 INT column it's\njust about 13MB of data (including overhead). But indexes can be quite\nhandy when doing joins, as in this case.\n\nTomas\n\n", "msg_date": "Sun, 21 Nov 2010 15:34:46 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Query Performance SQL Server vs. 
Postgresql" }, { "msg_contents": "That was a typo:\nwork_mem = 2GBshared_buffers = 2GB\n> From: [email protected]\n> Date: Sun, 21 Nov 2010 12:38:43 +0100\n> Subject: Re: [PERFORM] Query Performance SQL Server vs. Postgresql\n> To: [email protected]\n> CC: [email protected]\n> \n> 2010/11/21 Humair Mohammed <[email protected]>:\n> >\n> > 1) OS/Configuration\n> > 64-bit Windows 7 Enterprise with 8G RAM and a Dual Core 2.67 Ghz Intel CPU\n> > postgresql-x64-9.0 (PostgreSQL 9.0.1, compiled by Visual C++ build 1500,\n> > 64-bit)\n> > work_mem 2GB\n> > shared_buffers = 2\n> \n> shared_buffers = 2 ???\n> \n> Regards\n> \n> Pavel Stehule\n> \n> \n> > 2) Dataset\n> > name,pages,tuples,pg_size_pretty\n> > \"pivotbad\";1870;93496;\"15 MB\"\n> > \"pivotgood\";5025;251212;\"39 MB\"\n> > 3) EXPLAIN (ANALYZE ON, BUFFERS ON)\n> > \"Hash Join (cost=16212.30..52586.43 rows=92869 width=17) (actual\n> > time=25814.222..32296.765 rows=3163 loops=1)\"\n> > \" Hash Cond: (((pb.id)::text = (pg.id)::text) AND ((pb.question)::text =\n> > (pg.question)::text))\"\n> > \" Join Filter: ((COALESCE(pb.response, 'MISSING'::character varying))::text\n> > <> (COALESCE(pg.response, 'MISSING'::character varying))::text)\"\n> > \" Buffers: shared hit=384 read=6511, temp read=6444 written=6318\"\n> > \" -> Seq Scan on pivotbad pb (cost=0.00..2804.96 rows=93496 width=134)\n> > (actual time=0.069..37.143 rows=93496 loops=1)\"\n> > \" Buffers: shared hit=192 read=1678\"\n> > \" -> Hash (cost=7537.12..7537.12 rows=251212 width=134) (actual\n> > time=24621.752..24621.752 rows=251212 loops=1)\"\n> > \" Buckets: 1024 Batches: 64 Memory Usage: 650kB\"\n> > \" Buffers: shared hit=192 read=4833, temp written=4524\"\n> > \" -> Seq Scan on pivotgood pg (cost=0.00..7537.12 rows=251212\n> > width=134) (actual time=0.038..117.780 rows=251212 loops=1)\"\n> > \" Buffers: shared hit=192 read=4833\"\n> > \"Total runtime: 32297.305 ms\"\n> > 4) INDEXES\n> > I can certainly add an index but given the table sizes I am not sure if that\n> > is a factor. This by no means is a large dataset less than 350,000 rows in\n> > total and 3 columns. Also this was just a quick dump of data for comparison\n> > purpose. When I saw the poor performance on the COALESCE, I pointed the data\n> > load to SQL Server and ran the same query except with the TSQL specific\n> > ISNULL function.\n> >\n \t\t \t \t\t \n\n\n\n\n\nThat was a typo:work_mem = 2GBshared_buffers = 2GB> From: [email protected]> Date: Sun, 21 Nov 2010 12:38:43 +0100> Subject: Re: [PERFORM] Query Performance SQL Server vs. 
Postgresql> To: [email protected]> CC: [email protected]> > 2010/11/21 Humair Mohammed <[email protected]>:> >> > 1) OS/Configuration> > 64-bit Windows 7 Enterprise with 8G RAM and a Dual Core 2.67 Ghz Intel CPU> > postgresql-x64-9.0 (PostgreSQL 9.0.1, compiled by Visual C++ build 1500,> > 64-bit)> > work_mem  2GB> > shared_buffers = 2> > shared_buffers = 2 ???> > Regards> > Pavel Stehule> > > > 2) Dataset> > name,pages,tuples,pg_size_pretty> > \"pivotbad\";1870;93496;\"15 MB\"> > \"pivotgood\";5025;251212;\"39 MB\"> > 3) EXPLAIN (ANALYZE ON, BUFFERS ON)> > \"Hash Join  (cost=16212.30..52586.43 rows=92869 width=17) (actual> > time=25814.222..32296.765 rows=3163 loops=1)\"> > \"  Hash Cond: (((pb.id)::text = (pg.id)::text) AND ((pb.question)::text => > (pg.question)::text))\"> > \"  Join Filter: ((COALESCE(pb.response, 'MISSING'::character varying))::text> > <> (COALESCE(pg.response, 'MISSING'::character varying))::text)\"> > \"  Buffers: shared hit=384 read=6511, temp read=6444 written=6318\"> > \"  ->  Seq Scan on pivotbad pb  (cost=0.00..2804.96 rows=93496 width=134)> > (actual time=0.069..37.143 rows=93496 loops=1)\"> > \"        Buffers: shared hit=192 read=1678\"> > \"  ->  Hash  (cost=7537.12..7537.12 rows=251212 width=134) (actual> > time=24621.752..24621.752 rows=251212 loops=1)\"> > \"        Buckets: 1024  Batches: 64  Memory Usage: 650kB\"> > \"        Buffers: shared hit=192 read=4833, temp written=4524\"> > \"        ->  Seq Scan on pivotgood pg  (cost=0.00..7537.12 rows=251212> > width=134) (actual time=0.038..117.780 rows=251212 loops=1)\"> > \"              Buffers: shared hit=192 read=4833\"> > \"Total runtime: 32297.305 ms\"> > 4) INDEXES> > I can certainly add an index but given the table sizes I am not sure if that> > is a factor. This by no means is a large dataset less than 350,000 rows in> > total and 3 columns. Also this was just a quick dump of data for comparison> > purpose. When I saw the poor performance on the COALESCE, I pointed the data> > load to SQL Server and ran the same query except with the TSQL specific> > ISNULL function.> >", "msg_date": "Sun, 21 Nov 2010 08:53:35 -0600", "msg_from": "Humair Mohammed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query Performance SQL Server vs. Postgresql" }, { "msg_contents": "Hello\n\n2010/11/21 Humair Mohammed <[email protected]>:\n> That was a typo:\n> work_mem = 2GB\n> shared_buffers = 2GB\n\nok, then try to decrease a shared_buffers. Maybe a win7 has a some\nproblem - large a shared buffers are well just for UNIX like systems.\nI am thinking so 500 MB is enough\n\nRegards\n\nPavel Stehule\n\n>> From: [email protected]\n>> Date: Sun, 21 Nov 2010 12:38:43 +0100\n>> Subject: Re: [PERFORM] Query Performance SQL Server vs. 
Postgresql\n>> To: [email protected]\n>> CC: [email protected]\n>>\n>> 2010/11/21 Humair Mohammed <[email protected]>:\n>> >\n>> > 1) OS/Configuration\n>> > 64-bit Windows 7 Enterprise with 8G RAM and a Dual Core 2.67 Ghz Intel\n>> > CPU\n>> > postgresql-x64-9.0 (PostgreSQL 9.0.1, compiled by Visual C++ build 1500,\n>> > 64-bit)\n>> > work_mem  2GB\n>> > shared_buffers = 2\n>>\n>> shared_buffers = 2 ???\n>>\n>> Regards\n>>\n>> Pavel Stehule\n>>\n>>\n>> > 2) Dataset\n>> > name,pages,tuples,pg_size_pretty\n>> > \"pivotbad\";1870;93496;\"15 MB\"\n>> > \"pivotgood\";5025;251212;\"39 MB\"\n>> > 3) EXPLAIN (ANALYZE ON, BUFFERS ON)\n>> > \"Hash Join  (cost=16212.30..52586.43 rows=92869 width=17) (actual\n>> > time=25814.222..32296.765 rows=3163 loops=1)\"\n>> > \"  Hash Cond: (((pb.id)::text = (pg.id)::text) AND ((pb.question)::text\n>> > =\n>> > (pg.question)::text))\"\n>> > \"  Join Filter: ((COALESCE(pb.response, 'MISSING'::character\n>> > varying))::text\n>> > <> (COALESCE(pg.response, 'MISSING'::character varying))::text)\"\n>> > \"  Buffers: shared hit=384 read=6511, temp read=6444 written=6318\"\n>> > \"  ->  Seq Scan on pivotbad pb  (cost=0.00..2804.96 rows=93496\n>> > width=134)\n>> > (actual time=0.069..37.143 rows=93496 loops=1)\"\n>> > \"        Buffers: shared hit=192 read=1678\"\n>> > \"  ->  Hash  (cost=7537.12..7537.12 rows=251212 width=134) (actual\n>> > time=24621.752..24621.752 rows=251212 loops=1)\"\n>> > \"        Buckets: 1024  Batches: 64  Memory Usage: 650kB\"\n>> > \"        Buffers: shared hit=192 read=4833, temp written=4524\"\n>> > \"        ->  Seq Scan on pivotgood pg  (cost=0.00..7537.12 rows=251212\n>> > width=134) (actual time=0.038..117.780 rows=251212 loops=1)\"\n>> > \"              Buffers: shared hit=192 read=4833\"\n>> > \"Total runtime: 32297.305 ms\"\n>> > 4) INDEXES\n>> > I can certainly add an index but given the table sizes I am not sure if\n>> > that\n>> > is a factor. This by no means is a large dataset less than 350,000 rows\n>> > in\n>> > total and 3 columns. Also this was just a quick dump of data for\n>> > comparison\n>> > purpose. When I saw the poor performance on the COALESCE, I pointed the\n>> > data\n>> > load to SQL Server and ran the same query except with the TSQL specific\n>> > ISNULL function.\n>> >\n>\n", "msg_date": "Sun, 21 Nov 2010 16:14:10 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance SQL Server vs. Postgresql" }, { "msg_contents": "> First, I modified the work_mem setting to 1GB (reloaded config) from the\n> default 1MB and I see a response time of 33 seconds. Results below from\n> EXPLAIN ANALYZE:\n\n...\n\n> Second, I modified the work_mem setting to 2GB (reloaded config) and I see\n> a response time of 38 seconds. Results below from EXPLAIN ANALYZE:\n\n...\n\nHow did you reload the config? Using 'kill -HUP pid'? That should work\nfine. Have you cheched 'work_mem' after the reload?\n\nBecause the explain plans are exactly the same (structure, estimated\ncosts). The really interesting bit is this and it did not change at all\n\n Buckets: 1024 Batches: 64 Memory Usage: 650kB\n\nAs Tom Lane already mentioned, splitting hash join into batches (due to\nsmall memory) adds overhead, the optimal number of batches is 1. But I\nguess 1GB of work_mem is an overkill - something like 64MB should be fine.\n\nThe suspicious thing is the query plans have not changed at all\n(especially the number of batches). 
I think you're not telling us\nsomething important (unintentionally of course).\n\n> By no means I am trying to compare the 2 products. When I noticed the slow\n> behavior of COALESCE I tried it on SQL Server. And since they are running\n> on the same machine my comment regarding apples to apples. It is possible\n> that this is not an apples to apples comparison other than the fact that\n> it is running on the same machine.\n\nOK. The point of my post was that you've provided very little info about\nthe settings etc. so it was difficult to identify why PostgreSQL is so\nslow.\n\nTomas\n\n", "msg_date": "Sun, 21 Nov 2010 16:36:25 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Query Performance SQL Server vs. Postgresql" }, { "msg_contents": ">> 4) INDEXESI can certainly add an index but given the table sizes I am\n>> not\n>> sure if that is a factor. This by no means is a large dataset less than\n>> 350,000 rows in total and 3 columns. Also this was just a quick dump of\n>> data for comparison purpose. When I saw the poor performance on the\n>> COALESCE, I pointed the data load to SQL Server and ran the same query\n>> except with the TSQL specific ISNULL function.\n>\n> 350000 rows definitely is a lot of rows, although with 3 INT column it's\n> just about 13MB of data (including overhead). But indexes can be quite\n> handy when doing joins, as in this case.\n\nOK, I've just realized the tables have 3 character columns, not integers.\nIn that case the tables are probably much bigger (and there are things\nlike TOAST). In that case indexes may be even more important.\n\nTomas\n\n", "msg_date": "Sun, 21 Nov 2010 16:42:07 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Query Performance SQL Server vs. Postgresql" }, { "msg_contents": "[email protected] writes:\n>> Second, I modified the work_mem setting to 2GB (reloaded config) and I see\n>> a response time of 38 seconds. Results below from EXPLAIN ANALYZE:\n\n> How did you reload the config? Using 'kill -HUP pid'? That should work\n> fine. Have you cheched 'work_mem' after the reload?\n\n> Because the explain plans are exactly the same (structure, estimated\n> costs). The really interesting bit is this and it did not change at all\n\n> Buckets: 1024 Batches: 64 Memory Usage: 650kB\n\nIf that didn't change, I'm prepared to bet that the OP didn't actually\nmanage to change the active value of work_mem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 21 Nov 2010 12:16:01 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance SQL Server vs. Postgresql " }, { "msg_contents": "On Nov 21, 2010, at 12:16 PM, Tom Lane <[email protected]> wrote:\n> [email protected] writes:\n>>> Second, I modified the work_mem setting to 2GB (reloaded config) and I see\n>>> a response time of 38 seconds. Results below from EXPLAIN ANALYZE:\n> \n>> How did you reload the config? Using 'kill -HUP pid'? That should work\n>> fine. Have you cheched 'work_mem' after the reload?\n> \n>> Because the explain plans are exactly the same (structure, estimated\n>> costs). The really interesting bit is this and it did not change at all\n> \n>> Buckets: 1024 Batches: 64 Memory Usage: 650kB\n> \n> If that didn't change, I'm prepared to bet that the OP didn't actually\n> manage to change the active value of work_mem.\n\nYep. All this speculation about slow disks and/or COALESCE strikes me as likely totally off-base. 
I think the original poster needs to run \"show work_mem\" right before the EXPLAIN ANALYZE to make sure the new value they set actually stuck. There's no reason for the planner to have used only 650kB if work_mem is set to anything >=2MB.\n\n...Robert", "msg_date": "Sun, 21 Nov 2010 13:55:54 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance SQL Server vs. Postgresql" }, { "msg_contents": "Correct, the optimizer did not take the settings with the pg_ctl reload command. I did a pg_ctl restart and work_mem now displays the updated value. I had to bump up all the way to 2047 MB to get the response below (with work_mem at 1024 MB I see 7 seconds response time) and with 2047 MB (which is the max value that can be set for work_mem - anything more than that results in a FATAL error because of the limit) the results are below. The batches and memory usage are reflecting the right behavior with these settings. Thanks for everyones input, the result is now matching what SQL Server was producing.\n\"Hash Join (cost=11305.30..39118.43 rows=92869 width=17) (actual time=145.888..326.216 rows=3163 loops=1)\"\" Hash Cond: (((pb.id)::text = (pg.id)::text) AND ((pb.question)::text = (pg.question)::text))\"\" Join Filter: ((COALESCE(pb.response, 'MISSING'::character varying))::text <> (COALESCE(pg.response, 'MISSING'::character varying))::text)\"\" Buffers: shared hit=6895\"\" -> Seq Scan on pivotbad pb (cost=0.00..2804.96 rows=93496 width=134) (actual time=0.011..11.903 rows=93496 loops=1)\"\" Buffers: shared hit=1870\"\" -> Hash (cost=7537.12..7537.12 rows=251212 width=134) (actual time=145.673..145.673 rows=251212 loops=1)\"\" Buckets: 32768 Batches: 1 Memory Usage: 39939kB\"\" Buffers: shared hit=5025\"\" -> Seq Scan on pivotgood pg (cost=0.00..7537.12 rows=251212 width=134) (actual time=0.004..26.242 rows=251212 loops=1)\"\" Buffers: shared hit=5025\"\"Total runtime: 331.168 ms\"\nHumair\n\n> CC: [email protected]; [email protected]; [email protected]; [email protected]\n> From: [email protected]\n> Subject: Re: [PERFORM] Query Performance SQL Server vs. Postgresql\n> Date: Sun, 21 Nov 2010 13:55:54 -0500\n> To: [email protected]\n> \n> On Nov 21, 2010, at 12:16 PM, Tom Lane <[email protected]> wrote:\n> > [email protected] writes:\n> >>> Second, I modified the work_mem setting to 2GB (reloaded config) and I see\n> >>> a response time of 38 seconds. Results below from EXPLAIN ANALYZE:\n> > \n> >> How did you reload the config? Using 'kill -HUP pid'? That should work\n> >> fine. Have you cheched 'work_mem' after the reload?\n> > \n> >> Because the explain plans are exactly the same (structure, estimated\n> >> costs). The really interesting bit is this and it did not change at all\n> > \n> >> Buckets: 1024 Batches: 64 Memory Usage: 650kB\n> > \n> > If that didn't change, I'm prepared to bet that the OP didn't actually\n> > manage to change the active value of work_mem.\n> \n> Yep. All this speculation about slow disks and/or COALESCE strikes me as likely totally off-base. I think the original poster needs to run \"show work_mem\" right before the EXPLAIN ANALYZE to make sure the new value they set actually stuck. There's no reason for the planner to have used only 650kB if work_mem is set to anything >=2MB.\n> \n> ...Robert\n \t\t \t \t\t \n\n\n\n\n\nCorrect, the optimizer did not take the settings with the pg_ctl reload command. I did a pg_ctl restart and work_mem now displays the updated value. 
I had to bump up all the way to 2047 MB to get the response below (with work_mem at 1024 MB I see 7 seconds response time) and with 2047 MB (which is the max value that can be set for work_mem - anything more than that results in a FATAL error because of the limit) the results are below. The batches and memory usage are reflecting the right behavior with these settings. Thanks for everyones input, the result is now matching what SQL Server was producing.\"Hash Join  (cost=11305.30..39118.43 rows=92869 width=17) (actual time=145.888..326.216 rows=3163 loops=1)\"\"  Hash Cond: (((pb.id)::text = (pg.id)::text) AND ((pb.question)::text = (pg.question)::text))\"\"  Join Filter: ((COALESCE(pb.response, 'MISSING'::character varying))::text <> (COALESCE(pg.response, 'MISSING'::character varying))::text)\"\"  Buffers: shared hit=6895\"\"  ->  Seq Scan on pivotbad pb  (cost=0.00..2804.96 rows=93496 width=134) (actual time=0.011..11.903 rows=93496 loops=1)\"\"        Buffers: shared hit=1870\"\"  ->  Hash  (cost=7537.12..7537.12 rows=251212 width=134) (actual time=145.673..145.673 rows=251212 loops=1)\"\"        Buckets: 32768  Batches: 1  Memory Usage: 39939kB\"\"        Buffers: shared hit=5025\"\"        ->  Seq Scan on pivotgood pg  (cost=0.00..7537.12 rows=251212 width=134) (actual time=0.004..26.242 rows=251212 loops=1)\"\"              Buffers: shared hit=5025\"\"Total runtime: 331.168 ms\"Humair> CC: [email protected]; [email protected]; [email protected]; [email protected]> From: [email protected]> Subject: Re: [PERFORM] Query Performance SQL Server vs. Postgresql> Date: Sun, 21 Nov 2010 13:55:54 -0500> To: [email protected]> > On Nov 21, 2010, at 12:16 PM, Tom Lane <[email protected]> wrote:> > [email protected] writes:> >>> Second, I modified the work_mem setting to 2GB (reloaded config) and I see> >>> a response time of 38 seconds. Results below from EXPLAIN ANALYZE:> > > >> How did you reload the config? Using 'kill -HUP pid'? That should work> >> fine. Have you cheched 'work_mem' after the reload?> > > >> Because the explain plans are exactly the same (structure, estimated> >> costs). The really interesting bit is this and it did not change at all> > > >> Buckets: 1024 Batches: 64 Memory Usage: 650kB> > > > If that didn't change, I'm prepared to bet that the OP didn't actually> > manage to change the active value of work_mem.> > Yep. All this speculation about slow disks and/or COALESCE strikes me as likely totally off-base. I think the original poster needs to run \"show work_mem\" right before the EXPLAIN ANALYZE to make sure the new value they set actually stuck. There's no reason for the planner to have used only 650kB if work_mem is set to anything >=2MB.> > ...Robert", "msg_date": "Mon, 22 Nov 2010 00:21:40 -0600", "msg_from": "Humair Mohammed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query Performance SQL Server vs. Postgresql" }, { "msg_contents": ">\n>\n> Correct, the optimizer did not take the settings with the pg_ctl reload\n> command. I did a pg_ctl restart and work_mem now displays the updated\n> value. I had to bump up all the way to 2047 MB to get the response below\n> (with work_mem at 1024 MB I see 7 seconds response time) and with 2047 MB\n> (which is the max value that can be set for work_mem - anything more than\n> that results in a FATAL error because of the limit) the results are below.\n\nHm, can you post explain plan for the case work_mem=1024MB. I guess the\ndifference is due to caching. 
According to the explain analyze, there are\njust cache hits, no reads.\n\nAnyway the hash join uses only about 40MB of memory, so 1024MB should be\nperfectly fine and the explain plan should be exactly the same as with\nwork_mem=2047MB. And the row estimates seem quite precise, so I don't\nthink there's some severe overestimation.\n\nTomas\n\n", "msg_date": "Mon, 22 Nov 2010 12:00:15 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Query Performance SQL Server vs. Postgresql" }, { "msg_contents": "On Sun, Nov 21, 2010 at 10:21 PM, Humair Mohammed <[email protected]>wrote:\n\n>\n> Correct, the optimizer did not take the settings with the pg_ctl reload\n> command. I did a pg_ctl restart and work_mem now displays the updated value.\n> I had to bump up all the way to 2047 MB to get the response below (with\n> work_mem at 1024 MB I see 7 seconds response time) and with 2047 MB (which\n> is the max value that can be set for work_mem - anything more than that\n> results in a FATAL error because of the limit) the results are below. The\n> batches and memory usage are reflecting the right behavior with these\n> settings. Thanks for everyones input, the result is now matching what SQL\n> Server was producing.\n>\n>\nI believe you can set work_mem to a different value just for the duration of\na single query, so you needn't have work_mem set so high if for every query\non the system. A single query may well use a multiple of work_mem, so you\nreally probably don't want it that high all the time unless all of your\nqueries are structured similarly. Just set work_mem='2047MB'; query; reset\nall;\n\nBut you should wait until someone more knowledgable than I confirm what I\njust wrote.\n\nOn Sun, Nov 21, 2010 at 10:21 PM, Humair Mohammed <[email protected]> wrote:\n\nCorrect, the optimizer did not take the settings with the pg_ctl reload command. I did a pg_ctl restart and work_mem now displays the updated value. I had to bump up all the way to 2047 MB to get the response below (with work_mem at 1024 MB I see 7 seconds response time) and with 2047 MB (which is the max value that can be set for work_mem - anything more than that results in a FATAL error because of the limit) the results are below. The batches and memory usage are reflecting the right behavior with these settings. Thanks for everyones input, the result is now matching what SQL Server was producing.\nI believe you can set work_mem to a different value just for the duration of a single query, so you needn't have work_mem set so high if for every query on the system.  A single query may well use a multiple of work_mem, so you really probably don't want it that high all the time unless all of your queries are structured similarly.  Just set work_mem='2047MB'; query; reset all;\nBut you should wait until someone more knowledgable than I confirm what I just wrote.", "msg_date": "Mon, 22 Nov 2010 03:02:37 -0800", "msg_from": "Samuel Gendler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance SQL Server vs. Postgresql" }, { "msg_contents": "> I believe you can set work_mem to a different value just for the duration\n> of\n> a single query, so you needn't have work_mem set so high if for every\n> query\n> on the system. A single query may well use a multiple of work_mem, so you\n> really probably don't want it that high all the time unless all of your\n> queries are structured similarly. 
Just set work_mem='2047MB'; query;\n> reset\n> all;\n\nYes, executing \"set work_mem='64MB'\" right before the query should be just\nfine. Setting work_mem to 2GB is an overkill most of the time (99.99999%).\n\nTomas\n\n", "msg_date": "Mon, 22 Nov 2010 13:22:43 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Query Performance SQL Server vs. Postgresql" }, { "msg_contents": "I did some further analysis and here are the results:\nwork_mem;response_time1MB;62 seconds2MB;2 seconds4MB;700 milliseconds8MB;550 milliseconds\nIn all cases shared_buffers were set to the default value of 32MB. As you can see the 1 to 2 MB jump on the work_mem does wonders. I probably don't need this to be any higher than 8 or 16 MB. Thanks to all for help!\nHumair\n> Date: Mon, 22 Nov 2010 12:00:15 +0100\n> Subject: Re: [PERFORM] Query Performance SQL Server vs. Postgresql\n> From: [email protected]\n> To: [email protected]\n> CC: [email protected]\n> \n> >\n> >\n> > Correct, the optimizer did not take the settings with the pg_ctl reload\n> > command. I did a pg_ctl restart and work_mem now displays the updated\n> > value. I had to bump up all the way to 2047 MB to get the response below\n> > (with work_mem at 1024 MB I see 7 seconds response time) and with 2047 MB\n> > (which is the max value that can be set for work_mem - anything more than\n> > that results in a FATAL error because of the limit) the results are below.\n> \n> Hm, can you post explain plan for the case work_mem=1024MB. I guess the\n> difference is due to caching. According to the explain analyze, there are\n> just cache hits, no reads.\n> \n> Anyway the hash join uses only about 40MB of memory, so 1024MB should be\n> perfectly fine and the explain plan should be exactly the same as with\n> work_mem=2047MB. And the row estimates seem quite precise, so I don't\n> think there's some severe overestimation.\n> \n> Tomas\n> \n \t\t \t \t\t \n\n\n\n\n\nI did some further analysis and here are the results:work_mem;response_time1MB;62 seconds2MB;2 seconds4MB;700 milliseconds8MB;550 millisecondsIn all cases shared_buffers were set to the default value of 32MB. As you can see the 1 to 2 MB jump on the work_mem does wonders. I probably don't need this to be any higher than 8 or 16 MB. Thanks to all for help!Humair> Date: Mon, 22 Nov 2010 12:00:15 +0100> Subject: Re: [PERFORM] Query Performance SQL Server vs. Postgresql> From: [email protected]> To: [email protected]> CC: [email protected]> > >> >> > Correct, the optimizer did not take the settings with the pg_ctl reload> > command. I did a pg_ctl restart and work_mem now displays the updated> > value. I had to bump up all the way to 2047 MB to get the response below> > (with work_mem at 1024 MB I see 7 seconds response time) and with 2047 MB> > (which is the max value that can be set for work_mem - anything more than> > that results in a FATAL error because of the limit) the results are below.> > Hm, can you post explain plan for the case work_mem=1024MB. I guess the> difference is due to caching. According to the explain analyze, there are> just cache hits, no reads.> > Anyway the hash join uses only about 40MB of memory, so 1024MB should be> perfectly fine and the explain plan should be exactly the same as with> work_mem=2047MB. 
And the row estimates seem quite precise, so I don't> think there's some severe overestimation.> > Tomas>", "msg_date": "Mon, 22 Nov 2010 18:12:30 -0600", "msg_from": "Humair Mohammed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query Performance SQL Server vs. Postgresql" }, { "msg_contents": "On Mon, Nov 22, 2010 at 7:12 PM, Humair Mohammed <[email protected]> wrote:\n> I did some further analysis and here are the results:\n> work_mem;response_time\n> 1MB;62 seconds\n> 2MB;2 seconds\n> 4MB;700 milliseconds\n> 8MB;550 milliseconds\n> In all cases shared_buffers were set to the default value of 32MB. As you\n> can see the 1 to 2 MB jump on the work_mem does wonders. I probably don't\n> need this to be any higher than 8 or 16 MB. Thanks to all for help!\n> Humair\n\nwork_mem directly affects how the query is planned, because certain\ntypes of plans (hash joins ans large sorts) require memory. raising or\nlowering shared_buffers OTOH is very subtle and is not something you\ntune to improve the execution of a single query...\n\nmerlin\n", "msg_date": "Tue, 23 Nov 2010 09:20:05 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Performance SQL Server vs. Postgresql" } ]
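A rough consolidation of the fix this thread arrives at: set work_mem per session rather than server-wide, and verify it actually took effect before re-running EXPLAIN. The query below is reconstructed from the posted plan (the SELECT list is a guess; only the tables, join columns and COALESCE filter are known), and the 64MB figure is simply Tomas's suggestion, comfortably above the ~40MB the hash finally reported:

    -- Confirm the value this session is really using; after editing
    -- postgresql.conf the new setting only applies once the server has
    -- actually reloaded it (or been restarted, as happened here).
    SHOW work_mem;

    -- Raise work_mem for this session only, so the hash table is built
    -- in a single batch instead of spilling to temp files.
    SET work_mem = '64MB';

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT pb.id, pb.question, pb.response, pg.response
    FROM pivotbad pb
    JOIN pivotgood pg
      ON pb.id = pg.id
     AND pb.question = pg.question
    WHERE COALESCE(pb.response, 'MISSING') <> COALESCE(pg.response, 'MISSING');
    -- The hash node should now report "Batches: 1" rather than "Batches: 64".

    -- Return to the server default afterwards.
    RESET work_mem;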
[ { "msg_contents": "Hi,\n\nI have to collect lots of prices from web sites and keep track of their\nchanges. What is the best option?\n\n1) one 'price' row per price change:\n\n\tcreate table price (\n\t\tid_price primary key, \n\t\tid_product integer references product,\n\t\tprice integer\n\t);\n\n2) a single 'price' row containing all the changes:\n\n\tcreate table price (\n\t\tid_price primary key, \n\t\tid_product integer references product,\n\t\tprice integer[] -- prices are 'pushed' on this array as they change\n\t);\n\nWhich is bound to give the best performance, knowing I will often need\nto access the latest and next-to-latest prices?\n\nThanks,\n", "msg_date": "Tue, 16 Nov 2010 11:50:55 +0100", "msg_from": "Louis-David Mitterrand <[email protected]>", "msg_from_op": true, "msg_subject": "best db schema for time series data?" }, { "msg_contents": "Hello\n\nmy opinion:\n\n@1 can be faster for access to last items with index\n@2 can be more effective about data files length allocation\n\n@1 or @2 - it depends on number of prices per product. For small\nnumber (less 100) I am strong for @2 (if speed is important).\nPersonally prefer @2.\n\nPavel\n\n2010/11/16 Louis-David Mitterrand <[email protected]>:\n> Hi,\n>\n> I have to collect lots of prices from web sites and keep track of their\n> changes. What is the best option?\n>\n> 1) one 'price' row per price change:\n>\n>        create table price (\n>                id_price primary key,\n>                id_product integer references product,\n>                price integer\n>        );\n>\n> 2) a single 'price' row containing all the changes:\n>\n>        create table price (\n>                id_price primary key,\n>                id_product integer references product,\n>                price integer[] -- prices are 'pushed' on this array as they change\n>        );\n>\n> Which is bound to give the best performance, knowing I will often need\n> to access the latest and next-to-latest prices?\n>\n> Thanks,\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Tue, 16 Nov 2010 12:03:29 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: best db schema for time series data?" }, { "msg_contents": "On Tue, Nov 16, 2010 at 12:03:29PM +0100, Pavel Stehule wrote:\n> Hello\n> \n> my opinion:\n> \n> @1 can be faster for access to last items with index\n> @2 can be more effective about data files length allocation\n\nHi Pavel,\n\nWhat is \"data files length allocation\" ?\n", "msg_date": "Tue, 16 Nov 2010 12:07:30 +0100", "msg_from": "Louis-David Mitterrand <[email protected]>", "msg_from_op": true, "msg_subject": "Re: best db schema for time series data?" }, { "msg_contents": "2010/11/16 Louis-David Mitterrand <[email protected]>:\n> On Tue, Nov 16, 2010 at 12:03:29PM +0100, Pavel Stehule wrote:\n>> Hello\n>>\n>> my opinion:\n>>\n>> @1 can be faster for access to last items with index\n>> @2 can be more effective about data files length allocation\n>\n> Hi Pavel,\n>\n> What is \"data files length allocation\" ?\n\nsize of data files on disc :)\n\npg needs a some bytes for head on every row - so if you use a array,\nthen you share its. 
Next varlena types (like array) can be compressed.\n\nPavel\n\n\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Tue, 16 Nov 2010 12:11:43 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: best db schema for time series data?" }, { "msg_contents": "On 16-11-2010 11:50, Louis-David Mitterrand wrote:\n> I have to collect lots of prices from web sites and keep track of their\n> changes. What is the best option?\n>\n> 1) one 'price' row per price change:\n>\n> \tcreate table price (\n> \t\tid_price primary key,\n> \t\tid_product integer references product,\n> \t\tprice integer\n> \t);\n>\n> 2) a single 'price' row containing all the changes:\n>\n> \tcreate table price (\n> \t\tid_price primary key,\n> \t\tid_product integer references product,\n> \t\tprice integer[] -- prices are 'pushed' on this array as they change\n> \t);\n>\n> Which is bound to give the best performance, knowing I will often need\n> to access the latest and next-to-latest prices?\n\nIf you mostly need the last few prices, I'd definitaly go with the first \naproach, its much cleaner. Besides, you can store a date/time per price, \nso you know when it changed. With the array-approach that's a bit harder \nto do.\n\nIf you're concerned with performance, introduce some form of a \nmaterialized view for the most recent price of a product. Or reverse the \nentire process and make a \"current price\"-table and a \"price history\"-table.\n\nBest regards,\n\nArjen\n\n", "msg_date": "Tue, 16 Nov 2010 12:18:35 +0100", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: best db schema for time series data?" }, { "msg_contents": "On Tue, Nov 16, 2010 at 12:18:35PM +0100, Arjen van der Meijden wrote:\n> On 16-11-2010 11:50, Louis-David Mitterrand wrote:\n> >I have to collect lots of prices from web sites and keep track of their\n> >changes. What is the best option?\n> >\n> >1) one 'price' row per price change:\n> >\n> >\tcreate table price (\n> >\t\tid_price primary key,\n> >\t\tid_product integer references product,\n> >\t\tprice integer\n> >\t);\n> >\n> >2) a single 'price' row containing all the changes:\n> >\n> >\tcreate table price (\n> >\t\tid_price primary key,\n> >\t\tid_product integer references product,\n> >\t\tprice integer[] -- prices are 'pushed' on this array as they change\n> >\t);\n> >\n> >Which is bound to give the best performance, knowing I will often need\n> >to access the latest and next-to-latest prices?\n> \n> If you mostly need the last few prices, I'd definitaly go with the\n> first aproach, its much cleaner. Besides, you can store a date/time\n> per price, so you know when it changed. With the array-approach\n> that's a bit harder to do.\n> \n> If you're concerned with performance, introduce some form of a\n> materialized view for the most recent price of a product. Or reverse\n> the entire process and make a \"current price\"-table and a \"price\n> history\"-table.\n\nThat's exactly my current 'modus operandi'. So it's nice to have\nconfirmation that I'm not using the worst schema out there :)\n", "msg_date": "Tue, 16 Nov 2010 12:28:16 +0100", "msg_from": "Louis-David Mitterrand <[email protected]>", "msg_from_op": true, "msg_subject": "Re: best db schema for time series data?" 
}, { "msg_contents": "Hi,\n> > If you mostly need the last few prices, I'd definitaly go with the\n> > first aproach, its much cleaner. Besides, you can store a date/time\n> > per price, so you know when it changed. \nWe too were using such an approach for 'soft deletes'. Soon we realized \nthat using a one char valid flag to mark the latest records was better. It \nwas easier to filter on that. An index on the modified date column was \nnot being used consistently for some reason or the other. \nThe VALID records form a small portion of the big table and an index on \nthe column help fetch the data pretty fast. Of course, you could partition \non the flag also (we did not have to). A slight processing overhead of \nupdating the valid FLAG column is the penalty. This was an Oracle \ndatabase.\nRegards,\nJayadevan\n\n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only for \nthe person to whom it is addressed and may contain confidential and/or \nprivileged material. If you have received this e-mail in error, kindly \ncontact the sender and destroy all copies of the original communication. \nIBS makes no warranty, express or implied, nor guarantees the accuracy, \nadequacy or completeness of the information contained in this email or any \nattachment and is not liable for any errors, defects, omissions, viruses \nor for resultant loss or damage, if any, direct or indirect.\"\n\n\n\n\n\n", "msg_date": "Tue, 16 Nov 2010 17:25:42 +0530", "msg_from": "Jayadevan M <[email protected]>", "msg_from_op": false, "msg_subject": "Re: best db schema for time series data?" }, { "msg_contents": "In article <[email protected]>,\nArjen van der Meijden <[email protected]> writes:\n\n> On 16-11-2010 11:50, Louis-David Mitterrand wrote:\n>> I have to collect lots of prices from web sites and keep track of their\n>> changes. What is the best option?\n>> \n>> 1) one 'price' row per price change:\n>> \n>> create table price (\n>> id_price primary key,\n>> id_product integer references product,\n>> price integer\n>> );\n>> \n>> 2) a single 'price' row containing all the changes:\n>> \n>> create table price (\n>> id_price primary key,\n>> id_product integer references product,\n>> price integer[] -- prices are 'pushed' on this array as they change\n>> );\n>> \n>> Which is bound to give the best performance, knowing I will often need\n>> to access the latest and next-to-latest prices?\n\n> If you mostly need the last few prices, I'd definitaly go with the\n> first aproach, its much cleaner. Besides, you can store a date/time\n> per price, so you know when it changed. With the array-approach that's\n> a bit harder to do.\n\nI'd probably use a variant of this:\n\n CREATE TABLE prices (\n pid int NOT NULL REFERENCES products,\n validTil timestamp(0) NULL,\n price int NOT NULL,\n UNIQUE (pid, validTil)\n );\n\nThe current price of a product is always the row with validTil IS NULL.\nThe lookup should be pretty fast because it can use the index of the\nUNIQUE constraint.\n\n", "msg_date": "Tue, 16 Nov 2010 17:28:19 +0100", "msg_from": "Harald Fuchs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: best db schema for time series data?" }, { "msg_contents": "[email protected] (Louis-David Mitterrand)\nwrites:\n> I have to collect lots of prices from web sites and keep track of their\n> changes. 
What is the best option?\n>\n> 1) one 'price' row per price change:\n>\n> \tcreate table price (\n> \t\tid_price primary key,\n> \t\tid_product integer references product,\n> \t\tprice integer\n> \t);\n>\n> 2) a single 'price' row containing all the changes:\n>\n> \tcreate table price (\n> \t\tid_price primary key,\n> \t\tid_product integer references product,\n> \t\tprice integer[] -- prices are 'pushed' on this array as they change\n> \t);\n>\n> Which is bound to give the best performance, knowing I will often need\n> to access the latest and next-to-latest prices?\n\nI'd definitely bias towards #1, but with a bit of a change...\n\ncreate table product (\n id_product serial primary key\n);\n\ncreate table price (\n id_product integer references product,\n as_at timestamptz default now(),\n primary key (id_product, as_at),\n price integer\n);\n\nThe query to get the last 5 prices for a product should be\nsplendidly efficient:\n\n select price, as_at from price\n where id_product = 17\n order by as_at desc limit 5;\n\n(That'll use the PK index perfectly nicely.)\n\nIf you needed higher performance, for \"latest price,\" then I'd add a\nsecondary table, and use triggers to copy latest price into place:\n\n create table latest_prices (\n id_product integer primary key references product,\n price integer\n );\n\ncreate or replace function capture_latest_price () returns trigger as $$\ndeclare\nbegin\n\tdelete from latest_prices where id_product = NEW.id_product;\n\tinsert into latest_prices (id_product,price) values\n\t (NEW.id_product, NEW.price);\n\treturn NEW;\nend\n$$ language plpgsql;\n\ncreate trigger price_capture after insert on price execute procedure capture_latest_price();\n\nThis captures *just* the latest price for each product. (There's a bit\nof race condition - if there are two concurrent price updates, one will\nfail, which wouldn't happen without this trigger in place.)\n--\n\"... Turns out that JPG was in fact using his brain... and I am\ninclined to encourage him to continue the practice even if it isn't\nexactly what I would have done myself.\" -- Alan Bawden (way out of\ncontext)\n", "msg_date": "Tue, 16 Nov 2010 11:35:24 -0500", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: best db schema for time series data?" }, { "msg_contents": "On Tue, Nov 16, 2010 at 11:35:24AM -0500, Chris Browne wrote:\n> [email protected] (Louis-David Mitterrand)\n> writes:\n> > I have to collect lots of prices from web sites and keep track of their\n> > changes. 
What is the best option?\n> >\n> > 1) one 'price' row per price change:\n> >\n> > \tcreate table price (\n> > \t\tid_price primary key,\n> > \t\tid_product integer references product,\n> > \t\tprice integer\n> > \t);\n> >\n> > 2) a single 'price' row containing all the changes:\n> >\n> > \tcreate table price (\n> > \t\tid_price primary key,\n> > \t\tid_product integer references product,\n> > \t\tprice integer[] -- prices are 'pushed' on this array as they change\n> > \t);\n> >\n> > Which is bound to give the best performance, knowing I will often need\n> > to access the latest and next-to-latest prices?\n> \n> I'd definitely bias towards #1, but with a bit of a change...\n> \n> create table product (\n> id_product serial primary key\n> );\n> \n> create table price (\n> id_product integer references product,\n> as_at timestamptz default now(),\n> primary key (id_product, as_at),\n> price integer\n> );\n\nHi Chris,\n\nSo an \"id_price serial\" on the price table is not necessary in your\nopinion? I am using \"order by id_price limit X\" or \"max(id_price)\" to\nget at the most recent prices.\n\n> The query to get the last 5 prices for a product should be\n> splendidly efficient:\n> \n> select price, as_at from price\n> where id_product = 17\n> order by as_at desc limit 5;\n> \n> (That'll use the PK index perfectly nicely.)\n> \n> If you needed higher performance, for \"latest price,\" then I'd add a\n> secondary table, and use triggers to copy latest price into place:\n> \n> create table latest_prices (\n> id_product integer primary key references product,\n> price integer\n> );\n\nI did the same thing with a 'price_dispatch' trigger and partitioned\ntables (inheritance). It's definitely needed when the price database\ngrow into the millions.\n\nThanks,\n", "msg_date": "Fri, 19 Nov 2010 10:46:24 +0100", "msg_from": "Louis-David Mitterrand <[email protected]>", "msg_from_op": true, "msg_subject": "Re: best db schema for time series data?" }, { "msg_contents": "On Tue, Nov 16, 2010 at 05:28:19PM +0100, Harald Fuchs wrote:\n> In article <[email protected]>,\n> Arjen van der Meijden <[email protected]> writes:\n> \n> > On 16-11-2010 11:50, Louis-David Mitterrand wrote:\n> >> I have to collect lots of prices from web sites and keep track of their\n> >> changes. What is the best option?\n> >> \n> >> 1) one 'price' row per price change:\n> >> \n> >> create table price (\n> >> id_price primary key,\n> >> id_product integer references product,\n> >> price integer\n> >> );\n> >> \n> >> 2) a single 'price' row containing all the changes:\n> >> \n> >> create table price (\n> >> id_price primary key,\n> >> id_product integer references product,\n> >> price integer[] -- prices are 'pushed' on this array as they change\n> >> );\n> >> \n> >> Which is bound to give the best performance, knowing I will often need\n> >> to access the latest and next-to-latest prices?\n> \n> > If you mostly need the last few prices, I'd definitaly go with the\n> > first aproach, its much cleaner. Besides, you can store a date/time\n> > per price, so you know when it changed. 
With the array-approach that's\n> > a bit harder to do.\n> \n> I'd probably use a variant of this:\n> \n> CREATE TABLE prices (\n> pid int NOT NULL REFERENCES products,\n> validTil timestamp(0) NULL,\n> price int NOT NULL,\n> UNIQUE (pid, validTil)\n> );\n> \n> The current price of a product is always the row with validTil IS NULL.\n> The lookup should be pretty fast because it can use the index of the\n> UNIQUE constraint.\n\nHi,\n\nThe validTil idea is nice, but you have to manage that field with a\ntrigger, right?\n", "msg_date": "Fri, 19 Nov 2010 10:50:21 +0100", "msg_from": "Louis-David Mitterrand <[email protected]>", "msg_from_op": true, "msg_subject": "Re: best db schema for time series data?" }, { "msg_contents": "[email protected] (Louis-David Mitterrand)\nwrites:\n> On Tue, Nov 16, 2010 at 11:35:24AM -0500, Chris Browne wrote:\n>> [email protected] (Louis-David Mitterrand)\n>> writes:\n>> > I have to collect lots of prices from web sites and keep track of their\n>> > changes. What is the best option?\n>> >\n>> > 1) one 'price' row per price change:\n>> >\n>> > \tcreate table price (\n>> > \t\tid_price primary key,\n>> > \t\tid_product integer references product,\n>> > \t\tprice integer\n>> > \t);\n>> >\n>> > 2) a single 'price' row containing all the changes:\n>> >\n>> > \tcreate table price (\n>> > \t\tid_price primary key,\n>> > \t\tid_product integer references product,\n>> > \t\tprice integer[] -- prices are 'pushed' on this array as they change\n>> > \t);\n>> >\n>> > Which is bound to give the best performance, knowing I will often need\n>> > to access the latest and next-to-latest prices?\n>> \n>> I'd definitely bias towards #1, but with a bit of a change...\n>> \n>> create table product (\n>> id_product serial primary key\n>> );\n>> \n>> create table price (\n>> id_product integer references product,\n>> as_at timestamptz default now(),\n>> primary key (id_product, as_at),\n>> price integer\n>> );\n>\n> Hi Chris,\n>\n> So an \"id_price serial\" on the price table is not necessary in your\n> opinion? I am using \"order by id_price limit X\" or \"max(id_price)\" to\n> get at the most recent prices.\n\nIt (id_price) is an extra piece of information that doesn't reveal an\nimportant fact, namely when the price was added.\n\nI'm uncomfortable with adding data that doesn't provide much more\ninformation, and it troubles me when people put a lot of interpretation\ninto the meanings of SERIAL columns.\n\nI'd like to set up some schemas (for experiment, if not necessarily to\nget deployed to production) where I'd use DCE UUID values rather than\nsequences, so that people wouldn't make the error of imagining meanings\nin the values that aren't really there. \n\nAnd I suppose that there lies a way to think about it... If you used\nUUIDs rather than SERIAL, how would your application break? \n\nAnd of the ways in which it would break, which of those are errors that\nfall from:\n\n a) Ignorant usage, assuming order that isn't really there? (e.g. 
- a\n SERIAL might capture some order information, but UUID won't!)\n\n b) Inadequate data capture, where you're using the implicit data\n collection from SERIAL to capture, poorly, information that should\n be expressly captured?\n\nWhen I added the timestamp to the \"price\" table, that's intended to\naddress b), capturing the time that the price was added.\n\n>> The query to get the last 5 prices for a product should be\n>> splendidly efficient:\n>> \n>> select price, as_at from price\n>> where id_product = 17\n>> order by as_at desc limit 5;\n>> \n>> (That'll use the PK index perfectly nicely.)\n>> \n>> If you needed higher performance, for \"latest price,\" then I'd add a\n>> secondary table, and use triggers to copy latest price into place:\n>> \n>> create table latest_prices (\n>> id_product integer primary key references product,\n>> price integer\n>> );\n>\n> I did the same thing with a 'price_dispatch' trigger and partitioned\n> tables (inheritance). It's definitely needed when the price database\n> grow into the millions.\n>\n> Thanks,\n\nThe conversations are always interesting! Cheers!\n-- \noutput = (\"cbbrowne\" \"@\" \"gmail.com\")\nhttp://www3.sympatico.ca/cbbrowne/x.html\nFLORIDA: If you think we can't vote, wait till you see us drive.\n", "msg_date": "Fri, 19 Nov 2010 12:13:58 -0500", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: best db schema for time series data?" }, { "msg_contents": "On Fri, Nov 19, 2010 at 10:50 AM, Louis-David Mitterrand\n<[email protected]> wrote:\n> On Tue, Nov 16, 2010 at 05:28:19PM +0100, Harald Fuchs wrote:\n>> In article <[email protected]>,\n>> Arjen van der Meijden <[email protected]> writes:\n>>\n>> > On 16-11-2010 11:50, Louis-David Mitterrand wrote:\n>> >> I have to collect lots of prices from web sites and keep track of their\n>> >> changes. What is the best option?\n>> >>\n>> >> 1) one 'price' row per price change:\n>> >>\n>> >> create table price (\n>> >> id_price primary key,\n>> >> id_product integer references product,\n>> >> price integer\n>> >> );\n>> >>\n>> >> 2) a single 'price' row containing all the changes:\n>> >>\n>> >> create table price (\n>> >> id_price primary key,\n>> >> id_product integer references product,\n>> >> price integer[] -- prices are 'pushed' on this array as they change\n>> >> );\n>> >>\n>> >> Which is bound to give the best performance, knowing I will often need\n>> >> to access the latest and next-to-latest prices?\n>>\n>> > If you mostly need the last few prices, I'd definitaly go with the\n>> > first aproach, its much cleaner. Besides, you can store a date/time\n>> > per price, so you know when it changed. With the array-approach that's\n>> > a bit harder to do.\n>>\n>> I'd probably use a variant of this:\n>>\n>>   CREATE TABLE prices (\n>>     pid int NOT NULL REFERENCES products,\n>>     validTil timestamp(0) NULL,\n>>     price int NOT NULL,\n>>     UNIQUE (pid, validTil)\n>>   );\n>>\n>> The current price of a product is always the row with validTil IS NULL.\n>> The lookup should be pretty fast because it can use the index of the\n>> UNIQUE constraint.\n\nEven better: with a partial index lookup should be more efficient and\nprobably will stay that way even when the number of prices increases\n(and the number of products stays the same). 
With\n\nCREATE UNIQUE INDEX current_prices\nON prices (\n pid\n)\nWHERE validTil IS NULL;\n\nI get\n\nrobert=> explain select price from prices where pid = 12344 and\nvalidTil is null;\n QUERY PLAN\n-----------------------------------------------------------------------------\n Index Scan using current_prices on prices (cost=0.00..8.28 rows=1 width=4)\n Index Cond: (pid = 12344)\n(2 rows)\n\nThe index can actually be used here.\n\n(see attachment)\n\n> The validTil idea is nice, but you have to manage that field with a\n> trigger, right?\n\nWell, you don't need to. You can always do\n\nbegin;\nupdate prices set validTil = current_timestamp\n where pid = 123 and validTil is NULL;\ninsert into prices values ( 123, null, 94 );\ncommit;\n\nBut with a trigger it would be more convenient of course.\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/", "msg_date": "Sat, 20 Nov 2010 01:16:23 +0100", "msg_from": "Robert Klemme <[email protected]>", "msg_from_op": false, "msg_subject": "Re: best db schema for time series data?" }, { "msg_contents": "--- On Fri, 11/19/10, Robert Klemme <[email protected]> wrote:\n\n> From: Robert Klemme <[email protected]>\n> Subject: Re: [PERFORM] best db schema for time series data?\n> To: [email protected]\n> Date: Friday, November 19, 2010, 7:16 PM\n> On Fri, Nov 19, 2010 at 10:50 AM,\n> Louis-David Mitterrand\n> <[email protected]>\n> wrote:\n> > On Tue, Nov 16, 2010 at 05:28:19PM +0100, Harald Fuchs\n> wrote:\n> >> In article <[email protected]>,\n> >> Arjen van der Meijden <[email protected]>\n> writes:\n> >>\n> >> > On 16-11-2010 11:50, Louis-David Mitterrand\n> wrote:\n> >> >> I have to collect lots of prices from web\n> sites and keep track of their\n> >> >> changes. What is the best option?\n> >> >>\n> >> >> 1) one 'price' row per price change:\n> >> >>\n> >> >> create table price (\n> >> >> id_price primary key,\n> >> >> id_product integer references product,\n> >> >> price integer\n> >> >> );\n> >> >>\n> >> >> 2) a single 'price' row containing all\n> the changes:\n> >> >>\n> >> >> create table price (\n> >> >> id_price primary key,\n> >> >> id_product integer references product,\n> >> >> price integer[] -- prices are 'pushed' on\n> this array as they change\n> >> >> );\n> >> >>\n> >> >> Which is bound to give the best\n> performance, knowing I will often need\n> >> >> to access the latest and next-to-latest\n> prices?\n> >>\n> >> > If you mostly need the last few prices, I'd\n> definitaly go with the\n> >> > first aproach, its much cleaner. Besides, you\n> can store a date/time\n> >> > per price, so you know when it changed. With\n> the array-approach that's\n> >> > a bit harder to do.\n> >>\n> >> I'd probably use a variant of this:\n> >>\n> >>   CREATE TABLE prices (\n> >>     pid int NOT NULL REFERENCES products,\n> >>     validTil timestamp(0) NULL,\n> >>     price int NOT NULL,\n> >>     UNIQUE (pid, validTil)\n> >>   );\n> >>\n> >> The current price of a product is always the row\n> with validTil IS NULL.\n> >> The lookup should be pretty fast because it can\n> use the index of the\n> >> UNIQUE constraint.\n> \n> Even better: with a partial index lookup should be more\n> efficient and\n> probably will stay that way even when the number of prices\n> increases\n> (and the number of products stays the same).  
With\n> \n> CREATE UNIQUE INDEX current_prices\n> ON prices (\n>   pid\n> )\n> WHERE validTil IS NULL;\n> \n> I get\n> \n> robert=> explain select price from prices where pid =\n> 12344 and\n> validTil is null;\n>                \n>              \n>    QUERY PLAN\n> -----------------------------------------------------------------------------\n> Index Scan using current_prices on prices \n> (cost=0.00..8.28 rows=1 width=4)\n>    Index Cond: (pid = 12344)\n> (2 rows)\n> \n> The index can actually be used here.\n> \n> (see attachment)\n> \n> > The validTil idea is nice, but you have to manage that\n> field with a\n> > trigger, right?\n> \n> Well, you don't need to.  You can always do\n> \n> begin;\n> update prices set validTil = current_timestamp\n>   where pid = 123 and validTil is NULL;\n> insert into prices values ( 123, null, 94 );\n> commit;\n> \n> But with a trigger it would be more convenient of course.\n> \n> Kind regards\n> \n> robert\n> \n> -- \n> remember.guy do |as, often| as.you_can - without end\n> http://blog.rubybestpractices.com/\n> \n> -----Inline Attachment Follows-----\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\nLouis,\n\nSomeday, as sure as Codd made little relational databases, someone will put an incorrect price in that table, and it will have to be changed, and that change will ripple throughout your system. You have a unique chance here, at the beginning, to foresee that inevitability and plan for it. \n\nTake a look at \n\n http://en.wikipedia.org/wiki/Temporal_database\n\nand \n\n http://pgfoundry.org/projects/temporal/\n\nand anything Snodgrass ever wrote about temporal databases. Its a fascinating schema design subject, one that comes in very handy in dealing with time-influenced data.\n\nGood luck!\n\nBob Lunney\n\n\n \n", "msg_date": "Sat, 20 Nov 2010 08:27:51 -0800 (PST)", "msg_from": "Bob Lunney <[email protected]>", "msg_from_op": false, "msg_subject": "Re: best db schema for time series data?" } ]
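Pulling the thread's pieces together — Harald's validTil column, Robert's partial unique index, and a trigger in the spirit of Chris's latest_prices idea — one possible consolidated sketch follows. The column, index, function and trigger names here are illustrative, not taken from any of the posts:

    CREATE TABLE product (
        id_product serial PRIMARY KEY
    );

    -- One row per price change; the current price is the row whose
    -- valid_til is still NULL.
    CREATE TABLE price (
        id_product integer NOT NULL REFERENCES product,
        valid_from timestamptz NOT NULL DEFAULT now(),
        valid_til  timestamptz NULL,
        price      integer NOT NULL,
        PRIMARY KEY (id_product, valid_from)
    );

    -- At most one "current" row per product, and a fast lookup path for it.
    CREATE UNIQUE INDEX price_current_idx
        ON price (id_product)
        WHERE valid_til IS NULL;

    -- Close out the previous current row automatically, so callers only
    -- ever INSERT new prices and never touch old rows themselves.
    CREATE OR REPLACE FUNCTION close_previous_price() RETURNS trigger AS $$
    BEGIN
        UPDATE price
           SET valid_til = NEW.valid_from
         WHERE id_product = NEW.id_product
           AND valid_til IS NULL;
        RETURN NEW;
    END
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER price_close_previous
        BEFORE INSERT ON price
        FOR EACH ROW EXECUTE PROCEDURE close_previous_price();

    -- Current price:
    --   SELECT price FROM price WHERE id_product = 17 AND valid_til IS NULL;
    -- Latest and next-to-latest:
    --   SELECT price, valid_from FROM price
    --   WHERE id_product = 17 ORDER BY valid_from DESC LIMIT 2;

As with Chris's trigger, two concurrent inserts for the same product can still collide (here on the partial unique index), which for a price feed is usually the behaviour you want rather than a problem.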
[ { "msg_contents": "This is not directly a PostgreSQL performance question but I'm hoping \nsome of the chaps that build high IO PostgreSQL servers on here can help.\n\nWe build file transfer acceleration s/w (and use PostgreSQL as our \ndatabase) but we need to build a test server that can handle a sustained \nwrite throughput of 1,25 GB for 5 mins.\n\nWhy this number, because we want to push a 10 Gbps network link for 5-8 \nmins, 10Gbps = 1.25 GB write, and would like to drive it for 5-8 mins \nwhich would be 400-500 GB.\n\nNote this is just a \"test\" server therefore it does not need fault \ntolerance.\n\nThanks in advance,\nEric\n", "msg_date": "Wed, 17 Nov 2010 09:26:56 -0500", "msg_from": "Eric Comeau <[email protected]>", "msg_from_op": true, "msg_subject": "How to achieve sustained disk performance of 1.25 GB write for 5 mins" }, { "msg_contents": "On Wednesday 17 November 2010 15:26:56 Eric Comeau wrote:\n> This is not directly a PostgreSQL performance question but I'm hoping\n> some of the chaps that build high IO PostgreSQL servers on here can help.\n> \n> We build file transfer acceleration s/w (and use PostgreSQL as our\n> database) but we need to build a test server that can handle a sustained\n> write throughput of 1,25 GB for 5 mins.\n> \n> Why this number, because we want to push a 10 Gbps network link for 5-8\n> mins, 10Gbps = 1.25 GB write, and would like to drive it for 5-8 mins\n> which would be 400-500 GB.\n> \n> Note this is just a \"test\" server therefore it does not need fault\n> tolerance.\n> \n> Thanks in advance,\n> Eric\n\nI'm sure there are others with more experience on this, but if you don't need \nfailt tolerance, a bunch of fast disks in striping-mode (so-called RAID-0) on \nseperated channels (eg. different PCI-Express channels) would be my first step.\n\nAlternatively, if you don't care if the data is actually stored, couldn't you \nprocess it with a program that does a checksum over the data transmitted and \nthen ignores/forgets it? (eg. forget about disk-storage and do it all in \nmemory?)\n\n--\nJoost\n", "msg_date": "Wed, 17 Nov 2010 16:25:02 +0100", "msg_from": "\"J. Roeleveld\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to achieve sustained disk performance of 1.25 GB write for 5\n\tmins" }, { "msg_contents": "On 11/17/2010 09:26 AM, Eric Comeau wrote:\n> This is not directly a PostgreSQL performance question but I'm hoping\n> some of the chaps that build high IO PostgreSQL servers on here can help.\n>\n> We build file transfer acceleration s/w (and use PostgreSQL as our\n> database) but we need to build a test server that can handle a sustained\n> write throughput of 1,25 GB for 5 mins.\n>\n> Why this number, because we want to push a 10 Gbps network link for 5-8\n> mins, 10Gbps = 1.25 GB write, and would like to drive it for 5-8 mins\n> which would be 400-500 GB.\n>\n> Note this is just a \"test\" server therefore it does not need fault\n> tolerance.\n>\n> Thanks in advance,\n> Eric\n>\n\nOff hand, I would suggest:\n\n8x http://www.kingston.com/ssd/vplus100.asp (180MB/sec sustained write) \nstripped (RAID 0, you did say that you don't care about safety). 
That \nshould be 1.44GB/sec write, minus overhead.\n\n1x \nhttp://www.lsi.com/channel/products/raid_controllers/3ware_9690sa8i/index.html \nRAID card (note that it's the internal port model, despite the image)\n\n4x http://usa.chenbro.com/corporatesite/products_detail.php?sku=114 (for \nmounting the drives)\n\nThat would be about the minimum I should expect you can pay to get that \nkind of performance. Others are free to dis/agree. :)\n\n-- \nDigimer\nE-Mail: [email protected]\nAN!Whitepapers: http://alteeve.com\nNode Assassin: http://nodeassassin.org\n", "msg_date": "Wed, 17 Nov 2010 10:28:24 -0500", "msg_from": "Digimer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to achieve sustained disk performance of 1.25 GB\n\twrite for 5 mins" }, { "msg_contents": "On Wed, 17 Nov 2010 09:26:56 -0500\nEric Comeau <[email protected]> wrote:\n\n> This is not directly a PostgreSQL performance question but I'm hoping \n> some of the chaps that build high IO PostgreSQL servers on here can help.\n> \n> We build file transfer acceleration s/w (and use PostgreSQL as our \n> database) but we need to build a test server that can handle a sustained \n> write throughput of 1,25 GB for 5 mins.\n> \n> Why this number, because we want to push a 10 Gbps network link for 5-8 \n> mins, 10Gbps = 1.25 GB write, and would like to drive it for 5-8 mins \n> which would be 400-500 GB.\n> \n> Note this is just a \"test\" server therefore it does not need fault \n> tolerance.\n\nGet a machine with enough RAM and run postgresql from RAM disk. Write a\nstart script to copy the RAM disk back to normal disk then stopping and back\nto RAM disk for start.\n\nthis must be the fastest solution.\n\n-- \nLutz\n\n", "msg_date": "Wed, 17 Nov 2010 16:49:51 +0100", "msg_from": "Lutz Steinborn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to achieve sustained disk performance of 1.25 GB\n\twrite for 5 mins" }, { "msg_contents": "On 11/17/10 15:26, Eric Comeau wrote:\n> This is not directly a PostgreSQL performance question but I'm hoping\n> some of the chaps that build high IO PostgreSQL servers on here can help.\n>\n> We build file transfer acceleration s/w (and use PostgreSQL as our\n> database) but we need to build a test server that can handle a sustained\n> write throughput of 1,25 GB for 5 mins.\n\nJust to clarify: you need 1.25 GB/s write throughput?\n\nFor one thing, you need not only fast storage but also a fast CPU and \nfile system. If you are going to stream this data directly over the \nnetwork in a single FTP-like session, you need fast single-core \nperformance (so buy the fastest low-core-count CPU possible) and a file \nsystem which doesn't interfere much with raw data streaming. If you're \nusing Linux I'd guess either something very simple like ext2 or complex \nbut designed for the task like XFS might be best. 
On FreeBSD, ZFS is \ngreat for streaming but you'll spend a lot of time tuning it :)\n\n From the hardware POW, since you don't really need high IOPS rates, you \ncan go much cheaper with a large number of cheap desktop drives than \nwith SSD-s, if you can build something like this: \nhttp://blog.backblaze.com/2009/09/01/petabytes-on-a-budget-how-to-build-cheap-cloud-storage/\n\nYou don't need the storage space here, but you *do* need many drives to \nachieve speed in RAID (remember to overdesign and assume 50 MB/s per drive).\n\n", "msg_date": "Wed, 17 Nov 2010 17:12:44 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to achieve sustained disk performance of 1.25 GB write for\n\t5 mins" }, { "msg_contents": "On Wed, Nov 17, 2010 at 9:26 AM, Eric Comeau <[email protected]> wrote:\n> This is not directly a PostgreSQL performance question but I'm hoping some\n> of the chaps that build high IO PostgreSQL servers on here can help.\n>\n> We build file transfer acceleration s/w (and use PostgreSQL as our database)\n> but we need to build a test server that can handle a sustained write\n> throughput of 1,25 GB for 5 mins.\n>\n> Why this number, because we want to push a 10 Gbps network link for 5-8\n> mins, 10Gbps = 1.25 GB write, and would like to drive it for 5-8 mins which\n> would be 400-500 GB.\n>\n> Note this is just a \"test\" server therefore it does not need fault\n> tolerance.\n\nI really doubt you will see 1.25gb/sec over 10gige link. Even if you\ndo though, you will hit a number of bottlenecks if you want to see\nanything close to those numbers. Even with really fast storage you\nwill probably become cpu bound, or bottlenecked in the WAL, or some\nother place.\n\n*) what kind of data do you expect to be writing out at this speed?\n*) how many transactions per second will you expect to have?\n*) what is the architecture of the client? how many connections will\nbe open to postgres writing?\n*) how many cores are in this box? what kind?\n\nmerlin\n", "msg_date": "Wed, 17 Nov 2010 12:28:11 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to achieve sustained disk performance of 1.25 GB\n\twrite for 5 mins" }, { "msg_contents": "\nOn Nov 17, 2010, at 7:28 AM, Digimer wrote:\n\n> On 11/17/2010 09:26 AM, Eric Comeau wrote:\n>> This is not directly a PostgreSQL performance question but I'm hoping\n>> some of the chaps that build high IO PostgreSQL servers on here can help.\n>> \n>> We build file transfer acceleration s/w (and use PostgreSQL as our\n>> database) but we need to build a test server that can handle a sustained\n>> write throughput of 1,25 GB for 5 mins.\n>> \n>> Why this number, because we want to push a 10 Gbps network link for 5-8\n>> mins, 10Gbps = 1.25 GB write, and would like to drive it for 5-8 mins\n>> which would be 400-500 GB.\n>> \n>> Note this is just a \"test\" server therefore it does not need fault\n>> tolerance.\n>> \n>> Thanks in advance,\n>> Eric\n>> \n> \n> Off hand, I would suggest:\n> \n> 8x http://www.kingston.com/ssd/vplus100.asp (180MB/sec sustained write) \n> stripped (RAID 0, you did say that you don't care about safety). 
That \n> should be 1.44GB/sec write, minus overhead.\n\nCan get cheaper disks that go ~135MB/sec write and a couple more of them.\n\n> \n> 1x \n> http://www.lsi.com/channel/products/raid_controllers/3ware_9690sa8i/index.html \n> RAID card (note that it's the internal port model, despite the image)\n> \n\nYou'll need 2 RAID cards with software raid-0 on top to sustain this rate, or simply pure software raid-0. A single raid card tends to be unable to sustain reads or writes that high, no matter how many drives you put on it.\n\nThe last time I tried a 3ware card, it couldn't go past 380MB/sec with 10 drives. 6 to 10 drives in raid 10 were all the same sequential througput, only random iops went up. Maybe raid0 is better. Software raid is usually fastest for raid 0, 1, and 10, other than write cache effects (which are strong and important for a real world db).\n\nI get ~1000MB/sec out of 2 Adaptec 5805s with linux 'md' software raid 0 on top of these (each are raid 10 with 10 drives). If i did not care about data reliability I'd go with anything that had a lot of ports (perhaps a couple cheap SAS cards without complicated raid features) and software raid 0.\n\n\n> 4x http://usa.chenbro.com/corporatesite/products_detail.php?sku=114 (for \n> mounting the drives)\n> \n> That would be about the minimum I should expect you can pay to get that \n> kind of performance. Others are free to dis/agree. :)\n> \n> -- \n> Digimer\n> E-Mail: [email protected]\n> AN!Whitepapers: http://alteeve.com\n> Node Assassin: http://nodeassassin.org\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Wed, 17 Nov 2010 10:48:04 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to achieve sustained disk performance of 1.25 GB\n\twrite for 5 mins" }, { "msg_contents": "On Nov 17, 2010, at 10:48 AM, Scott Carey wrote:\n\n>> \n>> Off hand, I would suggest:\n>> \n>> 8x http://www.kingston.com/ssd/vplus100.asp (180MB/sec sustained write) \n>> stripped (RAID 0, you did say that you don't care about safety). That \n>> should be 1.44GB/sec write, minus overhead.\n> \n> Can get cheaper disks that go ~135MB/sec write and a couple more of them.\n> \n\nAnother option, two of these (650MB+ /sec sustained) in raid 0: http://www.anandtech.com/show/3997/ocz-revodrive-x2-review/3\n\nNo external enclosure required, no raid card required (the card is basically 4 ssd's raided together in one package). Just 2 PCIe slots. The cost seems to be not too bad, at least for \"how much does it cost to go 600MB/sec\". 
$1200 will get two of them, for a total of 480GB and 1300MB/sec.\n\nNote, these are not data-safe on power failure.\n\n\n", "msg_date": "Wed, 17 Nov 2010 10:58:36 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to achieve sustained disk performance of 1.25 GB\n\twrite for 5 mins" }, { "msg_contents": "On 10-11-17 12:28 PM, Merlin Moncure wrote: \n\n\tOn Wed, Nov 17, 2010 at 9:26 AM, Eric Comeau <[email protected]> <mailto:[email protected]> wrote:\n\t> This is not directly a PostgreSQL performance question but I'm hoping some\n\t> of the chaps that build high IO PostgreSQL servers on here can help.\n\t>\n\t> We build file transfer acceleration s/w (and use PostgreSQL as our database)\n\t> but we need to build a test server that can handle a sustained write\n\t> throughput of 1,25 GB for 5 mins.\n\t>\n\t> Why this number, because we want to push a 10 Gbps network link for 5-8\n\t> mins, 10Gbps = 1.25 GB write, and would like to drive it for 5-8 mins which\n\t> would be 400-500 GB.\n\t>\n\t> Note this is just a \"test\" server therefore it does not need fault\n\t> tolerance.\n\t\n\tI really doubt you will see 1.25gb/sec over 10gige link. Even if you\n\tdo though, you will hit a number of bottlenecks if you want to see\n\tanything close to those numbers. Even with really fast storage you\n\twill probably become cpu bound, or bottlenecked in the WAL, or some\n\tother place.\n\t\n\t*) what kind of data do you expect to be writing out at this speed?\n\t\n\nLarge Video files ... our s/w is used to displace FTP.\n\n\n\t*) how many transactions per second will you expect to have?\n\t\n\nIdeally 1 large file, but it may have to be multiple. We find that if we send multiple files it just causes the disk to thrash more so we get better throughput by sending one large file.\n\n\n\t*) what is the architecture of the client? how many connections will\n\tbe open to postgres writing?\n\t\n\nOur s/w can do multiple streams, but I believe we get better performance with 1 stream handling one large file, you could have 4 streams with 4 files in flight, but the disk thrashes more... postgres is not be writing the file data, our agent reports back to postgres stats on the transfer rate being achieved ... postgres transactions is not the issue. The client and server are written in C and use UDP (with our own error correction) to achieve high network throughput as opposed to TCP.\n\n\n\t*) how many cores are in this box? what kind?\n\t\n\nWell obviously thats part of the equation as well, but its sort of unbounded right now not defined, but our s/w is multi-threaded and can make use of the multiple cores... 
so I'll say for now at a minimum 4.\n\n\n\t\n\tmerlin\n\t\n\n\n\n\n\n\n\n\n On 10-11-17 12:28 PM, Merlin Moncure wrote:\n \n\n\nRe: [PERFORM] How to achieve sustained disk performance of\n 1.25 GB write for 5 mins\n\nOn Wed, Nov 17, 2010 at 9:26 AM, Eric Comeau\n <[email protected]> wrote:\n > This is not directly a PostgreSQL performance question\n but I'm hoping some\n > of the chaps that build high IO PostgreSQL servers on\n here can help.\n >\n > We build file transfer acceleration s/w (and use\n PostgreSQL as our database)\n > but we need to build a test server that can handle a\n sustained write\n > throughput of 1,25 GB for 5 mins.\n >\n > Why this number, because we want to push a 10 Gbps\n network link for 5-8\n > mins, 10Gbps = 1.25 GB write, and would like to drive it\n for 5-8 mins which\n > would be 400-500 GB.\n >\n > Note this is just a \"test\" server therefore it does not\n need fault\n > tolerance.\n\n I really doubt you will see 1.25gb/sec over 10gige link.  Even\n if you\n do though, you will hit a number of bottlenecks if you want to\n see\n anything close to those numbers.  Even with really fast\n storage you\n will probably become cpu bound, or bottlenecked in the WAL, or\n some\n other place.\n\n *) what kind of data do you expect to be writing out at this\n speed?\n\n\n Large Video files ... our s/w is used to displace FTP.\n\n\n *) how many transactions per second will you expect to have?\n\n\n Ideally 1 large file, but it may have to be multiple. We find that\n if we send multiple files it just causes the disk to thrash more so\n we get better throughput by sending one large file.\n\n\n *) what is the architecture of the client? how many\n connections will\n be open to postgres writing?\n\n\n Our s/w can do multiple streams, but I believe we get better\n performance with 1 stream handling one large file, you could have 4\n streams with 4 files in flight, but the disk thrashes more...\n postgres is not be writing the file data, our agent reports back to\n postgres stats on the transfer rate being achieved ... postgres\n transactions is not the issue. The client and server are written in\n C and use UDP (with our own error correction) to achieve high\n network throughput as opposed to TCP.\n\n\n *) how many cores are in this box? what kind?\n\n\n Well obviously thats part of the equation as well, but its sort of\n unbounded right now not defined, but our s/w is multi-threaded and\n can make use of the multiple cores... so I'll say for now at a\n minimum 4.\n\n\n\n merlin", "msg_date": "Wed, 17 Nov 2010 15:11:26 -0600", "msg_from": "\"Eric Comeau\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to achieve sustained disk performance of 1.25 GB write for 5\n\tmins" }, { "msg_contents": "On 11/17/10 22:11, Eric Comeau wrote:\n\n>> *) what kind of data do you expect to be writing out at this speed?\n>>\n> Large Video files ... our s/w is used to displace FTP.\n>>\n>> *) how many transactions per second will you expect to have?\n>>\n> Ideally 1 large file, but it may have to be multiple. We find that if we\n> send multiple files it just causes the disk to thrash more so we get\n> better throughput by sending one large file.\n >\n>> *) what is the architecture of the client? how many connections will\n>> be open to postgres writing?\n>>\n> Our s/w can do multiple streams, but I believe we get better performance\n> with 1 stream handling one large file, you could have 4 streams with 4\n> files in flight, but the disk thrashes more... 
postgres is not be\n> writing the file data, our agent reports back to postgres stats on the\n> transfer rate being achieved ... postgres transactions is not the issue.\n> The client and server are written in C and use UDP (with our own error\n> correction) to achieve high network throughput as opposed to TCP.\n\nI hope you know what you are doing, there is a large list of tricks used \nby modern high performance FTP and web servers to get maximum \nperformance from hardware and the operating system while minimizing CPU \nusage - and most of them don't work with UDP.\n\nBefore you test with real hardware, try simply sending dummy data or \n/dev/null data (i.e. not from disks, not from file systems) and see how \nit goes.\n\n\n", "msg_date": "Thu, 18 Nov 2010 00:27:35 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to achieve sustained disk performance of 1.25 GB write for\n\t5 mins" }, { "msg_contents": "Eric Comeau wrote:\n> Ideally 1 large file, but it may have to be multiple. We find that if \n> we send multiple files it just causes the disk to thrash more so we \n> get better throughput by sending one large file.\n\nIf it's really one disk, sure. The problem you're facing is that your \ntypical drive controller is going to top out at somewhere between 300 - \n500MB/s of sequential writes before it becomes the bottleneck. Above \nsomewhere between 6 and 10 drives attached to one controller on current \nhardware, adding more to a RAID-0 volume only increases the ability to \nhandle seeks quickly. If you want to try and do this with traditional \nhard drives, I'd guess you'd need 3 controllers with at least 4 \nshort-stroked drives attached to each to have any hope of hitting \n1.25GB/s. Once you do that, you'll run into CPU time as the next \nbottleneck. At that point, you'll probably need one CPU per controller, \nall writing out at once, to keep up with your target.\n\nThe only popular hardware design that comes to mind aimed at this sort \nof thing was Sun's \"Thumper\" design, most recently seen in the Sun Fire \nX4540. That put 8 controllers with 6 disks attached to each, claiming \n\"demonstrated up to 2 GB/sec from disk to network\". It will take a \ndesign like that, running across multiple controllers, to get what \nyou're looking for on the disk side--presuming everything else keeps up.\n\nOne of the big SSD-on-PCI-e designs mentioned here already may very well \nend up being a better choice for you here though, as those aren't going \nto require quite as much hardware all get wired up.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n\n\n\n\n\n\nEric Comeau wrote:\n Ideally 1 large file, but it may have to be multiple. We\nfind that if we send multiple files it just causes the disk to thrash\nmore so we get better throughput by sending one large file.\n\n\nIf it's really one disk, sure.  The problem you're facing is that your\ntypical drive controller is going to top out at somewhere between 300 -\n500MB/s of sequential writes before it becomes the bottleneck.  Above\nsomewhere between 6 and 10 drives attached to one controller on current\nhardware, adding more to a RAID-0 volume only increases the ability to\nhandle seeks quickly.  
If you want to try and do this with traditional\nhard drives, I'd guess you'd need 3 controllers with at least 4\nshort-stroked drives attached to each to have any hope of hitting\n1.25GB/s.  Once you do that, you'll run into CPU time as the next\nbottleneck.  At that point, you'll probably need one CPU per\ncontroller, all writing out at once, to keep up with your target.\n\nThe only popular hardware design that comes to mind aimed at this sort\nof thing was Sun's \"Thumper\" design, most recently seen in the Sun Fire\nX4540.  That put 8 controllers with 6 disks attached to each, claiming\n\"demonstrated up to 2 GB/sec from disk to network\".  It will take a\ndesign like that, running across multiple controllers, to get what\nyou're looking for on the disk side--presuming everything else keeps up.\n\nOne of the big SSD-on-PCI-e designs mentioned here already may very\nwell end up being a better choice for you here though, as those aren't\ngoing to require quite as much hardware all get wired up.\n\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books", "msg_date": "Wed, 17 Nov 2010 18:43:32 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to achieve sustained disk performance of 1.25 GB\n\twrite for 5 mins" }, { "msg_contents": "You may also try the Sun's F5100 (flash storage array) - you may\neasily get 700 MB/s just with a single I/O stream (single process), so\njust with 2 streams you'll get your throughput.. - The array has 2TB\ntotal space and max throughput should be around 4GB/s..\n\nRgds,\n-Dimitri\n\n\nOn 11/18/10, Greg Smith <[email protected]> wrote:\n> Eric Comeau wrote:\n>> Ideally 1 large file, but it may have to be multiple. We find that if\n>> we send multiple files it just causes the disk to thrash more so we\n>> get better throughput by sending one large file.\n>\n> If it's really one disk, sure. The problem you're facing is that your\n> typical drive controller is going to top out at somewhere between 300 -\n> 500MB/s of sequential writes before it becomes the bottleneck. Above\n> somewhere between 6 and 10 drives attached to one controller on current\n> hardware, adding more to a RAID-0 volume only increases the ability to\n> handle seeks quickly. If you want to try and do this with traditional\n> hard drives, I'd guess you'd need 3 controllers with at least 4\n> short-stroked drives attached to each to have any hope of hitting\n> 1.25GB/s. Once you do that, you'll run into CPU time as the next\n> bottleneck. At that point, you'll probably need one CPU per controller,\n> all writing out at once, to keep up with your target.\n>\n> The only popular hardware design that comes to mind aimed at this sort\n> of thing was Sun's \"Thumper\" design, most recently seen in the Sun Fire\n> X4540. That put 8 controllers with 6 disks attached to each, claiming\n> \"demonstrated up to 2 GB/sec from disk to network\". 
It will take a\n> design like that, running across multiple controllers, to get what\n> you're looking for on the disk side--presuming everything else keeps up.\n>\n> One of the big SSD-on-PCI-e designs mentioned here already may very well\n> end up being a better choice for you here though, as those aren't going\n> to require quite as much hardware all get wired up.\n>\n> --\n> Greg Smith 2ndQuadrant US [email protected] Baltimore, MD\n> PostgreSQL Training, Services and Support www.2ndQuadrant.us\n> \"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n>\n>\n", "msg_date": "Sat, 20 Nov 2010 11:16:42 +0100", "msg_from": "Dimitri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to achieve sustained disk performance of 1.25 GB\n\twrite for 5 mins" } ]
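A back-of-envelope check of the sizing arithmetic that runs through the thread above, written as plain SQL only because SQL is the language used throughout these threads: a 10 Gbps link is 1.25 GB/s, and with the conservative 50 MB/s-per-drive sustained-write figure quoted for cheap desktop drives, roughly 25 striped drives are needed before controller and CPU limits even enter the picture. The 50 MB/s and 135-180 MB/s figures are the thread's assumptions, not measurements.

    SELECT 10e9 / 8 / 1e6                 AS required_mb_per_sec,    -- 10 Gbps expressed in MB/s (1250)
           ceil((10e9 / 8 / 1e6) / 50.0)  AS drives_at_50mb_each,    -- ~25 cheap SATA drives in RAID-0
           ceil((10e9 / 8 / 1e6) / 135.0) AS ssds_at_135mb_each;     -- ~10 SSDs (7-8 at the 180 MB/s figure)

The same division is why a single RAID card that tops out around 380-500 MB/s cannot carry the whole load by itself, regardless of how many drives hang off it.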
[ { "msg_contents": "All,\n\nHaving an interesting issue on one 8.4 database. Due to poor \napplication design, the application is requesting 8-15 exclusive \n(update) locks on the same row on parallel connections pretty much \nsimultaneously (i.e. < 50ms apart).\n\nWhat's odd about this is that the resulting \"lock pileup\" takes a \nmysterious 2-3.5 seconds to clear, despite the fact that none of the \nconnections are *doing* anything during that time, nor are there \ndeadlock errors. In theory at least, the locks should clear out in \nreverse order in less than a second; none of the individual statements \ntakes more than 10ms to execute.\n\nHas anyone else seen something like this? Any idea what causes it?\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Wed, 17 Nov 2010 09:37:56 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Anyone seen this kind of lock pileup?" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> Having an interesting issue on one 8.4 database. Due to poor \n> application design, the application is requesting 8-15 exclusive \n> (update) locks on the same row on parallel connections pretty much \n> simultaneously (i.e. < 50ms apart).\n\n> What's odd about this is that the resulting \"lock pileup\" takes a \n> mysterious 2-3.5 seconds to clear, despite the fact that none of the \n> connections are *doing* anything during that time, nor are there \n> deadlock errors. In theory at least, the locks should clear out in \n> reverse order in less than a second; none of the individual statements \n> takes more than 10ms to execute.\n\nHmm ... can you extract a test case? Or at least strace the backends\ninvolved?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Nov 2010 16:58:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone seen this kind of lock pileup? " }, { "msg_contents": "\n> Hmm ... can you extract a test case? Or at least strace the backends\n> involved?\n\nNo, and no. Strace was the first thing I thought of, but I'd have to\nsomehow catch one of these backends in the 3 seconds it's locked. Not\nreally feasible.\n\nIt might be possible to construct a test case, depending on how much the\nuser wants to spend on the problem. I'd estimate that a test case would\ntake 8-12 hours of my time to get working, given the level of activity\nand concurrency required.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Wed, 17 Nov 2010 14:02:54 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Anyone seen this kind of lock pileup?" }, { "msg_contents": "On 11/17/10 18:37, Josh Berkus wrote:\n> All,\n>\n> Having an interesting issue on one 8.4 database. Due to poor application\n> design, the application is requesting 8-15 exclusive (update) locks on\n> the same row on parallel connections pretty much simultaneously (i.e. <\n> 50ms apart).\n>\n> What's odd about this is that the resulting \"lock pileup\" takes a\n> mysterious 2-3.5 seconds to clear, despite the fact that none of the\n> connections are *doing* anything during that time, nor are there\n> deadlock errors. In theory at least, the locks should clear out in\n> reverse order in less than a second; none of the individual statements\n> takes more than 10ms to execute.\n\nJust a random guess: a timeout-supported livelock? 
(of course if there \nis any timeout-and-retry protocol going on and the timeout intervals are \nnon-randomized).\n\n\n", "msg_date": "Wed, 17 Nov 2010 23:53:58 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone seen this kind of lock pileup?" }, { "msg_contents": "\n>> What's odd about this is that the resulting \"lock pileup\" takes a \n>> mysterious 2-3.5 seconds to clear, despite the fact that none of the \n>> connections are *doing* anything during that time, nor are there \n>> deadlock errors. In theory at least, the locks should clear out in \n>> reverse order in less than a second; none of the individual statements \n>> takes more than 10ms to execute.\n\nOk, I've collected more data. Looks like the case I was examining was\nidiosyncratic; most of these lock pile-ups involve 400 or more locks\nwaiting held by around 20 different backends. Given this, taking 3\nseconds to sort that all out doesn't seem that unreasonable.\nPresumably there's a poll cycle of some sort for waiting statements?\n\nAnyway, the obvious answer is for the user to fix their application.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Wed, 17 Nov 2010 15:42:14 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Anyone seen this kind of lock pileup?" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> Ok, I've collected more data. Looks like the case I was examining was\n> idiosyncratic; most of these lock pile-ups involve 400 or more locks\n> waiting held by around 20 different backends. Given this, taking 3\n> seconds to sort that all out doesn't seem that unreasonable.\n> Presumably there's a poll cycle of some sort for waiting statements?\n\nNo ... but if the lock requests were mutually exclusive, I could believe\nit taking 3 seconds for all of the waiting backends to get their turn\nwith the lock, do whatever they were gonna do, commit, and release the\nlock to the next guy.\n\n> Anyway, the obvious answer is for the user to fix their application.\n\nProbably.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Nov 2010 18:56:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone seen this kind of lock pileup? " } ]
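When a pileup like the one above is suspected but too short-lived to catch with strace, a snapshot of ungranted locks is usually easier to capture and can be polled in a loop while the burst of parallel updates is reproduced. A minimal sketch using only pg_locks (joining in pg_stat_activity for the query text also works, but its column names changed between releases, so it is left out here):

    SELECT locktype, relation::regclass AS relation, mode, granted, pid, virtualtransaction
    FROM pg_locks
    WHERE NOT granted
    ORDER BY pid;

Each sample shows which relation, tuple or transactionid lock the waiters are queued behind, which is enough to confirm whether the 2-3 second stalls really are just several hundred exclusive row locks being granted in turn.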
[ { "msg_contents": "I thought that I've seen an announcement about the SQL Server for Linux on 04/01/2005? I cannot find the link right now, but I am quite certain that there was such an announcement.\r\n\r\n________________________________\r\nFrom: [email protected] <[email protected]>\r\nTo: Tomas Vondra <[email protected]>\r\nCc: [email protected] <[email protected]>\r\nSent: Wed Nov 17 15:51:55 2010\r\nSubject: Re: [PERFORM] Query Performance SQL Server vs. Postgresql\r\n\r\nI have to concur. Sql is written specifially and only for Windows. It is optimized for windows. Postgreal is writeen for just about everything trying to use common code so there isn't much optimization because it has to be optimized based on the OS that is running it. Check out your config and send it to us. That would include the OS and hardware configs for both machines.\r\n\r\nOn Wed, Nov 17, 2010 at 3:47 PM, Tomas Vondra <[email protected]<mailto:[email protected]>> wrote:\r\nDne 17.11.2010 05:47, Pavel Stehule napsal(a):\r\n> 2010/11/17 Humair Mohammed <[email protected]<mailto:[email protected]>>:\r\n>>\r\n>> There are no indexes on the tables either in SQL Server or Postgresql - I am\r\n>> comparing apples to apples here. I ran ANALYZE on the postgresql tables,\r\n\r\nActually no, you're not comparing apples to apples. You've provided so\r\nlittle information that you may be comparing apples to cucumbers or\r\nmaybe some strange animals.\r\n\r\n1) info about the install\r\n\r\nWhat OS is this running on? I guess it's Windows in both cases, right?\r\n\r\nHow nuch memory is there? What is the size of shared_buffers? The\r\ndefault PostgreSQL settings is very very very limited, you have to bump\r\nit to a much larger value.\r\n\r\nWhat are the other inportant settings (e.g. the work_mem)?\r\n\r\n2) info about the dataset\r\n\r\nHow large are the tables? I don't mean number of rows, I mean number of\r\nblocks / occupied disk space. Run this query\r\n\r\nSELECT relname, relpages, reltuples, pg_size_pretty(pg_table_size(oid))\r\nFROM pg_class WHERE relname IN ('table1', 'table2');\r\n\r\n3) info about the plan\r\n\r\nPlease, provide EXPLAIN ANALYZE output, maybe with info about buffers,\r\ne.g. something like\r\n\r\nEXPLAIN (ANALYZE ON, BUFFERS ON) SELECT ...\r\n\r\n4) no indexes ?\r\n\r\nWhy have you decided not to use any indexes? If you want a decent\r\nperformance, you will have to use indexes. Obviously there is some\r\noverhead associated with them, but it's premature optimization unless\r\nyou prove the opposite.\r\n\r\nBTW I'm not a MSSQL expert, but it seems like it's building a bitmap\r\nindex on the fly, to synchronize parallelized query - PostgreSQL does\r\nnot support that.\r\n\r\nregards\r\nTomas\r\n\r\n--\r\nSent via pgsql-performance mailing list ([email protected]<mailto:[email protected]>)\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\n\r\n\n\r\nI thought that I've seen an announcement about the SQL Server for Linux on 04/01/2005? I cannot find the link right now, but I am quite certain that there was such an announcement.\n\n\nFrom: [email protected] <[email protected]>\rTo: Tomas Vondra <[email protected]>\rCc: [email protected] <[email protected]>\rSent: Wed Nov 17 15:51:55 2010Subject: Re: [PERFORM] Query Performance SQL Server vs. Postgresql\r\r\nI have to concur.  Sql is written specifially and only for Windows. It is optimized for windows.  
Postgreal is writeen for just about everything trying to use common code so there isn't much optimization because it has to be optimized based on the OS that is running it.  Check out your config and send it to us.  That would include the OS and hardware configs for both machines.\nOn Wed, Nov 17, 2010 at 3:47 PM, Tomas Vondra <[email protected]> wrote:\r\nDne 17.11.2010 05:47, Pavel Stehule napsal(a):\n> 2010/11/17 Humair Mohammed <[email protected]>:\r\n>>\r\n>> There are no indexes on the tables either in SQL Server or Postgresql - I am\r\n>> comparing apples to apples here. I ran ANALYZE on the postgresql tables,\n\nActually no, you're not comparing apples to apples. You've provided so\r\nlittle information that you may be comparing apples to cucumbers or\r\nmaybe some strange animals.\n\r\n1) info about the install\n\r\nWhat OS is this running on? I guess it's Windows in both cases, right?\n\r\nHow nuch memory is there? What is the size of shared_buffers? The\r\ndefault PostgreSQL settings is very very very limited, you have to bump\r\nit to a much larger value.\n\r\nWhat are the other inportant settings (e.g. the work_mem)?\n\r\n2) info about the dataset\n\r\nHow large are the tables? I don't mean number of rows, I mean number of\r\nblocks / occupied disk space. Run this query\n\r\nSELECT relname, relpages, reltuples, pg_size_pretty(pg_table_size(oid))\r\nFROM pg_class WHERE relname IN ('table1', 'table2');\n\r\n3) info about the plan\n\r\nPlease, provide EXPLAIN ANALYZE output, maybe with info about buffers,\r\ne.g. something like\n\r\nEXPLAIN (ANALYZE ON, BUFFERS ON) SELECT ...\n\r\n4) no indexes ?\n\r\nWhy have you decided not to use any indexes? If you want a decent\r\nperformance, you will have to use indexes. Obviously there is some\r\noverhead associated with them, but it's premature optimization unless\r\nyou prove the opposite.\n\r\nBTW I'm not a MSSQL expert, but it seems like it's building a bitmap\r\nindex on the fly, to synchronize parallelized query - PostgreSQL does\r\nnot support that.\n\r\nregards\nTomas\n\r\n--\r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 17 Nov 2010 16:00:03 -0600", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query Performance SQL Server vs. Postgresql" } ]
[ { "msg_contents": "Hi,\n\nI'm experiencing extremely different response times for some complex pgsql\nfunctions. extremly different means from 20ms - 500ms and up to 20s.\nI have to say that the complete database fits in memory (64GB).\nshared_buffers is set to 16GB. the rest ist used by thefs cache and\nconections/work_mem.\nthe server is running under linux rhel5 and is 8.4.5.\nthe filesystem is ext3 due to the lack of xfs support by redhat.\n\n- I have for the one function response time of 20 ms with no shared blocks\nread.\n- If there are shared blocks to be read I get immediatly response time of at\nleast 80ms and up to 200ms.\n- If i see page reclaims I always get response times above 400ms\n- I'm guessing that 20s response time come together with i/o.\n\nAs far as I read page reclaims occur probably here, because fs cache has to\nfree memory for allocations for the client. Am I right?\nSo how can i prevent page reclaims?\n\nWhat do the number is within the brackets mean e.g. 0/3330 [0/4269] page\nfaults/reclaims?\nOr is this output somewhere explained? I didn't find anything.\n\nbest regards,\nUwe\n\n\nthis is one output of an execution without page reclaims:\nLOG: EXECUTOR STATISTICS\nDETAIL: ! system usage stats:\n! 0.071247 elapsed 0.053992 user 0.016998 system sec\n! [0.056991 user 0.018997 sys total]\n! 0/0 [0/0] filesystem blocks in/out\n! 0/3330 [0/4269] page faults/reclaims, 0 [0] swaps\n! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n! 0/0 [8/0] voluntary/involuntary context switches\n! buffer usage stats:\n! Shared blocks: 1 read, 0 written, buffer hit rate\n= 99.97%\n! Local blocks: 0 read, 0 written, buffer hit rate\n= 0.00%\n! Direct blocks: 0 read, 0 written\nTime: 73.154 ms\n\n\nthis is one output of an execution with page reclaims:\nLOG: EXECUTOR STATISTICS\nDETAIL: ! system usage stats:\n! 0.627502 elapsed 0.461930 user 0.075988 system sec\n! [0.465929 user 0.078987 sys total]\n! 0/0 [0/0] filesystem blocks in/out\n! 0/20941 [0/21893] page faults/reclaims, 0 [0] swaps\n! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n! 12/7 [20/7] voluntary/involuntary context switches\n! buffer usage stats:\n! Shared blocks: 48 read, 0 written, buffer hit rate\n= 99.72%\n! Local blocks: 0 read, 0 written, buffer hit rate\n= 0.00%\n! Direct blocks: 0 read, 0 written\nTime: 629.823 ms\n\nHi,I'm experiencing extremely different response times for some complex pgsql functions. extremly different means from 20ms - 500ms and up to 20s.I have to say that the complete database fits in memory (64GB).\nshared_buffers is set to 16GB. the rest ist used by thefs cache and conections/work_mem.the server is running under linux rhel5 and is 8.4.5.the filesystem is ext3 due to the lack of xfs support by redhat.\n- I have for the one function response time of 20 ms with no shared blocks read.- If there are shared blocks to be read I get immediatly response time of at least 80ms and up to 200ms.- If i see page reclaims I always get response times above 400ms\n- I'm guessing that 20s response time come together with i/o.As far as I read page reclaims occur probably here, because fs cache has to free memory for allocations for the client. Am I right?So how can i prevent page reclaims?\nWhat do the number is within the brackets mean e.g. 0/3330 [0/4269] page faults/reclaims?Or is this output somewhere explained? I didn't find anything.best regards,Uwethis is one output of an execution without page reclaims:\nLOG:  EXECUTOR STATISTICSDETAIL:  ! system usage stats:!       
0.071247 elapsed 0.053992 user 0.016998 system sec!       [0.056991 user 0.018997 sys total]!       0/0 [0/0] filesystem blocks in/out!       0/3330 [0/4269] page faults/reclaims, 0 [0] swaps\n!       0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent!       0/0 [8/0] voluntary/involuntary context switches! buffer usage stats:!       Shared blocks:          1 read,          0 written, buffer hit rate = 99.97%\n!       Local  blocks:          0 read,          0 written, buffer hit rate = 0.00%!       Direct blocks:          0 read,          0 writtenTime: 73.154 msthis is one output of an execution with page reclaims:\nLOG:  EXECUTOR STATISTICSDETAIL:  ! system usage stats:!       0.627502 elapsed 0.461930 user 0.075988 system sec!       [0.465929 user 0.078987 sys total]!       0/0 [0/0] filesystem blocks in/out!       0/20941 [0/21893] page faults/reclaims, 0 [0] swaps\n!       0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent!       12/7 [20/7] voluntary/involuntary context switches! buffer usage stats:!       Shared blocks:         48 read,          0 written, buffer hit rate = 99.72%\n!       Local  blocks:          0 read,          0 written, buffer hit rate = 0.00%!       Direct blocks:          0 read,          0 writtenTime: 629.823 ms", "msg_date": "Thu, 18 Nov 2010 11:10:36 +0100", "msg_from": "Uwe Bartels <[email protected]>", "msg_from_op": true, "msg_subject": "executor stats / page reclaims" }, { "msg_contents": "On Thu, Nov 18, 2010 at 5:10 AM, Uwe Bartels <[email protected]> wrote:\n> I'm experiencing extremely different response times for some complex pgsql\n> functions. extremly different means from 20ms - 500ms and up to 20s.\n> I have to say that the complete database fits in memory (64GB).\n> shared_buffers is set to 16GB. the rest ist used by thefs cache and\n> conections/work_mem.\n> the server is running under linux rhel5 and is 8.4.5.\n> the filesystem is ext3 due to the lack of xfs support by redhat.\n>\n> - I have for the one function response time of 20 ms with no shared blocks\n> read.\n> - If there are shared blocks to be read I get immediatly response time of at\n> least 80ms and up to 200ms.\n> - If i see page reclaims I always get response times above 400ms\n> - I'm guessing that 20s response time come together with i/o.\n>\n> As far as I read page reclaims occur probably here, because fs cache has to\n> free memory for allocations for the client. Am I right?\n> So how can i prevent page reclaims?\n>\n> What do the number is within the brackets mean e.g. 0/3330 [0/4269] page\n> faults/reclaims?\n> Or is this output somewhere explained? I didn't find anything.\n\nI think you're probably going about this the wrong way. Rather than\nmess around with those executor stats, which I think are telling you\nalmost nothing, I'd enable log_min_duration_statement or load up\nauto_explain and try to find out the specific queries that are\nperforming badly, and the plans for those queries. Post the queries\nthat are performing badly and the EXPLAIN ANALYZE output for those\nqueries, and you'll get a lot more help.\n\nAs for the numbers in brackets, a quick glance at the source code\nsuggests that the bracketed numbers are cumulative since program start\nand the unbracketed numbers are deltas.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Fri, 3 Dec 2010 12:16:18 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: executor stats / page reclaims" } ]
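Robert's suggestion above, finding the slow statements and their plans instead of reading executor statistics, can be tried per session before touching postgresql.conf. A rough sketch, assuming superuser access and an 8.4 installation with the contrib auto_explain module available; the function call on the last line is a hypothetical stand-in for the real pl/pgsql function:

    LOAD 'auto_explain';
    SET auto_explain.log_min_duration = '100ms';   -- log the plan of anything slower than 100 ms
    SET auto_explain.log_analyze = on;             -- include actual timings, not just estimates
    SET log_min_duration_statement = '100ms';      -- also log the statement text and its duration

    SELECT my_complex_function(42);                -- hypothetical name and argument; then read the server log

The logged plans make it much easier to tell whether the 20 s outliers come from a different plan, a cold cache, or genuine I/O waits.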
[ { "msg_contents": "Hello everybody,\nhaving this SQL query:\n\n--------------\nselect variable_id,float_value,ts,good_through,interval,datetime_value,string_value,int_value,blob_value,history_value_type\nfrom \"records_437954e9-e048-43de-bde3-057658966a9f\" where variable_id\nin (22727) and (ts >= '2010-10-02 11:19:55' or good_through >=\n'2010-10-02 11:19:55') and (ts <= '2010-10-14 11:19:55' or\ngood_through <= '2010-10-14 11:19:55')\nunion all\nselect variable_id,float_value,ts,good_through,interval,datetime_value,string_value,int_value,blob_value,history_value_type\nfrom \"records_1d115712-e943-4ae3-bb14-b56a95796111\" where variable_id\nin (24052) and (ts >= '2010-10-02 11:19:55' or good_through >=\n'2010-10-02 11:19:55') and (ts <= '2010-10-14 11:19:55' or\ngood_through <= '2010-10-14 11:19:55')\norder by ts\nlimit 2501 offset 0\n\n---------------\n\nand these two results:\n\n1st run:\nhttp://explain.depesz.com/s/1lT\n\n2nd run:\nhttp://explain.depesz.com/s/bhA\n\nis there anything I can do about the speed? Only buying faster\nhard-disk seems to me as the solution... Am I right?\n\nThank you in advance\n Martin\n", "msg_date": "Thu, 18 Nov 2010 12:09:06 +0100", "msg_from": "Martin Chlupac <[email protected]>", "msg_from_op": true, "msg_subject": "Low disk performance?" }, { "msg_contents": "Hi, what is the size of the table and index (in terms of pages and\ntuples)? Try something like\n\nSELECT relpages, reltuples FROM pg_class WHERE relname = 'table or index\nname';\n\nAnd what indexes have you created? It seems to me there's just index on\nthe variable_id. It might be useful to create index on (variable_id, ts)\nor even (variable_id, ts, good_through).\n\nTomas\n\n> Hello everybody,\n> having this SQL query:\n>\n> --------------\n> select\n> variable_id,float_value,ts,good_through,interval,datetime_value,string_value,int_value,blob_value,history_value_type\n> from \"records_437954e9-e048-43de-bde3-057658966a9f\" where variable_id\n> in (22727) and (ts >= '2010-10-02 11:19:55' or good_through >=\n> '2010-10-02 11:19:55') and (ts <= '2010-10-14 11:19:55' or\n> good_through <= '2010-10-14 11:19:55')\n> union all\n> select\n> variable_id,float_value,ts,good_through,interval,datetime_value,string_value,int_value,blob_value,history_value_type\n> from \"records_1d115712-e943-4ae3-bb14-b56a95796111\" where variable_id\n> in (24052) and (ts >= '2010-10-02 11:19:55' or good_through >=\n> '2010-10-02 11:19:55') and (ts <= '2010-10-14 11:19:55' or\n> good_through <= '2010-10-14 11:19:55')\n> order by ts\n> limit 2501 offset 0\n>\n> ---------------\n>\n> and these two results:\n>\n> 1st run:\n> http://explain.depesz.com/s/1lT\n>\n> 2nd run:\n> http://explain.depesz.com/s/bhA\n>\n> is there anything I can do about the speed? Only buying faster\n> hard-disk seems to me as the solution... Am I right?\n>\n> Thank you in advance\n> Martin\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n", "msg_date": "Thu, 18 Nov 2010 14:33:25 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Low disk performance?" } ]
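Tomas' indexing suggestion above, made concrete for one of the two per-variable tables in the query (the same statements would be repeated for the other table; the index names here are made up):

    CREATE INDEX records_437954_variable_ts_idx
        ON "records_437954e9-e048-43de-bde3-057658966a9f" (variable_id, ts);
    -- or, following the '(variable_id, ts, good_through)' variant from the reply:
    -- CREATE INDEX records_437954_variable_ts_gt_idx
    --     ON "records_437954e9-e048-43de-bde3-057658966a9f" (variable_id, ts, good_through);
    ANALYZE "records_437954e9-e048-43de-bde3-057658966a9f";

Because the WHERE clause ORs conditions on ts and good_through, it is worth re-running EXPLAIN ANALYZE afterwards to confirm the new index is actually used before concluding that a faster disk is the only option.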
[ { "msg_contents": "Trying to understand why query planer changes the plan from effective\none to ineffective one when I change the offset in the LIMIT. Also,\nthankfully accepting RTFM pointers to the actual FMs.\n\nSetup is: 3 tables with 0.5M to 1.5M records\nWhile tuning indexes for the following query\n\nSELECT c.id, c.name, c.owner\nFROM catalog c, catalog_securitygroup cs, catalog_university cu\nWHERE c.root < 50\n AND cs.catalog = c.id\n AND cu.catalog = c.id\n AND cs.securitygroup < 200\n AND cu.university < 200\nORDER BY c.name\nLIMIT 50 OFFSET 100\n\nI managed to bring it to ~3ms with the following plan\nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------\n Limit (cost=15141.07..22711.60 rows=50 width=59)\n -> Nested Loop (cost=0.00..30052749.38 rows=198485 width=59)\n -> Nested Loop (cost=0.00..705519.23 rows=147500 width=63)\n -> Index Scan using test2 on catalog c\n(cost=0.00..241088.93 rows=147500 width=59)\n Index Cond: (root < 50)\n -> Index Scan using catalog_university_pkey on\ncatalog_university cu (cost=0.00..3.14 rows=1 width=4)\n Index Cond: ((cu.catalog = c.id) AND\n(cu.university < 200))\n -> Index Scan using catalog_securitygroup_pkey on\ncatalog_securitygroup cs (cost=0.00..196.48 rows=199 width=4)\n Index Cond: ((cs.catalog = c.id) AND (cs.securitygroup\n< 200))\n\n\nBut when I change the OFFSET in the limit to 500 it goes to ~500ms\nwith following plan\n QUERY\nPLAN\n---------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=61421.34..61421.46 rows=50 width=59)\n -> Sort (cost=61420.09..61916.30 rows=198485 width=59)\n Sort Key: c.name\n -> Merge Join (cost=45637.87..51393.33 rows=198485\nwidth=59)\n Merge Cond: (c.id = cs.catalog)\n -> Merge Join (cost=48.95..440699.65 rows=147500\nwidth=63)\n Merge Cond: (c.id = cu.catalog)\n -> Index Scan using catalog_pkey on catalog c\n(cost=0.00..78947.35 rows=147500 width=59)\n Filter: (root < 50)\n -> Index Scan using catalog_university_pkey on\ncatalog_university cu (cost=0.00..358658.68 rows=499950 width=4)\n Index Cond: (cu.university < 200)\n -> Materialize (cost=45527.12..48008.19 rows=198485\nwidth=4)\n -> Sort (cost=45527.12..46023.34 rows=198485\nwidth=4)\n Sort Key: cs.catalog\n -> Seq Scan on catalog_securitygroup cs\n(cost=0.00..25345.76 rows=198485 width=4)\n Filter: (securitygroup < 200)\n\nThanks for your time\n", "msg_date": "Fri, 19 Nov 2010 04:33:43 -0800 (PST)", "msg_from": "goran <[email protected]>", "msg_from_op": true, "msg_subject": "Should changing offset in LIMIT change query plan (at all/so early)?" } ]
[ { "msg_contents": "Pavel Stehule wrote:\n> 2010/11/21 Humair Mohammed :\n \n>> shared_buffers = 2\n\n> shared_buffers = 2 ???\n \nYeah, if that's not a typo, that's a very serious misconfiguration.\n \nWith 8 GB of RAM in the machine, this should probably be set to\nsomewhere between 200 MB and 2 GB, depending on your workload and\nwhat else is running on the machine.\n \nPlease read through this page and make use of the information:\n \nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n \n-Kevin\n", "msg_date": "Sun, 21 Nov 2010 09:23:07 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query Performance SQL Server vs. Postgresql" }, { "msg_contents": "> Pavel Stehule wrote:\n>> 2010/11/21 Humair Mohammed :\n>\n>>> shared_buffers = 2\n>\n>> shared_buffers = 2 ???\n>\n> Yeah, if that's not a typo, that's a very serious misconfiguration.\n\nI guess that's a typo, as the explain plain in one of the previous posts\ncontains\n\n Buffers: shared hit=192 read=4833\n\nfor a sequential scan. But I still don't know why is the query so slow :-(\n\nregards\nTomas\n\n", "msg_date": "Sun, 21 Nov 2010 16:56:59 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Query Performance SQL Server vs. Postgresql" } ]
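Whether the reported shared_buffers = 2 is a typo or a real setting is easy to confirm from any session; a minimal sketch of the checks worth running before comparing numbers against SQL Server (the tuning wiki page linked above covers how to choose values for an 8 GB machine):

    SELECT name, setting, unit, source
    FROM pg_settings
    WHERE name IN ('shared_buffers', 'work_mem', 'effective_cache_size');

    SHOW shared_buffers;   -- a bare '2' in postgresql.conf means 2 blocks (16 kB at the default block size), not 2 GB

Confirming work_mem at the same time avoids chasing the wrong knob if shared_buffers turns out to be reasonable after all.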
[ { "msg_contents": "goran wrote:\n \n> Trying to understand why query planer changes the plan from\n> effective one to ineffective one when I change the offset in the\n> LIMIT. Also, thankfully accepting RTFM pointers to the actual FMs.\n \nThe query planner will consider offset and limit clauses when\nestimating the cost of each plan. The optimal plan will shift as\nmore tuples need to be read. If the plan is not shifting at the\nright point, it probably means that you need to tune the costing\nfactors used by the planner.\n \nYou didn't report enough information for me to suggest any particular\nchange; if you follow up, please review the suggested information to\npost:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \nIn particular, information about the hardware and your\npostgresql.conf settings would help.\n \nYou might also want to review this page and see if you can tune\nthings.\n \nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n \nIn particular, effective_cache_size, random_page_cost, and\nseq_page_cost would be likely to need adjustment based on what you've\ntold us; however, it might pay to review the whole configuration.\n \n-Kevin\n", "msg_date": "Sun, 21 Nov 2010 09:44:21 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Should changing offset in LIMIT change query\n\tplan (at all/so early)?" } ]
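Kevin's advice above can be tried per session before editing postgresql.conf, which makes it easy to see whether the cheaper nested-loop plan comes back at the larger offset. A rough sketch; the two values are illustrative assumptions (the machine's RAM size is not given in the post), not recommendations, and note that EXPLAIN ANALYZE really executes the query:

    SET effective_cache_size = '6GB';   -- assumption: rough size of the OS cache available to the database
    SET random_page_cost = 2.0;         -- below the 4.0 default, reflecting mostly cached data (assumption)
    EXPLAIN ANALYZE
    SELECT c.id, c.name, c.owner
    FROM catalog c, catalog_securitygroup cs, catalog_university cu
    WHERE c.root < 50
      AND cs.catalog = c.id
      AND cu.catalog = c.id
      AND cs.securitygroup < 200
      AND cu.university < 200
    ORDER BY c.name
    LIMIT 50 OFFSET 500;

If the plan flips back and the runtime drops, the same values can go into postgresql.conf; if it does not, the row-count estimates (198485 estimated rows for the join) are the next thing to compare against reality.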
[ { "msg_contents": "This is not a request for help but a report, in case it helps developers \nor someone in the future. The setup is:\n\nAMD64 machine, 24 GB RAM, 2x6-core Xeon CPU + HTT (24 logical CPUs)\nFreeBSD 8.1-stable, AMD64\nPostgreSQL 9.0.1, 10 GB shared buffers, using pgbench with a scale \nfactor of 500 (7.5 GB database)\n\nwith pgbench -S (SELECT-queries only) the performance curve is:\n\n-c#\tresult\n4\t33549\n8\t64864\n12\t79491\n16\t79887\n20\t66957\n24\t52576\n28\t50406\n32\t49491\n40\t45535\n50\t39499\n75\t29415\n\nAfter 16 clients (which is still good since there are only 12 \"real\" \ncores in the system), the performance drops sharply, and looking at the \nprocesses' state, most of them seem to eat away system call (i.e. \nexecuting in the kernel) in states \"semwait\" and \"sbwait\", i.e. \nsemaphore wait and socket buffer wait, for example:\n\n 3047 pgsql 1 60 0 10533M 283M sbwait 12 0:01 6.79% postgres\n 3055 pgsql 1 64 0 10533M 279M sbwait 15 0:01 6.79% postgres\n 3033 pgsql 1 64 0 10533M 279M semwai 6 0:01 6.69% postgres\n 3038 pgsql 1 64 0 10533M 283M CPU5 13 0:01 6.69% postgres\n 3037 pgsql 1 62 0 10533M 279M sbwait 23 0:01 6.69% postgres\n 3048 pgsql 1 65 0 10533M 280M semwai 4 0:01 6.69% postgres\n 3056 pgsql 1 65 0 10533M 277M semwai 1 0:01 6.69% postgres\n 3002 pgsql 1 62 0 10533M 284M CPU19 0 0:01 6.59% postgres\n 3042 pgsql 1 63 0 10533M 279M semwai 21 0:01 6.59% postgres\n 3029 pgsql 1 63 0 10533M 277M semwai 23 0:01 6.59% postgres\n 3046 pgsql 1 63 0 10533M 278M RUN 5 0:01 6.59% postgres\n 3036 pgsql 1 63 0 10533M 278M CPU1 12 0:01 6.59% postgres\n 3051 pgsql 1 63 0 10533M 277M semwai 1 0:01 6.59% postgres\n 3030 pgsql 1 63 0 10533M 281M semwai 1 0:01 6.49% postgres\n 3050 pgsql 1 60 0 10533M 276M semwai 1 0:01 6.49% postgres\n\nThe \"sbwait\" part is from FreeBSD - IPC sockets, but so much blocking on \nsemwait indicates large contention in PostgreSQL.\n\n", "msg_date": "Mon, 22 Nov 2010 01:15:43 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Performance under contention" }, { "msg_contents": "Ivan Voras wrote:\n> PostgreSQL 9.0.1, 10 GB shared buffers, using pgbench with a scale \n> factor of 500 (7.5 GB database)\n>\n> with pgbench -S (SELECT-queries only) the performance curve is:\n>\n> -c# result\n> 4 33549\n> 8 64864\n> 12 79491\n> 16 79887\n> 20 66957\n> 24 52576\n> 28 50406\n> 32 49491\n> 40 45535\n> 50 39499\n> 75 29415\n\nTwo suggestions to improve your results here:\n\n1) Don't set shared_buffers to 10GB. There are some known issues with \nlarge settings for that which may or may not be impacting your results. \nTry 4GB instead, just to make sure you're not even on the edge of that area.\n\n2) pgbench itself is known to become a bottleneck when running with lots \nof clients. You should be using the \"-j\" option to spawn multiple \nworkers, probably 12 of them (one per core), to make some of this go \naway. 
On the system I saw the most improvement here, I got a 15-25% \ngain having more workers at the higher client counts.\n\n> The \"sbwait\" part is from FreeBSD - IPC sockets, but so much blocking \n> on semwait indicates large contention in PostgreSQL.\n\nIt will be interesting to see if that's different after the changes \nsuggested above.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Thu, 25 Nov 2010 21:00:29 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance under contention" }, { "msg_contents": "On 26 November 2010 03:00, Greg Smith <[email protected]> wrote:\n\n> Two suggestions to improve your results here:\n>\n> 1) Don't set shared_buffers to 10GB.  There are some known issues with large\n> settings for that which may or may not be impacting your results.  Try 4GB\n> instead, just to make sure you're not even on the edge of that area.\n>\n> 2) pgbench itself is known to become a bottleneck when running with lots of\n> clients.  You should be using the \"-j\" option to spawn multiple workers,\n> probably 12 of them (one per core), to make some of this go away.  On the\n> system I saw the most improvement here, I got a 15-25% gain having more\n> workers at the higher client counts.\n\n> It will be interesting to see if that's different after the changes\n> suggested above.\n\nToo late, can't test on the hardware anymore. I did use -j on pgbench,\nbut after 2 threads there were not significant improvements - the two\nthreads did not saturate two CPU cores.\n\nHowever, I did run a similar select-only test on tmpfs on different\nhardware with much less memory (4 GB total), with shared_buffers\nsomewhere around 2 GB, with the same performance curve:\n\nhttp://ivoras.sharanet.org/blog/tree/2010-07-21.postgresql-on-tmpfs.html\n\nso I doubt the curve would change by reducing shared_buffers below\nwhat I used in the original post.\n", "msg_date": "Fri, 26 Nov 2010 03:08:30 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance under contention" }, { "msg_contents": "On Sun, Nov 21, 2010 at 7:15 PM, Ivan Voras <[email protected]> wrote:\n> The \"sbwait\" part is from FreeBSD - IPC sockets, but so much blocking on\n> semwait indicates large contention in PostgreSQL.\n\nI can reproduce this. I suspect, but cannot yet prove, that this is\ncontention over the lock manager partition locks or the buffer mapping\nlocks.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Mon, 6 Dec 2010 12:10:19 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance under contention" }, { "msg_contents": "On Mon, Dec 6, 2010 at 12:10 PM, Robert Haas <[email protected]> wrote:\n> On Sun, Nov 21, 2010 at 7:15 PM, Ivan Voras <[email protected]> wrote:\n>> The \"sbwait\" part is from FreeBSD - IPC sockets, but so much blocking on\n>> semwait indicates large contention in PostgreSQL.\n>\n> I can reproduce this.  
I suspect, but cannot yet prove, that this is\n> contention over the lock manager partition locks or the buffer mapping\n> locks.\n\nI compiled with LWLOCK_STATS defined and found that exactly one lock\nmanager partition lwlock was heavily contended, because, of course,\nthe SELECT-only test only hits one table, and all the threads fight\nover acquisition and release of AccessShareLock on that table. One\nmight argue that in more normal workloads there will be more than one\ntable involved, but that's not necessarily true, and in any case there\nmight not be more than a handful of major ones.\n\nHowever, I don't have a very clear idea what to do about it.\nIncreasing the number of lock partitions doesn't help, because the one\ntable you care about is still only in one partition.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Mon, 6 Dec 2010 14:07:05 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance under contention" }, { "msg_contents": "On Tue, Dec 7, 2010 at 1:10 AM, Robert Haas <[email protected]> wrote:\n> On Sun, Nov 21, 2010 at 7:15 PM, Ivan Voras <[email protected]> wrote:\n>> The \"sbwait\" part is from FreeBSD - IPC sockets, but so much blocking on\n>> semwait indicates large contention in PostgreSQL.\n>\n> I can reproduce this.  I suspect, but cannot yet prove, that this is\n> contention over the lock manager partition locks or the buffer mapping\n> locks.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\nHi Robert,\n\nThat's exactly what I concluded when I was doing the sysbench simple\nread-only test. I had also tried with different lock partitions and it\ndid not help since they all go after the same table. I think one way\nto kind of avoid the problem on the same table is to do more granular\nlocking (Maybe at page level instead of table level). But then I dont\nreally understand on how to even create a prototype related to this\none. If you can help create a prototype then I can test it out with my\nsetup and see if it helps us to catch up with other guys out there.\n\nAlso on the subject whether this is a real workload: in fact it seems\nall social networks uses this frequently with their usertables and\nthis test actually came from my talks with Mark Callaghan which he\nsays is very common in their environment where thousands of users pull\nup their userprofile data from the same table. Which is why I got\ninterested in trying it more.\n\nRegards,\nJignesh\n", "msg_date": "Tue, 7 Dec 2010 10:59:23 +0800", "msg_from": "Jignesh Shah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance under contention" }, { "msg_contents": "On Tue, Dec 7, 2010 at 10:59 AM, Jignesh Shah <[email protected]> wrote:\n> On Tue, Dec 7, 2010 at 1:10 AM, Robert Haas <[email protected]> wrote:\n>> On Sun, Nov 21, 2010 at 7:15 PM, Ivan Voras <[email protected]> wrote:\n>>> The \"sbwait\" part is from FreeBSD - IPC sockets, but so much blocking on\n>>> semwait indicates large contention in PostgreSQL.\n>>\n>> I can reproduce this.  
I suspect, but cannot yet prove, that this is\n>> contention over the lock manager partition locks or the buffer mapping\n>> locks.\n>>\n>> --\n>> Robert Haas\n>> EnterpriseDB: http://www.enterprisedb.com\n>> The Enterprise PostgreSQL Company\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n>\n> Hi Robert,\n>\n> That's exactly what I concluded when I was doing the sysbench simple\n> read-only test. I had also tried with different lock partitions and it\n> did not help since they all go after the same table. I think one way\n> to kind of avoid the problem on the same table is to do more granular\n> locking (Maybe at page level instead of table level). But then I dont\n> really understand on how to even create a prototype related to this\n> one. If you can help create a prototype then I can test it out with my\n> setup and see if it helps us to catch up with other guys out there.\n>\n> Also on the subject whether this is a real workload: in fact it seems\n> all social networks uses this frequently with their usertables and\n> this test actually came from my talks with Mark Callaghan which he\n> says is very common in their environment where thousands of users pull\n> up their userprofile data from the same table. Which is why I got\n> interested in trying it more.\n>\n> Regards,\n> Jignesh\n>\n\nAlso I forgot to mention in my sysbench test I saw exactly two locks\none related to AccessShareLock on the table but other related to\nRevalidateCachePlan one which atleast to me seemed to be slightly\nbigger problem than the AccessShareLock one..\n\nBut I will take anything. Ideally both :-)\n\nRegards,\nJignesh\n", "msg_date": "Tue, 7 Dec 2010 11:03:30 +0800", "msg_from": "Jignesh Shah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance under contention" }, { "msg_contents": "On Mon, Dec 6, 2010 at 9:59 PM, Jignesh Shah <[email protected]> wrote:\n> That's exactly what I concluded when I was doing the sysbench simple\n> read-only test. I had also tried with different lock partitions and it\n> did not help since they all go after the same table. I think one way\n> to kind of avoid the problem on the same table is to do more granular\n> locking (Maybe at page level instead of table level). But then I dont\n> really understand on how to even create a prototype related to this\n> one. If you can help create a prototype then I can test it out with my\n> setup and see if it helps us to catch up with other guys out there.\n\nWe're trying to lock the table against a concurrent DROP or schema\nchange, so locking only part of it doesn't really work. I don't\nreally see any way to avoid needing some kind of a lock here; the\ntrick is how to take it quickly. The main obstacle to making this\nfaster is that the deadlock detector needs to be able to obtain enough\ninformation to break cycles, which means we've got to record in shared\nmemory not only the locks that are granted but who has them. However,\nI wonder if it would be possible to have a very short critical section\nwhere we grab the partition lock, acquire the heavyweight lock, and\nrelease the partition lock; and then only as a second step record (in\nthe form of a PROCLOCK) the fact that we got it. During this second\nstep, we'd hold a lock associated with the PROC, not the LOCK. 
If the\ndeadlock checker runs after we've acquired the lock and before we've\nrecorded that we have it, it'll see more locks than lock holders, but\nthat should be OK, since the process which hasn't yet recorded its\nlock acquisition is clearly not part of any deadlock.\n\nCurrently, PROCLOCKs are included in both a list of locks held by that\nPROC, and a list of lockers of that LOCK. The latter list would be\nhard to maintain in this scheme, but maybe that's OK too. We really\nonly need that information for the deadlock checker, and the deadlock\nchecker could potentially still get the information by grovelling\nthrough all the PROCs. That might be a bit slow, but maybe it'd be\nOK, or maybe we could think of a clever way to speed it up.\n\nJust thinking out loud here...\n\n> Also on the subject whether this is a real workload: in fact it seems\n> all social networks uses this frequently with their usertables and\n> this test actually came from my talks with Mark Callaghan which he\n> says is very common in their environment where thousands of users pull\n> up their userprofile data from the same table. Which is why I got\n> interested in trying it more.\n\nYeah.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Compan\n", "msg_date": "Tue, 7 Dec 2010 12:37:31 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance under contention" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> I wonder if it would be possible to have a very short critical section\n> where we grab the partition lock, acquire the heavyweight lock, and\n> release the partition lock; and then only as a second step record (in\n> the form of a PROCLOCK) the fact that we got it.\n\n[ confused... ] Exactly what do you suppose \"acquire the lock\" would\nbe represented as, if not \"create a PROCLOCK entry attached to it\"?\n\nIn any case, I think this is another example of not understanding where\nthe costs really are. As far as I can tell, on modern MP systems much\nof the elapsed time in these operations comes from acquiring exclusive\naccess to shared-memory cache lines. Reducing the number of changes you\nhave to make within a small area of shared memory won't save much, once\nyou've paid for the first one. 
Changing structures that aren't heavily\ncontended (such as a proc's list of its own locks) doesn't cost much at\nall.\n\nOne thing that might be interesting, but that I don't know how to attack\nin a reasonably machine-independent way, is to try to ensure that shared\nand local data structures don't accidentally overlap within cache lines.\nWhen they do, you pay for fighting the cache line away from another\nprocessor even when there's no real need.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 07 Dec 2010 12:50:43 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance under contention " }, { "msg_contents": "Hi Tom\n\nI suspect I may be missing something here, but I think it's a pretty\nuniversal truism that cache lines are aligned to power-of-2 memory\naddresses, so it would suffice to ensure during setup that the lower order n\nbits of the object address are all zeros for each critical object; if the\nmalloc() routine being used doesn't support that, it could be done by\nallocating a slightly larger than necessary block of memory and choosing a\nlocation within that.\n\nThe value of n could be architecture dependent, but n=8 would cover\neveryone, hopefully without wasting too much RAM.\n\nCheers\nDave\n\nOn Tue, Dec 7, 2010 at 11:50 AM, Tom Lane <[email protected]> wrote:\n\n>\n> One thing that might be interesting, but that I don't know how to attack\n> in a reasonably machine-independent way, is to try to ensure that shared\n> and local data structures don't accidentally overlap within cache lines.\n> When they do, you pay for fighting the cache line away from another\n> processor even when there's no real need.\n>\n> regards, tom lane\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHi TomI suspect I may be missing something here, but I think it's a pretty universal truism that cache lines are aligned to power-of-2 memory addresses, so it would suffice to ensure during setup that the lower order n bits of the object address are all zeros for each critical object; if the malloc() routine being used doesn't support that, it could be done by allocating a slightly larger than necessary block of memory and choosing a location within that.\nThe value of n could be architecture dependent, but n=8 would cover everyone, hopefully without wasting too much RAM.CheersDaveOn Tue, Dec 7, 2010 at 11:50 AM, Tom Lane <[email protected]> wrote:\n\nOne thing that might be interesting, but that I don't know how to attack\nin a reasonably machine-independent way, is to try to ensure that shared\nand local data structures don't accidentally overlap within cache lines.\nWhen they do, you pay for fighting the cache line away from another\nprocessor even when there's no real need.\n\n                        regards, tom lane\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 7 Dec 2010 12:00:58 -0600", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance under contention" }, { "msg_contents": "On 7 December 2010 18:37, Robert Haas <[email protected]> wrote:\n> On Mon, Dec 6, 2010 at 9:59 PM, Jignesh Shah <[email protected]> wrote:\n>> That's exactly what I concluded when I was doing the sysbench simple\n>> read-only test. 
I had also tried with different lock partitions and it\n>> did not help since they all go after the same table. I think one way\n>> to kind of avoid the problem on the same table is to do more granular\n>> locking (Maybe at page level instead of table level). But then I dont\n>> really understand on how to even create a prototype related to this\n>> one. If you can help create a prototype then I can test it out with my\n>> setup and see if it helps us to catch up with other guys out there.\n>\n> We're trying to lock the table against a concurrent DROP or schema\n> change, so locking only part of it doesn't really work.  I don't\n> really see any way to avoid needing some kind of a lock here; the\n> trick is how to take it quickly.  The main obstacle to making this\n> faster is that the deadlock detector needs to be able to obtain enough\n> information to break cycles, which means we've got to record in shared\n> memory not only the locks that are granted but who has them.\n\nI'm not very familiar with PostgreSQL code but if we're\nbrainstorming... if you're only trying to protect against a small\nnumber of expensive operations (like DROP, etc.) that don't really\nhappen often, wouldn't an atomic reference counter be good enough for\nthe purpose (e.g. the expensive operations would spin-wait until the\ncounter is 0)?\n", "msg_date": "Tue, 7 Dec 2010 19:08:06 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance under contention" }, { "msg_contents": "On Tue, Dec 7, 2010 at 12:50 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> I wonder if it would be possible to have a very short critical section\n>> where we grab the partition lock, acquire the heavyweight lock, and\n>> release the partition lock; and then only as a second step record (in\n>> the form of a PROCLOCK) the fact that we got it.\n>\n> [ confused... ]  Exactly what do you suppose \"acquire the lock\" would\n> be represented as, if not \"create a PROCLOCK entry attached to it\"?\n\nUpdate the \"granted\" array and, if necessary, the grantMask.\n\n> In any case, I think this is another example of not understanding where\n> the costs really are.\n\nPossible.\n\n> As far as I can tell, on modern MP systems much\n> of the elapsed time in these operations comes from acquiring exclusive\n> access to shared-memory cache lines.  Reducing the number of changes you\n> have to make within a small area of shared memory won't save much, once\n> you've paid for the first one.\n\nSeems reasonable.\n\n> Changing structures that aren't heavily\n> contended (such as a proc's list of its own locks) doesn't cost much at\n> all.\n\nI'm not sure where you're getting the idea that a proc's list of its\nown locks isn't heavily contended. That could be true, but it isn't\nobvious to me. We allocate PROCLOCK structures out of a shared hash\ntable while holding the lock manager partition lock, and we add every\nlock to a queue associated with the PROC and a second queue associated\nwith the LOCK. 
So if two processes acquire an AccessShareLock on the\nsame table, both the LOCK object and at least the SHM_QUEUE portions\nof each PROCLOCK are shared, and those aren't necessarily nearby in\nmemory.\n\n> One thing that might be interesting, but that I don't know how to attack\n> in a reasonably machine-independent way, is to try to ensure that shared\n> and local data structures don't accidentally overlap within cache lines.\n> When they do, you pay for fighting the cache line away from another\n> processor even when there's no real need.\n\nI'd be sort of surprised if this is a problem - as I understand it,\ncache lines are small, contiguous chunks, and surely the heap and the\nshared memory segment are mapped into different portions of the\naddress space...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Tue, 7 Dec 2010 13:09:08 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance under contention" }, { "msg_contents": "On Tue, Dec 7, 2010 at 1:08 PM, Ivan Voras <[email protected]> wrote:\n> On 7 December 2010 18:37, Robert Haas <[email protected]> wrote:\n>> On Mon, Dec 6, 2010 at 9:59 PM, Jignesh Shah <[email protected]> wrote:\n>>> That's exactly what I concluded when I was doing the sysbench simple\n>>> read-only test. I had also tried with different lock partitions and it\n>>> did not help since they all go after the same table. I think one way\n>>> to kind of avoid the problem on the same table is to do more granular\n>>> locking (Maybe at page level instead of table level). But then I dont\n>>> really understand on how to even create a prototype related to this\n>>> one. If you can help create a prototype then I can test it out with my\n>>> setup and see if it helps us to catch up with other guys out there.\n>>\n>> We're trying to lock the table against a concurrent DROP or schema\n>> change, so locking only part of it doesn't really work.  I don't\n>> really see any way to avoid needing some kind of a lock here; the\n>> trick is how to take it quickly.  The main obstacle to making this\n>> faster is that the deadlock detector needs to be able to obtain enough\n>> information to break cycles, which means we've got to record in shared\n>> memory not only the locks that are granted but who has them.\n>\n> I'm not very familiar with PostgreSQL code but if we're\n> brainstorming... if you're only trying to protect against a small\n> number of expensive operations (like DROP, etc.) that don't really\n> happen often, wouldn't an atomic reference counter be good enough for\n> the purpose (e.g. the expensive operations would spin-wait until the\n> counter is 0)?\n\nNo, because (1) busy-waiting is only suitable for locks that will only\nbe held for a short time, and an AccessShareLock on a table might be\nheld while we read 10GB of data in from disk, and (2) that wouldn't\nallow for deadlock detection.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Tue, 7 Dec 2010 13:10:54 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance under contention" }, { "msg_contents": "On 7 December 2010 19:10, Robert Haas <[email protected]> wrote:\n\n>> I'm not very familiar with PostgreSQL code but if we're\n>> brainstorming... if you're only trying to protect against a small\n>> number of expensive operations (like DROP, etc.) 
that don't really\n>> happen often, wouldn't an atomic reference counter be good enough for\n>> the purpose (e.g. the expensive operations would spin-wait until the\n>> counter is 0)?\n>\n> No, because (1) busy-waiting is only suitable for locks that will only\n> be held for a short time, and an AccessShareLock on a table might be\n> held while we read 10GB of data in from disk,\n\nGenerally yes, but a variant with adaptive sleeping could possibly be\nused if it would be acceptable to delay (uncertainly) the already\nexpensive and rare operations.\n\n> and (2) that wouldn't\n> allow for deadlock detection.\n\nProbably :)\n", "msg_date": "Tue, 7 Dec 2010 19:21:13 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance under contention" }, { "msg_contents": "2010/12/7 Robert Haas <[email protected]>\n\n> On Tue, Dec 7, 2010 at 1:08 PM, Ivan Voras <[email protected]> wrote:\n>\n> > I'm not very familiar with PostgreSQL code but if we're\n> > brainstorming... if you're only trying to protect against a small\n> > number of expensive operations (like DROP, etc.) that don't really\n> > happen often, wouldn't an atomic reference counter be good enough for\n> > the purpose (e.g. the expensive operations would spin-wait until the\n> > counter is 0)?\n>\n> No, because (1) busy-waiting is only suitable for locks that will only\n> be held for a short time, and an AccessShareLock on a table might be\n> held while we read 10GB of data in from disk, and (2) that wouldn't\n> allow for deadlock detection.\n>\n\nAs far as I understand this thread, the talk is about contention - where\nlarge number of processors want to get single partition lock to get\nhigh-level shared lock.\nAs far as I can see from the source, there is a lot of code executed under\nthe partition lock protection, like two hash searches (and possibly\nallocations).\nWhat can be done, is that number of locks can be increased - one could use\nspin locks for hash table manipulations, e.g. a lock preventing rehashing\n(number of baskets being changed) and a lock for required basket.\nIn this case only small range of code can be protected by partition lock.\nAs for me, this will make locking process more cpu-intensive (more locks\nwill be acquired/freed during the exection), but will decrease contention\n(since all but one lock can be spin locks working on atomic counters, hash\nsearches can be done in parallel), won't it?\nThe thing I am not sure in is how much spinlocks on atomic counters cost\ntoday.\n\n-- \nBest regards,\n Vitalii Tymchyshyn\n\n2010/12/7 Robert Haas <[email protected]>\nOn Tue, Dec 7, 2010 at 1:08 PM, Ivan Voras <[email protected]> wrote:\n> I'm not very familiar with PostgreSQL code but if we're\n> brainstorming... if you're only trying to protect against a small\n> number of expensive operations (like DROP, etc.) that don't really\n> happen often, wouldn't an atomic reference counter be good enough for\n> the purpose (e.g. 
the expensive operations would spin-wait until the\n> counter is 0)?\n\nNo, because (1) busy-waiting is only suitable for locks that will only\nbe held for a short time, and an AccessShareLock on a table might be\nheld while we read 10GB of data in from disk, and (2) that wouldn't\nallow for deadlock detection.\nAs far as I understand this thread, the talk is about contention - where large number of processors want to get single partition lock to get high-level shared lock.\nAs far as I can see from the source, there is a lot of code executed under the partition lock protection, like two hash searches (and possibly allocations).What can be done, is that number of locks can be increased - one could use spin locks for hash table manipulations, e.g. a lock preventing rehashing (number of baskets being changed) and a lock for required basket.\nIn this case only small range of code can be protected by partition lock.As for me, this will make locking process more cpu-intensive (more locks will be acquired/freed during the exection), but will decrease contention (since all but one lock can be spin locks working on atomic counters, hash searches can be done in parallel), won't it?\nThe thing I am not sure in is how much spinlocks on atomic counters cost today.   -- Best regards, Vitalii Tymchyshyn", "msg_date": "Tue, 7 Dec 2010 23:36:02 +0200", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance under contention" }, { "msg_contents": "2010/12/7 Віталій Тимчишин <[email protected]>:\n>\n>\n> 2010/12/7 Robert Haas <[email protected]>\n>>\n>> On Tue, Dec 7, 2010 at 1:08 PM, Ivan Voras <[email protected]> wrote:\n>>\n>> > I'm not very familiar with PostgreSQL code but if we're\n>> > brainstorming... if you're only trying to protect against a small\n>> > number of expensive operations (like DROP, etc.) that don't really\n>> > happen often, wouldn't an atomic reference counter be good enough for\n>> > the purpose (e.g. the expensive operations would spin-wait until the\n>> > counter is 0)?\n>>\n>> No, because (1) busy-waiting is only suitable for locks that will only\n>> be held for a short time, and an AccessShareLock on a table might be\n>> held while we read 10GB of data in from disk, and (2) that wouldn't\n>> allow for deadlock detection.\n\n> What can be done, is that number of locks can be increased - one could use\n> spin locks for hash table manipulations, e.g. a lock preventing rehashing\n> (number of baskets being changed) and a lock for required basket.\n> In this case only small range of code can be protected by partition lock.\n> As for me, this will make locking process more cpu-intensive (more locks\n> will be acquired/freed during the exection), but will decrease contention\n> (since all but one lock can be spin locks working on atomic counters, hash\n> searches can be done in parallel), won't it?\n\nFor what it's worth, this is pretty much the opposite of what I had in\nmind. 
I proposed atomic reference counters (as others pointed, this\nprobably won't work) as poor-man's shared-exclusive locks, so that\nmost operations would not have to contend on them.\n", "msg_date": "Tue, 7 Dec 2010 23:43:14 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance under contention" }, { "msg_contents": "2010/12/7 Віталій Тимчишин <[email protected]>:\n> As far as I can see from the source, there is a lot of code executed under\n> the partition lock protection, like two hash searches (and possibly\n> allocations).\n\nYeah, that was my concern, too, though Tom seems skeptical (perhaps\nrightly). And I'm not really sure why the PROCLOCKs need to be in a\nhash table anyway - if we know the PROC and LOCK we can surely look up\nthe PROCLOCK pretty expensively by following the PROC SHM_QUEUE.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Tue, 7 Dec 2010 23:23:48 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance under contention" }, { "msg_contents": "2010/12/7 Robert Haas <[email protected]>:\n> 2010/12/7 Віталій Тимчишин <[email protected]>:\n>> As far as I can see from the source, there is a lot of code executed under\n>> the partition lock protection, like two hash searches (and possibly\n>> allocations).\n>\n> Yeah, that was my concern, too, though Tom seems skeptical (perhaps\n> rightly).  And I'm not really sure why the PROCLOCKs need to be in a\n> hash table anyway - if we know the PROC and LOCK we can surely look up\n> the PROCLOCK pretty expensively by following the PROC SHM_QUEUE.\n\nErr, pretty INexpensively.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Tue, 7 Dec 2010 23:24:14 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance under contention" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n>> Yeah, that was my concern, too, though Tom seems skeptical (perhaps\n>> rightly). �And I'm not really sure why the PROCLOCKs need to be in a\n>> hash table anyway - if we know the PROC and LOCK we can surely look up\n>> the PROCLOCK pretty expensively by following the PROC SHM_QUEUE.\n\n> Err, pretty INexpensively.\n\nThere are plenty of scenarios in which a proc might hold hundreds or\neven thousands of locks. pg_dump, for example. You do not want to be\ndoing seq search there.\n\nNow, it's possible that you could avoid *ever* needing to search for a\nspecific PROCLOCK, in which case eliminating the hash calculation\noverhead might be worth it. Of course, you'd still have to replicate\nall the space-management functionality of a shared hash table.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 08 Dec 2010 09:34:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance under contention " }, { "msg_contents": "2010/12/8 Tom Lane <[email protected]>:\n> Robert Haas <[email protected]> writes:\n>>> Yeah, that was my concern, too, though Tom seems skeptical (perhaps\n>>> rightly). šAnd I'm not really sure why the PROCLOCKs need to be in a\n>>> hash table anyway - if we know the PROC and LOCK we can surely look up\n>>> the PROCLOCK pretty expensively by following the PROC SHM_QUEUE.\n>\n>> Err, pretty INexpensively.\n>\n> There are plenty of scenarios in which a proc might hold hundreds or\n> even thousands of locks. 
 pg_dump, for example.  You do not want to be\n> doing seq search there.\n>\n> Now, it's possible that you could avoid *ever* needing to search for a\n> specific PROCLOCK, in which case eliminating the hash calculation\n> overhead might be worth it.\n\nThat seems like it might be feasible. The backend that holds the lock\nought to be able to find out whether there's a PROCLOCK by looking at\nthe LOCALLOCK table, and the LOCALLOCK has a pointer to the PROCLOCK.\nIt's not clear to me whether there's any other use case for doing a\nlookup for a particular combination of PROC A + LOCK B, but I'll have\nto look at the code more closely.\n\n> Of course, you'd still have to replicate\n> all the space-management functionality of a shared hash table.\n\nMaybe we ought to revisit Markus Wanner's wamalloc. Although given\nour recent discussions, I'm thinking that you might want to try to\ndesign any allocation system so as to minimize cache line contention.\nFor example, you could hard-allocate each backend 512 bytes of\ndedicated shared memory in which to record the locks it holds. If it\nneeds more, it allocates additional 512 byte chunks.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Wed, 8 Dec 2010 16:09:17 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance under contention" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> 2010/12/8 Tom Lane <[email protected]>:\n>> Now, it's possible that you could avoid *ever* needing to search for a\n>> specific PROCLOCK, in which case eliminating the hash calculation\n>> overhead might be worth it.\n\n> That seems like it might be feasible. The backend that holds the lock\n> ought to be able to find out whether there's a PROCLOCK by looking at\n> the LOCALLOCK table, and the LOCALLOCK has a pointer to the PROCLOCK.\n\nHm, that is a real good point. Those shared memory data structures\npredate the invention of the local lock tables, and I don't think we\nlooked real hard at whether we should rethink the fundamental\nrepresentation in shared memory given the additional local state.\nThe issue though is whether any other processes ever need to look\nat a proc's PROCLOCKs. I think at least deadlock detection does.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 08 Dec 2010 17:02:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance under contention " }, { "msg_contents": "2010/12/8 Tom Lane <[email protected]>:\n> Robert Haas <[email protected]> writes:\n>> 2010/12/8 Tom Lane <[email protected]>:\n>>> Now, it's possible that you could avoid *ever* needing to search for a\n>>> specific PROCLOCK, in which case eliminating the hash calculation\n>>> overhead might be worth it.\n>\n>> That seems like it might be feasible.  The backend that holds the lock\n>> ought to be able to find out whether there's a PROCLOCK by looking at\n>> the LOCALLOCK table, and the LOCALLOCK has a pointer to the PROCLOCK.\n>\n> Hm, that is a real good point.  Those shared memory data structures\n> predate the invention of the local lock tables, and I don't think we\n> looked real hard at whether we should rethink the fundamental\n> representation in shared memory given the additional local state.\n> The issue though is whether any other processes ever need to look\n> at a proc's PROCLOCKs.  I think at least deadlock detection does.\n\nSure, but it doesn't use the hash table to do it. 
All the PROCLOCKs\nfor any given LOCK are in a linked list; we just walk it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Wed, 8 Dec 2010 20:41:44 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance under contention" } ]
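Editorial aside: a minimal SQL sketch, not taken from the thread, of how the heavyweight-lock traffic discussed above can be watched from a live session. It only shows regular relation locks recorded in pg_locks; the lock-manager partition LWLocks that Robert measured are internal and, in this era, only visible through LWLOCK_STATS builds or DTrace/systemtap probes.

    -- One row per relation and lock mode, split by whether the request is granted.
    -- A single table accumulating a very large number of simultaneous (mostly
    -- granted) AccessShareLock entries is the access pattern behind the
    -- partition-lock contention described in this thread.
    SELECT l.relation::regclass AS relation,
           l.mode,
           l.granted,
           count(*) AS requests
    FROM pg_locks AS l
    WHERE l.locktype = 'relation'
    GROUP BY 1, 2, 3
    ORDER BY requests DESC;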
[ { "msg_contents": "Ivan Voras wrote:\n \n> After 16 clients (which is still good since there are only 12\n> \"real\" cores in the system), the performance drops sharply\n \nYet another data point to confirm the importance of connection\npooling. :-)\n \n-Kevin\n", "msg_date": "Sun, 21 Nov 2010 19:47:09 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance under contention" }, { "msg_contents": "On 11/22/10 02:47, Kevin Grittner wrote:\n> Ivan Voras wrote:\n>\n>> After 16 clients (which is still good since there are only 12\n>> \"real\" cores in the system), the performance drops sharply\n>\n> Yet another data point to confirm the importance of connection\n> pooling. :-)\n\nI agree, connection pooling will get rid of the symptom. But not the \nunderlying problem. I'm not saying that having 1000s of connections to \nthe database is a particularly good design, only that there shouldn't be \na sharp decline in performance when it does happen. Ideally, the \nperformance should remain the same as it was at its peek.\n\nI've been monitoring the server some more and it looks like there are \nperiods where almost all servers are in the semwait state followed by \nperiods of intensive work - approximately similar to the \"thundering \nherd\" problem, or maybe to what Josh Berkus has posted a few days ago.\n\n\n", "msg_date": "Mon, 22 Nov 2010 03:18:50 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance under contention" }, { "msg_contents": "On Sun, Nov 21, 2010 at 9:18 PM, Ivan Voras <[email protected]> wrote:\n> On 11/22/10 02:47, Kevin Grittner wrote:\n>>\n>> Ivan Voras  wrote:\n>>\n>>> After 16 clients (which is still good since there are only 12\n>>> \"real\" cores in the system), the performance drops sharply\n>>\n>> Yet another data point to confirm the importance of connection\n>> pooling.  :-)\n>\n> I agree, connection pooling will get rid of the symptom. But not the\n> underlying problem. I'm not saying that having 1000s of connections to the\n> database is a particularly good design, only that there shouldn't be a sharp\n> decline in performance when it does happen. Ideally, the performance should\n> remain the same as it was at its peek.\n>\n> I've been monitoring the server some more and it looks like there are\n> periods where almost all servers are in the semwait state followed by\n> periods of intensive work - approximately similar to the \"thundering herd\"\n> problem, or maybe to what Josh Berkus has posted a few days ago.\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nTry it with systemtap or dtrace and see if you find the same\nbottlenecks as I do in\nhttp://jkshah.blogspot.com/2010/11/postgresql-90-simple-select-scaling.html\n\nI will probably retry it with pgBench and see what I find ..\n\nRegards,\nJignesh\n", "msg_date": "Mon, 22 Nov 2010 01:54:50 -0500", "msg_from": "Jignesh Shah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance under contention" }, { "msg_contents": "Hi Ivan,\n\nWe have the same issue on our database machines (which are 2x6\nIntel(R) Xeon(R) CPU X5670 @ 2.93GHz with 24 logical cores and 144Gb\nof RAM) -- they run RHEL 5. 
The issue occurs with our normal OLTP\nworkload, so it's not just pgbench.\n\nWe use pgbouncer to limit total connections to 15 (this seemed to be\nthe 'sweet spot' for us) -- there's definitely a bunch of contention\non ... something... for a workload where you're running a lot of very\nfast SELECTs (around 2000-4000/s) from more than 15-16 clients.\n\nI had a chat with Neil C or Gavin S about this at some point, but I\nforget the reason for it. I don't think there's anything you can do\nfor it configuration-wise except use a connection pool.\n\nRegards,\nOmar\n\nOn Mon, Nov 22, 2010 at 5:54 PM, Jignesh Shah <[email protected]> wrote:\n> On Sun, Nov 21, 2010 at 9:18 PM, Ivan Voras <[email protected]> wrote:\n>> On 11/22/10 02:47, Kevin Grittner wrote:\n>>>\n>>> Ivan Voras  wrote:\n>>>\n>>>> After 16 clients (which is still good since there are only 12\n>>>> \"real\" cores in the system), the performance drops sharply\n>>>\n>>> Yet another data point to confirm the importance of connection\n>>> pooling.  :-)\n>>\n>> I agree, connection pooling will get rid of the symptom. But not the\n>> underlying problem. I'm not saying that having 1000s of connections to the\n>> database is a particularly good design, only that there shouldn't be a sharp\n>> decline in performance when it does happen. Ideally, the performance should\n>> remain the same as it was at its peek.\n>>\n>> I've been monitoring the server some more and it looks like there are\n>> periods where almost all servers are in the semwait state followed by\n>> periods of intensive work - approximately similar to the \"thundering herd\"\n>> problem, or maybe to what Josh Berkus has posted a few days ago.\n>>\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n> Try it with systemtap or dtrace and see if you find the same\n> bottlenecks as I do in\n> http://jkshah.blogspot.com/2010/11/postgresql-90-simple-select-scaling.html\n>\n> I will probably retry it with pgBench and see what  I find ..\n>\n> Regards,\n> Jignesh\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Mon, 22 Nov 2010 21:01:54 +1100", "msg_from": "Omar Kilani <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance under contention" }, { "msg_contents": "Ivan Voras <[email protected]> wrote:\n> On 11/22/10 02:47, Kevin Grittner wrote:\n>> Ivan Voras wrote:\n>>\n>>> After 16 clients (which is still good since there are only 12\n>>> \"real\" cores in the system), the performance drops sharply\n>>\n>> Yet another data point to confirm the importance of connection\n>> pooling. :-)\n> \n> I agree, connection pooling will get rid of the symptom. But not\n> the underlying problem. I'm not saying that having 1000s of\n> connections to the database is a particularly good design, only\n> that there shouldn't be a sharp decline in performance when it\n> does happen. Ideally, the performance should remain the same as it\n> was at its peek.\n \nWell, I suggested that we add an admission control[1] mechanism,\nwith at least part of the initial default policy being that there is\na limit on the number of active database transactions. 
Such a\npolicy would do what you are suggesting, but the idea was shot down\non the basis that in most of the cases where this would help, people\nwould be better served by using an external connection pool.\n \nIf interested, search the archives for details of the discussion.\n \n-Kevin\n \n[1] http://db.cs.berkeley.edu/papers/fntdb07-architecture.pdf\nJoseph M. Hellerstein, Michael Stonebraker and James Hamilton. 2007.\nArchitecture of a Database System. Foundations and Trends(R) in\nDatabases Vol. 1, No. 2 (2007) 141*259\n(see Section 2.4 - Admission Control)\n", "msg_date": "Mon, 22 Nov 2010 09:26:21 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance under contention" }, { "msg_contents": "On 11/22/10 16:26, Kevin Grittner wrote:\n> Ivan Voras<[email protected]> wrote:\n>> On 11/22/10 02:47, Kevin Grittner wrote:\n>>> Ivan Voras wrote:\n>>>\n>>>> After 16 clients (which is still good since there are only 12\n>>>> \"real\" cores in the system), the performance drops sharply\n>>>\n>>> Yet another data point to confirm the importance of connection\n>>> pooling. :-)\n>>\n>> I agree, connection pooling will get rid of the symptom. But not\n>> the underlying problem. I'm not saying that having 1000s of\n>> connections to the database is a particularly good design, only\n>> that there shouldn't be a sharp decline in performance when it\n>> does happen. Ideally, the performance should remain the same as it\n>> was at its peek.\n>\n> Well, I suggested that we add an admission control[1] mechanism,\n\nIt looks like a hack (and one which is already implemented by connection \npool software); the underlying problem should be addressed.\n\nBut on the other hand if it's affecting so many people, maybe a warning \ncomment in postgresql.conf around max_connections would be helpful.\n\n", "msg_date": "Mon, 22 Nov 2010 16:38:28 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance under contention" }, { "msg_contents": "On 11/22/2010 11:38 PM, Ivan Voras wrote:\n> On 11/22/10 16:26, Kevin Grittner wrote:\n>> Ivan Voras<[email protected]> wrote:\n>>> On 11/22/10 02:47, Kevin Grittner wrote:\n>>>> Ivan Voras wrote:\n>>>>\n>>>>> After 16 clients (which is still good since there are only 12\n>>>>> \"real\" cores in the system), the performance drops sharply\n>>>>\n>>>> Yet another data point to confirm the importance of connection\n>>>> pooling. :-)\n>>>\n>>> I agree, connection pooling will get rid of the symptom. But not\n>>> the underlying problem. I'm not saying that having 1000s of\n>>> connections to the database is a particularly good design, only\n>>> that there shouldn't be a sharp decline in performance when it\n>>> does happen. Ideally, the performance should remain the same as it\n>>> was at its peek.\n>>\n>> Well, I suggested that we add an admission control[1] mechanism,\n>\n> It looks like a hack (and one which is already implemented by connection\n> pool software); the underlying problem should be addressed.\n\nMy (poor) understanding is that addressing the underlying problem would \nrequire a massive restructure of postgresql to separate \"connection and \nsession state\" from \"executor and backend\". Idle connections wouldn't \nrequire a backend to sit around unused but participating in all-backends \nsynchronization and signalling. 
Active connections over a configured \nmaximum concurrency limit would queue for access to a backend rather \nthan fighting it out for resources at the OS level.\n\nThe trouble is that this would be an *enormous* rewrite of the codebase, \nand would still only solve part of the problem. See the prior discussion \non in-server connection pooling and admission control.\n\nPersonally I think the current approach is clearly difficult for many \nadmins to understand and it's unfortunate that it requires external \nsoftware to be effective. OTOH, I'm not sure what the answer is.\n\n--\nCraig Ringer\n\n", "msg_date": "Wed, 24 Nov 2010 08:11:28 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance under contention" }, { "msg_contents": "On 24 November 2010 01:11, Craig Ringer <[email protected]> wrote:\n> On 11/22/2010 11:38 PM, Ivan Voras wrote:\n\n>> It looks like a hack (and one which is already implemented by connection\n>> pool software); the underlying problem should be addressed.\n>\n> My (poor) understanding is that addressing the underlying problem would\n> require a massive restructure of postgresql to separate \"connection and\n> session state\" from \"executor and backend\". Idle connections wouldn't\n> require a backend to sit around unused but participating in all-backends\n> synchronization and signalling. Active connections over a configured maximum\n> concurrency limit would queue for access to a backend rather than fighting\n> it out for resources at the OS level.\n>\n> The trouble is that this would be an *enormous* rewrite of the codebase, and\n> would still only solve part of the problem. See the prior discussion on\n> in-server connection pooling and admission control.\n\nI'm (also) not a PostgreSQL developer so I'm hoping that someone who\nis will join the thread, but speaking generally, there is no reason\nwhy this couldn't be a simpler problem which just requires\nfiner-grained locking or smarter semaphore usage.\n\nI'm not talking about forcing performance out of situation where there\nare no more CPU cycles to take, but about degrading gracefully in\nthose circumstances and not taking a 80%+ drop because of spinning\naround in semaphore syscalls.\n", "msg_date": "Wed, 24 Nov 2010 02:09:32 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance under contention" }, { "msg_contents": "24.11.10 02:11, Craig Ringer написав(ла):\n> On 11/22/2010 11:38 PM, Ivan Voras wrote:\n>> On 11/22/10 16:26, Kevin Grittner wrote:\n>>> Ivan Voras<[email protected]> wrote:\n>>>> On 11/22/10 02:47, Kevin Grittner wrote:\n>>>>> Ivan Voras wrote:\n>>>>>\n>>>>>> After 16 clients (which is still good since there are only 12\n>>>>>> \"real\" cores in the system), the performance drops sharply\n>>>>>\n>>>>> Yet another data point to confirm the importance of connection\n>>>>> pooling. :-)\n>>>>\n>>>> I agree, connection pooling will get rid of the symptom. But not\n>>>> the underlying problem. I'm not saying that having 1000s of\n>>>> connections to the database is a particularly good design, only\n>>>> that there shouldn't be a sharp decline in performance when it\n>>>> does happen. 
Ideally, the performance should remain the same as it\n>>>> was at its peek.\n>>>\n>>> Well, I suggested that we add an admission control[1] mechanism,\n>>\n>> It looks like a hack (and one which is already implemented by connection\n>> pool software); the underlying problem should be addressed.\n>\n> My (poor) understanding is that addressing the underlying problem \n> would require a massive restructure of postgresql to separate \n> \"connection and session state\" from \"executor and backend\". Idle \n> connections wouldn't require a backend to sit around unused but \n> participating in all-backends synchronization and signalling. Active \n> connections over a configured maximum concurrency limit would queue \n> for access to a backend rather than fighting it out for resources at \n> the OS level.\n>\n> The trouble is that this would be an *enormous* rewrite of the \n> codebase, and would still only solve part of the problem. See the \n> prior discussion on in-server connection pooling and admission control.\nHello.\n\nIMHO the main problem is not a backend sitting and doing nothing, but \nmultiple backends trying to do their work. So, as for me, the simplest \noption that will make most people happy would be to have a limit \n(waitable semaphore) on backends actively executing the query. Such a \nlimit can even be automatically detected based on number of CPUs \n(simple) and spindels (not sure if simple, but some default can be \nused). Idle (or waiting for a lock) backend consumes little resources. \nIf one want to reduce resource usage for such a backends, he can \nintroduce external pooling, but such a simple limit would make me happy \n(e.g. having max_active_connections=1000, max_active_queries=20).\nThe main Q here, is how much resources can take a backend that is \nwaiting for a lock. Is locking done at the query start? Or it may go \ninto wait while consumed much of work_mem. In the second case, the limit \nwon't be work_mem limit, but will still prevent much contention.\n\nBest regards, Vitalii Tymchyshyn\n", "msg_date": "Wed, 24 Nov 2010 10:58:16 +0200", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance under contention" }, { "msg_contents": "Vitalii Tymchyshyn <[email protected]> wrote:\n \n> the simplest option that will make most people happy would be to\n> have a limit (waitable semaphore) on backends actively executing\n> the query.\n \nThat's very similar to the admission control policy I proposed,\nexcept that I suggested a limit on the number of active database\ntransactions rather than the number of queries. The reason is that\nyou could still get into a lot of lock contention with a query-based\nlimit -- a query could acquire locks (perhaps by writing rows to the\ndatabase) and then be blocked waiting its turn, leading to conflicts\nwith other transactions. Such problems would be less common with a\ntransaction limit, since most common locks don't persist past the\nend of the transaction.\n \n-Kevin\n", "msg_date": "Wed, 24 Nov 2010 08:46:29 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance under contention" } ]
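Editorial aside: nothing built into 8.4/9.0 implements the transaction-based admission control proposed above, but a crude per-role or per-database cap on connections does exist and is sometimes combined with an external pool. The role and database names below are assumptions for illustration, and these statements cap connections, not active transactions, so they only approximate the policy being discussed.

    -- Hypothetical names; limits apply to new connections only.
    ALTER ROLE app_user CONNECTION LIMIT 20;
    ALTER DATABASE app_db CONNECTION LIMIT 50;

An external pooler such as pgbouncer or pgpool remains the usual way to actually queue excess work rather than reject it.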
[ { "msg_contents": "Hi,\n\nI am using Postgresql: 9.01, PostGIS 1.5 on FreeBSD 7.0. I have at\nleast one table on which SELECT's turn terribly slow from time to time.\nThis happened at least three times, also on version 8.4.\n\nThe table has only ~1400 rows. A count(*) takes more than 70 seconds.\nOther tables are fast as usual.\n\nWhen this happens I can also see my system's disks are suffering.\n'systat -vm' shows 100% disk load at ~4MB/sec data rates.\n\nA simple VACUUM does *not* fix it, a VACUUM FULL however does. See the\ntextfile attached.\n\nMy postgresql.conf is untouched as per distribution.\n\nCan someone hint me how I can troubleshoot this problem?\n\nThanks!\n\nMartin", "msg_date": "Mon, 22 Nov 2010 09:59:30 +0100", "msg_from": "Martin Boese <[email protected]>", "msg_from_op": true, "msg_subject": "Slow SELECT on small table" }, { "msg_contents": "Martin Boese <[email protected]> wrote:\n \n> The table has only ~1400 rows. A count(*) takes more than 70\n> seconds. Other tables are fast as usual.\n> \n> When this happens I can also see my system's disks are suffering.\n> 'systat -vm' shows 100% disk load at ~4MB/sec data rates.\n> \n> A simple VACUUM does *not* fix it, a VACUUM FULL however does. See\n> the textfile attached.\n \nThis is almost certainly a result of bloat on this table. \nAutovacuum should normally protect you from that, but there are a\nfew things which can prevent it from doing so, like long-running\ntransactions or repeated updates against the entire table in a short\ntime. There has also been a bug found recently which, as I\nunderstand it, can cause autovacuum to become less aggressive over\ntime, which might possibly contribute to this sort of problem.\n \nYou appear to have snipped the portion of the vacuum output which\nmight have confirmed and quantified the problem. If you get into\nthis state again, the entire output of this would be informative:\n \nVACUUM VERBOSE public.circuit;\n \nThe goal would be to try to prevent the bloat in the first place so\nthat you don't need to use aggressive maintenance like VACUUM FULL\nto recover. Manual vacuums or tweaking the autovacuum parameters\nmay help. Also, keep an eye out for maintenance releases for 9.0;\nthere's likely to be a fix coming which will help you with this.\n \n-Kevin\n", "msg_date": "Mon, 22 Nov 2010 09:11:15 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow SELECT on small table" } ]
[ { "msg_contents": "Ivan Voras <[email protected]> wrote:\n \n> It looks like a hack\n \nNot to everyone. In the referenced section, Hellerstein,\nStonebraker and Hamilton say:\n \n\"any good multi-user system has an admission control policy\"\n \nIn the case of PostgreSQL I understand the counter-argument,\nalthough I'm inclined to think that it's prudent for a product to\nlimit resource usage to a level at which it can still function well,\neven if there's an external solution which can also work, should\npeople use it correctly. It seems likely that a mature admission\ncontrol policy could do a better job of managing some resources than\nan external product could.\n \n-Kevin\n", "msg_date": "Mon, 22 Nov 2010 11:47:15 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance under contention" }, { "msg_contents": "On 11/22/10 18:47, Kevin Grittner wrote:\n> Ivan Voras<[email protected]> wrote:\n>\n>> It looks like a hack\n>\n> Not to everyone. In the referenced section, Hellerstein,\n> Stonebraker and Hamilton say:\n>\n> \"any good multi-user system has an admission control policy\"\n>\n> In the case of PostgreSQL I understand the counter-argument,\n> although I'm inclined to think that it's prudent for a product to\n> limit resource usage to a level at which it can still function well,\n> even if there's an external solution which can also work, should\n> people use it correctly. It seems likely that a mature admission\n> control policy could do a better job of managing some resources than\n> an external product could.\n\nI didn't think it would be that useful but yesterday I did some \n(unrelated) testing with MySQL and it looks like its configuration \nparameter \"thread_concurrency\" does something to that effect.\n\nInitially I thought it is equivalent to PostgreSQL's max_connections but \nno, connections can grow (MySQL spawns a thread per connection by \ndefault) but the actual concurrency is limited in some way by this \nparameter.\n\nThe comment for the parameter says \"# Try number of CPU's*2 for \nthread_concurrency\" but obviously it would depend a lot on the \nreal-world load.\n\n\n", "msg_date": "Thu, 25 Nov 2010 13:39:42 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance under contention" } ]
[ { "msg_contents": "Hello.\n\nI have a query which works a bit slow.\n\nIt's runned on desktop computer: AMD Athlon X2 2GHz , Win Xp sp2, 1GB ram.\nPostgres 8.4.5 with some changes in config:\n\nshared_buffers = 200MB # min 128kB\n # (change requires restart)\ntemp_buffers = 8MB # min 800kB\nwork_mem = 12MB # min 64kB\nmaintenance_work_mem = 32MB # min 1MB\n\nIndexes in table \"NumeryA\":\n\"NTA\", \"NKA\", \"KodBłędu\", \"Plik\" primary key\n\"DataPliku\", \"KodBłędu\" index dp_kb\n\"NKA\", \"NTA\" index nka_nta\n\nIndexes in table \"Rejestr stacji do naprawy\":\n\"LP\" - primary key\n\"Numer kierunkowy\", substr(\"Numer stacji\"::text, 1, 5) - index \"3\"\n\"Data weryfikacji\" - index \"Data weryfikacji_1\"\n\"Numer kierunkowy\", \"Numer stacji\", \"Data odrzucania bilingu z\nSerat\" - index \"Powtórzenia\"\n\n---------------------\nQuery is:\n----------------------\nSELECT\n A.\"NKA\",\n A.\"NTA\",\n Min(\"PołączeniaMin\") || ',' || Max(\"PołączeniaMax\") AS \"Biling\",\n Sum(\"Ile\")::text AS \"Ilość CDR\",\n R.\"LP\"::text AS \"Sprawa\",\n (R.\"Osoba weryfikująca\") AS \"Osoba\",\n to_char(min(\"Wartość\"),'FM9999990D00') AS \"Wartość po kontroli\",\n max(R.\"Kontrola po naprawie w Serat - CDR\")::text AS \"CDR po kontroli\",\n min(A.\"KodBłędu\")::text AS KodBłędu,\n Max(to_char(R.\"Data kontroli\",'YYYY-MM-DD')) AS \"Ostatnia Kontrola\"\n, max(\"Skutek wprowadzenia błednej ewidencji w Serat\") as \"Skutek\"\n, sum(www.a_biling_070(\"NRB\"))::text\n, sum(www.a_biling_darmowy(\"NRB\"))::text\nFROM\n (SELECT \"NumeryA\".*\n FROM ONLY \"NumeryA\"\n WHERE \"DataPliku\" >= current_date-4*30 and \"KodBłędu\"=74::text\n ) AS A\nLEFT JOIN\n (SELECT * FROM \"Rejestr stacji do naprawy\"\n WHERE \"Data weryfikacji\" >= current_date-4*30\n ) AS R\nON\n A.\"NKA\" = R.\"Numer kierunkowy\"\n and substr(A.\"NTA\",1,5) = substr(R.\"Numer stacji\",1,5)\n and A.\"NTA\" like R.\"Numer stacji\"\nGROUP BY R.\"Osoba weryfikująca\",R.\"LP\",A.\"NKA\", A.\"NTA\"\nORDER BY Sum(\"Ile\") DESC\nLIMIT 5000\n----------------------\nExplain analyze:\n----------------------\n\n\"Limit (cost=30999.84..31012.34 rows=5000 width=149) (actual\ntime=7448.483..7480.094 rows=5000 loops=1)\"\n\" -> Sort (cost=30999.84..31073.19 rows=29341 width=149) (actual\ntime=7448.475..7459.663 rows=5000 loops=1)\"\n\" Sort Key: (sum(\"NumeryA\".\"Ile\"))\"\n\" Sort Method: top-N heapsort Memory: 1488kB\"\n\" -> GroupAggregate (cost=11093.77..29050.46 rows=29341\nwidth=149) (actual time=4700.654..7377.762 rows=14225 loops=1)\"\n\" -> Sort (cost=11093.77..11167.12 rows=29341\nwidth=149) (actual time=4699.587..4812.776 rows=46732 loops=1)\"\n\" Sort Key: \"Rejestr stacji do naprawy\".\"Osoba\nweryfikująca\", \"Rejestr stacji do naprawy\".\"LP\", \"NumeryA\".\"NKA\",\n\"NumeryA\".\"NTA\"\"\n\" Sort Method: quicksort Memory: 9856kB\"\n\" -> Merge Left Join (cost=8297.99..8916.58\nrows=29341 width=149) (actual time=2931.449..3735.876 rows=46732\nloops=1)\"\n\" Merge Cond: (((\"NumeryA\".\"NKA\")::text =\n(\"Rejestr stacji do naprawy\".\"Numer kierunkowy\")::text) AND\n((substr((\"NumeryA\".\"NTA\")::text, 1, 5)) = (substr((\"Rejestr stacji do\nnaprawy\".\"Numer stacji\")::text, 1, 5))))\"\n\" Join Filter: ((\"NumeryA\".\"NTA\")::text ~~\n(\"Rejestr stacji do naprawy\".\"Numer stacji\")::text)\"\n\" -> Sort (cost=6062.18..6135.53 rows=29341\nwidth=95) (actual time=2131.297..2241.303 rows=46694 loops=1)\"\n\" Sort Key: \"NumeryA\".\"NKA\",\n(substr((\"NumeryA\".\"NTA\")::text, 1, 5))\"\n\" Sort Method: quicksort Memory: 7327kB\"\n\" -> Bitmap 
Heap Scan on \"NumeryA\"\n(cost=1502.09..3884.98 rows=29341 width=95) (actual\ntime=282.570..1215.355 rows=46694 loops=1)\"\n\" Recheck Cond: ((\"DataPliku\" >=\n(('now'::text)::date - 120)) AND ((\"KodBłędu\")::text = '74'::text))\"\n\" -> Bitmap Index Scan on dp_kb\n(cost=0.00..1494.75 rows=29341 width=0) (actual time=281.991..281.991\nrows=46694 loops=1)\"\n\" Index Cond: ((\"DataPliku\"\n>= (('now'::text)::date - 120)) AND ((\"KodBłędu\")::text =\n'74'::text))\"\n\" -> Sort (cost=2235.82..2285.03 rows=19684\nwidth=64) (actual time=800.101..922.463 rows=54902 loops=1)\"\n\" Sort Key: \"Rejestr stacji do\nnaprawy\".\"Numer kierunkowy\", (substr((\"Rejestr stacji do\nnaprawy\".\"Numer stacji\")::text, 1, 5))\"\n\" Sort Method: quicksort Memory: 3105kB\"\n\" -> Seq Scan on \"Rejestr stacji do\nnaprawy\" (cost=0.00..831.88 rows=19684 width=64) (actual\ntime=2.118..361.463 rows=19529 loops=1)\"\n\" Filter: (\"Data weryfikacji\" >=\n(('now'::text)::date - 120))\"\n\"Total runtime: 7495.697 ms\"\n---------------------------------\n\nHow to make it faster ?\n\n\n\n------------\npasman\n", "msg_date": "Wed, 24 Nov 2010 15:48:43 +0100", "msg_from": "=?ISO-8859-2?Q?pasman_pasma=F1ski?= <[email protected]>", "msg_from_op": true, "msg_subject": "Optimizing query" }, { "msg_contents": "\n\nNote that your LEFT JOIN condition is probably quite slow...\n\nPlease post EXPLAIN ANALYZE for this simplified version :\n\nSELECT\n\tR.\"Osoba weryfikuj?ca\",\n\tR.\"LP\",\n\tA.\"NKA\",\n\tA.\"NTA\",\n\tSum(\"Ile\")\nFROM\t\t\"NumeryA\" A\nLEFT JOIN\t\"Rejestr stacji do naprawy\" R ON (\n\t A.\"NKA\" = R.\"Numer kierunkowy\"\n\tand A.\"NTA\" like R.\"Numer stacji\"\n\tand substr(A.\"NTA\",1,5) = substr(R.\"Numer stacji\",1,5)\n)\nWHERE\n\t A.\"DataPliku\" >= current_date-4*30\n\tand A.\"KodB??du\"=74::text\n\tand R.\"Data weryfikacji\" >= current_date-4*30\nGROUP BY R.\"Osoba weryfikuj?ca\",R.\"LP\",A.\"NKA\", A.\"NTA\"\nORDER BY Sum(\"Ile\") DESC\nLIMIT 5000\n\nAnd also post EXPLAIN ANALYZE for this :\n\nSELECT\n\tA.\"NKA\",\n\tA.\"NTA\",\n\tSum(\"Ile\") AS ss -- if it's in this table\nFROM\t\t\"NumeryA\" A\nWHERE\n\t A.\"DataPliku\" >= current_date-4*30\n\tand A.\"KodB??du\"=74::text\nGROUP BY A.\"NKA\", A.\"NTA\"\n", "msg_date": "Fri, 26 Nov 2010 10:46:11 +0100", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing query" } ]
[ { "msg_contents": "hello,\n\ni have a big performance problem with some views which would joined \n(from the third party tool crystal reports) to print a document.\n\nview1:\n\nSELECT ...\nFROM \n personen.kunde kunde, \n personen.natuerliche_person person, \n viewakteur akteur, \n personen.anschrift adresse, \n personen.kontaktdaten kontakt, \n konten.bankverbindung konto, \n personen.berufsdaten beruf\nWHERE person.objid = kunde.objid AND akteur.objid = kunde.objid AND \nperson.adresse = adresse.objid AND person.kontaktdaten = kontakt.objid \nAND person.bankverbindung = konto.objid AND person.berufsdaten = \nberuf.objid\n\nview2: \n\nSELECT ...\nFROM vertraege.vertrag basisvertrag \n JOIN ..\n .. twelve more inner joins ..\n\nEach view works alone very fast for objid-access.(no sequence scans)\nThe final query build by crystal reports was like:\n\nSELECT ...\nFROM view2 INNER JOIN view1 ON view2.kunde_objid = view1.objid\nWHERE view2.objid = XXXX\n\nas you can see the search-key for view1 comes from view2.\n\nif i set \"from_collapse_limit\" (to merge the views) and \njoin_collapse_limit (to explode the explicit joins) high enough(approx \n32), all is fine (good performance). But other queries are really slow \nin our environment (therefore it's no option to raise the \njoin_collapse_limit to a higher value)\n\nWith defaults (8) for both, the performance is ugly because pgsql can't \nexplode the views to build a better join-table with view1. \n(basisvertrag.kunde_objid from view2 is the key for kunde.objid from \nview1).\n\nAs workaround nr.1 i can do the following:\n\nSELECT ...\nFROM view2 INNER JOIN view1 ON view2.kunde_objid = view1.objid \nWHERE view2.objid = XXXX AND view1.objid = YYYY\n\nyyyy (redundant information) is the same value as view2.kunde_objid. \nThis instructs pgsql to minimize the result of view1 (one entry). \nBut for this solution i must change hundreds of crystal report files.\n\n\nFor workaround nr.2 i need to instruct crystal report to generate a \ncross-join:\nSELECT ...\nFROM view2 , view1 \nWHERE view2.VNID = view1.ID AND view2.ID = XXXX \n\nThen i can slightly increase the from_collapse_limit (9) to enforce \npgsql to explode the view1 and build a better join-plan. But i don't \nfind a way to enforce crystal reports to using cross joins.\n\nWorkaround nr.3:\nbuild one big view which contains all parts of view1 and view2. \nReally ugly (view1 and view2 are used in many more places).\n\n\nWhat are the other options?\n\nRegards,\nmsc\n", "msg_date": "Wed, 24 Nov 2010 19:37:07 +0100", "msg_from": "Markus Schulz <[email protected]>", "msg_from_op": true, "msg_subject": "problem with from_collapse_limit and joined views" }, { "msg_contents": "Markus Schulz <[email protected]> wrote:\n \n> i have a big performance problem\n \n> [joining two complex views for reporting]\n \nWhat version of PostgreSQL is this? How is PostgreSQL configured? \n(The postgresql.conf file with all comments removed is good.)\n \n-Kevin\n", "msg_date": "Fri, 03 Dec 2010 14:32:32 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problem with from_collapse_limit and joined\n\t views" }, { "msg_contents": "Am Freitag, 3. Dezember 2010 schrieb Kevin Grittner:\n> Markus Schulz <[email protected]> wrote:\n> > i have a big performance problem\n> > \n> > [joining two complex views for reporting]\n> \n> What version of PostgreSQL is this? 
How is PostgreSQL configured?\n> (The postgresql.conf file with all comments removed is good.)\n\nProduction System is 8.4 (config attached).\nBut i've tried 9.0 with same result.\n\nRegards \nmsc", "msg_date": "Sat, 4 Dec 2010 11:24:02 +0100", "msg_from": "Markus Schulz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: problem with from_collapse_limit and joined views" } ]
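Editorial aside: since raising the collapse limits globally hurts the other queries, a hedged alternative to the three workarounds above is to scope the higher limits to just the reporting connections with per-role or per-database settings, which 8.4 already supports. The role and database names are assumptions, not from the thread.

    -- Applies only to sessions that log in as this role (e.g. the Crystal Reports
    -- data source), leaving the global defaults untouched for everything else.
    ALTER ROLE crystal_reports SET from_collapse_limit = 32;
    ALTER ROLE crystal_reports SET join_collapse_limit = 32;

    -- Or scope it to the reporting database instead:
    ALTER DATABASE reports_db SET from_collapse_limit = 32;
    ALTER DATABASE reports_db SET join_collapse_limit = 32;

If the reports connect as the same role and database as everything else, issuing a plain SET of the two parameters at the start of each reporting session achieves the same effect.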
[ { "msg_contents": "Hello Friends,\nI have many instances of my software running on a server (Solaris SPARC). Each \nsoftware instance requires some DB tables (same DDL for all instances' tables) \nto store data.\nIt essentially means that some processes from each instance of the software \nconnect to these tables.\nNow, should I put these tables in 1 Database's different schemas or in separate \ndatabases itself for good performance?\nI am using libpq for connection. \n\nPictorial Representation:\n\nProcess1 -> DB1.schema1.table1\n\nProcess2 -> DB1.schema2.table1\n\n Vs.\n\nProcess1 -> DB1.default.table1\n\nProcess2 -> DB2.default.table1\n\nWhich one is better?\n\n\n\n thanks in advance \n\n\n\n \nHello Friends,I have many instances of my software running on a server (Solaris SPARC). Each software instance requires some DB tables (same DDL for all instances' tables) to store data.It essentially means that some processes from each instance of the software connect to these tables.Now, should I put these tables in 1 Database's different schemas or in separate databases itself for good performance?I am using libpq for connection.   Pictorial Representation:Process1 -> DB1.schema1.table1Process2 -> DB1.schema2.table1  Vs.Process1 -> DB1.default.table1\n\nProcess2 -> DB2.default.table1Which one is better? thanks in advance", "msg_date": "Thu, 25 Nov 2010 03:37:36 -0800 (PST)", "msg_from": "Divakar Singh <[email protected]>", "msg_from_op": true, "msg_subject": "Which gives good performance? separate database vs separate schema" }, { "msg_contents": "Hello,\n\n> Now, should I put these tables in 1 Database's different schemas or in\n> separate\n> databases itself for good performance?\n> I am using libpq for connection.\n>\n> Pictorial Representation:\n>\n> Process1 -> DB1.schema1.table1\n>\n> Process2 -> DB1.schema2.table1\n>\n> Vs.\n>\n> Process1 -> DB1.default.table1\n>\n> Process2 -> DB2.default.table1\n>\n> Which one is better?\n\nWell, that depends on what you mean by \"database.\" In many other products\neach database is completely separate (with it's own cache, processes etc).\nIn PostgreSQL, there's a cluster of databases, and all of them share the\nsame cache (shared buffers) etc.\n\nI don't think you'll get performance improvement from running two\nPostgreSQL clusters (one for DB1, one for DB2). And when running two\ndatabases within the same cluster, there's no measurable performance\ndifference AFAIK.\n\nSo the two options are exactly the same.\n\nTomas\n\n", "msg_date": "Thu, 25 Nov 2010 13:02:08 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Which gives good performance? separate database vs separate\n schema" }, { "msg_contents": "Divakar Singh, 25.11.2010 12:37:\n> Hello Friends,\n> I have many instances of my software running on a server (Solaris SPARC). Each software instance requires some DB tables (same DDL for all instances' tables) to store data.\n> It essentially means that some processes from each instance of the software connect to these tables.\n> Now, should I put these tables in 1 Database's different schemas or in separate databases itself for good performance?\n> I am using libpq for connection.\n>\n\nI don't think it will make a big difference in performance.\n\nThe real question is: do you need queries that \"cross boundaries\"? 
If that is the case you have to use schema, because Postgres does not support cross-database queries.\n\nRegards\nThomas\n\n", "msg_date": "Thu, 25 Nov 2010 13:03:29 +0100", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which gives good performance? separate database vs separate\n schema" }, { "msg_contents": "> I don't think it will make a big difference in performance.\n>\n> The real question is: do you need queries that \"cross boundaries\"? If that\n> is the case you have to use schema, because Postgres does not support\n> cross-database queries.\n\nWell, there's dblink contrib module, but that won't improve performance.\n\nTomas\n\n", "msg_date": "Thu, 25 Nov 2010 13:09:06 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Which gives good performance? separate database vs separate\n schema" }, { "msg_contents": "On Thursday 25 November 2010 13:02:08 [email protected] wrote:\n> I don't think you'll get performance improvement from running two\n> PostgreSQL clusters (one for DB1, one for DB2). And when running two\n> databases within the same cluster, there's no measurable performance\n> difference AFAIK.\nThat one is definitely not true in many circumstances. As soon as you start to \nhit contention (shared memory, locks) you may very well be better of with two \nseparate clusters.\n\nAndres\n", "msg_date": "Thu, 25 Nov 2010 13:10:16 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which gives good performance? separate database vs separate\n schema" }, { "msg_contents": "> On Thursday 25 November 2010 13:02:08 [email protected] wrote:\n>> I don't think you'll get performance improvement from running two\n>> PostgreSQL clusters (one for DB1, one for DB2). And when running two\n>> databases within the same cluster, there's no measurable performance\n>> difference AFAIK.\n> That one is definitely not true in many circumstances. As soon as you\n> start to\n> hit contention (shared memory, locks) you may very well be better of with\n> two\n> separate clusters.\n>\n> Andres\n>\nGood point, I forgot about that. Anyway it's hard to predict what kind of\nperformance issue he's facing and whether two clusters would fix it.\n\nregards\nTomas\n\n", "msg_date": "Thu, 25 Nov 2010 13:25:33 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Which gives good performance? separate database vs separate\n schema" }, { "msg_contents": "I am not facing any issues, but yes I want to have optimal performance for \nSELECT and INSERT, especially when I am doing these ops repeatedly.\nActually I am porting from Oracle to PG. Oracle starts a lot of processes when \nit needs to run many schemas. I do not think PG would need much more resources \n(mem, cpu) if I go for different database for each process..? Also, is there any \nlimit on number of databases I can start using a PG server? \n\n\n Best Regards,\nDivakar\n\n\n\n\n________________________________\nFrom: \"[email protected]\" <[email protected]>\nTo: Andres Freund <[email protected]>\nCc: [email protected]; [email protected]; Divakar Singh \n<[email protected]>\nSent: Thu, November 25, 2010 5:55:33 PM\nSubject: Re: [PERFORM] Which gives good performance? separate database vs \nseparate schema\n\n> On Thursday 25 November 2010 13:02:08 [email protected] wrote:\n>> I don't think you'll get performance improvement from running two\n>> PostgreSQL clusters (one for DB1, one for DB2). 
And when running two\n>> databases within the same cluster, there's no measurable performance\n>> difference AFAIK.\n> That one is definitely not true in many circumstances. As soon as you\n> start to\n> hit contention (shared memory, locks) you may very well be better of with\n> two\n> separate clusters.\n>\n> Andres\n>\nGood point, I forgot about that. Anyway it's hard to predict what kind of\nperformance issue he's facing and whether two clusters would fix it.\n\nregards\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n \nI am not facing any issues, but yes I want to have optimal performance for SELECT and INSERT, especially when I am doing these ops repeatedly.Actually I am porting from Oracle to PG. Oracle starts a lot of processes when it needs to run many schemas. I do not think PG would need much more resources (mem, cpu) if I go for different database for each process..? Also, is there any limit on number of databases I can start using a PG server?  Best Regards,DivakarFrom: \"[email protected]\" <[email protected]>To: Andres Freund <[email protected]>Cc: [email protected]; [email protected]; Divakar Singh <[email protected]>Sent: Thu, November 25, 2010 5:55:33 PMSubject: Re: [PERFORM] Which gives good performance? separate database vs separate schema> On Thursday 25 November 2010 13:02:08 [email protected] wrote:>> I don't think you'll get performance improvement from running two>> PostgreSQL clusters (one for DB1, one for DB2). And when running two>> databases within the same cluster, there's no measurable performance>> difference AFAIK.> That one is definitely not true in many circumstances. As soon as you> start to> hit contention\n (shared memory, locks) you may very well be better of with> two> separate clusters.>> Andres>Good point, I forgot about that. Anyway it's hard to predict what kind ofperformance issue he's facing and whether two clusters would fix it.regardsTomas-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 25 Nov 2010 06:53:40 -0800 (PST)", "msg_from": "Divakar Singh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Which gives good performance? separate database vs separate\n schema" }, { "msg_contents": "> I am not facing any issues, but yes I want to have optimal performance for\n> SELECT and INSERT, especially when I am doing these ops repeatedly.\n> Actually I am porting from Oracle to PG. Oracle starts a lot of processes\n> when\n> it needs to run many schemas. I do not think PG would need much more\n> resources\n> (mem, cpu) if I go for different database for each process..? Also, is\n> there any\n> limit on number of databases I can start using a PG server?\n\nHm, I would try to run that using single cluster, and only if that does\nnot perform well I'd try multiple clusters. Yes, Oracle starts a lot of\nprocesses for an instance, and then some processes for each connection.\n\nBut again - in PostgreSQL, you do not start databases. You start a\ncluster, containing databases and then there are connections. 
This is\nsimilar to Oracle where you start instances (something like cluster in\nPostgreSQL) containing schemas (something like databases in PostgreSQL).\nAnd then you create connections, which is the object consuming processes\nand memory.\n\nPostgreSQL will create one process for each connection (roughly the same\nas Oracle in case of dedicated server). And yes, the number of connections\nis limited - see max_connections parameter in postgresql.conf.\n\nTomas\n\n>\n>\n> Best Regards,\n> Divakar\n>\n>\n>\n>\n> ________________________________\n> From: \"[email protected]\" <[email protected]>\n> To: Andres Freund <[email protected]>\n> Cc: [email protected]; [email protected]; Divakar Singh\n> <[email protected]>\n> Sent: Thu, November 25, 2010 5:55:33 PM\n> Subject: Re: [PERFORM] Which gives good performance? separate database vs\n> separate schema\n>\n>> On Thursday 25 November 2010 13:02:08 [email protected] wrote:\n>>> I don't think you'll get performance improvement from running two\n>>> PostgreSQL clusters (one for DB1, one for DB2). And when running two\n>>> databases within the same cluster, there's no measurable performance\n>>> difference AFAIK.\n>> That one is definitely not true in many circumstances. As soon as you\n>> start to\n>> hit contention (shared memory, locks) you may very well be better of\n>> with\n>> two\n>> separate clusters.\n>>\n>> Andres\n>>\n> Good point, I forgot about that. Anyway it's hard to predict what kind of\n> performance issue he's facing and whether two clusters would fix it.\n>\n> regards\n> Tomas\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n>\n>\n\n\n", "msg_date": "Thu, 25 Nov 2010 16:46:33 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Which gives good performance? separate database vs separate\n schema" }, { "msg_contents": "On Thu, Nov 25, 2010 at 4:46 PM, <[email protected]> wrote:\n>> I am not facing any issues, but yes I want to have optimal performance for\n>> SELECT and INSERT, especially when I am doing these ops repeatedly.\n>> Actually I am porting from Oracle to PG. Oracle starts a lot of processes\n>> when\n>> it needs to run many schemas. I do not think PG would need much more\n>> resources\n>> (mem, cpu) if I go for different database for each process..? Also, is\n>> there any\n>> limit on number of databases I can start using a PG server?\n>\n> Hm, I would try to run that using single cluster, and only if that does\n> not perform well I'd try multiple clusters. Yes, Oracle starts a lot of\n> processes for an instance, and then some processes for each connection.\n>\n> But again - in PostgreSQL, you do not start databases. You start a\n> cluster, containing databases and then there are connections. This is\n> similar to Oracle where you start instances (something like cluster in\n> PostgreSQL) containing schemas (something like databases in PostgreSQL).\n> And then you create connections, which is the object consuming processes\n> and memory.\n>\n> PostgreSQL will create one process for each connection (roughly the same\n> as Oracle in case of dedicated server). And yes, the number of connections\n> is limited - see max_connections parameter in postgresql.conf.\n\nI think this is a pretty common trade off that is frequently made:\nbasically the question is whether one wants to reserve resources or\nshare resources. 
In this case resources would be memory and maybe\nalso disk IO. With two separate clusters each one has its own memory.\n Which means that if one instance is idle and the other one has high\nload then the idle instance's memory cannot be used by the other one.\nWith a single cluster all the memory is shared which has the downside\nthat high load of one instance can affect the other instance's memory.\n\nIt depends on the usage patterns (load) and the user's policy which\nway to go. Since the OP mentioned \"many instances\" the aspect of\noverhead of many instances (even if idle) may come into play as well.\nPlus, a single cluster is likely easier to administer than multiple.\nBut of course the more DBs there are in a single cluster the higher the\nlikelihood of bottlenecks (see the other thread \"Performance under\ncontention\").\n\nKind regards\n\nrobert\n\n-- \nremember.guy do |as, often| as.you_can - without end\nhttp://blog.rubybestpractices.com/\n", "msg_date": "Fri, 26 Nov 2010 12:38:44 +0100", "msg_from": "Robert Klemme <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which gives good performance? separate database vs separate\n schema" } ]
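For the schema-per-instance variant discussed above, a short sketch of how every software instance can keep the identical DDL while staying isolated inside one database; the schema and role names (instance1, app_instance1) are invented for illustration, and table1 stands in for the shared table definition.

CREATE SCHEMA instance1;
CREATE TABLE instance1.table1 (id integer primary key, payload text);

-- each instance connects with its own role and sees "its" table1 unqualified
CREATE ROLE app_instance1 LOGIN PASSWORD 'secret';
ALTER ROLE app_instance1 SET search_path = instance1;
GRANT USAGE ON SCHEMA instance1 TO app_instance1;
GRANT SELECT, INSERT, UPDATE, DELETE ON instance1.table1 TO app_instance1;

With search_path fixed per role, the application's unqualified queries resolve to that instance's schema, so only the libpq connection credentials differ between instances, and cross-instance reporting stays possible because everything lives in one database.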
[ { "msg_contents": "Hello,\nI have a very large table that I'm not too fond of. I'm revising the design\nnow.\n\nUp until now its been insert only, storing tracking codes from incoming\nwebtraffic.\n\nIt has 8m rows\nIt appears to insert fine, but simple updates using psql are hanging.\n\nupdate ONLY traffic_tracking2010 set src_content_type_id = 90 where id =\n90322;\n\nI am also now trying to remove the constraints, this also hangs.\n\nalter table traffic_tracking2010 drop constraint\ntraffic_tracking2010_src_content_type_id_fkey;\n\nthanks in advance for any advice.\n\n\n Table \"public.traffic_tracking2010\"\n Column | Type |\n Modifiers\n---------------------+--------------------------+-------------------------------------------------------------------\n id | integer | not null default\nnextval('traffic_tracking2010_id_seq'::regclass)\n action_time | timestamp with time zone | not null\n user_id | integer |\n content_type_id | integer |\n object_id | integer |\n action_type | smallint | not null\n src_type | smallint |\n src_content_type_id | integer |\n src_object_id | integer |\n http_referrer | character varying(100) |\n search_term | character varying(50) |\n remote_addr | inet | not null\nIndexes:\n \"traffic_tracking2010_pkey\" PRIMARY KEY, btree (id)\n \"traffic_tracking2010_content_type_id\" btree (content_type_id)\n \"traffic_tracking2010_src_content_type_id\" btree (src_content_type_id)\n \"traffic_tracking2010_user_id\" btree (user_id)\nForeign-key constraints:\n \"traffic_tracking2010_content_type_id_fkey\" FOREIGN KEY\n(content_type_id) REFERENCES django_content_type(id) DEFERRABLE INITIALLY\nDEFERRED\n \"traffic_tracking2010_src_content_type_id_fkey\" FOREIGN KEY\n(src_content_type_id) REFERENCES django_content_type(id) DEFERRABLE\nINITIALLY DEFERRED\n \"traffic_tracking2010_user_id_fkey\" FOREIGN KEY (user_id) REFERENCES\nauth_user(id) DEFERRABLE INITIALLY DEFERRED\n\n\nThis is generated by Django's ORM.\n\nThe hang may be do having other clients connected, though I have tried doing\nthe update when I know all tracking inserts are stopped.\nBut the other client (the webapp) is still connected.\n\nbased on this:\nhttp://postgresql.1045698.n5.nabble.com/slow-full-table-update-td2070754.html\n\nns=> ANALYZE traffic_tracking2010;\nANALYZE\nns=> SELECT relpages, reltuples FROM pg_class WHERE relname =\n'traffic_tracking2010';\n relpages | reltuples\n----------+-------------\n 99037 | 8.38355e+06\n\nand I did vacuum it\n\nvacuum verbose traffic_tracking2010;\nINFO: vacuuming \"public.traffic_tracking2010\"\nINFO: scanned index \"traffic_tracking2010_pkey\" to remove 1057 row versions\nDETAIL: CPU 0.09s/0.37u sec elapsed 10.70 sec.\nINFO: scanned index \"traffic_tracking2010_user_id\" to remove 1057 row\nversions\nDETAIL: CPU 0.12s/0.30u sec elapsed 13.53 sec.\nINFO: scanned index \"traffic_tracking2010_content_type_id\" to remove 1057\nrow versions\nDETAIL: CPU 0.11s/0.28u sec elapsed 13.99 sec.\nINFO: scanned index \"traffic_tracking2010_src_content_type_id\" to remove\n1057 row versions\nDETAIL: CPU 0.09s/0.26u sec elapsed 15.57 sec.\nINFO: \"traffic_tracking2010\": removed 1057 row versions in 535 pages\nDETAIL: CPU 0.01s/0.02u sec elapsed 2.83 sec.\nINFO: index \"traffic_tracking2010_pkey\" now contains 8315147 row versions\nin 22787 pages\nDETAIL: 1057 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"traffic_tracking2010_user_id\" now contains 8315147 row\nversions in 29006 
pages\nDETAIL: 1057 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"traffic_tracking2010_content_type_id\" now contains 8315147 row\nversions in 28980 pages\nDETAIL: 1057 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"traffic_tracking2010_src_content_type_id\" now contains 8315147\nrow versions in 28978 pages\nDETAIL: 1057 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"traffic_tracking2010\": found 336 removable, 8315147 nonremovable row\nversions in 99035 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n25953 pages contain useful free space.\n0 pages are entirely empty.\nCPU 0.78s/1.49u sec elapsed 100.43 sec.\nINFO: vacuuming \"pg_toast.pg_toast_165961\"\nINFO: index \"pg_toast_165961_index\" now contains 0 row versions in 1 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.01 sec.\nINFO: \"pg_toast_165961\": found 0 removable, 0 nonremovable row versions in\n0 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages contain useful free space.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.06 sec.\n\nHello, I have a very large table that I'm not too fond of.  I'm revising the design now.Up until now its been insert only, storing tracking codes from incoming webtraffic.\nIt has 8m rowsIt appears to insert fine, but simple updates using psql are hanging.update ONLY traffic_tracking2010 set src_content_type_id = 90 where id = 90322;\nI am also now trying to remove the constraints, this also hangs.alter table traffic_tracking2010 drop constraint traffic_tracking2010_src_content_type_id_fkey;\nthanks in advance for any advice.\n                                        Table \"public.traffic_tracking2010\"\n       Column        |           Type           |                             Modifiers                             ---------------------+--------------------------+-------------------------------------------------------------------\n id                  | integer                  | not null default nextval('traffic_tracking2010_id_seq'::regclass) action_time         | timestamp with time zone | not null\n user_id             | integer                  |  content_type_id     | integer                  | \n object_id           | integer                  |  action_type         | smallint                 | not null\n src_type            | smallint                 |  src_content_type_id | integer                  | \n src_object_id       | integer                  |  http_referrer       | character varying(100)   | \n search_term         | character varying(50)    |  remote_addr         | inet                     | not null\nIndexes:    \"traffic_tracking2010_pkey\" PRIMARY KEY, btree (id)\n    \"traffic_tracking2010_content_type_id\" btree (content_type_id)    \"traffic_tracking2010_src_content_type_id\" btree (src_content_type_id)\n    \"traffic_tracking2010_user_id\" btree (user_id)Foreign-key constraints:\n    \"traffic_tracking2010_content_type_id_fkey\" FOREIGN KEY (content_type_id) REFERENCES django_content_type(id) DEFERRABLE INITIALLY DEFERRED\n    \"traffic_tracking2010_src_content_type_id_fkey\" FOREIGN 
KEY (src_content_type_id) REFERENCES django_content_type(id) DEFERRABLE INITIALLY DEFERRED\n    \"traffic_tracking2010_user_id_fkey\" FOREIGN KEY (user_id) REFERENCES auth_user(id) DEFERRABLE INITIALLY DEFERRED\nThis is generated by Django's ORM.  The hang may be do having other clients connected, though I have tried doing the update when I know all tracking inserts are stopped.\nBut the other client (the webapp) is still connected.based on this:http://postgresql.1045698.n5.nabble.com/slow-full-table-update-td2070754.html\nns=> ANALYZE traffic_tracking2010;ANALYZEns=> SELECT relpages, reltuples FROM pg_class WHERE relname = 'traffic_tracking2010'; relpages |  reltuples  \n----------+-------------    99037 | 8.38355e+06and I did vacuum itvacuum verbose traffic_tracking2010;INFO:  vacuuming \"public.traffic_tracking2010\"\nINFO:  scanned index \"traffic_tracking2010_pkey\" to remove 1057 row versionsDETAIL:  CPU 0.09s/0.37u sec elapsed 10.70 sec.INFO:  scanned index \"traffic_tracking2010_user_id\" to remove 1057 row versions\nDETAIL:  CPU 0.12s/0.30u sec elapsed 13.53 sec.INFO:  scanned index \"traffic_tracking2010_content_type_id\" to remove 1057 row versionsDETAIL:  CPU 0.11s/0.28u sec elapsed 13.99 sec.\nINFO:  scanned index \"traffic_tracking2010_src_content_type_id\" to remove 1057 row versionsDETAIL:  CPU 0.09s/0.26u sec elapsed 15.57 sec.INFO:  \"traffic_tracking2010\": removed 1057 row versions in 535 pages\nDETAIL:  CPU 0.01s/0.02u sec elapsed 2.83 sec.INFO:  index \"traffic_tracking2010_pkey\" now contains 8315147 row versions in 22787 pagesDETAIL:  1057 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO:  index \"traffic_tracking2010_user_id\" now contains 8315147 row versions in 29006 pages\nDETAIL:  1057 index row versions were removed.0 index pages have been deleted, 0 are currently reusable.CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO:  index \"traffic_tracking2010_content_type_id\" now contains 8315147 row versions in 28980 pages\nDETAIL:  1057 index row versions were removed.0 index pages have been deleted, 0 are currently reusable.CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO:  index \"traffic_tracking2010_src_content_type_id\" now contains 8315147 row versions in 28978 pages\nDETAIL:  1057 index row versions were removed.0 index pages have been deleted, 0 are currently reusable.CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO:  \"traffic_tracking2010\": found 336 removable, 8315147 nonremovable row versions in 99035 pages\nDETAIL:  0 dead row versions cannot be removed yet.There were 0 unused item pointers.25953 pages contain useful free space.0 pages are entirely empty.CPU 0.78s/1.49u sec elapsed 100.43 sec.\nINFO:  vacuuming \"pg_toast.pg_toast_165961\"INFO:  index \"pg_toast_165961_index\" now contains 0 row versions in 1 pagesDETAIL:  0 index row versions were removed.0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.01 sec.INFO:  \"pg_toast_165961\": found 0 removable, 0 nonremovable row versions in 0 pagesDETAIL:  0 dead row versions cannot be removed yet.There were 0 unused item pointers.\n0 pages contain useful free space.0 pages are entirely empty.CPU 0.00s/0.00u sec elapsed 0.06 sec.", "msg_date": "Fri, 26 Nov 2010 15:22:21 +0100", "msg_from": "felix <[email protected]>", "msg_from_op": true, "msg_subject": "Update problem on large table" }, { "msg_contents": "On Fri, Nov 26, 2010 at 6:22 AM, felix <[email protected]> wrote:\n>\n> Hello,\n> I have a very 
large table that I'm not too fond of.  I'm revising the design\n> now.\n> Up until now its been insert only, storing tracking codes from incoming\n> webtraffic.\n> It has 8m rows\n> It appears to insert fine, but simple updates using psql are hanging.\n> update ONLY traffic_tracking2010 set src_content_type_id = 90 where id =\n> 90322;\n> I am also now trying to remove the constraints, this also hangs.\n> alter table traffic_tracking2010 drop constraint\n> traffic_tracking2010_src_content_type_id_fkey;\n> thanks in advance for any advice.\n\nTry your update or alter and in another session, run the following\nquery and look at the \"waiting\" column. A \"true\" value means that that\ntransaction is blocked.\n\nSELECT pg_stat_activity.datname, pg_stat_activity.procpid,\npg_stat_activity.usename, pg_stat_activity.current_query,\npg_stat_activity.waiting,\npg_stat_activity.query_start,pg_stat_activity.client_addr\nFROM pg_stat_activity\nWHERE ((pg_stat_activity.procpid <> pg_backend_pid())\nAND (pg_stat_activity.current_query <> '<IDLE>'))\nORDER BY pg_stat_activity.query_start;\n", "msg_date": "Fri, 26 Nov 2010 08:00:42 -0800", "msg_from": "bricklen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update problem on large table" }, { "msg_contents": "Ok, I caught one : an update that is stuck in waiting.\n\nthe first one blocks the second one.\n\nns | 5902 | nssql | UPDATE \"fastadder_fastadderstatus\" SET \"built\"\n= false WHERE \"fastadder_fastadderstatus\".\"service_id\" = 1\n\n\n\n\n\n\n\n\n\n\n\n | f |\n2010-12-04 13:44:38.5228-05 | 127.0.0.1\n\n ns | 7000 | nssql | UPDATE \"fastadder_fastadderstatus\" SET\n\"last_sent\" = E'2010-12-04 13:50:51.452800', \"sent\" = true WHERE\n(\"fastadder_fastadderstatus\".\"built\" = true AND\n\"fastadder_fastadderstatus\".\"service_id\" = 1 )\n\n\n\n\n\n\n\n\n\n\n | t | 2010-12-04 13:50:51.4628-05\n| 127.0.0.1\n\nis it possible to release the lock and/or cancel the query ? the process\nthat initiated the first one is long ceased.\n\n\n\n\n\n\nOn Fri, Nov 26, 2010 at 6:02 PM, bricklen <[email protected]> wrote:\n\n> No problem!\n>\n> On Fri, Nov 26, 2010 at 8:34 AM, felix <[email protected]> wrote:\n> > thanks !\n> > of course now, 2 hours later, the queries run fine.\n> > the first one was locked up for so long that I interrupted it.\n> > maybe that caused it to get blocked\n> > saved your query for future reference, thanks again !\n> > On Fri, Nov 26, 2010 at 5:00 PM, bricklen <[email protected]> wrote:\n> >>\n> >> On Fri, Nov 26, 2010 at 6:22 AM, felix <[email protected]> wrote:\n> >> >\n> >> > Hello,\n> >> > I have a very large table that I'm not too fond of. I'm revising the\n> >> > design\n> >> > now.\n> >> > Up until now its been insert only, storing tracking codes from\n> incoming\n> >> > webtraffic.\n> >> > It has 8m rows\n> >> > It appears to insert fine, but simple updates using psql are hanging.\n> >> > update ONLY traffic_tracking2010 set src_content_type_id = 90 where id\n> =\n> >> > 90322;\n> >> > I am also now trying to remove the constraints, this also hangs.\n> >> > alter table traffic_tracking2010 drop constraint\n> >> > traffic_tracking2010_src_content_type_id_fkey;\n> >> > thanks in advance for any advice.\n> >>\n> >> Try your update or alter and in another session, run the following\n> >> query and look at the \"waiting\" column. 
A \"true\" value means that that\n> >> transaction is blocked.\n> >>\n> >> SELECT pg_stat_activity.datname, pg_stat_activity.procpid,\n> >> pg_stat_activity.usename, pg_stat_activity.current_query,\n> >> pg_stat_activity.waiting,\n> >> pg_stat_activity.query_start,pg_stat_activity.client_addr\n> >> FROM pg_stat_activity\n> >> WHERE ((pg_stat_activity.procpid <> pg_backend_pid())\n> >> AND (pg_stat_activity.current_query <> '<IDLE>'))\n> >> ORDER BY pg_stat_activity.query_start;\n> >>\n> >> --\n> >> Sent via pgsql-performance mailing list (\n> [email protected])\n> >> To make changes to your subscription:\n> >> http://www.postgresql.org/mailpref/pgsql-performance\n> >\n> >\n>\n\nOk, I caught one : an update that is stuck in waiting.the first one blocks the second one.ns      |    5902 | nssql   | UPDATE \"fastadder_fastadderstatus\" SET \"built\" = false WHERE \"fastadder_fastadderstatus\".\"service_id\" = 1                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       | f       | 2010-12-04 13:44:38.5228-05   | 127.0.0.1\n ns      |    7000 | nssql   | UPDATE \"fastadder_fastadderstatus\" SET \"last_sent\" = E'2010-12-04 13:50:51.452800', \"sent\" = true WHERE (\"fastadder_fastadderstatus\".\"built\" = true  AND \"fastadder_fastadderstatus\".\"service_id\" = 1 )                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         | t       | 2010-12-04 13:50:51.4628-05   | 127.0.0.1\nis it possible to release the lock and/or cancel the query ?  
the process that initiated the first one is long ceased.\nOn Fri, Nov 26, 2010 at 6:02 PM, bricklen <[email protected]> wrote:\nNo problem!\n\nOn Fri, Nov 26, 2010 at 8:34 AM, felix <[email protected]> wrote:\n> thanks !\n> of course now, 2 hours later, the queries run fine.\n> the first one was locked up for so long that I interrupted it.\n> maybe that caused it to get blocked\n> saved your query for future reference, thanks again !\n> On Fri, Nov 26, 2010 at 5:00 PM, bricklen <[email protected]> wrote:\n>>\n>> On Fri, Nov 26, 2010 at 6:22 AM, felix <[email protected]> wrote:\n>> >\n>> > Hello,\n>> > I have a very large table that I'm not too fond of.  I'm revising the\n>> > design\n>> > now.\n>> > Up until now its been insert only, storing tracking codes from incoming\n>> > webtraffic.\n>> > It has 8m rows\n>> > It appears to insert fine, but simple updates using psql are hanging.\n>> > update ONLY traffic_tracking2010 set src_content_type_id = 90 where id =\n>> > 90322;\n>> > I am also now trying to remove the constraints, this also hangs.\n>> > alter table traffic_tracking2010 drop constraint\n>> > traffic_tracking2010_src_content_type_id_fkey;\n>> > thanks in advance for any advice.\n>>\n>> Try your update or alter and in another session, run the following\n>> query and look at the \"waiting\" column. A \"true\" value means that that\n>> transaction is blocked.\n>>\n>> SELECT pg_stat_activity.datname, pg_stat_activity.procpid,\n>> pg_stat_activity.usename, pg_stat_activity.current_query,\n>> pg_stat_activity.waiting,\n>> pg_stat_activity.query_start,pg_stat_activity.client_addr\n>> FROM pg_stat_activity\n>> WHERE ((pg_stat_activity.procpid <> pg_backend_pid())\n>> AND (pg_stat_activity.current_query <> '<IDLE>'))\n>> ORDER BY pg_stat_activity.query_start;\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>", "msg_date": "Sat, 4 Dec 2010 20:45:19 +0100", "msg_from": "felix <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Update problem on large table" }, { "msg_contents": "On Sat, Dec 4, 2010 at 11:45 AM, felix <[email protected]> wrote:\n> Ok, I caught one : an update that is stuck in waiting.\n> the first one blocks the second one.\n> ns      |    5902 | nssql   | UPDATE \"fastadder_fastadderstatus\" SET \"built\"\n> = false WHERE \"fastadder_fastadderstatus\".\"service_id\" = 1\n\nNot sure if anyone replied about killing your query, but you can do it like so:\n\nselect pg_cancel_backend(5902); -- assuming 5902 is the pid of the\nquery you want canceled.\n", "msg_date": "Mon, 6 Dec 2010 11:46:04 -0800", "msg_from": "bricklen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update problem on large table" }, { "msg_contents": "On Mon, Dec 6, 2010 at 1:46 PM, bricklen <[email protected]> wrote:\n> On Sat, Dec 4, 2010 at 11:45 AM, felix <[email protected]> wrote:\n>> Ok, I caught one : an update that is stuck in waiting.\n>> the first one blocks the second one.\n>> ns      |    5902 | nssql   | UPDATE \"fastadder_fastadderstatus\" SET \"built\"\n>> = false WHERE \"fastadder_fastadderstatus\".\"service_id\" = 1\n>\n> Not sure if anyone replied about killing your query, but you can do it like so:\n>\n> select pg_cancel_backend(5902);  -- assuming 5902 is the pid of the\n> query you want canceled.\n\nHow does this differ from just killing the pid?\n\n-- \nJon\n", "msg_date": "Mon, 6 Dec 2010 13:48:47 -0600", "msg_from": "Jon Nelson 
<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update problem on large table" }, { "msg_contents": "On Mon, Dec 6, 2010 at 2:48 PM, Jon Nelson <[email protected]> wrote:\n> On Mon, Dec 6, 2010 at 1:46 PM, bricklen <[email protected]> wrote:\n>> Not sure if anyone replied about killing your query, but you can do it like so:\n>>\n>> select pg_cancel_backend(5902);  -- assuming 5902 is the pid of the\n>> query you want canceled.\n>\n> How does this differ from just killing the pid?\n\npg_cancel_backend(5902) does the same thing as:\n kill -SIGINT 5902\n\nJosh\n", "msg_date": "Mon, 6 Dec 2010 15:24:31 -0500", "msg_from": "Josh Kupershmidt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update problem on large table" }, { "msg_contents": "On Mon, Dec 06, 2010 at 03:24:31PM -0500, Josh Kupershmidt wrote:\n> On Mon, Dec 6, 2010 at 2:48 PM, Jon Nelson <[email protected]> wrote:\n> > On Mon, Dec 6, 2010 at 1:46 PM, bricklen <[email protected]> wrote:\n> >> Not sure if anyone replied about killing your query, but you can do it like so:\n> >>\n> >> select pg_cancel_backend(5902); ?-- assuming 5902 is the pid of the\n> >> query you want canceled.\n> >\n> > How does this differ from just killing the pid?\n> \n> pg_cancel_backend(5902) does the same thing as:\n> kill -SIGINT 5902\n> \n> Josh\n> \n\nYes, but you can use it from within the database. The kill command\nrequires shell access to the backend.\n\nCheers,\nKen\n", "msg_date": "Mon, 6 Dec 2010 14:26:56 -0600", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update problem on large table" }, { "msg_contents": "thanks for the replies !,\n\nbut actually I did figure out how to kill it\n\nbut pb_cancel_backend didn't work. here's some notes:\n\nthis has been hung for 5 days:\nns | 32681 | nssql | <IDLE> in transaction | f | 2010-12-01\n15\n\nresulting in: \"fastadder_fastadderstatus\": scanned 3000 of 58551 pages,\ncontaining 13587 live rows and 254709 dead rows;\nand resulting in general pandemonium\n\n\nyou need to become the postgres superuser to use pg_cancel_backend:\n su postgres\n psql\n\nand then:\n\nselect pg_cancel_backend(32681);\n\nbut this does not kill the IDLE in transaction processes.\nit returns true, but its still there\n\nfrom the linux shell I tried:\n\npg_ctl kill INT 32681\n\nbut it still will not die\n\nthe docs for pg_ctl state:\n\"Use pb_ctl --help to see a list of supported signal names.\"\n\ndoing so does indeed tell me the names:\n\nHUP INT QUIT ABRT TERM USR1 USR2\n\nbut nothing about them whatseover :)\n\nthrowing caution to the wind:\n\npg_ctl kill TERM 32681\n\nand that did it\n\nran VACUUM and now performance has returned to normal.\n\nlessons learned.\n\nI guess as Josh says, pg_cancel_backend is the same as SIGINT, which also\nfailed for me using pg_ctl.\nnot sure why. 
the hung transaction was doing something like update table\nset field = null where service_id = x\n\n\n\nOn Mon, Dec 6, 2010 at 9:26 PM, Kenneth Marshall <[email protected]> wrote:\n\n> On Mon, Dec 06, 2010 at 03:24:31PM -0500, Josh Kupershmidt wrote:\n> > On Mon, Dec 6, 2010 at 2:48 PM, Jon Nelson <[email protected]<jnelson%[email protected]>>\n> wrote:\n> > > On Mon, Dec 6, 2010 at 1:46 PM, bricklen <[email protected]> wrote:\n> > >> Not sure if anyone replied about killing your query, but you can do it\n> like so:\n> > >>\n> > >> select pg_cancel_backend(5902); ?-- assuming 5902 is the pid of the\n> > >> query you want canceled.\n> > >\n> > > How does this differ from just killing the pid?\n> >\n> > pg_cancel_backend(5902) does the same thing as:\n> > kill -SIGINT 5902\n> >\n> > Josh\n> >\n>\n> Yes, but you can use it from within the database. The kill command\n> requires shell access to the backend.\n>\n> Cheers,\n> Ken\n>\n\nthanks for the replies !, but actually I did figure out how to kill itbut pb_cancel_backend didn't work.  here's some notes:this has been hung for 5 days:\nns      |   32681 | nssql   | <IDLE> in transaction | f       | 2010-12-01 15resulting in:  \"fastadder_fastadderstatus\": scanned 3000 of 58551 pages, containing 13587 live rows and 254709 dead rows; \nand resulting in general pandemonium you need to become the postgres superuser to use pg_cancel_backend: su postgres  psql\nand then:select pg_cancel_backend(32681);but this does not kill the IDLE in transaction processes.it returns true, but its still there\nfrom the linux shell I tried:pg_ctl kill INT 32681but it still will not diethe docs for pg_ctl state:\"Use pb_ctl --help to see a list of supported signal names.\"\ndoing so does indeed tell me the names:HUP INT QUIT ABRT TERM USR1 USR2but nothing about them whatseover :)throwing caution to the wind:\npg_ctl kill TERM 32681and that did itran VACUUM and now performance has returned to normal.lessons learned.\nI guess as Josh says, pg_cancel_backend is the same as SIGINT, which also failed for me using pg_ctl.  not sure why.  the hung transaction was doing something like update table set field = null where service_id = x\nOn Mon, Dec 6, 2010 at 9:26 PM, Kenneth Marshall <[email protected]> wrote:\nOn Mon, Dec 06, 2010 at 03:24:31PM -0500, Josh Kupershmidt wrote:\n> On Mon, Dec 6, 2010 at 2:48 PM, Jon Nelson <[email protected]> wrote:\n> > On Mon, Dec 6, 2010 at 1:46 PM, bricklen <[email protected]> wrote:\n> >> Not sure if anyone replied about killing your query, but you can do it like so:\n> >>\n> >> select pg_cancel_backend(5902); ?-- assuming 5902 is the pid of the\n> >> query you want canceled.\n> >\n> > How does this differ from just killing the pid?\n>\n> pg_cancel_backend(5902) does the same thing as:\n>   kill -SIGINT 5902\n>\n> Josh\n>\n\nYes, but you can use it from within the database. The kill command\nrequires shell access to the backend.\n\nCheers,\nKen", "msg_date": "Mon, 6 Dec 2010 22:31:44 +0100", "msg_from": "felix <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Update problem on large table" }, { "msg_contents": "On Mon, Dec 6, 2010 at 4:31 PM, felix <[email protected]> wrote:\n>\n> thanks for the replies !,\n> but actually I did figure out how to kill it\n> but pb_cancel_backend didn't work.  
here's some notes:\n> this has been hung for 5 days:\n> ns      |   32681 | nssql   | <IDLE> in transaction | f       | 2010-12-01\n> 15\n\nRight, pg_cancel_backend() isn't going to help when the session you're\ntrying to kill is '<IDLE> in transaction' -- there's no query to be\nkilled. If this '<IDLE> in transaction' session was causing problems\nby blocking other transactions, you should look at the application\nrunning these queries and figure out why it's hanging out in this\nstate. Staying like that for 5 days is not a good sign, and can cause\nalso problems with e.g. autovacuum.\n\n[snip]\n\n> but it still will not die\n> the docs for pg_ctl state:\n> \"Use pb_ctl --help to see a list of supported signal names.\"\n> doing so does indeed tell me the names:\n> HUP INT QUIT ABRT TERM USR1 USR2\n> but nothing about them whatseover :)\n\nI agree this could be better documented. There's a brief mention at:\n http://www.postgresql.org/docs/current/static/app-postgres.html#AEN77350\n \"To cancel a running query, send the SIGINT signal to the process\nrunning that command.\"\n\nthough that snippet of information is out-of-place on a page about the\npostmaster, and SIGINT vs. SIGTERM for individual backends isn't\ndiscussed there at any rate.\n\nAt any rate, as you discovered, you have to send SIGTERM to the\nbackend to kill off an '<IDLE> in transaction' session. If you're\nusing 8.4 or newer, you have pg_terminate_backend() as a SQL wrapper\nfor SIGTERM. If you're using an older version, be careful, see e.g.\n http://archives.postgresql.org/pgsql-admin/2010-04/msg00274.php\n\nJosh\n", "msg_date": "Tue, 7 Dec 2010 17:25:19 -0500", "msg_from": "Josh Kupershmidt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update problem on large table" } ]
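A follow-up sketch for the thread above, assuming PostgreSQL 8.4 or newer (where pg_terminate_backend() is the SQL-level counterpart of pg_ctl kill TERM): locate sessions that sit '<IDLE> in transaction' and terminate them from psql as a superuser instead of shelling out. The pid 32681 is the one from the thread; the 10-minute threshold is an arbitrary example.

-- sessions holding a transaction open while idle for more than 10 minutes
SELECT procpid, usename, xact_start, current_query
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
  AND xact_start < now() - interval '10 minutes';

-- sends SIGTERM, same effect as pg_ctl kill TERM 32681
SELECT pg_terminate_backend(32681);

pg_cancel_backend() only interrupts a query that is currently running, which is why it returned true yet left the idle-in-transaction session, and the locks it held, in place.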
[ { "msg_contents": "Thanks for reply.\n\n\nFirst query:\n\nSELECT\n R.\"Osoba weryfikująca\" AS \"Osoba\",\n R.\"LP\"::text AS \"Sprawa\",\n A.\"NKA\",\n A.\"NTA\",\n Sum(A.\"Ile\")::text AS \"Ilość CDR\"\nFROM\n ONLY \"NumeryA\" A\nLEFT JOIN\n \"Rejestr stacji do naprawy\" R\nON\n A.\"NKA\" = R.\"Numer kierunkowy\"\n and A.\"NTA\" like R.\"Numer stacji\"\n and substr(A.\"NTA\",1,5) = substr(R.\"Numer stacji\",1,5)\n\nWHERE\n \"DataPliku\" >= current_date-4*30\n and \"KodBłędu\"=74::text\n and \"Data weryfikacji\" >= current_date-4*30\n\nGROUP BY R.\"Osoba weryfikująca\",R.\"LP\",A.\"NKA\", A.\"NTA\"\nORDER BY Sum(\"Ile\") DESC\nLIMIT 5000\n\n-----------------------\nExplain analyze\n-----------------------\n\n\"Limit (cost=8806.28..8806.30 rows=5 width=28) (actual\ntime=2575.143..2607.092 rows=5000 loops=1)\"\n\" -> Sort (cost=8806.28..8806.30 rows=5 width=28) (actual\ntime=2575.135..2586.797 rows=5000 loops=1)\"\n\" Sort Key: (sum(a.\"Ile\"))\"\n\" Sort Method: quicksort Memory: 929kB\"\n\" -> HashAggregate (cost=8806.12..8806.23 rows=5 width=28)\n(actual time=2500.549..2544.315 rows=9564 loops=1)\"\n\" -> Merge Join (cost=8196.81..8806.04 rows=5 width=28)\n(actual time=1583.222..2368.858 rows=37364 loops=1)\"\n\" Merge Cond: (((a.\"NKA\")::text = (r.\"Numer\nkierunkowy\")::text) AND ((substr((a.\"NTA\")::text, 1, 5)) =\n(substr((r.\"Numer stacji\")::text, 1, 5))))\"\n\" Join Filter: ((a.\"NTA\")::text ~~ (r.\"Numer stacji\")::text)\"\n\" -> Sort (cost=5883.01..5952.95 rows=27977\nwidth=15) (actual time=1006.220..1118.692 rows=46769 loops=1)\"\n\" Sort Key: a.\"NKA\", (substr((a.\"NTA\")::text, 1, 5))\"\n\" Sort Method: quicksort Memory: 4313kB\"\n\" -> Bitmap Heap Scan on \"NumeryA\" a\n(cost=1454.33..3816.64 rows=27977 width=15) (actual\ntime=16.331..158.007 rows=46769 loops=1)\"\n\" Recheck Cond: ((\"DataPliku\" >=\n(('now'::text)::date - 120)) AND ((\"KodBłędu\")::text = '74'::text))\"\n\" -> Bitmap Index Scan on dp_kb\n(cost=0.00..1447.34 rows=27977 width=0) (actual time=15.838..15.838\nrows=46769 loops=1)\"\n\" Index Cond: ((\"DataPliku\" >=\n(('now'::text)::date - 120)) AND ((\"KodBłędu\")::text = '74'::text))\"\n\" -> Sort (cost=2313.79..2364.81 rows=20410\nwidth=24) (actual time=576.966..703.179 rows=56866 loops=1)\"\n\" Sort Key: r.\"Numer kierunkowy\",\n(substr((r.\"Numer stacji\")::text, 1, 5))\"\n\" Sort Method: quicksort Memory: 1973kB\"\n\" -> Seq Scan on \"Rejestr stacji do naprawy\"\nr (cost=0.00..852.74 rows=20410 width=24) (actual time=0.050..143.901\nrows=20768 loops=1)\"\n\" Filter: (\"Data weryfikacji\" >=\n(('now'::text)::date - 120))\"\n\"Total runtime: 2620.220 ms\"\n\n---------------------------\nSecond query:\n----------------------------\nSELECT\n\tA.\"NKA\",\n\tA.\"NTA\",\n\tSum(\"Ile\") AS ss -- if it's in this table\nFROM\t\t\"NumeryA\" A\nWHERE\n\t A.\"DataPliku\" >= current_date-4*30\n\tand A.\"KodBłędu\"=74::text\nGROUP BY A.\"NKA\", A.\"NTA\"\n\n--------------------------------\nExplain analyze:\n--------------------------------\n\n\"HashAggregate (cost=20616.64..20643.22 rows=2798 width=15) (actual\ntime=13244.712..13284.490 rows=14288 loops=1)\"\n\" -> Append (cost=1454.33..20406.79 rows=27979 width=15) (actual\ntime=16.811..13093.395 rows=46769 loops=1)\"\n\" -> Bitmap Heap Scan on \"NumeryA\" a (cost=1454.33..3816.64\nrows=27977 width=15) (actual time=16.804..141.495 rows=46769 loops=1)\"\n\" Recheck Cond: ((\"DataPliku\" >= (('now'::text)::date -\n120)) AND ((\"KodBłędu\")::text = '74'::text))\"\n\" -> Bitmap Index Scan on dp_kb 
(cost=0.00..1447.34\nrows=27977 width=0) (actual time=16.289..16.289 rows=46769 loops=1)\"\n\" Index Cond: ((\"DataPliku\" >= (('now'::text)::date\n- 120)) AND ((\"KodBłędu\")::text = '74'::text))\"\n\" -> Seq Scan on \"NumeryA_2008\" a (cost=0.00..16590.16 rows=2\nwidth=15) (actual time=12759.731..12759.731 rows=0 loops=1)\"\n\" Filter: (((\"KodBłędu\")::text = '74'::text) AND\n(\"DataPliku\" >= (('now'::text)::date - 120)))\"\n\"Total runtime: 13314.149 ms\"\n\n\nThe first query looks to work faster than original (6s) thanks !!! :)\n\n\n\n------------\npasman\n", "msg_date": "Fri, 26 Nov 2010 16:06:18 +0100", "msg_from": "=?ISO-8859-2?Q?pasman_pasma=F1ski?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimizing query" } ]
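In the second plan above nearly all of the 13 s is spent in the sequential scan of "NumeryA_2008" (about 12.7 s to return 0 rows), while the parent table is reached through the dp_kb index. A hedged sketch of the likely fix, assuming dp_kb is an index on ("DataPliku", "KodBłędu") (its index condition suggests as much) and that the child table has no equivalent index; the index name below is made up.

CREATE INDEX numerya_2008_dp_kb ON "NumeryA_2008" ("DataPliku", "KodBłędu");
ANALYZE "NumeryA_2008";

With a matching index the child branch of the Append should turn into a bitmap scan like the parent's, instead of reading the whole 2008 table only to discard every row.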
[ { "msg_contents": "\"Christian Elmerot @ One.com\" wrote:\n \n> Highest results comes at 32 threads:\n \nIt would be interesting to see the results if you built a version of\nPostgreSQL with LOG2_NUM_LOCK_PARTITIONS set to 6 (instead of 4).\n \n-Kevin\n\n", "msg_date": "Fri, 26 Nov 2010 11:46:18 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CPUs for new databases" } ]
[ { "msg_contents": "The database for monitoring certain drone statuses is quite simple:\n\nCREATE TABLE samples (\n\tsample_id integer not null primary key,\n\tsample_timestamp timestamp not null default now()\n);\n\nCREATE TABLE drones (\n\tdrone_id integer not null primary key,\n\tdrone_log_notice character varying,\n\tcrone_coordinates point not null,\n\tdrone_temperature float,\n\tdrone_pressure float\n);\n\nCREATE TABLE drones_history (\n\tdrone_id integer not null,\n\tsample_id integer not null,\n\tdrone_log_notice character varying,\n\tdrone_temperature float,\n\tdrone_pressure float,\n\tconstraint drones_history_pk primary key (drone_id, sample_id),\n\tconstraint drones_history_fk__samples foreign key (sample_id) \nreferences samples(sample_id),\n\tconstraint drones_history_fk__drones foreign key (drone_id) references \ndrones(drone_id)\n);\n\nEvery ten to twenty minutes I receive CSV file with most of the drones \nstatuses. CSV file includes data for new drones, if they're put into \nuse. When I receive new data I load whole CSV file to a database, then \ncall stored procedure that 'deals' with that data.\n\nSo far I have around 6000 samples, around 160k drones and drones_history \nis around 25M rows.\n\nThe CSV file contains around 15k-20k of 'rows', mostly data about old \ndrones. Every now and then (on every 5th - 10th CSV-insert) there is \ndata with around 1000-5000 new drones.\n\nHere is what I do in stored procedure, after i COPYed the data from the \nCSV to temporary.drones table:\n\nFirst, I create temporary table, inside the procedure, that holds rows \nfor the new drones:\n\nCREATE TEMPORARY TABLE tmpNew ON COMMIT DROP AS\nSELECT drone_id, log_notice, coord_x, coord_y, temp, press\nFROM temp.drones WHERE NOT EXISTS (SELECT 1 FROM public.drones WHERE \npublic.drones.drone_id = temporary.drone.drone_id);\n\nThis is done in miliseconds, even if the count for the new drones is \nlarge (i've tested it with 10k new drones although I real-life action \nI'd never get more thatn 5k new drones per CSV).\n\nINSERT INTO public.drones (drone_id, drone_log_notice, coordinates, \ndrone_temperature, drone_temperature)\nSELECT drone_id, log_notice, point(coord_x, coord_y) as coordinates, \ntemp, press FROM tmpNew;\nINSERT INTO public.drones_history (sample_id, drone_id, \ndrone_log_notice, drone_temperature, drone_pressure)\nSELECT a_sample_id, drone_id, log_notice, temp, pressue FROM tmpNew;\n\nThis is also done in miliseconds.\n\nNow, I 'update' data for the existing drones, and fill in the history \ntable on those drones. First I create temporary table with just the \nchanged rows:\n\nCREATE TEMPORARY TABLE tmpUpdate ON COMMIT DROP AS\nSELECT a_batch_id, t.drone_id, t.log_notice, t.temp, t.press\n FROM temporary.drones t\n JOIN public.drones p\n ON t.drone_id = p.drone_id\nWHERE p.drone_log_notice != t.log_notice OR p.temp != t.temp OR p.press \n!= t.press;\n\nNow, that part is also fast. I usualy have around 100-1000 drones that \nchanged 'state', but sometimes I get even half of the drones change \nstates (around 50k) and creation of the tmpUpdate takes no more than ten \nto twenty milliseconds.\n\nThis is the slow part:\nINSERT INTO drones_history (sample_id, drone_id, drone_log_notice, \ndrone_temperature, drone_pressure)\nSELECT * FROM tmpUpdate;\n\nFor 100 rows this takes around 2 seconds. For 1000 rows this takes \naround 40 seconds. For 5000 rows this takes around 5 minutes.\nFor 50k rows this takes around 30 minutes! 
Now this is where I start lag \nbecause I get new CSV every 10 minutes or so.\n\nAnd the last part is to upadte the actual drones table:\nUPDATE public.drones p\nSET drone_log_notice = t.log_notice, drone_temperature = t.temp, \ndrone_pressure = t.press\nFROM temporary.drones t\nWHERE t.drone_id = p.drone_id\nAND (t.log_notice != p.drone_log_notice OR t.temp != p.drone_temperature \nOR p.press != t.drone_pressure);\n\nThis is also very fast, even when almost half the table is updated the \nUPDATE takes around 10 seconds. Usualy it's around 30-50 ms.\n\nThe machine I'm doing this has 4 GB of RAM, dual-Xeon something (3GHz). \nTwo SAS drives in mirror, capable of around 100 MB/s in sequential r/w \n(i know it means nothing, but just to get an idea).\n\nDatabase is around 2 GB is size (pg_database_size). When I dump/recreate \nthe database I can speedup things a bit, but after half day of \noperations the INSERTs are slow again.\nWhen I do dump/restore of the database I get around 40/50 MB/sec \nreding/writing from the disk (COPYing data, PK/FK constraints creation), \nbut when that INSERT gets stuck io-wait goes to skies - iostat shows \nthat Postgres is mainly reading from the disks, around 800k/sec - 1024k/sec.\n\nI've set shared_buffers to 256M, work_mem to 96M, wal_buffers to 16M and \ncheckpoint_segments to 16. I've turned off autovaccum, I do \nanalyze/vacuum after each insert-job is done, after TRUNCATEing \ntemporary.drones table.\n\nOut of despair I tried to set fsync=off, but that gave me just a small \nperformance improvement.\n\nWhen I remove foreign constraints (drones_history_fk__samples and \ndrones_history_fk__drones) (I leave the primary key on drones_history) \nthan that INSERT, even for 50k rows, takes no more than a second.\n\nSo, my question is - is there anything I can do to make INSERTS with PK \nfaster? Or, since all the reference checking is done inside the \nprocedure for loading data, shall I abandon those constraints entirely?\n\n\tMario\n\n", "msg_date": "Sun, 28 Nov 2010 12:46:11 +0100", "msg_from": "Mario Splivalo <[email protected]>", "msg_from_op": true, "msg_subject": "SELECT INTO large FKyed table is slow" }, { "msg_contents": "\n> When I remove foreign constraints (drones_history_fk__samples and \n> drones_history_fk__drones) (I leave the primary key on drones_history) \n> than that INSERT, even for 50k rows, takes no more than a second.\n>\n> So, my question is - is there anything I can do to make INSERTS with PK \n> faster? Or, since all the reference checking is done inside the \n> procedure for loading data, shall I abandon those constraints entirely?\n>\n> \tMario\n\nMaybe... or not. Can you post details about :\n\n- the foreign keys\n- the tables that are referred to (including indexes)\n\n\nCREATE TABLE foo (x INTEGER PRIMARY KEY); INSERT INTO foo SELECT * FROM \ngenerate_series( 1,100000 );\nTemps : 766,182 ms\ntest=> VACUUM ANALYZE foo;\nTemps : 71,938 ms\ntest=> CREATE TABLE bar ( x INTEGER REFERENCES foo(x) );\nCREATE TABLE\ntest=> INSERT INTO bar SELECT * FROM generate_series( 1,100000 );\nTemps : 2834,430 ms\n\nAs you can see, 100.000 FK checks take less than 3 seconds on this very \nsimple example. 
There is probably something that needs fixing.\n", "msg_date": "Sun, 28 Nov 2010 19:56:17 +0100", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT INTO large FKyed table is slow" }, { "msg_contents": "On 11/28/2010 07:56 PM, Pierre C wrote:\n>\n>> When I remove foreign constraints (drones_history_fk__samples and\n>> drones_history_fk__drones) (I leave the primary key on drones_history)\n>> than that INSERT, even for 50k rows, takes no more than a second.\n>>\n>> So, my question is - is there anything I can do to make INSERTS with\n>> PK faster? Or, since all the reference checking is done inside the\n>> procedure for loading data, shall I abandon those constraints entirely?\n>>\n>> Mario\n>\n> Maybe... or not. Can you post details about :\n>\n> - the foreign keys\n> - the tables that are referred to (including indexes)\n\nI pasted DDL at the begining of my post. The only indexes tables have \nare the ones created because of PK constraints. Table drones has around \n100k rows. Table drones_history has around 30M rows. I'm not sure what \nadditional info you'd want but I'll be more than happy to provide more \nrelevant information.\n\n\n> CREATE TABLE foo (x INTEGER PRIMARY KEY); I\n> generate_series( 1,100000 );\n> Temps : 766,182 ms\n> test=> VACUUM ANALYZE foo;\n> Temps : 71,938 ms\n> test=> CREATE TABLE bar ( x INTEGER REFERENCES foo(x) );\n> CREATE TABLE\n> test=> INSERT INTO bar SELECT * FROM generate_series( 1,100000 );\n> Temps : 2834,430 ms\n>\n> As you can see, 100.000 FK checks take less than 3 seconds on this very\n> simple example. There is probably something that needs fixing.\n\n\nYes, when the FKyed table is small enough inserts are quite fast. But \nwhen they grow larger the whole system slows down.\n\nI just repeated your test and I'm getting similar results - on my \ndesktop. I'll try to assemble some code to recreate workload and see if \nI'll run into same problems.\n\n\tMario\n", "msg_date": "Sun, 28 Nov 2010 20:08:22 +0100", "msg_from": "Mario Splivalo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SELECT INTO large FKyed table is slow" }, { "msg_contents": "\n> I pasted DDL at the begining of my post.\n\nAh, sorry, didn't see it ;)\n\n> The only indexes tables have are the ones created because of PK \n> constraints. Table drones has around 100k rows. Table drones_history has \n> around 30M rows. I'm not sure what additional info you'd want but I'll \n> be more than happy to provide more relevant information.\n\nCan you post the following :\n\n- pg version\n- output of VACCUM ANALYZE VERBOSE for your 2 tables\n", "msg_date": "Sun, 28 Nov 2010 22:50:42 +0100", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT INTO large FKyed table is slow" }, { "msg_contents": "On 29/11/10 00:46, Mario Splivalo wrote:\n>\n> This is the slow part:\n> INSERT INTO drones_history (sample_id, drone_id, drone_log_notice, \n> drone_temperature, drone_pressure)\n> SELECT * FROM tmpUpdate;\n>\n> For 100 rows this takes around 2 seconds. For 1000 rows this takes \n> around 40 seconds. For 5000 rows this takes around 5 minutes.\n> For 50k rows this takes around 30 minutes! Now this is where I start \n> lag because I get new CSV every 10 minutes or so.\n\nHave you created indexes on drones_history(sample_id) and \ndrones_history(drone_id)? 
That would probably help speed up your INSERT \nquite a bit if you have not done so.\n\nAlso it would be worthwhile for you to post the output of:\n\nEXPLAIN ANALYZE INSERT INTO drones_history (sample_id, drone_id, \ndrone_log_notice, drone_temperature, drone_pressure)\nSELECT * FROM tmpUpdate;\n\nto the list, so we can see what is taking the time.\n\nCheers\n\nMark\n", "msg_date": "Mon, 29 Nov 2010 20:11:39 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT INTO large FKyed table is slow" }, { "msg_contents": "On 11/28/2010 10:50 PM, Pierre C wrote:\n>\n>> I pasted DDL at the begining of my post.\n>\n> Ah, sorry, didn't see it ;)\n>\n>> The only indexes tables have are the ones created because of PK\n>> constraints. Table drones has around 100k rows. Table drones_history\n>> has around 30M rows. I'm not sure what additional info you'd want but\n>> I'll be more than happy to provide more relevant information.\n>\n> Can you post the following :\n>\n> - pg version\n> - output of VACCUM ANALYZE VERBOSE for your 2 tables\n\nHere it is:\n\nrealm_51=# vacuum analyze verbose drones;\nINFO: vacuuming \"public.drones\"\nINFO: scanned index \"drones_pk\" to remove 242235 row versions\nDETAIL: CPU 0.02s/0.11u sec elapsed 0.28 sec.\nINFO: \"drones\": removed 242235 row versions in 1952 pages\nDETAIL: CPU 0.01s/0.02u sec elapsed 0.03 sec.\nINFO: index \"drones_pk\" now contains 174068 row versions in 721 pages\nDETAIL: 107716 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"drones\": found 486 removable, 174068 nonremovable row versions \nin 1958 out of 1958 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 64 unused item pointers.\n0 pages are entirely empty.\nCPU 0.22s/0.90u sec elapsed 22.29 sec.\nINFO: vacuuming \"pg_toast.pg_toast_2695558\"\nINFO: index \"pg_toast_2695558_index\" now contains 0 row versions in 1 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"pg_toast_2695558\": found 0 removable, 0 nonremovable row \nversions in 0 out of 0 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: analyzing \"public.drones\"\nINFO: \"drones\": scanned 1958 of 1958 pages, containing 174068 live rows \nand 0 dead rows; 174068 rows in sample, 174068 estimated total rows\nVACUUM\nrealm_51=# vacuum analyze verbose drones_history;\nINFO: vacuuming \"public.drones_history\"\nINFO: index \"drones_history_pk\" now contains 25440352 row versions in \n69268 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.38s/0.12u sec elapsed 16.56 sec.\nINFO: \"drones_history\": found 0 removable, 16903164 nonremovable row \nversions in 129866 out of 195180 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 2.00s/1.42u sec elapsed 49.24 sec.\nINFO: vacuuming \"pg_toast.pg_toast_2695510\"\nINFO: index \"pg_toast_2695510_index\" now contains 0 row versions in 1 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"pg_toast_2695510\": found 0 removable, 0 nonremovable row \nversions in 0 out of 0 
pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: analyzing \"public.drones_history\"\nINFO: \"drones_history\": scanned 195180 of 195180 pages, containing \n25440352 live rows and 0 dead rows; 600000 rows in sample, 25440352 \nestimated total rows\nVACUUM\nrealm_51=# select version();\n version \n\n---------------------------------------------------------------------------------------------\n PostgreSQL 8.4.2 on i686-pc-linux-gnu, compiled by GCC gcc (Debian \n4.3.2-1.1) 4.3.2, 32-bit\n(1 row)\n\n\n\tMario\n", "msg_date": "Mon, 29 Nov 2010 13:23:51 +0100", "msg_from": "Mario Splivalo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SELECT INTO large FKyed table is slow" }, { "msg_contents": "On 11/29/2010 08:11 AM, Mark Kirkwood wrote:\n> On 29/11/10 00:46, Mario Splivalo wrote:\n>>\n>> This is the slow part:\n>> INSERT INTO drones_history (sample_id, drone_id, drone_log_notice,\n>> drone_temperature, drone_pressure)\n>> SELECT * FROM tmpUpdate;\n>>\n>> For 100 rows this takes around 2 seconds. For 1000 rows this takes\n>> around 40 seconds. For 5000 rows this takes around 5 minutes.\n>> For 50k rows this takes around 30 minutes! Now this is where I start\n>> lag because I get new CSV every 10 minutes or so.\n>\n> Have you created indexes on drones_history(sample_id) and\n> drones_history(drone_id)? That would probably help speed up your INSERT\n> quite a bit if you have not done so.\n\nYes, since (sample_id, drone_id) is primary key, postgres created \ncomposite index on those columns. Are you suggesting I add two more \nindexes, one for drone_id and one for sample_id?\n\n> Also it would be worthwhile for you to post the output of:\n>\n> EXPLAIN ANALYZE INSERT INTO drones_history (sample_id, drone_id,\n> drone_log_notice, drone_temperature, drone_pressure)\n> SELECT * FROM tmpUpdate;\n>\n> to the list, so we can see what is taking the time.\n\nIs there a way to do so inside plpgsql function?\n\nI can recreate the whole process within psql and then post the explain \nanalyze, it would just take me some time to do so. I'll post as soon as \nI'm done.\n\n\tMario\n", "msg_date": "Mon, 29 Nov 2010 13:30:44 +0100", "msg_from": "Mario Splivalo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SELECT INTO large FKyed table is slow" }, { "msg_contents": "> realm_51=# vacuum analyze verbose drones;\n> INFO: vacuuming \"public.drones\"\n> INFO: scanned index \"drones_pk\" to remove 242235 row versions\n> DETAIL: CPU 0.02s/0.11u sec elapsed 0.28 sec.\n> INFO: \"drones\": removed 242235 row versions in 1952 pages\n> DETAIL: CPU 0.01s/0.02u sec elapsed 0.03 sec.\n> INFO: index \"drones_pk\" now contains 174068 row versions in 721 pages\n> DETAIL: 107716 index row versions were removed.\n> 0 index pages have been deleted, 0 are currently reusable.\n\nAs you can see your index contains 174068 active rows and 242235 dead rows \nthat probably should have been removed a long time ago by autovacuum, but \nyou seem to have it turned off. 
It does not take a long time to vacuum \nthis table (only 0.3 sec) so it is not a high cost, you should enable \nautovacuum and let it do the job (note that this doesn't stop you from \nmanual vacuuming after big updates).\n\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: \"drones\": found 486 removable, 174068 nonremovable row versions \n> in 1958 out of 1958 pages\n> DETAIL: 0 dead row versions cannot be removed yet.\n> There were 64 unused item pointers.\n> 0 pages are entirely empty.\n> CPU 0.22s/0.90u sec elapsed 22.29 sec.\n\nHere, the table itself seems quite normal... strange.\n\n> INFO: vacuuming \"pg_toast.pg_toast_2695558\"\n> INFO: index \"pg_toast_2695558_index\" now contains 0 row versions in 1 \n> pages\n> DETAIL: 0 index row versions were removed.\n> 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: \"pg_toast_2695558\": found 0 removable, 0 nonremovable row \n> versions in 0 out of 0 pages\n> DETAIL: 0 dead row versions cannot be removed yet.\n> There were 0 unused item pointers.\n> 0 pages are entirely empty.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n\nSince you don't have large fields, the toast table is empty...\n\n> realm_51=# vacuum analyze verbose drones_history;\n> INFO: vacuuming \"public.drones_history\"\n> INFO: index \"drones_history_pk\" now contains 25440352 row versions in \n> 69268 pages\n> DETAIL: 0 index row versions were removed.\n\ngood\n\n> 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.38s/0.12u sec elapsed 16.56 sec.\n> INFO: \"drones_history\": found 0 removable, 16903164 nonremovable row \n> versions in 129866 out of 195180 pages\n> DETAIL: 0 dead row versions cannot be removed yet.\n> There were 0 unused item pointers.\n> 0 pages are entirely empty.\n> CPU 2.00s/1.42u sec elapsed 49.24 sec.\n\ngood\n\n> INFO: vacuuming \"pg_toast.pg_toast_2695510\"\n> INFO: index \"pg_toast_2695510_index\" now contains 0 row versions in 1 \n> pages\n> DETAIL: 0 index row versions were removed.\n> 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: \"pg_toast_2695510\": found 0 removable, 0 nonremovable row \n> versions in 0 out of 0 pages\n> DETAIL: 0 dead row versions cannot be removed yet.\n> There were 0 unused item pointers.\n> 0 pages are entirely empty.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n\nsame as above, no toast\n\n\n> realm_51=# select version();\n> version \n> ---------------------------------------------------------------------------------------------\n> PostgreSQL 8.4.2 on i686-pc-linux-gnu, compiled by GCC gcc (Debian \n> 4.3.2-1.1) 4.3.2, 32-bit\n> (1 row)\n>\n>\n> \tMario\n\nok\n\nTry this :\n\nCLUSTER drones_pkey ON drones;\n\nThen check if your slow query gets a bit faster. If it does, try :\n\nALTER TABLE drones SET ( fillfactor = 50 );\nALTER INDEX drones_pkey SET ( fillfactor = 50 );\nCLUSTER drones_pkey ON drones; (again)\n\nThis will make the updates on this table less problematic. VACUUM it after \neach mass update.\n", "msg_date": "Mon, 29 Nov 2010 17:47:49 +0100", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT INTO large FKyed table is slow" }, { "msg_contents": "\n> Yes, since (sample_id, drone_id) is primary key, postgres created \n> composite index on those columns. 
Are you suggesting I add two more \n> indexes, one for drone_id and one for sample_id?\n\n(sample_id,drone_id) covers sample_id but if you make searches on drone_id \nalone it is likely to be very slow since you got a large number of \nsample_ids. Postgres can use any column of a multicolumn index but it is \nonly interesting performance-wise if the cardinality of the first \n(ignored) columns is low. If you often make searches on drone_id, create \nan index. But this isn't what is slowing your foreign key checks.\n\n>> Also it would be worthwhile for you to post the output of:\n>>\n>> EXPLAIN ANALYZE INSERT INTO drones_history (sample_id, drone_id,\n>> drone_log_notice, drone_temperature, drone_pressure)\n>> SELECT * FROM tmpUpdate;\n>>\n>> to the list, so we can see what is taking the time.\n>\n> Is there a way to do so inside plpgsql function?\n>\n> I can recreate the whole process within psql and then post the explain \n> analyze, it would just take me some time to do so. I'll post as soon as \n> I'm done.\n\nYes, this would be interesting.\n", "msg_date": "Mon, 29 Nov 2010 17:53:19 +0100", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT INTO large FKyed table is slow" }, { "msg_contents": "I'm just back from vacation, so I apologize in advance if I missed \nanything of importance. Here is something to consider:\n\nInstead of using the statement you used to create the table, try the \nfollowing:\n\nCREATE TABLE drones_history (\n\tdrone_id integer not null,\n\tsample_id integer not null,\n\tdrone_log_notice character varying,\n\tdrone_temperature float,\n\tdrone_pressure float,\n\tconstraint drones_history_pk primary key (drone_id, sample_id),\n\tconstraint drones_history_fk__samples foreign key (sample_id) \nreferences samples(sample_id),\n\tconstraint drones_history_fk__drones foreign key (drone_id) references drones(drone_id) deferrable\n);\n\n\nAt the beginning of the load, you should defer all of the deferrable \nconstraints, setting constraints deferred and issuing the copy \nstatement within a transaction block, like this:\n\n scott=# begin; \n BEGIN\n Time: 0.203 ms\n scott=# set constraints all deferred;\n SET CONSTRAINTS\n Time: 0.201 ms\n scott=# copy test1 from '/tmp/test1.csv';\n COPY 100\n Time: 11.939 ms\n scott=# commit;\n ERROR: insert or update on table \"test1\" violates foreign key\n constraint \"fk_tst1_deptno\"\n DETAIL: Key (col1)=(1) is not present in table \"dept\".\n\n\nOf course, that will require complete rewrite of your load script, \nbecause the errors will be checked at the commit time and transaction \ncan either fail as a whole or succeed as a whole. It's all or nothing \nsituation. How frequently do you see records with an incorrect drone_id? \nIf that happens only once in a blue moon, you may need no stinkin' \nforeign keys in the first place, you may be able\nto have a batch job that will flag all the records with an invalid \ndrone_id instead.\nFurthermore, you can make sure that you have enough shared buffers to \ncache the entire \"drones\" table. Also, do \"strace\" on the postgres \nprocess handling your session and see whether the time is spent writing \nto WAL archives. If that is slowing you down, you should consider buying \na SSD or a high end disk drive. 
I have never had such problem, but you \nshould also check whether pg_loader can do anything for you.\n\nAs far as speed is concerned, inserting with deferred foreign keys is \nalmost as fast as inserting without foreign keys:\n\n\nscott=# alter table test1 drop constraint fk_tst1_deptno;\nALTER TABLE\nTime: 16.219 ms\nscott=# copy test1 from '/tmp/test1.csv';\nCOPY 100\nTime: 10.418 ms\n\nIf you take a look at the example above, you will see that inserting \nwith a deferred FK took 11.939 milliseconds while inserting into the \nsame table without the FK took 10.418 milliseconds, the difference of \n1.5 milliseconds per 100 rows. The timing of 2 seconds per 100\nrows looks suspiciously high. Me thinks that your problem is not just \nthe foreign key, there must be something else devouring the time. You \nshould have a test instance, compiled with \"-g\" option and do profiling.\n\nMario Splivalo wrote:\n> The database for monitoring certain drone statuses is quite simple:\n>\n> CREATE TABLE samples (\n> \tsample_id integer not null primary key,\n> \tsample_timestamp timestamp not null default now()\n> );\n>\n> CREATE TABLE drones (\n> \tdrone_id integer not null primary key,\n> \tdrone_log_notice character varying,\n> \tcrone_coordinates point not null,\n> \tdrone_temperature float,\n> \tdrone_pressure float\n> );\n>\n> CREATE TABLE drones_history (\n> \tdrone_id integer not null,\n> \tsample_id integer not null,\n> \tdrone_log_notice character varying,\n> \tdrone_temperature float,\n> \tdrone_pressure float,\n> \tconstraint drones_history_pk primary key (drone_id, sample_id),\n> \tconstraint drones_history_fk__samples foreign key (sample_id) \n> references samples(sample_id),\n> \tconstraint drones_history_fk__drones foreign key (drone_id) references \n> drones(drone_id)\n> );\n>\n> Every ten to twenty minutes I receive CSV file with most of the drones \n> statuses. CSV file includes data for new drones, if they're put into \n> use. When I receive new data I load whole CSV file to a database, then \n> call stored procedure that 'deals' with that data.\n>\n> So far I have around 6000 samples, around 160k drones and drones_history \n> is around 25M rows.\n>\n> The CSV file contains around 15k-20k of 'rows', mostly data about old \n> drones. 
Every now and then (on every 5th - 10th CSV-insert) there is \n> data with around 1000-5000 new drones.\n>\n> Here is what I do in stored procedure, after i COPYed the data from the \n> CSV to temporary.drones table:\n>\n> First, I create temporary table, inside the procedure, that holds rows \n> for the new drones:\n>\n> CREATE TEMPORARY TABLE tmpNew ON COMMIT DROP AS\n> SELECT drone_id, log_notice, coord_x, coord_y, temp, press\n> FROM temp.drones WHERE NOT EXISTS (SELECT 1 FROM public.drones WHERE \n> public.drones.drone_id = temporary.drone.drone_id);\n>\n> This is done in miliseconds, even if the count for the new drones is \n> large (i've tested it with 10k new drones although I real-life action \n> I'd never get more thatn 5k new drones per CSV).\n>\n> INSERT INTO public.drones (drone_id, drone_log_notice, coordinates, \n> drone_temperature, drone_temperature)\n> SELECT drone_id, log_notice, point(coord_x, coord_y) as coordinates, \n> temp, press FROM tmpNew;\n> INSERT INTO public.drones_history (sample_id, drone_id, \n> drone_log_notice, drone_temperature, drone_pressure)\n> SELECT a_sample_id, drone_id, log_notice, temp, pressue FROM tmpNew;\n>\n> This is also done in miliseconds.\n>\n> Now, I 'update' data for the existing drones, and fill in the history \n> table on those drones. First I create temporary table with just the \n> changed rows:\n>\n> CREATE TEMPORARY TABLE tmpUpdate ON COMMIT DROP AS\n> SELECT a_batch_id, t.drone_id, t.log_notice, t.temp, t.press\n> FROM temporary.drones t\n> JOIN public.drones p\n> ON t.drone_id = p.drone_id\n> WHERE p.drone_log_notice != t.log_notice OR p.temp != t.temp OR p.press \n> != t.press;\n>\n> Now, that part is also fast. I usualy have around 100-1000 drones that \n> changed 'state', but sometimes I get even half of the drones change \n> states (around 50k) and creation of the tmpUpdate takes no more than ten \n> to twenty milliseconds.\n>\n> This is the slow part:\n> INSERT INTO drones_history (sample_id, drone_id, drone_log_notice, \n> drone_temperature, drone_pressure)\n> SELECT * FROM tmpUpdate;\n>\n> For 100 rows this takes around 2 seconds. For 1000 rows this takes \n> around 40 seconds. For 5000 rows this takes around 5 minutes.\n> For 50k rows this takes around 30 minutes! Now this is where I start lag \n> because I get new CSV every 10 minutes or so.\n>\n> And the last part is to upadte the actual drones table:\n> UPDATE public.drones p\n> SET drone_log_notice = t.log_notice, drone_temperature = t.temp, \n> drone_pressure = t.press\n> FROM temporary.drones t\n> WHERE t.drone_id = p.drone_id\n> AND (t.log_notice != p.drone_log_notice OR t.temp != p.drone_temperature \n> OR p.press != t.drone_pressure);\n>\n> This is also very fast, even when almost half the table is updated the \n> UPDATE takes around 10 seconds. Usualy it's around 30-50 ms.\n>\n> The machine I'm doing this has 4 GB of RAM, dual-Xeon something (3GHz). \n> Two SAS drives in mirror, capable of around 100 MB/s in sequential r/w \n> (i know it means nothing, but just to get an idea).\n>\n> Database is around 2 GB is size (pg_database_size). 
When I dump/recreate \n> the database I can speedup things a bit, but after half day of \n> operations the INSERTs are slow again.\n> When I do dump/restore of the database I get around 40/50 MB/sec \n> reding/writing from the disk (COPYing data, PK/FK constraints creation), \n> but when that INSERT gets stuck io-wait goes to skies - iostat shows \n> that Postgres is mainly reading from the disks, around 800k/sec - 1024k/sec.\n>\n> I've set shared_buffers to 256M, work_mem to 96M, wal_buffers to 16M and \n> checkpoint_segments to 16. I've turned off autovaccum, I do \n> analyze/vacuum after each insert-job is done, after TRUNCATEing \n> temporary.drones table.\n>\n> Out of despair I tried to set fsync=off, but that gave me just a small \n> performance improvement.\n>\n> When I remove foreign constraints (drones_history_fk__samples and \n> drones_history_fk__drones) (I leave the primary key on drones_history) \n> than that INSERT, even for 50k rows, takes no more than a second.\n>\n> So, my question is - is there anything I can do to make INSERTS with PK \n> faster? Or, since all the reference checking is done inside the \n> procedure for loading data, shall I abandon those constraints entirely?\n>\n> \tMario\n>\n>\n> \n\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n", "msg_date": "Tue, 30 Nov 2010 11:26:04 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT INTO large FKyed table is slow" }, { "msg_contents": "On 11/29/2010 05:47 PM, Pierre C wrote:\n>> realm_51=# vacuum analyze verbose drones;\n>> INFO: vacuuming \"public.drones\"\n>> INFO: scanned index \"drones_pk\" to remove 242235 row versions\n>> DETAIL: CPU 0.02s/0.11u sec elapsed 0.28 sec.\n>> INFO: \"drones\": removed 242235 row versions in 1952 pages\n>> DETAIL: CPU 0.01s/0.02u sec elapsed 0.03 sec.\n>> INFO: index \"drones_pk\" now contains 174068 row versions in 721 pages\n>> DETAIL: 107716 index row versions were removed.\n>> 0 index pages have been deleted, 0 are currently reusable.\n>\n> As you can see your index contains 174068 active rows and 242235 dead\n> rows that probably should have been removed a long time ago by\n> autovacuum, but you seem to have it turned off. It does not take a long\n> time to vacuum this table (only 0.3 sec) so it is not a high cost, you\n> should enable autovacuum and let it do the job (note that this doesn't\n> stop you from manual vacuuming after big updates).\n\nYes, you're right. I was doing some testing and I neglected to enable \nvacuuming after inserts. But what this shows is that table drones is \nhaving dead rows, and that table does get updated a lot. However, I \ndon't have any performance problems here. 
The UPDATE takes no more than \n10 seconds even if I update 50k (out of 150k) rows.\n\nI disabled autovacuum because I got a lot of \"WARNING: pgstat wait \ntimeout\" and I could see the autovacuum job (pg_stat_activity) running \nduring the run of the plpgsql function that handles inserts.\n\nI left the autovacuum off but I do VACUUM after each CSV insert.\n\n> good\n>\n>> 0 index pages have been deleted, 0 are currently reusable.\n>> CPU 0.38s/0.12u sec elapsed 16.56 sec.\n>> INFO: \"drones_history\": found 0 removable, 16903164 nonremovable row\n>> versions in 129866 out of 195180 pages\n>> DETAIL: 0 dead row versions cannot be removed yet.\n>> There were 0 unused item pointers.\n>> 0 pages are entirely empty.\n>> CPU 2.00s/1.42u sec elapsed 49.24 sec.\n>\n> good\n>\n>> INFO: vacuuming \"pg_toast.pg_toast_2695510\"\n>> INFO: index \"pg_toast_2695510_index\" now contains 0 row versions in 1\n>> pages\n>> DETAIL: 0 index row versions were removed.\n>> 0 index pages have been deleted, 0 are currently reusable.\n>> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n>> INFO: \"pg_toast_2695510\": found 0 removable, 0 nonremovable row\n>> versions in 0 out of 0 pages\n>> DETAIL: 0 dead row versions cannot be removed yet.\n>> There were 0 unused item pointers.\n>> 0 pages are entirely empty.\n>> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n>\n> same as above, no toast\n\nYes. Just to make things clear, I never update/delete drones_history. I \njust INSERT, and every now and then I'll be doing SELECTs.\n\n>\n>\n>> realm_51=# select version();\n>> version\n>> ---------------------------------------------------------------------------------------------\n>>\n>> PostgreSQL 8.4.2 on i686-pc-linux-gnu, compiled by GCC gcc (Debian\n>> 4.3.2-1.1) 4.3.2, 32-bit\n>> (1 row)\n>>\n>>\n>> Mario\n>\n> ok\n>\n> Try this :\n>\n> CLUSTER drones_pkey ON drones;\n>\n> Then check if your slow query gets a bit faster. If it does, try :\n>\n> ALTER TABLE drones SET ( fillfactor = 50 );\n> ALTER INDEX drones_pkey SET ( fillfactor = 50 );\n> CLUSTER drones_pkey ON drones; (again)\n>\n> This will make the updates on this table less problematic. VACUUM it\n> after each mass update.\n\nIs this going to make any difference considering slow insert on \ndrones_history? Because INSERTs/UPDATEs on drones tables are fast. The \nonly noticable difference is that drones is 150k rows 'large' and \ndrones_history has around 25M rows:\n\nrealm_51=# select count(*) from drones_history ;\n count\n----------\n 25550475\n(1 row)\n\n\tMario\n", "msg_date": "Wed, 01 Dec 2010 00:22:33 +0100", "msg_from": "Mario Splivalo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SELECT INTO large FKyed table is slow" }, { "msg_contents": "On 11/29/2010 05:53 PM, Pierre C wrote:\n>\n>> Yes, since (sample_id, drone_id) is primary key, postgres created\n>> composite index on those columns. Are you suggesting I add two more\n>> indexes, one for drone_id and one for sample_id?\n>\n> (sample_id,drone_id) covers sample_id but if you make searches on\n> drone_id alone it is likely to be very slow since you got a large number\n> of sample_ids. Postgres can use any column of a multicolumn index but it\n> is only interesting performance-wise if the cardinality of the first\n> (ignored) columns is low. If you often make searches on drone_id, create\n> an index. But this isn't what is slowing your foreign key checks.\n\nAgain, you have a point there. 
When I get to SELECTs to the history \ntable I'll be doing most of the filtering on the drone_id (but also on \nsample_id, because I'll seldom drill all the way back in time, I'll be \ninterested in just some periods), so I'll take this into consideration.\n\nBut, as you've said, that's not what it's slowing my FK checks.\n\n>\n>>> Also it would be worthwhile for you to post the output of:\n>>>\n>>> EXPLAIN ANALYZE INSERT INTO drones_history (sample_id, drone_id,\n>>> drone_log_notice, drone_temperature, drone_pressure)\n>>> SELECT * FROM tmpUpdate;\n>>>\n>>> to the list, so we can see what is taking the time.\n>>\n>> Is there a way to do so inside plpgsql function?\n>>\n>> I can recreate the whole process within psql and then post the explain\n>> analyze, it would just take me some time to do so. I'll post as soon\n>> as I'm done.\n>\n> Yes, this would be interesting.\n\nSo, I did. I run the whole script in psql, and here is the result for \nthe INSERT:\n\nrealm_51=# explain analyze INSERT INTO drones_history (2771, drone_id, \ndrone_log_notice, drone_temperature, drone_pressure) SELECT * FROM \ntmp_drones_history;\n QUERY PLAN \n\n----------------------------------------------------------------------------------------------------------------------\n Seq Scan on tmp_drones_history (cost=0.00..81.60 rows=4160 width=48) \n(actual time=0.008..5.296 rows=5150 loops=1)\n Trigger for constraint drones_history_fk__drones: time=92.948 calls=5150\n Total runtime: 16779.644 ms\n(3 rows)\n\n\nNow, this is only 16 seconds. In this 'batch' I've inserted 5150 rows.\nThe batch before, I run that one 'the usual way', it inserted 9922 rows, \nand it took 1 minute and 16 seconds.\n\nI did not, however, enclose the process into begin/end.\n\nSo, here are results when I, in psql, first issued BEGIN:\n\nrealm_51=# explain analyze INSERT INTO drones_history (2772, drone_id, \ndrone_log_notice, drone_temperature, drone_pressure) SELECT * FROM \ntmp_drones_history;\n QUERY PLAN \n\n----------------------------------------------------------------------------------------------------------------------\n Seq Scan on tmp_drones_history (cost=0.00..79.56 rows=4056 width=48) \n(actual time=0.008..6.490 rows=5059 loops=1)\n Trigger for constraint drones_history_fk__drones: time=120.224 calls=5059\n Total runtime: 39658.250 ms\n(3 rows)\n\nTime: 39658.906 ms\n\n\n\n\tMario\n", "msg_date": "Wed, 01 Dec 2010 00:24:32 +0100", "msg_from": "Mario Splivalo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SELECT INTO large FKyed table is slow" }, { "msg_contents": "On 30/11/10 05:53, Pierre C wrote:\n>\n>> Yes, since (sample_id, drone_id) is primary key, postgres created \n>> composite index on those columns. Are you suggesting I add two more \n>> indexes, one for drone_id and one for sample_id?\n>\n> (sample_id,drone_id) covers sample_id but if you make searches on \n> drone_id alone it is likely to be very slow since you got a large \n> number of sample_ids. Postgres can use any column of a multicolumn \n> index but it is only interesting performance-wise if the cardinality \n> of the first (ignored) columns is low. If you often make searches on \n> drone_id, create an index. But this isn't what is slowing your foreign \n> key checks.\n\nExactly, sorry - I was having a brain fade moment about which way your \nforeign key checks were going when I suggested adding those indexes... 
:-(\n", "msg_date": "Wed, 01 Dec 2010 12:50:29 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT INTO large FKyed table is slow" }, { "msg_contents": "On 11/30/2010 05:26 PM, Mladen Gogala wrote:\n> At the beginning of the load, you should defer all of the deferrable\n> constraints, setting constraints deferred and issuing the copy statement\n> within a transaction block, like this:\n>\n> scott=# begin; BEGIN\n> Time: 0.203 ms\n> scott=# set constraints all deferred;\n> SET CONSTRAINTS\n> Time: 0.201 ms\n> scott=# copy test1 from '/tmp/test1.csv';\n> COPY 100\n> Time: 11.939 ms\n> scott=# commit;\n> ERROR: insert or update on table \"test1\" violates foreign key\n> constraint \"fk_tst1_deptno\"\n> DETAIL: Key (col1)=(1) is not present in table \"dept\".\n>\n>\n> Of course, that will require complete rewrite of your load script,\n> because the errors will be checked at the commit time and transaction\n> can either fail as a whole or succeed as a whole. It's all or nothing\n\nWell, it is like that now. First I load the data from the CSV into the \ntemporary table (just named temporary, exists on the server). That table \nis usualy aroun 10k rows. Then I call the function which does the job.\n\n> situation. How frequently do you see records with an incorrect drone_id?\n\nSeldom.\n\n> If that happens only once in a blue moon, you may need no stinkin'\n> foreign keys in the first place, you may be able\n> to have a batch job that will flag all the records with an invalid\n> drone_id instead.\n\nI did have that idea, yes, but still, I'd like to know what is slowing \npostgres down. Because when I look at the disk I/O, it seems very random \n- i get around 800k of disk reads and ocasionaly 1500k of writes (during \ninsert into history table).\n\n> Furthermore, you can make sure that you have enough shared buffers to\n> cache the entire \"drones\" table. Also, do \"strace\" on the postgres\n> process handling your session and see whether the time is spent writing\n> to WAL archives. If that is slowing you down, you should consider buying\n> a SSD or a high end disk drive. I have never had such problem, but you\n> should also check whether pg_loader can do anything for you.\n>\n> As far as speed is concerned, inserting with deferred foreign keys is\n> almost as fast as inserting without foreign keys:\n>\n> scott=# alter table test1 drop constraint fk_tst1_deptno;\n> ALTER TABLE\n> Time: 16.219 ms\n> scott=# copy test1 from '/tmp/test1.csv';\n> COPY 100\n> Time: 10.418 ms\n>\n> If you take a look at the example above, you will see that inserting\n> with a deferred FK took 11.939 milliseconds while inserting into the\n> same table without the FK took 10.418 milliseconds, the difference of\n> 1.5 milliseconds per 100 rows. The timing of 2 seconds per 100\n> rows looks suspiciously high. Me thinks that your problem is not just\n> the foreign key, there must be something else devouring the time. You\n> should have a test instance, compiled with \"-g\" option and do profiling.\n\nI'll have to. So far I've been doing this only on that dedicated server. \nI'll try to download the database to my desktop and try the tests there.\n\nConcerning the shared_buffers, it's 256M, and the drones table is just 15M.\n\nI have tried your recommendation and it yielded no difference.\n\nNow I tried removing the constraints from the history table (including \nthe PK) and the inserts were fast. 
After few 'rounds' of inserts I added \nconstraints back, and several round after that were fast again. But then \nall the same. Insert of some 11k rows took 4 seconds (with all \nconstraints) and now the last one of only 4k rows took one minute. I did \nvacuum after each insert.\n\n\n\tMario\n", "msg_date": "Wed, 01 Dec 2010 01:00:23 +0100", "msg_from": "Mario Splivalo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SELECT INTO large FKyed table is slow" }, { "msg_contents": "\n> Now I tried removing the constraints from the history table (including \n> the PK) and the inserts were fast. After few 'rounds' of inserts I added \n> constraints back, and several round after that were fast again. But then \n> all the same. Insert of some 11k rows took 4 seconds (with all \n> constraints) and now the last one of only 4k rows took one minute. I did \n> vacuum after each insert.\n>\n>\n> \tMario\n\nHm, so for each line of drones_history you insert, you also update the \ncorrespoding drones table to reflect the latest data, right ?\nHow many times is the same row in \"drones\" updated ? ie, if you insert N \nrows in drones_nistory, how may drone_id's do you have ?\n", "msg_date": "Wed, 01 Dec 2010 01:51:33 +0100", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT INTO large FKyed table is slow" }, { "msg_contents": "On Sun, 2010-11-28 at 12:46 +0100, Mario Splivalo wrote:\n> The database for monitoring certain drone statuses is quite simple:\n> \n\n> This is the slow part:\n> INSERT INTO drones_history (sample_id, drone_id, drone_log_notice, \n> drone_temperature, drone_pressure)\n> SELECT * FROM tmpUpdate;\n> \n> For 100 rows this takes around 2 seconds. For 1000 rows this takes \n> around 40 seconds. For 5000 rows this takes around 5 minutes.\n> For 50k rows this takes around 30 minutes! Now this is where I start lag \n> because I get new CSV every 10 minutes or so.\n\nHave you considered making the foreign key check deferrable?\n\nJD\n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\nConsulting, Training, Support, Custom Development, Engineering\nhttp://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n\n", "msg_date": "Tue, 30 Nov 2010 17:47:32 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT INTO large FKyed table is slow" }, { "msg_contents": "On 12/01/2010 01:51 AM, Pierre C wrote:\n> \n>> Now I tried removing the constraints from the history table (including\n>> the PK) and the inserts were fast. After few 'rounds' of inserts I\n>> added constraints back, and several round after that were fast again.\n>> But then all the same. Insert of some 11k rows took 4 seconds (with\n>> all constraints) and now the last one of only 4k rows took one minute.\n>> I did vacuum after each insert.\n>>\n>>\n>> Mario\n> \n> Hm, so for each line of drones_history you insert, you also update the\n> correspoding drones table to reflect the latest data, right ?\n\nYes.\n\n> How many times is the same row in \"drones\" updated ? ie, if you insert N\n> rows in drones_nistory, how may drone_id's do you have ?\n\nJust once.\n\nIf I have 5000 lines in CSV file (that I load into 'temporary' table\nusing COPY) i can be sure that drone_id there is PK. That is because CSV\nfile contains measurements from all the drones, one measurement per\ndrone. 
I usualy have around 100 new drones, so I insert those to drones\nand to drones_history. Then I first insert into drones_history and then\nupdate those rows in drones. Should I try doing the other way around?\n\nAlthough, I think I'm having some disk-related problems because when\ninserting to the tables my IO troughput is pretty low. For instance,\nwhen I drop constraints and then recreate them that takes around 15-30\nseconds (on a 25M rows table) - disk io is steady, around 60 MB/s in\nread and write.\n\nIt just could be that the ext3 partition is so fragmented. I'll try\nlater this week on a new set of disks and ext4 filesystem to see how it\ngoes.\n\n\tMario\n", "msg_date": "Wed, 01 Dec 2010 08:52:09 +0100", "msg_from": "Mario Splivalo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SELECT INTO large FKyed table is slow" }, { "msg_contents": "On 12/01/2010 02:47 AM, Joshua D. Drake wrote:\n> On Sun, 2010-11-28 at 12:46 +0100, Mario Splivalo wrote:\n>> The database for monitoring certain drone statuses is quite simple:\n>>\n> \n>> This is the slow part:\n>> INSERT INTO drones_history (sample_id, drone_id, drone_log_notice, \n>> drone_temperature, drone_pressure)\n>> SELECT * FROM tmpUpdate;\n>>\n>> For 100 rows this takes around 2 seconds. For 1000 rows this takes \n>> around 40 seconds. For 5000 rows this takes around 5 minutes.\n>> For 50k rows this takes around 30 minutes! Now this is where I start lag \n>> because I get new CSV every 10 minutes or so.\n> \n> Have you considered making the foreign key check deferrable?\n> \n\nYes, as Mladen Gogala had advised. No noticable change in performance -\nit's still slow :)\n\nBut, just for the sake of clarification - I tought that DEFERRABLE would\nmatter if I do a lot of INSERTs, inside a FOR loop or something like\nthat. Since I'm doing INSERT INTO ... SELECT, does it makes any difference?\n\n\tMario\n", "msg_date": "Wed, 01 Dec 2010 08:53:46 +0100", "msg_from": "Mario Splivalo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SELECT INTO large FKyed table is slow" }, { "msg_contents": "\n> Just once.\n\nOK, another potential problem eliminated, it gets strange...\n\n> If I have 5000 lines in CSV file (that I load into 'temporary' table\n> using COPY) i can be sure that drone_id there is PK. That is because CSV\n> file contains measurements from all the drones, one measurement per\n> drone. I usualy have around 100 new drones, so I insert those to drones\n> and to drones_history. Then I first insert into drones_history and then\n> update those rows in drones. Should I try doing the other way around?\n\nNo, it doesn't really matter.\n\n> Although, I think I'm having some disk-related problems because when\n> inserting to the tables my IO troughput is pretty low. For instance,\n> when I drop constraints and then recreate them that takes around 15-30\n> seconds (on a 25M rows table) - disk io is steady, around 60 MB/s in\n> read and write.\n>\n> It just could be that the ext3 partition is so fragmented. I'll try\n> later this week on a new set of disks and ext4 filesystem to see how it\n> goes.\n\nIf you CLUSTER a table, it is entirely rebuilt so if your disk free space \nisn't heavily fragmented, you can hope the table and indexes will get \nallocated in a nice contiguous segment.\n", "msg_date": "Wed, 01 Dec 2010 09:23:23 +0100", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT INTO large FKyed table is slow" }, { "msg_contents": "\n> So, I did. 
I run the whole script in psql, and here is the result for \n> the INSERT:\n>\n> realm_51=# explain analyze INSERT INTO drones_history (2771, drone_id, \n> drone_log_notice, drone_temperature, drone_pressure) SELECT * FROM \n> tmp_drones_history;\n> QUERY PLAN \n> ----------------------------------------------------------------------------------------------------------------------\n> Seq Scan on tmp_drones_history (cost=0.00..81.60 rows=4160 width=48) \n> (actual time=0.008..5.296 rows=5150 loops=1)\n> Trigger for constraint drones_history_fk__drones: time=92.948 \n> calls=5150\n> Total runtime: 16779.644 ms\n> (3 rows)\n>\n>\n> Now, this is only 16 seconds. In this 'batch' I've inserted 5150 rows.\n> The batch before, I run that one 'the usual way', it inserted 9922 rows, \n> and it took 1 minute and 16 seconds.\n>\n> I did not, however, enclose the process into begin/end.\n>\n> So, here are results when I, in psql, first issued BEGIN:\n>\n> realm_51=# explain analyze INSERT INTO drones_history (2772, drone_id, \n> drone_log_notice, drone_temperature, drone_pressure) SELECT * FROM \n> tmp_drones_history;\n> QUERY PLAN \n> ----------------------------------------------------------------------------------------------------------------------\n> Seq Scan on tmp_drones_history (cost=0.00..79.56 rows=4056 width=48) \n> (actual time=0.008..6.490 rows=5059 loops=1)\n> Trigger for constraint drones_history_fk__drones: time=120.224 \n> calls=5059\n> Total runtime: 39658.250 ms\n> (3 rows)\n>\n> Time: 39658.906 ms\n>\n>\n>\n> \tMario\n>\n\nNote that in both cases postgres reports that the FK checks take 92-120 \nmilliseconds... which is a normal time for about 4000 rows.\nInserting 4000 lines with just a few fields like you got should take quite \nmuch less than 1 s...\n\nWhere the rest of the time goes, I have no idea. Disk thrashing ? Locks ? \nGremlins ?\n\n- try it on a fresh copy of all your tables (CREATE TABLE, INSERT INTO \nSELECT)\n- try to put the WAL on a separate physical disk (or do a check with \nfsync=off)\n- try it on another computer\n- try it on another harddisk\n- run oprofile on a debug compile of postgres\n- it could even be the process title updates (I don't think so but...)\n- try a ramdisk tablespace\n", "msg_date": "Wed, 01 Dec 2010 09:43:07 +0100", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT INTO large FKyed table is slow" }, { "msg_contents": "\n\n\n\n\nMario Splivalo wrote:\n\n\nYes, as Mladen Gogala had advised. No noticable change in performance -\nit's still slow :)\n \n\n\nDeclaring constraints as deferrable  doesn't do anything as such, you\nhave to actually set the constraints deferred to have an effect. You\nhave to do it within a transaction block. 
If done outside of the\ntransaction block, there is no effect:\n\nThis is what happens when \"set constraints\" is issued outside the\ntransaction block:\n\n< constraint test1_pk primary key(col1)\ndeferrable);            \nNOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index\n\"test1_pk\" for table \"test1\"\nCREATE TABLE\nTime: 41.218 ms\nscott=# set constraints all deferred;                           \nSET CONSTRAINTS\nTime: 0.228 ms\nscott=# begin;                                      \nBEGIN\nTime: 0.188 ms\nscott=#  insert into test1 values(1);               \nINSERT 0 1\nTime: 0.929 ms\nscott=#  insert into test1 values(1);   \nERROR:  duplicate key value violates unique constraint \"test1_pk\"\nDETAIL:  Key (col1)=(1) already exists.\nscott=# end;\nROLLBACK\nTime: 0.267 ms\nscott=# \n\n\nIt works like a charm when issued within the transaction block:\nscott=# begin;                          \nBEGIN\nTime: 0.202 ms\nscott=# set constraints all deferred;   \nSET CONSTRAINTS\nTime: 0.196 ms\nscott=#  insert into test1 values(1);   \nINSERT 0 1\nTime: 0.334 ms\nscott=#  insert into test1 values(1);   \nINSERT 0 1\nTime: 0.327 ms\nscott=# end;\nERROR:  duplicate key value violates unique constraint \"test1_pk\"\nDETAIL:  Key (col1)=(1) already exists.\nscott=# \n\nI was able to insert the same value twice, it only failed at the end of\nthe transaction.\n\n\nBut, just for the sake of clarification - I tought that DEFERRABLE would\nmatter if I do a lot of INSERTs, inside a FOR loop or something like\nthat. Since I'm doing INSERT INTO ... SELECT, does it makes any difference?\n \n\nYou cannot tell which part takes a long time, select or insert, without\nprofiling. I certainly cannot do it over the internet.\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n\n\n", "msg_date": "Wed, 01 Dec 2010 11:34:04 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT INTO large FKyed table is slow" }, { "msg_contents": "On 12/01/2010 05:34 PM, Mladen Gogala wrote:\n> Mario Splivalo wrote:\n>>\n>>\n>> Yes, as Mladen Gogala had advised. No noticable change in performance -\n>> it's still slow :)\n>>\n>\n> Declaring constraints as deferrable doesn't do anything as such, you\n> have to actually set the constraints deferred to have an effect. You\n> have to do it within a transaction block. If done outside of the\n> transaction block, there is no effect:\n\nI understand, I did as you suggested.\n\nBegin; Set constraints all deferred; select my_insert_drones_function(); \ncommit\n\n\n> I was able to insert the same value twice, it only failed at the end of\n> the transaction.\n>> But, just for the sake of clarification - I tought that DEFERRABLE would\n>> matter if I do a lot of INSERTs, inside a FOR loop or something like\n>> that. Since I'm doing INSERT INTO ... SELECT, does it makes any difference?\n>>\n> You cannot tell which part takes a long time, select or insert, without\n> profiling. I certainly cannot do it over the internet.\n\nIf I first select to a dummy temprary table, that SELECT is fast. Just \nINSERT INTO SELECT is slow.\n\nI'll try what Pierre suggested, on whole new filesystem. 
This one did \nget quite filled with thousands of files that I deleted while the \ndatabase was working.\n\n\tMario\n", "msg_date": "Wed, 01 Dec 2010 18:00:26 +0100", "msg_from": "Mario Splivalo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SELECT INTO large FKyed table is slow" }, { "msg_contents": "Mario Splivalo wrote:\n> I'll try what Pierre suggested, on whole new filesystem. This one did \n> get quite filled with thousands of files that I deleted while the \n> database was working.\n>\n> \tMario\n> \n\nYes, that is a good idea. That's the reason why we need a \ndefragmentation tool on Linux. Unfortunately, the only file system that \ncurrently has a decent defragmentation tool is XFS and that is a paid \noption, at least with Red Hat. Greg Smith has recently posted a \nwonderful review of PostgreSQL on various file systems:\n\nhttp://blog.2ndquadrant.com/en/2010/04/the-return-of-xfs-on-linux.html\n\nThere is a operating system which comes with a very decent extent based \nfile system and a defragmentation tool, included in the OS. The file \nsystem is called \"NTFS\" and company is in the land of Redmond, WA where \nthe shadows lie. One OS to rule them all...\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n", "msg_date": "Wed, 01 Dec 2010 12:15:19 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT INTO large FKyed table is slow" }, { "msg_contents": "On Wed, Dec 01, 2010 at 12:15:19PM -0500, Mladen Gogala wrote:\n> Mario Splivalo wrote:\n>> I'll try what Pierre suggested, on whole new filesystem. This one did get \n>> quite filled with thousands of files that I deleted while the database was \n>> working.\n>>\n>> \tMario\n>> \n>\n> Yes, that is a good idea. That's the reason why we need a defragmentation \n> tool on Linux. Unfortunately, the only file system that currently has a \n> decent defragmentation tool is XFS and that is a paid option, at least with \n> Red Hat. Greg Smith has recently posted a wonderful review of PostgreSQL on \n> various file systems:\n>\n> http://blog.2ndquadrant.com/en/2010/04/the-return-of-xfs-on-linux.html\n>\n> There is a operating system which comes with a very decent extent based \n> file system and a defragmentation tool, included in the OS. The file system \n> is called \"NTFS\" and company is in the land of Redmond, WA where the \n> shadows lie. One OS to rule them all...\n>\n> -- \n> Mladen Gogala Sr. Oracle DBA\n> 1500 Broadway\n> New York, NY 10036\n> (212) 329-5251\n> http://www.vmsinfo.com The Leader in Integrated Media Intelligence \n> Solutions\n>\n\nRedhat6 comes with ext4 which is an extent based filesystem with\ndecent performance.\n\nKen\n", "msg_date": "Wed, 1 Dec 2010 11:22:08 -0600", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT INTO large FKyed table is slow" }, { "msg_contents": "Mladen Gogala <[email protected]> wrote:\n \n> There is a operating system which comes with a very decent extent\n> based file system and a defragmentation tool, included in the OS.\n> The file system is called \"NTFS\"\n \nBeen there, done that. 
Not only was performance quite poor compared\nto Linux, but reliability and staff time to manage things suffered\nin comparison to Linux.\n \nWe had the luxury of identical hardware and the ability to load\nbalance a web site with millions of hits per day evenly between them\nin both environments, as well as off-line saturation load testing. \nAt least for running a PostgreSQL database, my experience suggests\nthat the only reasonable excuse for running database on a Windows\nserver is that you're under a mandate from ill-informed managers to\ndo so.\n \n-Kevin\n", "msg_date": "Wed, 01 Dec 2010 11:24:35 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT INTO large FKyed table is slow" }, { "msg_contents": "Kenneth Marshall wrote:\n> Redhat6 comes with ext4 which is an extent based filesystem with\n> decent performance.\n>\n> Ken\n> \nBut e4defrag is still not available. And, of course, Red Hat 6 is still \nnot available, either. Maybe Red Hat 7 will do the trick? I assume it \nwill work beautifully with PostgreSQL 15.0.\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n", "msg_date": "Wed, 01 Dec 2010 12:33:27 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT INTO large FKyed table is slow" }, { "msg_contents": "Kevin Grittner wrote:\n> Mladen Gogala <[email protected]> wrote:\n> \n> \n> \n> Been there, done that. Not only was performance quite poor compared\n> to Linux, but reliability and staff time to manage things suffered\n> in comparison to Linux.\n> \n> \nI must say that I am quite impressed with Windows 7 servers, especially \n64 bit version. Unfortunately, I don't have any PostgreSQL instances on \nthose, but Exchange works very, very well. Also, personal impressions \nfrom clicking and running office applications are quite good. Don't get \nme wrong, I am an old Unix/Linux hack and I would like nothing better \nbut to see Linux succeed, but I don't like\nwhat I see.\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n", "msg_date": "Wed, 01 Dec 2010 16:07:35 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT INTO large FKyed table is slow" }, { "msg_contents": "On Wed, 01 Dec 2010 18:24:35 +0100, Kevin Grittner \n<[email protected]> wrote:\n\n> Mladen Gogala <[email protected]> wrote:\n>\n>> There is a operating system which comes with a very decent extent\n>> based file system and a defragmentation tool, included in the OS.\n>> The file system is called \"NTFS\"\n> Been there, done that. Not only was performance quite poor compared\n> to Linux, but reliability and staff time to manage things suffered\n> in comparison to Linux.\n\nPlease don't start with NTFS. It is the worst excuse for a filesystem I've \never seen.\n", "msg_date": "Wed, 01 Dec 2010 22:43:08 +0100", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT INTO large FKyed table is slow" }, { "msg_contents": "On 12/01/2010 09:43 AM, Pierre C wrote:\n>\n> Note that in both cases postgres reports that the FK checks take 92-120\n> milliseconds... 
which is a normal time for about 4000 rows.\n> Inserting 4000 lines with just a few fields like you got should take\n> quite much less than 1 s...\n>\n> Where the rest of the time goes, I have no idea. Disk thrashing ? Locks\n> ? Gremlins ?\n>\n> - try it on a fresh copy of all your tables (CREATE TABLE, INSERT INTO\n> SELECT)\n> - try to put the WAL on a separate physical disk (or do a check with\n> fsync=off)\n> - try it on another computer\n> - try it on another harddisk\n> - run oprofile on a debug compile of postgres\n> - it could even be the process title updates (I don't think so but...)\n> - try a ramdisk tablespace\n\nI'm allready running it with fsync=off. The funny thing is, as I add new \nrealm it runs fine until the history table grows around 5M rows. After \nthat the slowdown is huge.\n\nI'm trying this on new hardware this weekend, I'll post here the results.\n\n\tMario\n", "msg_date": "Thu, 02 Dec 2010 09:36:40 +0100", "msg_from": "Mario Splivalo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SELECT INTO large FKyed table is slow" }, { "msg_contents": "On 12/01/2010 10:43 PM, Pierre C wrote:\n> On Wed, 01 Dec 2010 18:24:35 +0100, Kevin Grittner\n> <[email protected]> wrote:\n>\n>> Mladen Gogala <[email protected]> wrote:\n>>\n>>> There is a operating system which comes with a very decent extent\n>>> based file system and a defragmentation tool, included in the OS.\n>>> The file system is called \"NTFS\"\n>> Been there, done that. Not only was performance quite poor compared\n>> to Linux, but reliability and staff time to manage things suffered\n>> in comparison to Linux.\n>\n> Please don't start with NTFS. It is the worst excuse for a filesystem\n> I've ever seen.\n\nIt is OT, but, could you please shead just some light on that? Part of \nmy next project is to test performance of pg9 on both windows and linux \nsystems so I'd appreciate any data/info you both may have.\n\n\tMario\n", "msg_date": "Thu, 02 Dec 2010 09:51:05 +0100", "msg_from": "Mario Splivalo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SELECT INTO large FKyed table is slow" }, { "msg_contents": "Mario Splivalo <[email protected]> wrote:\n \n> It is OT, but, could you please shead just some light on that?\n> Part of my next project is to test performance of pg9 on both\n> windows and linux systems so I'd appreciate any data/info you both\n> may have.\n \nI don't know how much was the filesystem, but with both tuned to the\nbest of our ability Linux on xfs ran much faster than Windows on\nNTFS. The lack of atomic operations and a lockfile utility on\nWindows/NTFS was something of a handicap. I have found Linux to be\nmuch more reliable and (once I got my bash scripting knowledge of\ncommon Linux utilities to a certain level), much easier to\nadminister. Getting my head around xargs was, I think, the tipping\npoint. ;-)\n \n-Kevin\n", "msg_date": "Thu, 02 Dec 2010 12:43:02 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT INTO large FKyed table is slow" }, { "msg_contents": "On Thu, Dec 2, 2010 at 3:36 AM, Mario Splivalo\n<[email protected]> wrote:\n> On 12/01/2010 09:43 AM, Pierre C wrote:\n>>\n>> Note that in both cases postgres reports that the FK checks take 92-120\n>> milliseconds... which is a normal time for about 4000 rows.\n>> Inserting 4000 lines with just a few fields like you got should take\n>> quite much less than 1 s...\n>>\n>> Where the rest of the time goes, I have no idea. Disk thrashing ? 
Locks\n>> ? Gremlins ?\n>>\n>> - try it on a fresh copy of all your tables (CREATE TABLE, INSERT INTO\n>> SELECT)\n>> - try to put the WAL on a separate physical disk (or do a check with\n>> fsync=off)\n>> - try it on another computer\n>> - try it on another harddisk\n>> - run oprofile on a debug compile of postgres\n>> - it could even be the process title updates (I don't think so but...)\n>> - try a ramdisk tablespace\n>\n> I'm allready running it with fsync=off. The funny thing is, as I add new\n> realm it runs fine until the history table grows around 5M rows. After that\n> the slowdown is huge.\n\nPerhaps - that's the point at which the WAL volume becomes large\nenough to force a checkpoint in the middle of the operation? You\nmight try turning on log_checkpoints.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Sun, 12 Dec 2010 22:30:27 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT INTO large FKyed table is slow" } ]
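Two follow-ups to the thread above, sketched for reference only; the parameter
values are illustrative assumptions, not recommendations made in the thread.
First, the question about getting EXPLAIN ANALYZE output from inside the
plpgsql load function was never answered directly: the contrib module
auto_explain that ships with 8.4 can log plans for statements executed inside
functions, including the time spent in FK triggers.

  -- run in the session that calls the loader; the plans go to the server log
  LOAD 'auto_explain';
  SET auto_explain.log_min_duration = 0;        -- log every statement
  SET auto_explain.log_analyze = on;            -- include actual times and row counts
  SET auto_explain.log_nested_statements = on;  -- also statements inside functions

  SELECT my_insert_drones_function();           -- the loader mentioned in the thread

Second, to follow the closing suggestion about a checkpoint hitting in the
middle of the big INSERT, checkpoint logging can be enabled with a reload;
the larger checkpoint_segments value below is only an example of spacing
checkpoints further apart, not a tested setting.

  -- postgresql.conf, then: SELECT pg_reload_conf();
  log_checkpoints = on
  checkpoint_segments = 32
  checkpoint_completion_target = 0.9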
[ { "msg_contents": "I have simple database schema, containing just three tables:\n\nsamples, drones, drones_history.\n\nNow, those tables hold data for the drones for a simulation. Each \nsimulation dataset will grow to around 10 GB in around 6 months.\n\nSince the data is not related in any way I was thinking in separating \neach simulation into it's own database. That way it would be much easier \nfor me to, at later date, move some of the databases to other servers \n(when dataset grows beyond the original server storage capacity limit).\n\nBut. At this time I have around 600 simulations, that would mean \ncreating 600 databases, and in future there could very well be around \n5000 simulations. Is postgres going to have 'issues' with that large \nnumber of databases?\n\nOr do I model my system in a way that each database holds around 100 \nsimulations?\n\n\tMario\n", "msg_date": "Sun, 28 Nov 2010 13:02:34 +0100", "msg_from": "Mario Splivalo <[email protected]>", "msg_from_op": true, "msg_subject": "Simple database, multiple instances?" }, { "msg_contents": "Mario Splivalo <[email protected]> writes:\n> I have simple database schema, containing just three tables:\n>\n> samples, drones, drones_history.\n>\n> Now, those tables hold data for the drones for a simulation. Each simulation\n> dataset will grow to around 10 GB in around 6 months.\n>\n> Since the data is not related in any way I was thinking in separating each\n> simulation into it's own database. That way it would be much easier for me\n> to, at later date, move some of the databases to other servers (when dataset\n> grows beyond the original server storage capacity limit).\n\nDo you intend to run queries across multiple simulations at once? If\nyes, you want to avoid multi databases. Other than that, I'd go with a\nnaming convention like samples_<simulation id> and maybe some\ninheritance to ease querying multiple simulations.\n\nRegards,\n-- \nDimitri Fontaine\nhttp://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Tue, 30 Nov 2010 12:45:57 +0100", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple database, multiple instances?" }, { "msg_contents": "On 11/30/2010 12:45 PM, Dimitri Fontaine wrote:\n> Mario Splivalo<[email protected]> writes:\n>> I have simple database schema, containing just three tables:\n>>\n>> samples, drones, drones_history.\n>>\n>> Now, those tables hold data for the drones for a simulation. Each simulation\n>> dataset will grow to around 10 GB in around 6 months.\n>>\n>> Since the data is not related in any way I was thinking in separating each\n>> simulation into it's own database. That way it would be much easier for me\n>> to, at later date, move some of the databases to other servers (when dataset\n>> grows beyond the original server storage capacity limit).\n>\n> Do you intend to run queries across multiple simulations at once? If\n> yes, you want to avoid multi databases. Other than that, I'd go with a\n> naming convention like samples_<simulation id> and maybe some\n> inheritance to ease querying multiple simulations.\n\nNope, those 'realms' are completely separated, I'll just have hundreds \nof them. 
But each of them is in it's separate 'universe', they're not \naware of each other in any way (i might be creating some statistics, but \nthat is going to be really rarely).\n\n\tMario\n", "msg_date": "Tue, 30 Nov 2010 13:31:28 +0100", "msg_from": "Mario Splivalo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Simple database, multiple instances?" }, { "msg_contents": "I saw a presentation from Heroku where they discussed using a similar\nparadigm, and they ran into trouble once they hit a couple thousand\ndatabases. If memory serves, this was on an older version of\nPostgreSQL and may not be relevant with 9.0 (or even 8.4?), but you\nmay want to try to track down one of their developers (maybe through\ntheir mailing lists or forums?) and check.\n---\nMaciek Sakrejda | System Architect | Truviso\n\n1065 E. Hillsdale Blvd., Suite 215\nFoster City, CA 94404\n(650) 242-3500 Main\nwww.truviso.com\n", "msg_date": "Tue, 30 Nov 2010 09:16:26 -0800", "msg_from": "Maciek Sakrejda <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple database, multiple instances?" }, { "msg_contents": "\n> Having that many instances is not practical at all, so I'll have as many \n> databases as I have 'realms'. I'll use pg_dump | nc and nc | psql to \n> move databases....\n>\n> \tMario\n\nThen you can use schemas, too, it'll be easier.\n", "msg_date": "Tue, 30 Nov 2010 23:35:04 +0100", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple database, multiple instances?" } ]
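Pierre C's closing suggestion — one schema per simulation rather than one database per simulation — is easy to sketch. A minimal example, assuming the thread's samples table (the columns and the schema naming convention are assumptions for illustration; only the table names come from the thread):

-- Hedged sketch: one schema per simulation/realm. Only the table names
-- (samples, drones, drones_history) come from the thread; everything else
-- here is an assumed example.
CREATE SCHEMA simulation_0001;

CREATE TABLE simulation_0001.samples (
    sample_id  bigserial PRIMARY KEY,
    drone_id   integer     NOT NULL,
    sampled_at timestamptz NOT NULL,
    value      double precision
);

-- Work inside a single simulation by switching the search_path:
SET search_path = simulation_0001, public;
SELECT count(*) FROM samples;

-- A schema can later be moved to another server in the same spirit as the
-- pg_dump/nc pipeline mentioned above, e.g.:
--   pg_dump -n simulation_0001 dbname | psql -h otherhost dbname

This keeps a single set of catalogs per cluster while still letting individual simulations be dumped, dropped, or relocated independently.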
[ { "msg_contents": "Hi all,\n\nI am new to Postgres. I just wanted to know how to change the sleep time.\n\nI want to reduce the sleep time, How much will it affect other performance\nissues if sleep time is reduced.\n\nPlz help. I apologize if I am sending mail to wrong contact. Kindly suggest\nthe correct contact details if you know.\n\n\n-- \n\n\n-- \nThanks & Regards,\n\nAaliya Zarrin\n(+91)-9160665888\n\n  \n\nHi all,\nI am new to Postgres. I just wanted to know how to change the sleep time.\nI want to reduce the sleep time, How much will it affect other performance issues if sleep time is reduced.\nPlz help. I apologize if I am sending mail to wrong contact. Kindly suggest the correct contact details if you know.-- -- Thanks & Regards,\nAaliya Zarrin(+91)-9160665888", "msg_date": "Mon, 29 Nov 2010 10:09:31 +0530", "msg_from": "aaliya zarrin <[email protected]>", "msg_from_op": true, "msg_subject": "Hi- Sleeptime reduction" } ]
[ { "msg_contents": "aaliya zarrin wrote:\n \n> I am new to Postgres. I just wanted to know how to change the sleep\n> time.\n> \n> I want to reduce the sleep time, How much will it affect other\n> performance issues if sleep time is reduced.\n \nI don't know what sleep time you mean. It would probably be best if\nyou started from a description of some specific problem rather than a\nhypothetical solution.\n \nFor general advice on tuning, you could start here:\n \nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n \nFor ideas on how to submit a request for help with a performance\nproblem, you might want to review this:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \n-Kevin\n", "msg_date": "Sun, 28 Nov 2010 23:09:51 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hi- Sleeptime reduction" } ]
[ { "msg_contents": "explain\nSELECT crmentity.crmid, crmentity.setype, crmentity.modifiedtime,\nactivity.subject,case when ( users.user_name not like '') then\nusers.user_name else groups.groupname end as user_name, activity.date_start\nFROM crmentity INNER JOIN activity ON crmentity.crmid = activity.activityid\nand crmentity.deleted = 0\nLEFT JOIN activitygrouprelation ON activitygrouprelation.activityid =\ncrmentity.crmid\nLEFT JOIN groups ON groups.groupname = activitygrouprelation.groupname\nLEFT join users ON crmentity.smownerid= users.id\nWHERE\nto_tsvector(' en', for_fts( activity.subject)) @@ to_tsquery(' en',\nreplace(' Dhaka University of Bangladesh:*', ' ',':* & '))\nor\nto_tsvector(' en', for_fts( crmentity.description)) @@ to_tsquery(' en',\nreplace(' Dhaka University of Bangladesh:*', ' ',':* & '))\nORDER BY crmentity.modifiedtime DESC LIMIT 100\n\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=112724.54..112724.54 rows=1 width=99)\n -> Sort (cost=112724.54..112724.54 rows=1 width=99)\n Sort Key: crmentity.modifiedtime\n -> Nested Loop Left Join (cost=0.00..112724.53 rows=1 width=99)\n -> Nested Loop Left Join (cost=0.00..112724.24 rows=1 width=82)\n -> Nested Loop Left Join (cost=0.00..112723.96 rows=1 width=79)\n -> Nested Loop (cost=0.00..112723.68 rows=1 width=56)\n Join Filter: ((to_tsvector('en'::regconfig,\nregexp_replace((activity.subject)::text,\n'(&[^;]+;)|(<[^>]+>)|([\\\\s\\\\r\\\\n\\\\t]+)'::text, ''::text, 'gs'::text)) @@ '''\nDhaka'':* & ''univers'':* & ''bangladesh'':*'::tsquery) OR\n(to_tsvector('en'::regconfig, regexp_replace(crmentity.description,\n'(&[^;]+;)|(<[^>]+>)|([\\\\s\\\\r\\\\n\\\\t]+)'::text, ' '::text, 'gs'::text)) @@\n''' Dhaka'':* & ''univers'':* & ''bangladesh'':*'::tsquery))\n -> Index Scan using activity_pkey on activity (cost=0.00..10223.89\nrows=343070 width=36)\n -> Index Scan using crmentity_pkey on crmentity (cost=0.00..0.27 rows=1\nwidth=151)\n Index Cond: (crmentity.crmid = activity.activityid)\n Filter: (crmentity.deleted = 0)\n -> Index Scan using activitygrouprelation_activityid_idx on\nactivitygrouprelation (cost=0.00..0.27 rows=1 width=27)\n Index Cond: (activitygrouprelation.activityid = crmentity.crmid)\n -> Index Scan using groups_groupname_idx on groups (cost=0.00..0.27 rows=1\nwidth=26)\n Index Cond: ((groups.groupname)::text =\n(activitygrouprelation.groupname)::text)\n -> Index Scan using users_pkey on users (cost=0.00..0.27 rows=1 width=25)\n Index Cond: (crmentity.smownerid = users.id)\n\n\nThe above query are not using fts indexes, even hang the server.\n\nBut,\n\n\nexplain\nSELECT crmentity.crmid, crmentity.setype, crmentity.modifiedtime,\nactivity.subject,case when ( users.user_name not like '') then\nusers.user_name else groups.groupname end as user_name, activity.date_start\nFROM crmentity INNER JOIN activity ON crmentity.crmid = activity.activityid\nand crmentity.deleted = 0\nLEFT JOIN activitygrouprelation ON activitygrouprelation.activityid =\ncrmentity.crmid\nLEFT JOIN groups ON groups.groupname = activitygrouprelation.groupname\nLEFT join users ON crmentity.smownerid= users.id\nWHERE\nto_tsvector(' en', for_fts( activity.subject)) @@ to_tsquery(' en',\nreplace(' Dhaka University of Bangladesh:*', ' ',':* & '))\nORDER BY crmentity.modifiedtime DESC LIMIT 
100\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n Limit (cost=1.46..1.47 rows=1 width=99) (actual time=0.824..0.824 rows=0\nloops=1)\n -> Sort (cost=1.46..1.47 rows=1 width=99) (actual time=0.819..0.819 rows=0\nloops=1)\n Sort Key: crmentity.modifiedtime\n Sort Method: quicksort Memory: 17kB\n -> Nested Loop Left Join (cost=0.27..1.45 rows=1 width=99) (actual\ntime=0.752..0.752 rows=0 loops=1)\n -> Nested Loop Left Join (cost=0.27..1.17 rows=1 width=82) (actual\ntime=0.750..0.750 rows=0 loops=1)\n -> Nested Loop Left Join (cost=0.27..0.88 rows=1 width=79) (actual\ntime=0.748..0.748 rows=0 loops=1)\n -> Nested Loop (cost=0.27..0.60 rows=1 width=56) (actual time=0.746..0.746\nrows=0 loops=1)\n -> Bitmap Heap Scan on activity (cost=0.27..0.30 rows=1 width=36) (actual\ntime=0.744..0.744 rows=0 loops=1)\n Recheck Cond: (to_tsvector('en'::regconfig,\nregexp_replace((subject)::text,\n'(&[^;]+;)|(<[^>]+>)|([\\\\s\\\\r\\\\n\\\\t]+)'::text, ' '::text,\n'gs'::text)) @@ ''' Dhaka'':* & ''univers'':* & ''bangladesh'':*'::tsquery)\n -> Bitmap Index Scan on ftx_en_activity_subject (cost=0.00..0.27 rows=1\nwidth=0) (actual time=0.740..0.740 rows=0 loops=1)\n Index Cond: (to_tsvector('en'::regconfig, regexp_replace((subject)::text,\n'(&[^;]+;)|(<[^>]+>)|([\\\\s\\\\r\\\\n\\\\t]+)'::text, ' '::te\nxt, 'gs'::text)) @@ ''' Dhaka'':* & ''univers'':* &\n''bangladesh'':*'::tsquery)\n -> Index Scan using crmentity_pkey on crmentity (cost=0.00..0.29 rows=1\nwidth=24) (never executed)\n Index Cond: (crmentity.crmid = activity.activityid)\n Filter: (crmentity.deleted = 0)\n -> Index Scan using activitygrouprelation_activityid_idx on\nactivitygrouprelation (cost=0.00..0.27 rows=1 width=27) (never executed)\n Index Cond: (activitygrouprelation.activityid = crmentity.crmid)\n -> Index Scan using groups_groupname_idx on groups (cost=0.00..0.27 rows=1\nwidth=26) (never executed)\n Index Cond: ((groups.groupname)::text =\n(activitygrouprelation.groupname)::text)\n -> Index Scan using users_pkey on users (cost=0.00..0.27 rows=1 width=25)\n(never executed)\n Index Cond: (crmentity.smownerid = users.id)\n Total runtime: 1.188 ms\n\n\n\n\nexplain\nSELECT crmentity.crmid, crmentity.setype, crmentity.modifiedtime,\nactivity.subject,case when ( users.user_name not like '') then\nusers.user_name else groups.groupname end as user_name, activity.date_start\nFROM crmentity INNER JOIN activity ON crmentity.crmid = activity.activityid\nand crmentity.deleted = 0\nLEFT JOIN activitygrouprelation ON activitygrouprelation.activityid =\ncrmentity.crmid\nLEFT JOIN groups ON groups.groupname = activitygrouprelation.groupname\nLEFT join users ON crmentity.smownerid= users.id\nWHERE\nto_tsvector(' en', for_fts( crmentity.description)) @@ to_tsquery(' en',\nreplace(' Dhaka University of Bangladesh:*', ' ',':* & '))\nORDER BY crmentity.modifiedtime DESC LIMIT 100\n\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n Limit (cost=1.50..1.51 rows=1 width=99) (actual time=5.044..5.047 rows=1\nloops=1)\n -> Sort (cost=1.50..1.51 rows=1 width=99) (actual time=5.041..5.042 rows=1\nloops=1)\n Sort Key: crmentity.modifiedtime\n Sort Method: quicksort Memory: 17kB\n -> Nested Loop Left Join (cost=0.27..1.49 rows=1 width=99) (actual\ntime=4.998..5.012 rows=1 loops=1)\n -> 
Nested Loop Left Join (cost=0.27..1.19 rows=1 width=82) (actual\ntime=4.952..4.961 rows=1 loops=1)\n -> Nested Loop Left Join (cost=0.27..0.90 rows=1 width=79) (actual\ntime=4.949..4.956 rows=1 loops=1)\n -> Nested Loop (cost=0.27..0.60 rows=1 width=56) (actual time=4.943..4.948\nrows=1 loops=1)\n -> Bitmap Heap Scan on crmentity (cost=0.27..0.30 rows=1 width=24) (actual\ntime=4.727..4.799 rows=3 loops=1)\n Recheck Cond: (to_tsvector('en'::regconfig, regexp_replace(description,\n'(&[^;]+;)|(<[^>]+>)|([\\\\s\\\\r\\\\n\\\\t]+)'::text, ' '::text, 'gs'\n::text)) @@ ''' Dhaka'':* & ''univers'':* & ''bangladesh'':*'::tsquery)\n Filter: (deleted = 0)\n -> Bitmap Index Scan on ftx_en_crmentity_description (cost=0.00..0.27\nrows=1 width=0) (actual time=4.687..4.687 rows=3 loops=1)\n Index Cond: (to_tsvector('en'::regconfig, regexp_replace(description,\n'(&[^;]+;)|(<[^>]+>)|([\\\\s\\\\r\\\\n\\\\t]+)'::text, ' '::text,\n'gs'::text)) @@ ''' Dhaka'':* & ''univers'':* & ''bangladesh'':*'::tsquery)\n -> Index Scan using activity_pkey on activity (cost=0.00..0.29 rows=1\nwidth=36) (actual time=0.043..0.043 rows=0 loops=3)\n Index Cond: (activity.activityid = crmentity.crmid)\n -> Index Scan using activitygrouprelation_activityid_idx on\nactivitygrouprelation (cost=0.00..0.29 rows=1 width=27) (actual\ntime=0.003..0.003\nrows=0 loops=1)\n Index Cond: (activitygrouprelation.activityid = crmentity.crmid)\n -> Index Scan using groups_groupname_idx on groups (cost=0.00..0.27 rows=1\nwidth=26) (actual time=0.001..0.001 rows=0 loops=1)\n Index Cond: ((groups.groupname)::text =\n(activitygrouprelation.groupname)::text)\n -> Index Scan using users_pkey on users (cost=0.00..0.29 rows=1 width=25)\n(actual time=0.033..0.035 rows=1 loops=1)\n Index Cond: (crmentity.smownerid = users.id)\n Total runtime: 5.229 ms\n(22 rows)\n\n\n\n\\d crmentity\n Table \"public.crmentity\"\n Column | Type | Modifiers\n--------------+-----------------------------+--------------------\n crmid | integer | not null\n smcreatorid | integer | not null default 0\n smownerid | integer | not null default 0\n modifiedby | integer | not null default 0\n setype | character varying(30) | not null\n description | text |\n createdtime | timestamp without time zone | not null\n modifiedtime | timestamp without time zone | not null\n viewedtime | timestamp without time zone |\n status | character varying(50) |\n version | integer | not null default 0\n presence | integer | default 1\n deleted | integer | not null default 0\nIndexes:\n \"crmentity_pkey\" PRIMARY KEY, btree (crmid)\n \"crmentity_createdtime_idx\" btree (createdtime)\n \"crmentity_modifiedby_idx\" btree (modifiedby)\n \"crmentity_modifiedtime_idx\" btree (modifiedtime)\n \"crmentity_smcreatorid_idx\" btree (smcreatorid)\n \"crmentity_smownerid_idx\" btree (smownerid)\n \"ftx_en_crmentity_description\" gin (to_tsvector('vcrm_en'::regconfig,\nfor_fts(description)))\n \"crmentity_deleted_idx\" btree (deleted)\nReferenced by:\n TABLE \"service\" CONSTRAINT \"fk_1_service\" FOREIGN KEY (serviceid)\nREFERENCES crmentity(crmid) ON DELETE CASCADE\n TABLE \"cc2crmentity\" CONSTRAINT \"fk_cc2crmentity_crmentity\" FOREIGN KEY\n(crm_id) REFERENCES crmentity(crmid) ON UPDATE CASCADE ON DELETE CASCADE\n\n\n\\d activity\n\n Table \"public.activity\"\n Column | Type | Modifiers\n------------------+------------------------+-------------------------------------------\n activityid | integer | not null default 0\n subject | character varying(250) | not null\n semodule | character varying(20) |\n activitytype 
| character varying(200) | not null\n date_start | date | not null\n due_date | date |\n time_start | character varying(50) |\n time_end | character varying(50) |\n sendnotification | character varying(3) | not null default '0'::character\nvarying\n duration_hours | character varying(2) |\n duration_minutes | character varying(200) |\n status | character varying(200) |\n eventstatus | character varying(200) |\n priority | character varying(200) |\n location | character varying(150) |\n notime | character varying(3) | not null default '0'::character varying\n visibility | character varying(50) | not null default 'all'::character\nvarying\n recurringtype | character varying(200) |\n end_date | date |\n end_time | character varying(50) |\nIndexes:\n \"activity_pkey\" PRIMARY KEY, btree (activityid)\n \"activity_activitytype_idx\" btree (activitytype)\n \"activity_date_start_idx\" btree (date_start)\n \"activity_due_date_idx\" btree (due_date)\n \"activity_eventstatus_idx\" btree (eventstatus)\n \"activity_status_idx\" btree (status)\n \"activity_subject_idx\" btree (subject)\n \"activity_time_start_idx\" btree (time_start)\n \"ftx_en_activity_subject\" gin (to_tsvector('vcrm_en'::regconfig,\nfor_fts(subject::text)))", "msg_date": "Mon, 29 Nov 2010 13:00:40 +0600", "msg_from": "AI Rumman <[email protected]>", "msg_from_op": true, "msg_subject": "Full Text index is not using during OR operation" }, { "msg_contents": "What does replace(' Dhaka University of Bangladesh:*', ' ',':* & ') means ?\nI see it produces something wrong for to_tsquery:\n\ntest=# select replace(' Dhaka University of Bangladesh:*', ' ',':* & ');\n replace \n---------------------------------------------------\n :* & Dhaka:* & University:* & of:* & Bangladesh:*\n(1 row)\n\nOleg\n\nOn Mon, 29 Nov 2010, AI Rumman wrote:\n\n> explain\n> SELECT crmentity.crmid, crmentity.setype, crmentity.modifiedtime,\n> activity.subject,case when ( users.user_name not like '') then\n> users.user_name else groups.groupname end as user_name, activity.date_start\n> FROM crmentity INNER JOIN activity ON crmentity.crmid = activity.activityid\n> and crmentity.deleted = 0\n> LEFT JOIN activitygrouprelation ON activitygrouprelation.activityid =\n> crmentity.crmid\n> LEFT JOIN groups ON groups.groupname = activitygrouprelation.groupname\n> LEFT join users ON crmentity.smownerid= users.id\n> WHERE\n> to_tsvector(' en', for_fts( activity.subject)) @@ to_tsquery(' en',\n> replace(' Dhaka University of Bangladesh:*', ' ',':* & '))\n> or\n> to_tsvector(' en', for_fts( crmentity.description)) @@ to_tsquery(' en',\n> replace(' Dhaka University of Bangladesh:*', 
' ',':* & '))\n> ORDER BY crmentity.modifiedtime DESC LIMIT 100\n>\n> QUERY PLAN\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=112724.54..112724.54 rows=1 width=99)\n> -> Sort (cost=112724.54..112724.54 rows=1 width=99)\n> Sort Key: crmentity.modifiedtime\n> -> Nested Loop Left Join (cost=0.00..112724.53 rows=1 width=99)\n> -> Nested Loop Left Join (cost=0.00..112724.24 rows=1 width=82)\n> -> Nested Loop Left Join (cost=0.00..112723.96 rows=1 width=79)\n> -> Nested Loop (cost=0.00..112723.68 rows=1 width=56)\n> Join Filter: ((to_tsvector('en'::regconfig,\n> regexp_replace((activity.subject)::text,\n> '(&[^;]+;)|(<[^>]+>)|([\\\\s\\\\r\\\\n\\\\t]+)'::text, ''::text, 'gs'::text)) @@ '''\n> Dhaka'':* & ''univers'':* & ''bangladesh'':*'::tsquery) OR\n> (to_tsvector('en'::regconfig, regexp_replace(crmentity.description,\n> '(&[^;]+;)|(<[^>]+>)|([\\\\s\\\\r\\\\n\\\\t]+)'::text, ' '::text, 'gs'::text)) @@\n> ''' Dhaka'':* & ''univers'':* & ''bangladesh'':*'::tsquery))\n> -> Index Scan using activity_pkey on activity (cost=0.00..10223.89\n> rows=343070 width=36)\n> -> Index Scan using crmentity_pkey on crmentity (cost=0.00..0.27 rows=1\n> width=151)\n> Index Cond: (crmentity.crmid = activity.activityid)\n> Filter: (crmentity.deleted = 0)\n> -> Index Scan using activitygrouprelation_activityid_idx on\n> activitygrouprelation (cost=0.00..0.27 rows=1 width=27)\n> Index Cond: (activitygrouprelation.activityid = crmentity.crmid)\n> -> Index Scan using groups_groupname_idx on groups (cost=0.00..0.27 rows=1\n> width=26)\n> Index Cond: ((groups.groupname)::text =\n> (activitygrouprelation.groupname)::text)\n> -> Index Scan using users_pkey on users (cost=0.00..0.27 rows=1 width=25)\n> Index Cond: (crmentity.smownerid = users.id)\n>\n>\n> The above query are not using fts indexes, even hang the server.\n>\n> But,\n>\n>\n> explain\n> SELECT crmentity.crmid, crmentity.setype, crmentity.modifiedtime,\n> activity.subject,case when ( users.user_name not like '') then\n> users.user_name else groups.groupname end as user_name, activity.date_start\n> FROM crmentity INNER JOIN activity ON crmentity.crmid = activity.activityid\n> and crmentity.deleted = 0\n> LEFT JOIN activitygrouprelation ON activitygrouprelation.activityid =\n> crmentity.crmid\n> LEFT JOIN groups ON groups.groupname = activitygrouprelation.groupname\n> LEFT join users ON crmentity.smownerid= users.id\n> WHERE\n> to_tsvector(' en', for_fts( activity.subject)) @@ to_tsquery(' en',\n> replace(' Dhaka University of Bangladesh:*', ' ',':* & '))\n> ORDER BY crmentity.modifiedtime DESC LIMIT 100\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> Limit (cost=1.46..1.47 rows=1 width=99) (actual time=0.824..0.824 rows=0\n> loops=1)\n> -> Sort (cost=1.46..1.47 rows=1 width=99) (actual time=0.819..0.819 rows=0\n> loops=1)\n> Sort Key: crmentity.modifiedtime\n> Sort Method: quicksort Memory: 17kB\n> -> Nested Loop Left Join (cost=0.27..1.45 rows=1 width=99) (actual\n> time=0.752..0.752 rows=0 loops=1)\n> -> Nested Loop Left Join (cost=0.27..1.17 rows=1 width=82) (actual\n> time=0.750..0.750 rows=0 loops=1)\n> -> Nested Loop Left Join (cost=0.27..0.88 rows=1 width=79) (actual\n> time=0.748..0.748 rows=0 loops=1)\n> -> Nested Loop (cost=0.27..0.60 rows=1 width=56) (actual time=0.746..0.746\n> rows=0 loops=1)\n> -> 
Bitmap Heap Scan on activity (cost=0.27..0.30 rows=1 width=36) (actual\n> time=0.744..0.744 rows=0 loops=1)\n> Recheck Cond: (to_tsvector('en'::regconfig,\n> regexp_replace((subject)::text,\n> '(&[^;]+;)|(<[^>]+>)|([\\\\s\\\\r\\\\n\\\\t]+)'::text, ' '::text,\n> 'gs'::text)) @@ ''' Dhaka'':* & ''univers'':* & ''bangladesh'':*'::tsquery)\n> -> Bitmap Index Scan on ftx_en_activity_subject (cost=0.00..0.27 rows=1\n> width=0) (actual time=0.740..0.740 rows=0 loops=1)\n> Index Cond: (to_tsvector('en'::regconfig, regexp_replace((subject)::text,\n> '(&[^;]+;)|(<[^>]+>)|([\\\\s\\\\r\\\\n\\\\t]+)'::text, ' '::te\n> xt, 'gs'::text)) @@ ''' Dhaka'':* & ''univers'':* &\n> ''bangladesh'':*'::tsquery)\n> -> Index Scan using crmentity_pkey on crmentity (cost=0.00..0.29 rows=1\n> width=24) (never executed)\n> Index Cond: (crmentity.crmid = activity.activityid)\n> Filter: (crmentity.deleted = 0)\n> -> Index Scan using activitygrouprelation_activityid_idx on\n> activitygrouprelation (cost=0.00..0.27 rows=1 width=27) (never executed)\n> Index Cond: (activitygrouprelation.activityid = crmentity.crmid)\n> -> Index Scan using groups_groupname_idx on groups (cost=0.00..0.27 rows=1\n> width=26) (never executed)\n> Index Cond: ((groups.groupname)::text =\n> (activitygrouprelation.groupname)::text)\n> -> Index Scan using users_pkey on users (cost=0.00..0.27 rows=1 width=25)\n> (never executed)\n> Index Cond: (crmentity.smownerid = users.id)\n> Total runtime: 1.188 ms\n>\n>\n>\n>\n> explain\n> SELECT crmentity.crmid, crmentity.setype, crmentity.modifiedtime,\n> activity.subject,case when ( users.user_name not like '') then\n> users.user_name else groups.groupname end as user_name, activity.date_start\n> FROM crmentity INNER JOIN activity ON crmentity.crmid = activity.activityid\n> and crmentity.deleted = 0\n> LEFT JOIN activitygrouprelation ON activitygrouprelation.activityid =\n> crmentity.crmid\n> LEFT JOIN groups ON groups.groupname = activitygrouprelation.groupname\n> LEFT join users ON crmentity.smownerid= users.id\n> WHERE\n> to_tsvector(' en', for_fts( crmentity.description)) @@ to_tsquery(' en',\n> replace(' Dhaka University of Bangladesh:*', ' ',':* & '))\n> ORDER BY crmentity.modifiedtime DESC LIMIT 100\n>\n> QUERY PLAN\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> Limit (cost=1.50..1.51 rows=1 width=99) (actual time=5.044..5.047 rows=1\n> loops=1)\n> -> Sort (cost=1.50..1.51 rows=1 width=99) (actual time=5.041..5.042 rows=1\n> loops=1)\n> Sort Key: crmentity.modifiedtime\n> Sort Method: quicksort Memory: 17kB\n> -> Nested Loop Left Join (cost=0.27..1.49 rows=1 width=99) (actual\n> time=4.998..5.012 rows=1 loops=1)\n> -> Nested Loop Left Join (cost=0.27..1.19 rows=1 width=82) (actual\n> time=4.952..4.961 rows=1 loops=1)\n> -> Nested Loop Left Join (cost=0.27..0.90 rows=1 width=79) (actual\n> time=4.949..4.956 rows=1 loops=1)\n> -> Nested Loop (cost=0.27..0.60 rows=1 width=56) (actual time=4.943..4.948\n> rows=1 loops=1)\n> -> Bitmap Heap Scan on crmentity (cost=0.27..0.30 rows=1 width=24) (actual\n> time=4.727..4.799 rows=3 loops=1)\n> Recheck Cond: (to_tsvector('en'::regconfig, regexp_replace(description,\n> '(&[^;]+;)|(<[^>]+>)|([\\\\s\\\\r\\\\n\\\\t]+)'::text, ' '::text, 'gs'\n> ::text)) @@ ''' Dhaka'':* & ''univers'':* & ''bangladesh'':*'::tsquery)\n> Filter: (deleted = 0)\n> -> Bitmap Index Scan on ftx_en_crmentity_description (cost=0.00..0.27\n> rows=1 
width=0) (actual time=4.687..4.687 rows=3 loops=1)\n> Index Cond: (to_tsvector('en'::regconfig, regexp_replace(description,\n> '(&[^;]+;)|(<[^>]+>)|([\\\\s\\\\r\\\\n\\\\t]+)'::text, ' '::text,\n> 'gs'::text)) @@ ''' Dhaka'':* & ''univers'':* & ''bangladesh'':*'::tsquery)\n> -> Index Scan using activity_pkey on activity (cost=0.00..0.29 rows=1\n> width=36) (actual time=0.043..0.043 rows=0 loops=3)\n> Index Cond: (activity.activityid = crmentity.crmid)\n> -> Index Scan using activitygrouprelation_activityid_idx on\n> activitygrouprelation (cost=0.00..0.29 rows=1 width=27) (actual\n> time=0.003..0.003\n> rows=0 loops=1)\n> Index Cond: (activitygrouprelation.activityid = crmentity.crmid)\n> -> Index Scan using groups_groupname_idx on groups (cost=0.00..0.27 rows=1\n> width=26) (actual time=0.001..0.001 rows=0 loops=1)\n> Index Cond: ((groups.groupname)::text =\n> (activitygrouprelation.groupname)::text)\n> -> Index Scan using users_pkey on users (cost=0.00..0.29 rows=1 width=25)\n> (actual time=0.033..0.035 rows=1 loops=1)\n> Index Cond: (crmentity.smownerid = users.id)\n> Total runtime: 5.229 ms\n> (22 rows)\n>\n>\n>\n> \\d crmentity\n> Table \"public.crmentity\"\n> Column | Type | Modifiers\n> --------------+-----------------------------+--------------------\n> crmid | integer | not null\n> smcreatorid | integer | not null default 0\n> smownerid | integer | not null default 0\n> modifiedby | integer | not null default 0\n> setype | character varying(30) | not null\n> description | text |\n> createdtime | timestamp without time zone | not null\n> modifiedtime | timestamp without time zone | not null\n> viewedtime | timestamp without time zone |\n> status | character varying(50) |\n> version | integer | not null default 0\n> presence | integer | default 1\n> deleted | integer | not null default 0\n> Indexes:\n> \"crmentity_pkey\" PRIMARY KEY, btree (crmid)\n> \"crmentity_createdtime_idx\" btree (createdtime)\n> \"crmentity_modifiedby_idx\" btree (modifiedby)\n> \"crmentity_modifiedtime_idx\" btree (modifiedtime)\n> \"crmentity_smcreatorid_idx\" btree (smcreatorid)\n> \"crmentity_smownerid_idx\" btree (smownerid)\n> \"ftx_en_crmentity_description\" gin (to_tsvector('vcrm_en'::regconfig,\n> for_fts(description)))\n> \"crmentity_deleted_idx\" btree (deleted)\n> Referenced by:\n> TABLE \"service\" CONSTRAINT \"fk_1_service\" FOREIGN KEY (serviceid)\n> REFERENCES crmentity(crmid) ON DELETE CASCADE\n> TABLE \"cc2crmentity\" CONSTRAINT \"fk_cc2crmentity_crmentity\" FOREIGN KEY\n> (crm_id) REFERENCES crmentity(crmid) ON UPDATE CASCADE ON DELETE CASCADE\n>\n>\n> \\d activity\n>\n> Table \"public.activity\"\n> Column | Type | Modifiers\n> ------------------+------------------------+-------------------------------------------\n> activityid | integer | not null default 0\n> subject | character varying(250) | not null\n> semodule | character varying(20) |\n> activitytype | character varying(200) | not null\n> date_start | date | not null\n> due_date | date |\n> time_start | character varying(50) |\n> time_end | character varying(50) |\n> sendnotification | character varying(3) | not null default '0'::character\n> varying\n> duration_hours | character varying(2) |\n> duration_minutes | character varying(200) |\n> status | character varying(200) |\n> eventstatus | character varying(200) |\n> priority | character varying(200) |\n> location | character varying(150) |\n> notime | character varying(3) | not null default '0'::character varying\n> visibility | character varying(50) | not null default 
'all'::character\n> varying\n> recurringtype | character varying(200) |\n> end_date | date |\n> end_time | character varying(50) |\n> Indexes:\n> \"activity_pkey\" PRIMARY KEY, btree (activityid)\n> \"activity_activitytype_idx\" btree (activitytype)\n> \"activity_date_start_idx\" btree (date_start)\n> \"activity_due_date_idx\" btree (due_date)\n> \"activity_eventstatus_idx\" btree (eventstatus)\n> \"activity_status_idx\" btree (status)\n> \"activity_subject_idx\" btree (subject)\n> \"activity_time_start_idx\" btree (time_start)\n> \"ftx_en_activity_subject\" gin (to_tsvector('vcrm_en'::regconfig,\n> for_fts(subject::text)))\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Mon, 29 Nov 2010 15:37:54 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full Text index is not using during OR operation" }, { "msg_contents": "Oh! Actualy it is:\nselect replace('Dhaka University of Bangladesh:*', ' ',':* & ');\nNo space at start.\n\nOn Mon, Nov 29, 2010 at 6:37 PM, Oleg Bartunov <[email protected]> wrote:\n\n> What does replace(' Dhaka University of Bangladesh:*', ' ',':* & ') means ?\n> I see it produces something wrong for to_tsquery:\n>\n> test=# select replace(' Dhaka University of Bangladesh:*', ' ',':* & ');\n>\n> replace\n> ---------------------------------------------------\n> :* & Dhaka:* & University:* & of:* & Bangladesh:*\n> (1 row)\n>\n> Oleg\n>\n>\n> On Mon, 29 Nov 2010, AI Rumman wrote:\n>\n> explain\n>> SELECT crmentity.crmid, crmentity.setype, crmentity.modifiedtime,\n>> activity.subject,case when ( users.user_name not like '') then\n>> users.user_name else groups.groupname end as user_name,\n>> activity.date_start\n>> FROM crmentity INNER JOIN activity ON crmentity.crmid =\n>> activity.activityid\n>> and crmentity.deleted = 0\n>> LEFT JOIN activitygrouprelation ON activitygrouprelation.activityid =\n>> crmentity.crmid\n>> LEFT JOIN groups ON groups.groupname = activitygrouprelation.groupname\n>> LEFT join users ON crmentity.smownerid= users.id\n>> WHERE\n>> to_tsvector(' en', for_fts( activity.subject)) @@ to_tsquery(' en',\n>> replace(' Dhaka University of Bangladesh:*', ' ',':* & '))\n>> or\n>> to_tsvector(' en', for_fts( crmentity.description)) @@ to_tsquery(' en',\n>> replace(' Dhaka University of Bangladesh:*', ' ',':* & '))\n>> ORDER BY crmentity.modifiedtime DESC LIMIT 100\n>>\n>> QUERY PLAN\n>>\n>>\n>> ------------------------------------------------------------------------------------------------------------------------------------------\n>> Limit (cost=112724.54..112724.54 rows=1 width=99)\n>> -> Sort (cost=112724.54..112724.54 rows=1 width=99)\n>> Sort Key: crmentity.modifiedtime\n>> -> Nested Loop Left Join (cost=0.00..112724.53 rows=1 width=99)\n>> -> Nested Loop Left Join (cost=0.00..112724.24 rows=1 width=82)\n>> -> Nested Loop Left Join (cost=0.00..112723.96 rows=1 width=79)\n>> -> Nested Loop (cost=0.00..112723.68 rows=1 width=56)\n>> Join Filter: ((to_tsvector('en'::regconfig,\n>> regexp_replace((activity.subject)::text,\n>> '(&[^;]+;)|(<[^>]+>)|([\\\\s\\\\r\\\\n\\\\t]+)'::text, ''::text, 'gs'::text)) @@\n>> '''\n>> Dhaka'':* & ''univers'':* & ''bangladesh'':*'::tsquery) OR\n>> (to_tsvector('en'::regconfig, 
regexp_replace(crmentity.description,\n>> '(&[^;]+;)|(<[^>]+>)|([\\\\s\\\\r\\\\n\\\\t]+)'::text, ' '::text, 'gs'::text)) @@\n>> ''' Dhaka'':* & ''univers'':* & ''bangladesh'':*'::tsquery))\n>> -> Index Scan using activity_pkey on activity (cost=0.00..10223.89\n>> rows=343070 width=36)\n>> -> Index Scan using crmentity_pkey on crmentity (cost=0.00..0.27 rows=1\n>> width=151)\n>> Index Cond: (crmentity.crmid = activity.activityid)\n>> Filter: (crmentity.deleted = 0)\n>> -> Index Scan using activitygrouprelation_activityid_idx on\n>> activitygrouprelation (cost=0.00..0.27 rows=1 width=27)\n>> Index Cond: (activitygrouprelation.activityid = crmentity.crmid)\n>> -> Index Scan using groups_groupname_idx on groups (cost=0.00..0.27\n>> rows=1\n>> width=26)\n>> Index Cond: ((groups.groupname)::text =\n>> (activitygrouprelation.groupname)::text)\n>> -> Index Scan using users_pkey on users (cost=0.00..0.27 rows=1 width=25)\n>> Index Cond: (crmentity.smownerid = users.id)\n>>\n>>\n>> The above query are not using fts indexes, even hang the server.\n>>\n>> But,\n>>\n>>\n>> explain\n>> SELECT crmentity.crmid, crmentity.setype, crmentity.modifiedtime,\n>> activity.subject,case when ( users.user_name not like '') then\n>> users.user_name else groups.groupname end as user_name,\n>> activity.date_start\n>> FROM crmentity INNER JOIN activity ON crmentity.crmid =\n>> activity.activityid\n>> and crmentity.deleted = 0\n>> LEFT JOIN activitygrouprelation ON activitygrouprelation.activityid =\n>> crmentity.crmid\n>> LEFT JOIN groups ON groups.groupname = activitygrouprelation.groupname\n>> LEFT join users ON crmentity.smownerid= users.id\n>> WHERE\n>> to_tsvector(' en', for_fts( activity.subject)) @@ to_tsquery(' en',\n>> replace(' Dhaka University of Bangladesh:*', ' ',':* & '))\n>> ORDER BY crmentity.modifiedtime DESC LIMIT 100\n>>\n>>\n>> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>\n>> Limit (cost=1.46..1.47 rows=1 width=99) (actual time=0.824..0.824 rows=0\n>> loops=1)\n>> -> Sort (cost=1.46..1.47 rows=1 width=99) (actual time=0.819..0.819\n>> rows=0\n>> loops=1)\n>> Sort Key: crmentity.modifiedtime\n>> Sort Method: quicksort Memory: 17kB\n>> -> Nested Loop Left Join (cost=0.27..1.45 rows=1 width=99) (actual\n>> time=0.752..0.752 rows=0 loops=1)\n>> -> Nested Loop Left Join (cost=0.27..1.17 rows=1 width=82) (actual\n>> time=0.750..0.750 rows=0 loops=1)\n>> -> Nested Loop Left Join (cost=0.27..0.88 rows=1 width=79) (actual\n>> time=0.748..0.748 rows=0 loops=1)\n>> -> Nested Loop (cost=0.27..0.60 rows=1 width=56) (actual\n>> time=0.746..0.746\n>> rows=0 loops=1)\n>> -> Bitmap Heap Scan on activity (cost=0.27..0.30 rows=1 width=36) (actual\n>> time=0.744..0.744 rows=0 loops=1)\n>> Recheck Cond: (to_tsvector('en'::regconfig,\n>> regexp_replace((subject)::text,\n>> '(&[^;]+;)|(<[^>]+>)|([\\\\s\\\\r\\\\n\\\\t]+)'::text, ' '::text,\n>> 'gs'::text)) @@ ''' Dhaka'':* & ''univers'':* &\n>> ''bangladesh'':*'::tsquery)\n>> -> Bitmap Index Scan on ftx_en_activity_subject (cost=0.00..0.27 rows=1\n>> width=0) (actual time=0.740..0.740 rows=0 loops=1)\n>> Index Cond: (to_tsvector('en'::regconfig, regexp_replace((subject)::text,\n>> '(&[^;]+;)|(<[^>]+>)|([\\\\s\\\\r\\\\n\\\\t]+)'::text, ' '::te\n>> xt, 'gs'::text)) @@ ''' Dhaka'':* & ''univers'':* &\n>> ''bangladesh'':*'::tsquery)\n>> -> Index Scan using crmentity_pkey on crmentity (cost=0.00..0.29 rows=1\n>> width=24) (never executed)\n>> 
Index Cond: (crmentity.crmid = activity.activityid)\n>> Filter: (crmentity.deleted = 0)\n>> -> Index Scan using activitygrouprelation_activityid_idx on\n>> activitygrouprelation (cost=0.00..0.27 rows=1 width=27) (never executed)\n>> Index Cond: (activitygrouprelation.activityid = crmentity.crmid)\n>> -> Index Scan using groups_groupname_idx on groups (cost=0.00..0.27\n>> rows=1\n>> width=26) (never executed)\n>> Index Cond: ((groups.groupname)::text =\n>> (activitygrouprelation.groupname)::text)\n>> -> Index Scan using users_pkey on users (cost=0.00..0.27 rows=1 width=25)\n>> (never executed)\n>> Index Cond: (crmentity.smownerid = users.id)\n>> Total runtime: 1.188 ms\n>>\n>>\n>>\n>>\n>> explain\n>> SELECT crmentity.crmid, crmentity.setype, crmentity.modifiedtime,\n>> activity.subject,case when ( users.user_name not like '') then\n>> users.user_name else groups.groupname end as user_name,\n>> activity.date_start\n>> FROM crmentity INNER JOIN activity ON crmentity.crmid =\n>> activity.activityid\n>> and crmentity.deleted = 0\n>> LEFT JOIN activitygrouprelation ON activitygrouprelation.activityid =\n>> crmentity.crmid\n>> LEFT JOIN groups ON groups.groupname = activitygrouprelation.groupname\n>> LEFT join users ON crmentity.smownerid= users.id\n>> WHERE\n>> to_tsvector(' en', for_fts( crmentity.description)) @@ to_tsquery(' en',\n>> replace(' Dhaka University of Bangladesh:*', ' ',':* & '))\n>> ORDER BY crmentity.modifiedtime DESC LIMIT 100\n>>\n>> QUERY PLAN\n>>\n>>\n>> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>\n>> Limit (cost=1.50..1.51 rows=1 width=99) (actual time=5.044..5.047 rows=1\n>> loops=1)\n>> -> Sort (cost=1.50..1.51 rows=1 width=99) (actual time=5.041..5.042\n>> rows=1\n>> loops=1)\n>> Sort Key: crmentity.modifiedtime\n>> Sort Method: quicksort Memory: 17kB\n>> -> Nested Loop Left Join (cost=0.27..1.49 rows=1 width=99) (actual\n>> time=4.998..5.012 rows=1 loops=1)\n>> -> Nested Loop Left Join (cost=0.27..1.19 rows=1 width=82) (actual\n>> time=4.952..4.961 rows=1 loops=1)\n>> -> Nested Loop Left Join (cost=0.27..0.90 rows=1 width=79) (actual\n>> time=4.949..4.956 rows=1 loops=1)\n>> -> Nested Loop (cost=0.27..0.60 rows=1 width=56) (actual\n>> time=4.943..4.948\n>> rows=1 loops=1)\n>> -> Bitmap Heap Scan on crmentity (cost=0.27..0.30 rows=1 width=24)\n>> (actual\n>> time=4.727..4.799 rows=3 loops=1)\n>> Recheck Cond: (to_tsvector('en'::regconfig, regexp_replace(description,\n>> '(&[^;]+;)|(<[^>]+>)|([\\\\s\\\\r\\\\n\\\\t]+)'::text, ' '::text, 'gs'\n>> ::text)) @@ ''' Dhaka'':* & ''univers'':* & ''bangladesh'':*'::tsquery)\n>> Filter: (deleted = 0)\n>> -> Bitmap Index Scan on ftx_en_crmentity_description (cost=0.00..0.27\n>> rows=1 width=0) (actual time=4.687..4.687 rows=3 loops=1)\n>> Index Cond: (to_tsvector('en'::regconfig, regexp_replace(description,\n>> '(&[^;]+;)|(<[^>]+>)|([\\\\s\\\\r\\\\n\\\\t]+)'::text, ' '::text,\n>> 'gs'::text)) @@ ''' Dhaka'':* & ''univers'':* &\n>> ''bangladesh'':*'::tsquery)\n>> -> Index Scan using activity_pkey on activity (cost=0.00..0.29 rows=1\n>> width=36) (actual time=0.043..0.043 rows=0 loops=3)\n>> Index Cond: (activity.activityid = crmentity.crmid)\n>> -> Index Scan using activitygrouprelation_activityid_idx on\n>> activitygrouprelation (cost=0.00..0.29 rows=1 width=27) (actual\n>> time=0.003..0.003\n>> rows=0 loops=1)\n>> Index Cond: (activitygrouprelation.activityid = crmentity.crmid)\n>> -> Index Scan 
using groups_groupname_idx on groups (cost=0.00..0.27\n>> rows=1\n>> width=26) (actual time=0.001..0.001 rows=0 loops=1)\n>> Index Cond: ((groups.groupname)::text =\n>> (activitygrouprelation.groupname)::text)\n>> -> Index Scan using users_pkey on users (cost=0.00..0.29 rows=1 width=25)\n>> (actual time=0.033..0.035 rows=1 loops=1)\n>> Index Cond: (crmentity.smownerid = users.id)\n>> Total runtime: 5.229 ms\n>> (22 rows)\n>>\n>>\n>>\n>> \\d crmentity\n>> Table \"public.crmentity\"\n>> Column | Type | Modifiers\n>> --------------+-----------------------------+--------------------\n>> crmid | integer | not null\n>> smcreatorid | integer | not null default 0\n>> smownerid | integer | not null default 0\n>> modifiedby | integer | not null default 0\n>> setype | character varying(30) | not null\n>> description | text |\n>> createdtime | timestamp without time zone | not null\n>> modifiedtime | timestamp without time zone | not null\n>> viewedtime | timestamp without time zone |\n>> status | character varying(50) |\n>> version | integer | not null default 0\n>> presence | integer | default 1\n>> deleted | integer | not null default 0\n>> Indexes:\n>> \"crmentity_pkey\" PRIMARY KEY, btree (crmid)\n>> \"crmentity_createdtime_idx\" btree (createdtime)\n>> \"crmentity_modifiedby_idx\" btree (modifiedby)\n>> \"crmentity_modifiedtime_idx\" btree (modifiedtime)\n>> \"crmentity_smcreatorid_idx\" btree (smcreatorid)\n>> \"crmentity_smownerid_idx\" btree (smownerid)\n>> \"ftx_en_crmentity_description\" gin (to_tsvector('vcrm_en'::regconfig,\n>> for_fts(description)))\n>> \"crmentity_deleted_idx\" btree (deleted)\n>> Referenced by:\n>> TABLE \"service\" CONSTRAINT \"fk_1_service\" FOREIGN KEY (serviceid)\n>> REFERENCES crmentity(crmid) ON DELETE CASCADE\n>> TABLE \"cc2crmentity\" CONSTRAINT \"fk_cc2crmentity_crmentity\" FOREIGN KEY\n>> (crm_id) REFERENCES crmentity(crmid) ON UPDATE CASCADE ON DELETE CASCADE\n>>\n>>\n>> \\d activity\n>>\n>> Table \"public.activity\"\n>> Column | Type | Modifiers\n>>\n>> ------------------+------------------------+-------------------------------------------\n>> activityid | integer | not null default 0\n>> subject | character varying(250) | not null\n>> semodule | character varying(20) |\n>> activitytype | character varying(200) | not null\n>> date_start | date | not null\n>> due_date | date |\n>> time_start | character varying(50) |\n>> time_end | character varying(50) |\n>> sendnotification | character varying(3) | not null default '0'::character\n>> varying\n>> duration_hours | character varying(2) |\n>> duration_minutes | character varying(200) |\n>> status | character varying(200) |\n>> eventstatus | character varying(200) |\n>> priority | character varying(200) |\n>> location | character varying(150) |\n>> notime | character varying(3) | not null default '0'::character varying\n>> visibility | character varying(50) | not null default 'all'::character\n>> varying\n>> recurringtype | character varying(200) |\n>> end_date | date |\n>> end_time | character varying(50) |\n>> Indexes:\n>> \"activity_pkey\" PRIMARY KEY, btree (activityid)\n>> \"activity_activitytype_idx\" btree (activitytype)\n>> \"activity_date_start_idx\" btree (date_start)\n>> \"activity_due_date_idx\" btree (due_date)\n>> \"activity_eventstatus_idx\" btree (eventstatus)\n>> \"activity_status_idx\" btree (status)\n>> \"activity_subject_idx\" btree (subject)\n>> \"activity_time_start_idx\" btree (time_start)\n>> \"ftx_en_activity_subject\" gin (to_tsvector('vcrm_en'::regconfig,\n>> 
for_fts(subject::text)))\n>>\n>>\n> Regards,\n> Oleg\n> _____________________________________________________________\n> Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\n> Sternberg Astronomical Institute, Moscow University, Russia\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(495)939-16-83, +007(495)939-23-83\n>\n", "msg_date": "Mon, 29 Nov 2010 20:03:24 +0600", "msg_from": "AI Rumman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Full Text index is not using during OR operation" }, { "msg_contents": "Just a general note re the subject, I've also had troubles with\npostgres being unable to optimize a query with OR. The work-around,\nalthough a bit messy, was to use a UNION-query instead.\n", "msg_date": "Mon, 29 Nov 2010 15:32:16 +0100", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full Text index is not using during OR operation" }, { "msg_contents": "On Mon, 29 Nov 2010, AI Rumman wrote:\n\n> Oh! Actualy it is:\n> select replace('Dhaka University of Bangladesh:*', ' ',':* & ');\n> No space at start.\n\nSo, what are actual problems with full text ? I mostly interesting with \nserver crush. 
We need test data, test query and error message.\n\n\n>\n> On Mon, Nov 29, 2010 at 6:37 PM, Oleg Bartunov <[email protected]> wrote:\n>\n>> What does replace(' Dhaka University of Bangladesh:*', ' ',':* & ') means ?\n>> I see it produces something wrong for to_tsquery:\n>>\n>> test=# select replace(' Dhaka University of Bangladesh:*', ' ',':* & ');\n>>\n>> replace\n>> ---------------------------------------------------\n>> :* & Dhaka:* & University:* & of:* & Bangladesh:*\n>> (1 row)\n>>\n>> Oleg\n>>\n>>\n>> On Mon, 29 Nov 2010, AI Rumman wrote:\n>>\n>> explain\n>>> SELECT crmentity.crmid, crmentity.setype, crmentity.modifiedtime,\n>>> activity.subject,case when ( users.user_name not like '') then\n>>> users.user_name else groups.groupname end as user_name,\n>>> activity.date_start\n>>> FROM crmentity INNER JOIN activity ON crmentity.crmid =\n>>> activity.activityid\n>>> and crmentity.deleted = 0\n>>> LEFT JOIN activitygrouprelation ON activitygrouprelation.activityid =\n>>> crmentity.crmid\n>>> LEFT JOIN groups ON groups.groupname = activitygrouprelation.groupname\n>>> LEFT join users ON crmentity.smownerid= users.id\n>>> WHERE\n>>> to_tsvector(' en', for_fts( activity.subject)) @@ to_tsquery(' en',\n>>> replace(' Dhaka University of Bangladesh:*', ' ',':* & '))\n>>> or\n>>> to_tsvector(' en', for_fts( crmentity.description)) @@ to_tsquery(' en',\n>>> replace(' Dhaka University of Bangladesh:*', ' ',':* & '))\n>>> ORDER BY crmentity.modifiedtime DESC LIMIT 100\n>>>\n>>> QUERY PLAN\n>>>\n>>>\n>>> ------------------------------------------------------------------------------------------------------------------------------------------\n>>> Limit (cost=112724.54..112724.54 rows=1 width=99)\n>>> -> Sort (cost=112724.54..112724.54 rows=1 width=99)\n>>> Sort Key: crmentity.modifiedtime\n>>> -> Nested Loop Left Join (cost=0.00..112724.53 rows=1 width=99)\n>>> -> Nested Loop Left Join (cost=0.00..112724.24 rows=1 width=82)\n>>> -> Nested Loop Left Join (cost=0.00..112723.96 rows=1 width=79)\n>>> -> Nested Loop (cost=0.00..112723.68 rows=1 width=56)\n>>> Join Filter: ((to_tsvector('en'::regconfig,\n>>> regexp_replace((activity.subject)::text,\n>>> '(&[^;]+;)|(<[^>]+>)|([\\\\s\\\\r\\\\n\\\\t]+)'::text, ''::text, 'gs'::text)) @@\n>>> '''\n>>> Dhaka'':* & ''univers'':* & ''bangladesh'':*'::tsquery) OR\n>>> (to_tsvector('en'::regconfig, regexp_replace(crmentity.description,\n>>> '(&[^;]+;)|(<[^>]+>)|([\\\\s\\\\r\\\\n\\\\t]+)'::text, ' '::text, 'gs'::text)) @@\n>>> ''' Dhaka'':* & ''univers'':* & ''bangladesh'':*'::tsquery))\n>>> -> Index Scan using activity_pkey on activity (cost=0.00..10223.89\n>>> rows=343070 width=36)\n>>> -> Index Scan using crmentity_pkey on crmentity (cost=0.00..0.27 rows=1\n>>> width=151)\n>>> Index Cond: (crmentity.crmid = activity.activityid)\n>>> Filter: (crmentity.deleted = 0)\n>>> -> Index Scan using activitygrouprelation_activityid_idx on\n>>> activitygrouprelation (cost=0.00..0.27 rows=1 width=27)\n>>> Index Cond: (activitygrouprelation.activityid = crmentity.crmid)\n>>> -> Index Scan using groups_groupname_idx on groups (cost=0.00..0.27\n>>> rows=1\n>>> width=26)\n>>> Index Cond: ((groups.groupname)::text =\n>>> (activitygrouprelation.groupname)::text)\n>>> -> Index Scan using users_pkey on users (cost=0.00..0.27 rows=1 width=25)\n>>> Index Cond: (crmentity.smownerid = users.id)\n>>>\n>>>\n>>> The above query are not using fts indexes, even hang the server.\n>>>\n>>> But,\n>>>\n>>>\n>>> explain\n>>> SELECT crmentity.crmid, crmentity.setype, crmentity.modifiedtime,\n>>> 
activity.subject,case when ( users.user_name not like '') then\n>>> users.user_name else groups.groupname end as user_name,\n>>> activity.date_start\n>>> FROM crmentity INNER JOIN activity ON crmentity.crmid =\n>>> activity.activityid\n>>> and crmentity.deleted = 0\n>>> LEFT JOIN activitygrouprelation ON activitygrouprelation.activityid =\n>>> crmentity.crmid\n>>> LEFT JOIN groups ON groups.groupname = activitygrouprelation.groupname\n>>> LEFT join users ON crmentity.smownerid= users.id\n>>> WHERE\n>>> to_tsvector(' en', for_fts( activity.subject)) @@ to_tsquery(' en',\n>>> replace(' Dhaka University of Bangladesh:*', ' ',':* & '))\n>>> ORDER BY crmentity.modifiedtime DESC LIMIT 100\n>>>\n>>>\n>>> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>>\n>>> Limit (cost=1.46..1.47 rows=1 width=99) (actual time=0.824..0.824 rows=0\n>>> loops=1)\n>>> -> Sort (cost=1.46..1.47 rows=1 width=99) (actual time=0.819..0.819\n>>> rows=0\n>>> loops=1)\n>>> Sort Key: crmentity.modifiedtime\n>>> Sort Method: quicksort Memory: 17kB\n>>> -> Nested Loop Left Join (cost=0.27..1.45 rows=1 width=99) (actual\n>>> time=0.752..0.752 rows=0 loops=1)\n>>> -> Nested Loop Left Join (cost=0.27..1.17 rows=1 width=82) (actual\n>>> time=0.750..0.750 rows=0 loops=1)\n>>> -> Nested Loop Left Join (cost=0.27..0.88 rows=1 width=79) (actual\n>>> time=0.748..0.748 rows=0 loops=1)\n>>> -> Nested Loop (cost=0.27..0.60 rows=1 width=56) (actual\n>>> time=0.746..0.746\n>>> rows=0 loops=1)\n>>> -> Bitmap Heap Scan on activity (cost=0.27..0.30 rows=1 width=36) (actual\n>>> time=0.744..0.744 rows=0 loops=1)\n>>> Recheck Cond: (to_tsvector('en'::regconfig,\n>>> regexp_replace((subject)::text,\n>>> '(&[^;]+;)|(<[^>]+>)|([\\\\s\\\\r\\\\n\\\\t]+)'::text, ' '::text,\n>>> 'gs'::text)) @@ ''' Dhaka'':* & ''univers'':* &\n>>> ''bangladesh'':*'::tsquery)\n>>> -> Bitmap Index Scan on ftx_en_activity_subject (cost=0.00..0.27 rows=1\n>>> width=0) (actual time=0.740..0.740 rows=0 loops=1)\n>>> Index Cond: (to_tsvector('en'::regconfig, regexp_replace((subject)::text,\n>>> '(&[^;]+;)|(<[^>]+>)|([\\\\s\\\\r\\\\n\\\\t]+)'::text, ' '::te\n>>> xt, 'gs'::text)) @@ ''' Dhaka'':* & ''univers'':* &\n>>> ''bangladesh'':*'::tsquery)\n>>> -> Index Scan using crmentity_pkey on crmentity (cost=0.00..0.29 rows=1\n>>> width=24) (never executed)\n>>> Index Cond: (crmentity.crmid = activity.activityid)\n>>> Filter: (crmentity.deleted = 0)\n>>> -> Index Scan using activitygrouprelation_activityid_idx on\n>>> activitygrouprelation (cost=0.00..0.27 rows=1 width=27) (never executed)\n>>> Index Cond: (activitygrouprelation.activityid = crmentity.crmid)\n>>> -> Index Scan using groups_groupname_idx on groups (cost=0.00..0.27\n>>> rows=1\n>>> width=26) (never executed)\n>>> Index Cond: ((groups.groupname)::text =\n>>> (activitygrouprelation.groupname)::text)\n>>> -> Index Scan using users_pkey on users (cost=0.00..0.27 rows=1 width=25)\n>>> (never executed)\n>>> Index Cond: (crmentity.smownerid = users.id)\n>>> Total runtime: 1.188 ms\n>>>\n>>>\n>>>\n>>>\n>>> explain\n>>> SELECT crmentity.crmid, crmentity.setype, crmentity.modifiedtime,\n>>> activity.subject,case when ( users.user_name not like '') then\n>>> users.user_name else groups.groupname end as user_name,\n>>> activity.date_start\n>>> FROM crmentity INNER JOIN activity ON crmentity.crmid =\n>>> activity.activityid\n>>> and crmentity.deleted = 0\n>>> LEFT JOIN activitygrouprelation ON 
activitygrouprelation.activityid =\n>>> crmentity.crmid\n>>> LEFT JOIN groups ON groups.groupname = activitygrouprelation.groupname\n>>> LEFT join users ON crmentity.smownerid= users.id\n>>> WHERE\n>>> to_tsvector(' en', for_fts( crmentity.description)) @@ to_tsquery(' en',\n>>> replace(' Dhaka University of Bangladesh:*', ' ',':* & '))\n>>> ORDER BY crmentity.modifiedtime DESC LIMIT 100\n>>>\n>>> QUERY PLAN\n>>>\n>>>\n>>> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>>\n>>> Limit (cost=1.50..1.51 rows=1 width=99) (actual time=5.044..5.047 rows=1\n>>> loops=1)\n>>> -> Sort (cost=1.50..1.51 rows=1 width=99) (actual time=5.041..5.042\n>>> rows=1\n>>> loops=1)\n>>> Sort Key: crmentity.modifiedtime\n>>> Sort Method: quicksort Memory: 17kB\n>>> -> Nested Loop Left Join (cost=0.27..1.49 rows=1 width=99) (actual\n>>> time=4.998..5.012 rows=1 loops=1)\n>>> -> Nested Loop Left Join (cost=0.27..1.19 rows=1 width=82) (actual\n>>> time=4.952..4.961 rows=1 loops=1)\n>>> -> Nested Loop Left Join (cost=0.27..0.90 rows=1 width=79) (actual\n>>> time=4.949..4.956 rows=1 loops=1)\n>>> -> Nested Loop (cost=0.27..0.60 rows=1 width=56) (actual\n>>> time=4.943..4.948\n>>> rows=1 loops=1)\n>>> -> Bitmap Heap Scan on crmentity (cost=0.27..0.30 rows=1 width=24)\n>>> (actual\n>>> time=4.727..4.799 rows=3 loops=1)\n>>> Recheck Cond: (to_tsvector('en'::regconfig, regexp_replace(description,\n>>> '(&[^;]+;)|(<[^>]+>)|([\\\\s\\\\r\\\\n\\\\t]+)'::text, ' '::text, 'gs'\n>>> ::text)) @@ ''' Dhaka'':* & ''univers'':* & ''bangladesh'':*'::tsquery)\n>>> Filter: (deleted = 0)\n>>> -> Bitmap Index Scan on ftx_en_crmentity_description (cost=0.00..0.27\n>>> rows=1 width=0) (actual time=4.687..4.687 rows=3 loops=1)\n>>> Index Cond: (to_tsvector('en'::regconfig, regexp_replace(description,\n>>> '(&[^;]+;)|(<[^>]+>)|([\\\\s\\\\r\\\\n\\\\t]+)'::text, ' '::text,\n>>> 'gs'::text)) @@ ''' Dhaka'':* & ''univers'':* &\n>>> ''bangladesh'':*'::tsquery)\n>>> -> Index Scan using activity_pkey on activity (cost=0.00..0.29 rows=1\n>>> width=36) (actual time=0.043..0.043 rows=0 loops=3)\n>>> Index Cond: (activity.activityid = crmentity.crmid)\n>>> -> Index Scan using activitygrouprelation_activityid_idx on\n>>> activitygrouprelation (cost=0.00..0.29 rows=1 width=27) (actual\n>>> time=0.003..0.003\n>>> rows=0 loops=1)\n>>> Index Cond: (activitygrouprelation.activityid = crmentity.crmid)\n>>> -> Index Scan using groups_groupname_idx on groups (cost=0.00..0.27\n>>> rows=1\n>>> width=26) (actual time=0.001..0.001 rows=0 loops=1)\n>>> Index Cond: ((groups.groupname)::text =\n>>> (activitygrouprelation.groupname)::text)\n>>> -> Index Scan using users_pkey on users (cost=0.00..0.29 rows=1 width=25)\n>>> (actual time=0.033..0.035 rows=1 loops=1)\n>>> Index Cond: (crmentity.smownerid = users.id)\n>>> Total runtime: 5.229 ms\n>>> (22 rows)\n>>>\n>>>\n>>>\n>>> \\d crmentity\n>>> Table \"public.crmentity\"\n>>> Column | Type | Modifiers\n>>> --------------+-----------------------------+--------------------\n>>> crmid | integer | not null\n>>> smcreatorid | integer | not null default 0\n>>> smownerid | integer | not null default 0\n>>> modifiedby | integer | not null default 0\n>>> setype | character varying(30) | not null\n>>> description | text |\n>>> createdtime | timestamp without time zone | not null\n>>> modifiedtime | timestamp without time zone | not null\n>>> viewedtime | timestamp without time zone |\n>>> status | 
character varying(50) |\n>>> version | integer | not null default 0\n>>> presence | integer | default 1\n>>> deleted | integer | not null default 0\n>>> Indexes:\n>>> \"crmentity_pkey\" PRIMARY KEY, btree (crmid)\n>>> \"crmentity_createdtime_idx\" btree (createdtime)\n>>> \"crmentity_modifiedby_idx\" btree (modifiedby)\n>>> \"crmentity_modifiedtime_idx\" btree (modifiedtime)\n>>> \"crmentity_smcreatorid_idx\" btree (smcreatorid)\n>>> \"crmentity_smownerid_idx\" btree (smownerid)\n>>> \"ftx_en_crmentity_description\" gin (to_tsvector('vcrm_en'::regconfig,\n>>> for_fts(description)))\n>>> \"crmentity_deleted_idx\" btree (deleted)\n>>> Referenced by:\n>>> TABLE \"service\" CONSTRAINT \"fk_1_service\" FOREIGN KEY (serviceid)\n>>> REFERENCES crmentity(crmid) ON DELETE CASCADE\n>>> TABLE \"cc2crmentity\" CONSTRAINT \"fk_cc2crmentity_crmentity\" FOREIGN KEY\n>>> (crm_id) REFERENCES crmentity(crmid) ON UPDATE CASCADE ON DELETE CASCADE\n>>>\n>>>\n>>> \\d activity\n>>>\n>>> Table \"public.activity\"\n>>> Column | Type | Modifiers\n>>>\n>>> ------------------+------------------------+-------------------------------------------\n>>> activityid | integer | not null default 0\n>>> subject | character varying(250) | not null\n>>> semodule | character varying(20) |\n>>> activitytype | character varying(200) | not null\n>>> date_start | date | not null\n>>> due_date | date |\n>>> time_start | character varying(50) |\n>>> time_end | character varying(50) |\n>>> sendnotification | character varying(3) | not null default '0'::character\n>>> varying\n>>> duration_hours | character varying(2) |\n>>> duration_minutes | character varying(200) |\n>>> status | character varying(200) |\n>>> eventstatus | character varying(200) |\n>>> priority | character varying(200) |\n>>> location | character varying(150) |\n>>> notime | character varying(3) | not null default '0'::character varying\n>>> visibility | character varying(50) | not null default 'all'::character\n>>> varying\n>>> recurringtype | character varying(200) |\n>>> end_date | date |\n>>> end_time | character varying(50) |\n>>> Indexes:\n>>> \"activity_pkey\" PRIMARY KEY, btree (activityid)\n>>> \"activity_activitytype_idx\" btree (activitytype)\n>>> \"activity_date_start_idx\" btree (date_start)\n>>> \"activity_due_date_idx\" btree (due_date)\n>>> \"activity_eventstatus_idx\" btree (eventstatus)\n>>> \"activity_status_idx\" btree (status)\n>>> \"activity_subject_idx\" btree (subject)\n>>> \"activity_time_start_idx\" btree (time_start)\n>>> \"ftx_en_activity_subject\" gin (to_tsvector('vcrm_en'::regconfig,\n>>> for_fts(subject::text)))\n>>>\n>>>\n>> Regards,\n>> Oleg\n>> _____________________________________________________________\n>> Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\n>> Sternberg Astronomical Institute, Moscow University, Russia\n>> Internet: [email protected], http://www.sai.msu.su/~megera/\n>> phone: +007(495)939-16-83, +007(495)939-23-83\n>>\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Mon, 29 Nov 2010 18:02:50 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full Text index is not using during OR operation" } ]
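As a concrete follow-up to the thread above, Tobias's UNION workaround might look roughly like the sketch below. It is untested against this schema: it keeps only the activity join from the posted query (the users/groups joins and the CASE on user_name are dropped for brevity), reuses the thread's for_fts() helper, and assumes the 'en' search configuration shown in the plans. Each branch is then the same shape as the two fast single-condition plans posted in the thread, so each can use its own GIN index, and the UNION also deduplicates rows that match on both columns.

    SELECT crmid, setype, modifiedtime, subject
    FROM (
        SELECT crmentity.crmid, crmentity.setype, crmentity.modifiedtime, activity.subject
        FROM crmentity
        JOIN activity ON crmentity.crmid = activity.activityid AND crmentity.deleted = 0
        WHERE to_tsvector('en', for_fts(activity.subject))
              @@ to_tsquery('en', 'Dhaka:* & University:* & Bangladesh:*')
        UNION
        SELECT crmentity.crmid, crmentity.setype, crmentity.modifiedtime, activity.subject
        FROM crmentity
        JOIN activity ON crmentity.crmid = activity.activityid AND crmentity.deleted = 0
        WHERE to_tsvector('en', for_fts(crmentity.description))
              @@ to_tsquery('en', 'Dhaka:* & University:* & Bangladesh:*')
    ) AS hits
    ORDER BY modifiedtime DESC
    LIMIT 100;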
[ { "msg_contents": "\nHi,\nI am working on a performance issue with a partitioned table. Some of my SQL\nstatements against this partitioned table stay in a waiting state for a long time. I\nhave queried waiting=true in pg_stat_activity. Now, is there a way to find\nout which SQL statement is making the other statements wait?\n\nThanks for your help.\n", "msg_date": "Tue, 30 Nov 2010 03:38:00 -0800 (PST)", "msg_from": "bakkiya <[email protected]>", "msg_from_op": true, "msg_subject": "postgresql statements are waiting" }, 
{ "msg_contents": "bakkiya <[email protected]> wrote:\n \n> I am working on a performance issue with a partitioned table. Some\n> of my SQL statements against this partitioned table stay in a\n> waiting state for a long time. I have queried waiting=true in\n> pg_stat_activity. Now, is there a way to find out which SQL\n> statement is making the other statements wait?\n \nYou probably need to take a look at pg_locks for details. If you\ncombine information from pg_locks, pg_stat_activity, and pg_class,\nyou can get a pretty good idea what's up.\n \n-Kevin\n", "msg_date": "Tue, 30 Nov 2010 10:52:53 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql statements are waiting" }, 
{ "msg_contents": "On Tue, Nov 30, 2010 at 4:38 AM, bakkiya <[email protected]> wrote:\n>\n> Hi,\n> I am working on a performance issue with a partitioned table. Some of my SQL\n> statements against this partitioned table stay in a waiting state for a long time. I\n> have queried waiting=true in pg_stat_activity. Now, is there a way to find\n> out which SQL statement is making the other statements wait?\n\nhttp://wiki.postgresql.org/wiki/Lock_Monitoring\n", "msg_date": "Tue, 30 Nov 2010 09:54:48 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql statements are waiting" } ]
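To make the pg_locks/pg_stat_activity suggestion concrete, here is a sketch in the spirit of the Lock_Monitoring wiki page. It is written against the 8.4/9.0 catalogs (procpid and current_query; later releases rename these to pid and query) and is approximate: it pairs each waiting backend with any backend holding a granted lock on the same object, without checking lock-mode compatibility.

    SELECT w.pid            AS waiting_pid,
           wa.current_query AS waiting_query,
           h.pid            AS holding_pid,
           ha.current_query AS holding_query
    FROM pg_locks w
    JOIN pg_stat_activity wa ON wa.procpid = w.pid
    JOIN pg_locks h
      ON h.granted
     AND h.pid <> w.pid
     AND h.locktype = w.locktype
     AND h.database      IS NOT DISTINCT FROM w.database
     AND h.relation      IS NOT DISTINCT FROM w.relation
     AND h.page          IS NOT DISTINCT FROM w.page
     AND h.tuple         IS NOT DISTINCT FROM w.tuple
     AND h.transactionid IS NOT DISTINCT FROM w.transactionid
    JOIN pg_stat_activity ha ON ha.procpid = h.pid
    WHERE NOT w.granted;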
[ { "msg_contents": "Hello.\n\nHow to use tid scans? This below not works :-(\nAlways is used merge join.\n\n\nDROP TABLE IF EXISTS test1;\nCREATE TABLE test1 as select i,hashint4(i)::text from\ngenerate_series(1,10000) as a(i);\n\nDROP TABLE IF EXISTS test2;\nCREATE TABLE test2 as select j,j%10000 as i,null::tid as ct from\ngenerate_series(1,1000000) as a(j);\n\nUPDATE test2 SET ct=test1.ctid\nFROM test1 WHERE test2.i=test1.i;\n\nVACUUM ANALYZE test1;\nVACUUM ANALYZE test2;\n\nSET enable_tidscan = true;\n\nSELECT * FROM test1 join test2 on(test1.ctid=test2.ct)\n\n------------------------\nExplain analyze\n------------------------\n\n\"Merge Join (cost=249703.68..283698.78 rows=1999633 width=28) (actual\ntime=7567.582..19524.865 rows=999900 loops=1)\"\n\" Output: test1.i, test1.hashint4, test2.j, test2.i, test2.ct\"\n\" Merge Cond: (test2.ct = test1.ctid)\"\n\" -> Sort (cost=248955.55..253955.30 rows=1999900 width=14) (actual\ntime=7513.539..10361.598 rows=999901 loops=1)\"\n\" Output: test2.j, test2.i, test2.ct\"\n\" Sort Key: test2.ct\"\n\" Sort Method: external sort Disk: 23456kB\"\n\" -> Seq Scan on test2 (cost=0.00..16456.80 rows=1999900\nwidth=14) (actual time=0.551..2234.130 rows=1000000 loops=1)\"\n\" Output: test2.j, test2.i, test2.ct\"\n\" -> Sort (cost=748.14..773.14 rows=10000 width=20) (actual\ntime=54.020..2193.688 rows=999901 loops=1)\"\n\" Output: test1.i, test1.hashint4, test1.ctid\"\n\" Sort Key: test1.ctid\"\n\" Sort Method: quicksort Memory: 960kB\"\n\" -> Seq Scan on test1 (cost=0.00..83.75 rows=10000 width=20)\n(actual time=0.030..26.205 rows=10000 loops=1)\"\n\" Output: test1.i, test1.hashint4, test1.ctid\"\n\"Total runtime: 21635.881 ms\"\n\n\n------------\npasman\n", "msg_date": "Tue, 30 Nov 2010 12:43:05 +0100", "msg_from": "=?ISO-8859-2?Q?pasman_pasma=F1ski?= <[email protected]>", "msg_from_op": true, "msg_subject": "tidscan not work ? Pg 8.4.5 + WinXP" }, { "msg_contents": "pasman pasma*ski<[email protected]> wrote:\n \n> How to use tid scans?\n \nWrite a query where they are the fastest way to retrieve the data,\nand make sure your PostgreSQL installation is properly configured.\n \n> This below not works :-( Always is used merge join.\n \n> SELECT * FROM test1 join test2 on(test1.ctid=test2.ct)\n \nYou're reading through the entirety of two tables matching rows\nbetween them. What makes you think random access would be faster\nthan sequential? If all this data is cached, then maybe random\naccess could win, but you would need to configure your PostgreSQL to\nexpect that.\n \nHave you read this page:\n \nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n \n-Kevin\n", "msg_date": "Tue, 30 Nov 2010 10:44:06 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tidscan not work ? Pg 8.4.5 + WinXP" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> pasman pasma*ski<[email protected]> wrote:\n>> This below not works :-( Always is used merge join.\n \n>> SELECT * FROM test1 join test2 on(test1.ctid=test2.ct)\n \n> You're reading through the entirety of two tables matching rows\n> between them. What makes you think random access would be faster\n> than sequential?\n\nFWIW, it isn't going to happen anyway, because the TID scan mechanism\ndoesn't support scanning based on a join condition. That hasn't gotten\nto the top of the to-do list because the use case is almost vanishingly\nsmall. 
ctids generally aren't stable enough for it to be useful to\nstore references to one table's ctids in another table.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Nov 2010 11:49:55 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tidscan not work ? Pg 8.4.5 + WinXP " } ]
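For completeness, the TID scan path is used when ctid is compared directly to a constant (or via WHERE CURRENT OF a cursor), just not when it has to be driven from a join condition. A quick sanity check against the test1 table from this thread would be, roughly:

    EXPLAIN SELECT * FROM test1 WHERE ctid = '(0,1)';
    -- expected to show a plan of the form:
    --   Tid Scan on test1
    --     TID Cond: (ctid = '(0,1)'::tid)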
[ { "msg_contents": "I have a query that's running an IN/Subselect that joins three different \ntables and gets a list of IDs to compare against... the subselect \nbasically looks for records through a join table based on the 3rd \ntable's name, similar to:\n\n... IN (SELECT id FROM foo, foo_bar, bar\n WHERE foo.id = foo_bar.foo_id\n AND bar.id = foo_bar.bar_id\n AND bar.name = \"something\") ...\n\nThis is all nested in a fairly complex query, and several of these \nsubselects operate on different tables within the query. The whole \nthing, on some high-cardinality cases, can take 2.5 seconds to run \n(clearly something can be done about that).\n\nSo in this example, the cardinality of the bar table is very low, and \nfairly constant, something on the order of 5-7 records. In an \noptimization attempt, I reduced the joins in the subselect from 2 to 1 \nby passing in the ID of the bar with the correct name, which I can \neasily cache application-side or pre-fetch in a single query. Now it \nlooks like this:\n\n... IN (SELECT id FROM foo, foo_bar\n WHERE foo.id = foo_bar.foo_id\n AND foo_bar.bar_id = 1) ...\n\nCrazy thing is, that single optimization reduced the query time \nsignificantly, from 2.5-3 seconds down to 40-60ms.\n\nDoes anyone have any kind of explanation for this? Are the inner \nworkings of the IN clause taking the plan for the subselect into account \nwhen running, and doing something clever with it? Any insight on the \ninternal mechanisms of IN or subselects in Postgres would be greatly \nappreciated if anyone knows more.\n\nAlso, are there any better ways you can think of doing such an IN query, \nusing non-subselect means that might be more efficient?\n\nThanks in advance, any advice/help understanding this better is greatly \nappreciated.\n", "msg_date": "Tue, 30 Nov 2010 12:43:24 -0500", "msg_from": "\"T.H.\" <[email protected]>", "msg_from_op": true, "msg_subject": "Question about subselect/IN performance" }, { "msg_contents": "\"T.H.\" <[email protected]> wrote:\n \n> Also, are there any better ways you can think of doing such an IN\n> query, using non-subselect means that might be more efficient?\n \nHave you tried the EXISTS predicate?\n \n-Kevin\n", "msg_date": "Tue, 30 Nov 2010 16:54:55 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about subselect/IN performance" }, { "msg_contents": "On 11/30/10 5:54 PM, Kevin Grittner wrote:\n> \"T.H.\"<[email protected]> wrote:\n>\n>> Also, are there any better ways you can think of doing such an IN\n>> query, using non-subselect means that might be more efficient?\n>\n> Have you tried the EXISTS predicate?\n>\n> -Kevin\n>\n\nJust looking into it now, thanks for the suggestion. Is there a reason \nthat EXISTS is generally faster than IN for this sort of query?\n\n-Tristan\n", "msg_date": "Tue, 30 Nov 2010 18:23:56 -0500", "msg_from": "\"T.H.\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Question about subselect/IN performance" }, { "msg_contents": "On Tue, Nov 30, 2010 at 3:23 PM, T.H. <[email protected]> wrote:\n> Just looking into it now, thanks for the suggestion. Is there a reason that\n> EXISTS is generally faster than IN for this sort of query?\n>\n> -Tristan\n\nExists will return immediately upon finding a match -- assuming there is one.\n", "msg_date": "Tue, 30 Nov 2010 17:32:16 -0800", "msg_from": "bricklen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question about subselect/IN performance" } ]
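For reference, the two shapes being compared look roughly like this; outer_table and its foo_id column stand in for whatever the real (unposted) outer query uses:

    -- the thread's IN form, with bar pre-resolved to bar_id = 1
    SELECT ot.*
    FROM   outer_table ot
    WHERE  ot.foo_id IN (SELECT foo.id
                         FROM   foo, foo_bar
                         WHERE  foo.id = foo_bar.foo_id
                           AND  foo_bar.bar_id = 1);

    -- EXISTS form; it can stop at the first match per outer row, and if
    -- foo_bar.foo_id is a foreign key to foo, the join to foo is redundant
    SELECT ot.*
    FROM   outer_table ot
    WHERE  EXISTS (SELECT 1
                   FROM   foo_bar fb
                   WHERE  fb.foo_id = ot.foo_id
                     AND  fb.bar_id = 1);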
[ { "msg_contents": ">FWIW, it isn't going to happen anyway, because the TID scan mechanism\n>doesn't support scanning based on a join condition. That hasn't gotten\n>to the top of the to-do list because the use case is almost vanishingly\n>small. ctids generally aren't stable enough for it to be useful to\n>store references to one table's ctids in another table.\n\nThanks for explanation.\n\n\n-- \n------------\npasman\n", "msg_date": "Wed, 1 Dec 2010 12:38:21 +0100", "msg_from": "=?ISO-8859-2?Q?pasman_pasma=F1ski?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tidscan not work ? Pg 8.4.5 + WinXP" } ]
[ { "msg_contents": "In Oracle, deferrable primary keys are enforced by non-unique indexes. \nThat seems logical, because index should tolerate duplicate values for \nthe duration of transaction:\n\n Connected to:\n Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production\n With the Partitioning, OLAP, Data Mining and Real Application\n Testing options\n\n SQL> create table test1\n 2 (col1 integer,\n 3 constraint test1_pk primary key(col1) deferrable);\n\n Table created.\n\n Elapsed: 00:00:00.35\n SQL> select uniqueness from user_indexes where index_name='TEST1_PK';\n\n UNIQUENES\n ---------\n NONUNIQUE\n\nPostgreSQL 9.0, however, creates a unique index:\n\n scott=# create table test1\n scott-# (col1 integer,\n scott(# constraint test1_pk primary key(col1) deferrable);\n NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index\n \"test1_pk\" for table \"test1\"\n CREATE TABLE\n Time: 67.263 ms\n scott=# select indexdef from pg_indexes where indexname='test1_pk';\n indexdef \n ----------------------------------------------------------\n CREATE UNIQUE INDEX test1_pk ON test1 USING btree (col1)\n (1 row)\n\nWhen the constraint is deferred in the transaction block, however, it \ntolerates duplicate values until the end of transaction:\n\n scott=# begin; \n BEGIN\n Time: 0.201 ms\n scott=# set constraints test1_pk deferred;\n SET CONSTRAINTS\n Time: 0.651 ms\n scott=# insert into test1 values(1);\n INSERT 0 1\n Time: 1.223 ms\n scott=# insert into test1 values(1);\n INSERT 0 1\n Time: 0.390 ms\n scott=# rollback;\n ROLLBACK\n Time: 0.254 ms\n scott=#\n\n\nNo errors here. How is it possible to insert the same value twice into a \nUNIQUE index? What's going on here?\n\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n", "msg_date": "Wed, 01 Dec 2010 11:46:27 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": true, "msg_subject": "Clarification, please" }, { "msg_contents": "On Wed, Dec 1, 2010 at 8:46 AM, Mladen Gogala <[email protected]> wrote:\n\n> PostgreSQL 9.0, however, creates a unique index:\n>\n>   scott=# create table test1\n>   scott-# (col1 integer,\n>   scott(#  constraint test1_pk primary key(col1) deferrable);\n>   NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index\n>   \"test1_pk\" for table \"test1\"\n>   CREATE TABLE\n>   Time: 67.263 ms\n>   scott=# select indexdef from pg_indexes where indexname='test1_pk';\n>                            indexdef\n> ----------------------------------------------------------\n>    CREATE UNIQUE INDEX test1_pk ON test1 USING btree (col1)\n>   (1 row)\n>\n> When the constraint is deferred in the transaction block, however, it\n> tolerates duplicate values until the end of transaction:\n>\n>   scott=# begin;                               BEGIN\n>   Time: 0.201 ms\n>   scott=# set constraints test1_pk  deferred;\n>   SET CONSTRAINTS\n>   Time: 0.651 ms\n>   scott=# insert into test1 values(1);\n>   INSERT 0 1\n>   Time: 1.223 ms\n>   scott=# insert into test1 values(1);\n>   INSERT 0 1\n>   Time: 0.390 ms\n>   scott=# rollback;\n>   ROLLBACK\n>   Time: 0.254 ms\n>   scott=#\n>\n>\n> No errors here. How is it possible to insert the same value twice into a\n> UNIQUE index? What's going on here?\n\nhttp://www.postgresql.org/docs/9.0/interactive/sql-createtable.html\nDEFERRABLE\nNOT DEFERRABLE\n\n This controls whether the constraint can be deferred. 
A constraint\nthat is not deferrable will be checked immediately after every\ncommand. Checking of constraints that are deferrable can be postponed\nuntil the end of the transaction (using the SET CONSTRAINTS command).\nNOT DEFERRABLE is the default. Currently, only UNIQUE, PRIMARY KEY,\nEXCLUDE, and REFERENCES (foreign key) constraints accept this clause.\nNOT NULL and CHECK constraints are not deferrable.\n\n\nIt looks like the check isn't preformed until COMMIT.\n\n-- \nRegards,\nRichard Broersma Jr.\n", "msg_date": "Wed, 1 Dec 2010 08:57:01 -0800", "msg_from": "Richard Broersma <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Clarification, please" }, { "msg_contents": "Richard Broersma wrote:\n>\n> It looks like the check isn't preformed until COMMIT.\n>\n> \nSo, the index is not actually updated until commit? Hmmmm, that seems \nunlikely.\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n", "msg_date": "Wed, 01 Dec 2010 12:06:35 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Clarification, please" }, { "msg_contents": "Mladen Gogala <[email protected]> wrote:\n \n> How is it possible to insert the same value twice into a UNIQUE\n> index?\n \nYou get multiple entries for the same value in a UNIQUE indexes all\nthe time in PostgreSQL. Any non-HOT update of a table with a UNIQUE\nindex will cause that. You just can't have duplicate entries with\noverlapping visibility.\n \n-Kevin\n", "msg_date": "Wed, 01 Dec 2010 11:13:04 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Clarification, please" } ]
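Continuing the session from the thread above: the deferred check does fire, just at commit time rather than per statement, which is exactly what the documentation excerpt describes (output abbreviated; the exact error wording may differ slightly by version):

    scott=# begin;
    scott=# set constraints test1_pk deferred;
    scott=# insert into test1 values(1);
    scott=# insert into test1 values(1);
    scott=# commit;
    ERROR:  duplicate key value violates unique constraint "test1_pk"
    DETAIL:  Key (col1)=(1) already exists.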
[ { "msg_contents": "Hello Postgres Users.\n\nIn the last few days I've installed and configured the new x64 release of PostgreSQL \nrunning on Windows 2008 R2 with dual XEON 5530 processors (2x4xHT = 16 \nworking units). Previously the database was running on Fedora 12 x86_64 \nunder the Microsoft hypervisor (Hyper-V), where a network card \ndriver limitation left only one core available.\n\nThe problem I'm facing is a very long, single transaction lasting about \n12 hours or even more (PostgreSQL has no equivalent of Oracle's PRAGMA \nAUTONOMOUS_TRANSACTION) that consists of tons of PL/pgSQL code, processing a lot \nof data and causing huge CPU load and disk transfers.\nSince moving to the x64 system described above, the shared memory size \nis not a problem anymore and the disk channel is running very smoothly; the \nonly surprising thing is that the transaction above utilizes only one \ncore of the machine - is it possible to parallelize it without rewriting \nall the code from scratch?\n\nIs there any configuration parameter limiting the number of CPUs? The \nrelease is the standard/public x64 binary of PostgreSQL 9.0.1, downloaded \nfrom the official site.\n\nThanks in advance for any help.\n\nPiotr Czekalski\n\n-- \n\n--------------------------------------------------------------\n\"TECHBAZA.PL\" Sp. z o.o.\nTechnologie WEB, eDB& eCommerce\nOddział Gliwice\nul. Chorzowska 50\n44-100 Gliwice\ntel. (+4832) 7186081\nfax. (+4832) 7003289\n\n\n", "msg_date": "Thu, 02 Dec 2010 22:04:28 +0100", "msg_from": "Piotr Czekalski <[email protected]>", "msg_from_op": true, "msg_subject": "Multicore Postgres 9.0.1 issue - single transaction problem." }, 
{ "msg_contents": "On Thu, Dec 2, 2010 at 2:04 PM, Piotr Czekalski <[email protected]> wrote:\n> Hello Postgres Users.\n>\n> In the last few days I've installed and configured the new x64 release of PostgreSQL\n> running on Windows 2008 R2 with dual XEON 5530 processors (2x4xHT = 16 working\n> units). Previously the database was running on Fedora 12 x86_64 under the\n> Microsoft hypervisor (Hyper-V), where a network card driver\n> limitation left only one core available.\n>\n> The problem I'm facing is a very long, single transaction lasting about\n> 12 hours or even more (PostgreSQL has no equivalent of Oracle's PRAGMA\n> AUTONOMOUS_TRANSACTION) that consists of tons of PL/pgSQL code, processing a lot of data\n> and causing huge CPU load and disk transfers.\n> Since moving to the x64 system described above, the shared memory size is\n> not a problem anymore and the disk channel is running very smoothly; the only\n> surprising thing is that the transaction above utilizes only one core of the\n> machine - is it possible to parallelize it without rewriting all the code\n> from scratch?\n\nA single connection uses a single CPU. While there's been some talk\nof parallelizing some part of pgsql, nothing I know of has been done.\n\nIs it possible you're doing parts in plpgsql that should really be\ndone externally / with a batch processing system or hadoop or\nsomething?\n", "msg_date": "Fri, 3 Dec 2010 00:21:58 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multicore Postgres 9.0.1 issue - single transaction problem." } ]
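One low-tech way to act on Scott's suggestion without leaving the database is to slice the work by key and run one connection per slice; everything below (work_table, process_chunk, the per-row body) is purely illustrative:

    -- hypothetical worker: handles only the rows in its modulo slice
    CREATE OR REPLACE FUNCTION process_chunk(worker int, workers int) RETURNS void AS $$
    DECLARE
        r record;
    BEGIN
        FOR r IN SELECT * FROM work_table WHERE id % workers = worker LOOP
            -- existing per-row PL/pgSQL processing would go here
            PERFORM 1;
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;

    -- launched from the shell as, say, eight parallel sessions:
    --   psql -d mydb -c "SELECT process_chunk(0, 8)" &
    --   ...
    --   psql -d mydb -c "SELECT process_chunk(7, 8)" &

Each session then runs and commits independently, which uses several cores and also avoids keeping one 12-hour transaction open.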
[ { "msg_contents": "I've developed PostBIX, a multithreaded daemon to monitor PostgreSQL, fully\nintegrated with Zabbix, released under GPL v.3, and written in Java.\n\nI hope people find this post useful.\nEveryone can download PostBIX from:\nhttp://www.smartmarmot.com\nSources are available on SourceForge, as is a binary distribution.\n\nRegards,\nAndrea Dalle Vacche\n", "msg_date": "Fri, 3 Dec 2010 14:48:15 +0100", "msg_from": "Andrea Dalle Vacche <[email protected]>", "msg_from_op": true, "msg_subject": "PostBIX to monitor postgresql performances" } ]
[ { "msg_contents": "Hi everyone,\n\nI've been trialling different inheritance schemes for partitioning to a large number of tables. I am looking at ~1e9 records, totaling ~200GB.\n\nI've found that a k-ary table inheritance tree works quite well to reduce the O(n) CHECK constraint overhead [1] in the query planner when enabling partition constraint exclusion.\n\nI've played with binary (k=2) trees, and have found that query planning time is shorter for shallow trees where k>>2. (It appears that \"more work\" spent checking CHECK constraints is faster than to recur down the inheritance tree. Is this because fewer table locks are involved?)\n\nA given tree structure (e.g. k=16) has a good query-plan time for SELECT queries in my case. The query-plan times, however, for UPDATE and DELETE are unfortunately quite quite bad. (I was surprised that query-planning time was not similar across all three queries?)\n\nMy machine swaps wildly when PostgreSQL plans an UPDATE or DELETE. It does not swap for the SELECT query planning at all. There is no noticeable memory growth by the postgres process for the SELECT plans. There is huge memory usage growth when running a query-plan for UPDATE or DELETE. The difference is something like going from 50MB to over 10GB of the process' virtual memory.\n\nI'm trialling PostgreSQL on a MacBook Pro having 8GB physical RAM.\n\n\nHere's an example, where the DDL for the inheritance tree [2] is generated by a Python script [3].\n\n1. Query planning time for a SELECT query\n\n> $ echo \"explain select * from ptest where id = 34324234; \\q\" | time -p psql ptest\n> QUERY PLAN\n> -------------------------------------------------------------------------------\n> Result (cost=0.00..160.00 rows=48 width=4)\n> -> Append (cost=0.00..160.00 rows=48 width=4)\n> -> Seq Scan on ptest (cost=0.00..40.00 rows=12 width=4)\n> Filter: (id = 34324234)\n> -> Seq Scan on ptest_0 ptest (cost=0.00..40.00 rows=12 width=4)\n> Filter: (id = 34324234)\n> -> Seq Scan on ptest_0_4 ptest (cost=0.00..40.00 rows=12 width=4)\n> Filter: (id = 34324234)\n> -> Seq Scan on ptest_0_4_1 ptest (cost=0.00..40.00 rows=12 width=4)\n> Filter: (id = 34324234)\n> (10 rows)\n> \n> real 0.99\n> user 0.00\n> sys 0.00\n> $\n\n2. Query planning time for a DELETE query\n\n> $ echo \"explain delete from ptest where id = 34324234; \\q\" | time -p psql ptest\n> QUERY PLAN\n> -------------------------------------------------------------------------\n> Delete (cost=0.00..160.00 rows=48 width=6)\n> -> Seq Scan on ptest (cost=0.00..40.00 rows=12 width=6)\n> Filter: (id = 34324234)\n> -> Seq Scan on ptest_0 ptest (cost=0.00..40.00 rows=12 width=6)\n> Filter: (id = 34324234)\n> -> Seq Scan on ptest_0_4 ptest (cost=0.00..40.00 rows=12 width=6)\n> Filter: (id = 34324234)\n> -> Seq Scan on ptest_0_4_1 ptest (cost=0.00..40.00 rows=12 width=6)\n> Filter: (id = 34324234)\n> (9 rows)\n> \n> real 317.14\n> user 0.00\n> sys 0.00\n> $\n\n3. 
Query planning time for an UPDATE query\n\n> $ echo \"explain update ptest set id = 34324235 where id = 34324234;\n> \\q\" | time -p psql ptest\n> QUERY PLAN\n> -------------------------------------------------------------------------\n> Update (cost=0.00..160.00 rows=48 width=6)\n> -> Seq Scan on ptest (cost=0.00..40.00 rows=12 width=6)\n> Filter: (id = 34324234)\n> -> Seq Scan on ptest_0 ptest (cost=0.00..40.00 rows=12 width=6)\n> Filter: (id = 34324234)\n> -> Seq Scan on ptest_0_4 ptest (cost=0.00..40.00 rows=12 width=6)\n> Filter: (id = 34324234)\n> -> Seq Scan on ptest_0_4_1 ptest (cost=0.00..40.00 rows=12 width=6)\n> Filter: (id = 34324234)\n> (9 rows)\n> \n> real 331.72\n> user 0.00\n> sys 0.00\n> $\n\n\nQuery planning on the leaf nodes works properly for all query-types:\n\n> $ echo \"explain delete from ptest_0_4_1 where id = 34324234; \\q\" | time -p psql ptest\n> QUERY PLAN \n> -------------------------------------------------------------------\n> Delete (cost=0.00..40.00 rows=12 width=6)\n> -> Seq Scan on ptest_0_4_1 (cost=0.00..40.00 rows=12 width=6)\n> Filter: (id = 34324234)\n> (3 rows)\n> \n> real 0.01\n> user 0.00\n> sys 0.00\n> \n> $ echo \"explain update ptest_0_4_1 set id = 34324235 where id = 34324234; \\q\" | time -p psql ptest\n> QUERY PLAN \n> -------------------------------------------------------------------\n> Update (cost=0.00..40.00 rows=12 width=6)\n> -> Seq Scan on ptest_0_4_1 (cost=0.00..40.00 rows=12 width=6)\n> Filter: (id = 34324234)\n> (3 rows)\n> \n> real 0.01\n> user 0.00\n> sys 0.00\n> $ \n\n\nWith SELECT constraint exclusion working, I can define plpgsql functions to UPDATE or DELETE the leaf tables directly, but using such an interface isn't terribly elegant.\n\nI therefore tried writing the plpgsql functions for UPDATE and DELETE anyway, with the idea of linking to a TRIGGER on the parent ptest table. This didn't work as expected either, unless I polluted my application's SQL queries with the \"ONLY\" keyword to make sure the trigger fired [4].\n\n\nIs the query-planning times and memory use as demonstrated above normal? I am hoping this is just a defect in the query-planner that we might be able to fix so that PostgreSQL can manage my large data set with more ease.\n\nAny advice appreciated,\n\nJohn\n\n\n[1] http://wiki.postgresql.org/wiki/Table_partitioning#SELECT.2C_UPDATE.2C_DELETE\n[2] http://jpap.org/files/partition-test.txt\n[3] http://jpap.org/files/partition-test.py\n[4] http://archives.postgresql.org/pgsql-hackers/2008-11/msg01883.php\n\n", "msg_date": "Fri, 03 Dec 2010 13:41:36 -0800", "msg_from": "John Papandriopoulos <[email protected]>", "msg_from_op": true, "msg_subject": "Query-plan for partitioned UPDATE/DELETE slow and swaps vmem compared\n\tto SELECT" }, { "msg_contents": "John Papandriopoulos <[email protected]> writes:\n> I've found that a k-ary table inheritance tree works quite well to\n> reduce the O(n) CHECK constraint overhead [1] in the query planner\n> when enabling partition constraint exclusion.\n\nUm ... you mean you're creating intermediate child tables for no reason\nexcept to reduce the number of direct descendants of any one table?\nThat's an utter waste of time, because the first thing the planner will\ndo with an inheritance tree is flatten it. 
Just create *one* parent\ntable and make all the leaf tables direct children of it.\n\n> My machine swaps wildly when PostgreSQL plans an UPDATE or DELETE.\n\nThis is a strong hint that you've got way too many child tables.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 04 Dec 2010 01:20:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query-plan for partitioned UPDATE/DELETE slow and swaps vmem\n\tcompared to SELECT" }, { "msg_contents": "On 12/3/10 10:20 PM, Tom Lane wrote:\n> John Papandriopoulos<[email protected]> writes:\n>> I've found that a k-ary table inheritance tree works quite well to\n>> reduce the O(n) CHECK constraint overhead [1] in the query planner\n>> when enabling partition constraint exclusion.\n> \n> Um ... you mean you're creating intermediate child tables for no reason\n> except to reduce the number of direct descendants of any one table?\n> That's an utter waste of time, because the first thing the planner will\n> do with an inheritance tree is flatten it. Just create *one* parent\n> table and make all the leaf tables direct children of it.\n> \n>> My machine swaps wildly when PostgreSQL plans an UPDATE or DELETE.\n> \n> This is a strong hint that you've got way too many child tables.\n\nThanks for your advice, Tom.\n\nI've recreated the same example with just one parent table, and 4096 child tables.\n\nSELECT query planning is lightning fast as before; DELETE and UPDATE cause my machine to swap.\n\nWhat's different about DELETE and UPDATE here? If I've way too many child tables, why isn't the SELECT query plan causing the same large memory usage?\n\nKindest,\nJohn\n\n", "msg_date": "Sat, 04 Dec 2010 01:20:21 -0800", "msg_from": "John Papandriopoulos <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query-plan for partitioned UPDATE/DELETE slow and swaps\n\tvmem compared to SELECT" }, { "msg_contents": "John Papandriopoulos <[email protected]> writes:\n> I've recreated the same example with just one parent table, and 4096 child tables.\n\n> SELECT query planning is lightning fast as before; DELETE and UPDATE cause my machine to swap.\n\n> What's different about DELETE and UPDATE here?\n\nHmm. Rules? Triggers? You seem to be assuming the problem is at the\nplanner stage but I'm not sure you've proven that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 04 Dec 2010 11:42:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query-plan for partitioned UPDATE/DELETE slow and swaps vmem\n\tcompared to SELECT" }, { "msg_contents": "\nOn 12/4/10 8:42 AM, Tom Lane wrote:\n> John Papandriopoulos<[email protected]> writes:\n>> I've recreated the same example with just one parent table, and 4096 child tables.\n>\n>> SELECT query planning is lightning fast as before; DELETE and UPDATE cause my machine to swap.\n>\n>> What's different about DELETE and UPDATE here?\n>\n> Hmm. Rules? Triggers? You seem to be assuming the problem is at the\n> planner stage but I'm not sure you've proven that.\n\n\nMy example starts off with a new database (e.g. 
createdb ptest).\n\nI set up my schema using a machine generated SQL file [1] that simply \ncreates a table\n\n create table ptest ( id integer );\n\nand N = 0..4095 inherited children\n\n create table ptest_N (\n check ( (id >= N_min) and (id <= N_max) )\n ) inherits (ptest);\n\nthat split the desired id::integer range into N buckets, one for each of \nthe N partitions.\n\nI then immediately run a query-plan using EXPLAIN that exhibits the \ndescribed behavior: super-fast plan for a SELECT statement, without \nswapping, and memory intensive (swapping) plans for DELETE and UPDATE.\n\nThere are no triggers, no rules, no plpgsql functions, no indexes and no \ninserted data.\n\n\nIs there a more simple example that might help me convince you that \nwe're exercising just the planner stage?\n\nKindest,\nJohn\n\n[1] http://jpap.org/files/partition-test-flat.txt\n\n", "msg_date": "Sat, 04 Dec 2010 13:34:28 -0800", "msg_from": "John Papandriopoulos <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query-plan for partitioned UPDATE/DELETE slow and swaps\n\tvmem compared to SELECT" }, { "msg_contents": "Tom Lane wrote:\n> Hmm. Rules? Triggers? You seem to be assuming the problem is at the\n> planner stage but I'm not sure you've proven that.\n>\n> \t\t\tregards, tom lane\n>\n> \nHmmm, I vaguely recollect a similar thread, started by me, although with \nfewer partitions. In my experience, planner doesn't do a very good job \nwith partitions, especially with things like \"min\" or \"max\" which should \nnot be resolved by a full table scan, if there are indexes on partitions.\n\n-- \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com \n\n", "msg_date": "Sat, 04 Dec 2010 16:58:10 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query-plan for partitioned UPDATE/DELETE slow and swaps\n\tvmem compared to SELECT" }, { "msg_contents": "John Papandriopoulos <[email protected]> writes:\n> I set up my schema using a machine generated SQL file [1] that simply \n> creates a table\n> create table ptest ( id integer );\n> and N = 0..4095 inherited children\n> create table ptest_N (\n> check ( (id >= N_min) and (id <= N_max) )\n> ) inherits (ptest);\n\n> that split the desired id::integer range into N buckets, one for each of \n> the N partitions.\n\n> I then immediately run a query-plan using EXPLAIN that exhibits the \n> described behavior: super-fast plan for a SELECT statement, without \n> swapping, and memory intensive (swapping) plans for DELETE and UPDATE.\n\n[ pokes at that for a bit ... ] Ah, I had forgotten that UPDATE/DELETE\ngo through inheritance_planner() while SELECT doesn't. And\ninheritance_planner() makes a copy of the querytree, including the\nalready-expanded range table, for each target relation. So the memory\nusage is O(N^2) in the number of child tables.\n\nIt's difficult to do much better than that in the general case where the\nchildren might have different rowtypes from the parent: you need a\ndistinct targetlist for each target relation. I expect that we can be a\nlot smarter when we have true partitioning support (which among other\nthings is going to have to enforce that all the children have identical\ncolumn sets). 
But the inheritance mechanism was never intended to scale\nto anything like this number of children.\n\nI remain of the opinion that you're using far too many child tables.\nPlease note the statement at the bottom of\nhttp://www.postgresql.org/docs/9.0/static/ddl-partitioning.html:\n\n\tPartitioning using these techniques will work well with up to\n\tperhaps a hundred partitions; don't try to use many thousands of\n\tpartitions. \n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 04 Dec 2010 17:40:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query-plan for partitioned UPDATE/DELETE slow and swaps vmem\n\tcompared to SELECT" }, { "msg_contents": "Tom Lane wrote:\n> \tPartitioning using these techniques will work well with up to\n> \tperhaps a hundred partitions; don't try to use many thousands of\n> \tpartitions. \n>\n> \t\t\tregards, tom lane\n> \nHmmm, what happens if I need 10 years of data, in monthly partitions? It \nwould be 120 partitions. Can you please elaborate on that limitation? \nAny plans on lifting that restriction?\n\n-- \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com \n\n", "msg_date": "Sat, 04 Dec 2010 18:19:29 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query-plan for partitioned UPDATE/DELETE slow and swaps\n\tvmem compared to SELECT" }, { "msg_contents": "Sunday, December 5, 2010, 12:19:29 AM you wrote:\n\n> Hmmm, what happens if I need 10 years of data, in monthly partitions? It \n> would be 120 partitions. Can you please elaborate on that limitation? \n> Any plans on lifting that restriction?\n\nI'm running a partitioning scheme using 256 tables with a maximum of 16\nmillion rows (namely IPv4-addresses) and a current total of about 2.5\nbillion rows, there are no deletes though, but lots of updates.\n\nUsing triggers or rules on the main table in my case showed to be not very\neffective, so I reverted to updating the inherited tables directly. This\nway you still can use a SELECT on the main table letting the optimizer do\nit's work, but do not run into the problem of oversized shared memory usage\nwhen doing DELETEs or UPDATEs\n\nIMHO if you are using large partitioning schemes, handle the logic of which\ntable to update or delete in your application. In most cases extending the\nunderlying application will be much less work and more flexible than trying\nto write a dynamic rule/trigger to do the same job.\n\n-- \nJochen Erwied | home: [email protected] +49-208-38800-18, FAX: -19\nSauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX: -50\nD-45470 Muelheim | mobile: [email protected] +49-173-5404164\n\n", "msg_date": "Sun, 5 Dec 2010 00:38:39 +0100", "msg_from": "Jochen Erwied <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query-plan for partitioned UPDATE/DELETE slow and swaps vmem\n\tcompared to SELECT" }, { "msg_contents": "On 12/4/10 2:40 PM, Tom Lane wrote:\n> [ pokes at that for a bit ... ] Ah, I had forgotten that UPDATE/DELETE\n> go through inheritance_planner() while SELECT doesn't. And\n> inheritance_planner() makes a copy of the querytree, including the\n> already-expanded range table, for each target relation. So the memory\n> usage is O(N^2) in the number of child tables.\n\nThanks for the pointer to the code and explanation.\n\nIn inheritance_planner(...) 
I see the memcpy of the input query tree, but for my example constraint exclusion would only result in one child being included. Or is the O(N^2) memory usage from elsewhere?\n\n> It's difficult to do much better than that in the general case where the\n> children might have different rowtypes from the parent: you need a\n> distinct targetlist for each target relation. I expect that we can be a\n> lot smarter when we have true partitioning support (which among other\n> things is going to have to enforce that all the children have identical\n> column sets).\n\nIs this the same as saying that the inheritance_planner(...) can be avoided if it were known that the children have the same rowtype as the parent? Is it easy to check?\n\n> But the inheritance mechanism was never intended to scale to anything like\n> this number of children.\n\nUnfortunately so. :(\n\nWhen I push the number of child tables up to 10k, the SELECT planning starts to slow down (~1 sec), though no swapping.\n \n> I remain of the opinion that you're using far too many child tables.\n> Please note the statement at the bottom of\n> http://www.postgresql.org/docs/9.0/static/ddl-partitioning.html:\n> \n> \tPartitioning using these techniques will work well with up to\n> \tperhaps a hundred partitions; don't try to use many thousands of\n> \tpartitions.\n\nThanks for the reference---I'm well aware of it, but it was not clear to me why: the reason I was structuring my partition inheritance as a tree, because I thought it was simply a case of time-to-scan the CHECK constraints at any level in the inheritance hierarchy. You've been a great help in helping my understanding PostgreSQL inheritance.\n\nBest,\nJohn\n", "msg_date": "Sun, 05 Dec 2010 03:06:39 -0800", "msg_from": "John Papandriopoulos <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query-plan for partitioned UPDATE/DELETE slow and swaps\n\tvmem compared to SELECT" }, { "msg_contents": "On 12/4/10 3:19 PM, Mladen Gogala wrote:\n> Tom Lane wrote:\n>> Partitioning using these techniques will work well with up to\n>> perhaps a hundred partitions; don't try to use many thousands of\n>> partitions.\n>> regards, tom lane\n> Hmmm, what happens if I need 10 years of data, in monthly partitions? It \n> would be 120 partitions. Can you please elaborate on that limitation? \n> Any plans on lifting that restriction?\n> \n\nEven with 1k partitions, I don't have any issues any of the SELECT, UPDATE or DELETE queries and with 8GB RAM.\n\nI suppose if you're using INSERT triggers, you'd want to make sure your plpgsql function is fast: I'm partitioning by power-of-two, so can use right-shift n-bits to quickly compute the insertion table name, rather than using an if-else-if chain.\n\nJohn\n", "msg_date": "Sun, 05 Dec 2010 03:10:04 -0800", "msg_from": "John Papandriopoulos <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query-plan for partitioned UPDATE/DELETE slow and swaps\n\tvmem compared to SELECT" }, { "msg_contents": "On 12/4/10 3:38 PM, Jochen Erwied wrote:\n> Sunday, December 5, 2010, 12:19:29 AM you wrote:\n> \n>> Hmmm, what happens if I need 10 years of data, in monthly partitions? It\n>> would be 120 partitions. 
Can you please elaborate on that limitation?\n>> Any plans on lifting that restriction?\n> \n> I'm running a partitioning scheme using 256 tables with a maximum of 16\n> million rows (namely IPv4-addresses) and a current total of about 2.5\n> billion rows, there are no deletes though, but lots of updates.\n> \n> Using triggers or rules on the main table in my case showed to be not very\n> effective, so I reverted to updating the inherited tables directly. This\n> way you still can use a SELECT on the main table letting the optimizer do\n> it's work, but do not run into the problem of oversized shared memory usage\n> when doing DELETEs or UPDATEs\n> \n> IMHO if you are using large partitioning schemes, handle the logic of which\n> table to update or delete in your application. In most cases extending the\n> underlying application will be much less work and more flexible than trying\n> to write a dynamic rule/trigger to do the same job.\n> \n\nSounds like my experience exactly, however I am considering forgoing an update altogether, by just combining a DELETE with an INSERT. I'm not sure how that might affect indexing performance as compared to an UPDATE.\n\nI also had trouble with triggers; but found that if you use the \"ONLY\" keyword, they work again: see my original post of this thread. In that case, the application SQL still retrains some simplicity. On this topic, I think there's quite a bit of confusion and updates to the documentation would help greatly.\n\nJohn\n", "msg_date": "Sun, 05 Dec 2010 03:14:46 -0800", "msg_from": "John Papandriopoulos <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query-plan for partitioned UPDATE/DELETE slow and swaps\n\tvmem compared to SELECT" }, { "msg_contents": "John Papandriopoulos <[email protected]> writes:\n> On 12/4/10 2:40 PM, Tom Lane wrote:\n>> [ pokes at that for a bit ... ] Ah, I had forgotten that UPDATE/DELETE\n>> go through inheritance_planner() while SELECT doesn't. And\n>> inheritance_planner() makes a copy of the querytree, including the\n>> already-expanded range table, for each target relation. So the memory\n>> usage is O(N^2) in the number of child tables.\n\n> Thanks for the pointer to the code and explanation.\n\n> In inheritance_planner(...) I see the memcpy of the input query tree, but for my example constraint exclusion would only result in one child being included. Or is the O(N^2) memory usage from elsewhere?\n\nIt's copying the whole range table, even though any one child query only\nneeds one of the child table entries. There might be some way to\nfinesse that, but it's not clear how else to end up with a final plan\ntree in which each child table has the correct RT index number.\n\nYou could get rid of the memory growth, at the cost of a lot of\ntree-copying, by doing each child plan step in a discardable memory\ncontext. I'm not sure that'd be a win for normal sizes of inheritance\ntrees though --- you'd need to copy the querytree in and then copy the\nresulting plantree out again, for each child. (Hm, but we're doing the\nfront-end copy already ...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 05 Dec 2010 11:56:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query-plan for partitioned UPDATE/DELETE slow and swaps vmem\n\tcompared to SELECT" }, { "msg_contents": "I wrote:\n> You could get rid of the memory growth, at the cost of a lot of\n> tree-copying, by doing each child plan step in a discardable memory\n> context. 
I'm not sure that'd be a win for normal sizes of inheritance\n> trees though --- you'd need to copy the querytree in and then copy the\n> resulting plantree out again, for each child. (Hm, but we're doing the\n> front-end copy already ...)\n\nThat worked better than I thought it would --- see\nhttp://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=d1001a78ce612a16ea622b558f5fc2b68c45ab4c\nI'm not intending to back-patch this, but it ought to apply cleanly to\n9.0.x if you want it badly enough to carry a local patch.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 05 Dec 2010 15:14:38 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query-plan for partitioned UPDATE/DELETE slow and swaps vmem\n\tcompared to SELECT" }, { "msg_contents": "On 12/5/10 12:14 PM, Tom Lane wrote:\n> I wrote:\n>> You could get rid of the memory growth, at the cost of a lot of\n>> tree-copying, by doing each child plan step in a discardable memory\n>> context. I'm not sure that'd be a win for normal sizes of inheritance\n>> trees though --- you'd need to copy the querytree in and then copy the\n>> resulting plantree out again, for each child. (Hm, but we're doing the\n>> front-end copy already ...)\n> \n> That worked better than I thought it would --- see\n> http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=d1001a78ce612a16ea622b558f5fc2b68c45ab4c\n> I'm not intending to back-patch this, but it ought to apply cleanly to\n> 9.0.x if you want it badly enough to carry a local patch.\n\nFantastic, Tom! Thank you kindly for taking the time to create the patch.\n\nThe memory issue has indeed disappeared---there was no noticeable memory increase in the three queries below, with 4096 children. Inheritance planning overhead is around 20x for UPDATE/DELETE compared to SELECT; thankfully they are required much less frequently in my case.\n\nI am still wondering whether the inheritance_planner(...) can be avoided if the rowtypes of children are the same as the parent? (I'm not yet sufficiently familiar with the source to determine on my own.) 
If that's the case, is there a simple test (like cardinality of columns) that can be used to differentiate partitioning from general inheritance cases?\n\nThanks again!\n\nJohn\n\n\nSimple partitioning test timing with 4096 children:\n\n> $ echo \"explain select * from ptest where id = 34324234; \\q\" | time -p psql ptest\n> QUERY PLAN \n> ----------------------------------------------------------------------------\n> Result (cost=0.00..80.00 rows=24 width=4)\n> -> Append (cost=0.00..80.00 rows=24 width=4)\n> -> Seq Scan on ptest (cost=0.00..40.00 rows=12 width=4)\n> Filter: (id = 34324234)\n> -> Seq Scan on ptest_65 ptest (cost=0.00..40.00 rows=12 width=4)\n> Filter: (id = 34324234)\n> (6 rows)\n> \n> real 0.55\n> user 0.00\n> sys 0.00\n> $ echo \"explain delete from ptest where id = 34324234; \\q\" | time -p psql ptest \n> QUERY PLAN \n> ----------------------------------------------------------------------\n> Delete (cost=0.00..80.00 rows=24 width=6)\n> -> Seq Scan on ptest (cost=0.00..40.00 rows=12 width=6)\n> Filter: (id = 34324234)\n> -> Seq Scan on ptest_65 ptest (cost=0.00..40.00 rows=12 width=6)\n> Filter: (id = 34324234)\n> (5 rows)\n> \n> real 10.47\n> user 0.00\n> sys 0.00\n> $ echo \"explain update ptest set id = 34324235 where id = 34324234; \\q\" | time -p psql ptest\n> QUERY PLAN \n> ----------------------------------------------------------------------\n> Update (cost=0.00..80.00 rows=24 width=6)\n> -> Seq Scan on ptest (cost=0.00..40.00 rows=12 width=6)\n> Filter: (id = 34324234)\n> -> Seq Scan on ptest_65 ptest (cost=0.00..40.00 rows=12 width=6)\n> Filter: (id = 34324234)\n> (5 rows)\n> \n> real 9.53\n> user 0.00\n> sys 0.00\n> $ \n\n\n\n", "msg_date": "Sun, 05 Dec 2010 21:00:13 -0800", "msg_from": "John Papandriopoulos <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query-plan for partitioned UPDATE/DELETE slow and swaps\n\tvmem compared to SELECT" }, { "msg_contents": "John Papandriopoulos <[email protected]> writes:\n> The memory issue has indeed disappeared---there was no noticeable memory increase in the three queries below, with 4096 children. Inheritance planning overhead is around 20x for UPDATE/DELETE compared to SELECT; thankfully they are required much less frequently in my case.\n\n> I am still wondering whether the inheritance_planner(...) can be avoided if the rowtypes of children are the same as the parent?\n\nPossibly, but it's far from a trivial change. The difficulty is that\nyou'd need to generate a different plan tree structure.\ninheritance_planner generates a separate subtree for each target table,\nso that the ModifyTable node can execute each one separately and know\na priori which target table the rows coming out of a particular subplan\napply to. If we expand inheritance \"at the bottom\" like SELECT does,\nthat table identifier would have to propagate up as part of the returned\nrows. It's doable but not simple. 
Moreover, it's far from clear this\nactually would save much, and it could easily slow things down at\nexecution time.\n\nHave you done any profiling work to see where the extra time goes?\nI had thought that the unreferenced RTE entries would simply be ignored\nin each subplanning step, but maybe there's something that is examining\nthem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 06 Dec 2010 13:03:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query-plan for partitioned UPDATE/DELETE slow and swaps vmem\n\tcompared to SELECT" }, { "msg_contents": "On 12/6/10 10:03 AM, Tom Lane wrote:\n> John Papandriopoulos<[email protected]> writes:\n>> I am still wondering whether the inheritance_planner(...) can be avoided if the rowtypes of children are the same as the parent?\n> \n> Possibly, but it's far from a trivial change. The difficulty is that\n> you'd need to generate a different plan tree structure.\n> inheritance_planner generates a separate subtree for each target table,\n> so that the ModifyTable node can execute each one separately and know\n> a priori which target table the rows coming out of a particular subplan\n> apply to. If we expand inheritance \"at the bottom\" like SELECT does,\n> that table identifier would have to propagate up as part of the returned\n> rows. It's doable but not simple. Moreover, it's far from clear this\n> actually would save much, and it could easily slow things down at\n> execution time.\n\nMaking more sense now... :-)\n\nI guess the real time-saver, in the specific case of partitioning, might then come from avoiding generation of subplans completely (rather than later dropping the dummies) by exploiting the disjointness of each partition.\n\n> Have you done any profiling work to see where the extra time goes?\n> I had thought that the unreferenced RTE entries would simply be ignored\n> in each subplanning step, but maybe there's something that is examining\n> them.\n\nI've run the following queries \n\n explain SELECT * FROM ptest where id = 121212;\n explain DELETE FROM ptest where id = 121212;\n\nunder the Google perftools sampling profiler with the same 4096 child inheritance tree. Results below.\n\n\nThe DELETE query-planning spend a lot of time maintaining a query tree. 
Might this be what you're referring to?\n\n> Total: 11808 samples\n> 1895 16.0% 16.0% 7316 62.0% _range_table_mutator\n> 1426 12.1% 28.1% 1426 12.1% _lseek\n> 1097 9.3% 37.4% 2854 24.2% _query_planner\n> 1048 8.9% 46.3% 1577 13.4% _AllocSetAlloc\n> 853 7.2% 53.5% 853 7.2% 0x00007fffffe008a5\n> 762 6.5% 60.0% 762 6.5% _posix_madvise\n> 696 5.9% 65.9% 696 5.9% _list_nth_cell\n> 575 4.9% 70.7% 575 4.9% 0x00007fffffe00b8b\n> 482 4.1% 74.8% 482 4.1% _AllocSetFreeIndex\n> 271 2.3% 77.1% 1284 10.9% _new_tail_cell\n> 181 1.5% 78.6% 181 1.5% 0x00007fffffe00ba7\n> 173 1.5% 80.1% 173 1.5% 0x00007fffffe00bb2\n> 160 1.4% 81.5% 1452 12.3% _lappend\n> 159 1.3% 82.8% 159 1.3% 0x00007fffffe00b96\n> 158 1.3% 84.1% 158 1.3% 0x00007fffffe00b9c\n> 139 1.2% 85.3% 139 1.2% 0x00007fffffe007c1\n> 136 1.2% 86.5% 1877 15.9% _MemoryContextAlloc\n> 129 1.1% 87.6% 129 1.1% 0x00007fffffe00673\n> 125 1.1% 88.6% 125 1.1% 0x00007fffffe008ab\n> 118 1.0% 89.6% 118 1.0% 0x00007fffffe008a0\n> 110 0.9% 90.6% 3055 25.9% ___inline_memcpy_chk\n> 106 0.9% 91.5% 106 0.9% _strlen\n> 105 0.9% 92.3% 105 0.9% 0x00007fffffe008b7\n> 95 0.8% 93.1% 95 0.8% _get_tabstat_entry\n> 85 0.7% 93.9% 93 0.8% _find_all_inheritors\n> 75 0.6% 94.5% 75 0.6% 0x00007fffffe00b85\n> 47 0.4% 94.9% 47 0.4% 0x00007fffffe008b1\n> 46 0.4% 95.3% 46 0.4% 0x00007fffffe00695\n> 42 0.4% 95.6% 42 0.4% ___memcpy_chk\n> 30 0.3% 95.9% 30 0.3% _pqGetpwuid\n> 29 0.2% 96.1% 29 0.2% 0x00007fffffe00b90\n> 29 0.2% 96.4% 60 0.5% _set_base_rel_pathlists\n> 28 0.2% 96.6% 28 0.2% 0x00007fffffe007bf\n> 24 0.2% 96.8% 24 0.2% 0x00007fffffe007cb\n> 23 0.2% 97.0% 23 0.2% 0x00007fffffe006ab\n> 22 0.2% 97.2% 23 0.2% _generate_base_implied_equalities\n> 20 0.2% 97.4% 20 0.2% _memcpy\n> 14 0.1% 97.5% 14 0.1% 0x00007fffffe0080d\n> 13 0.1% 97.6% 13 0.1% _open\n> 12 0.1% 97.7% 12 0.1% 0x00007fffffe007f9\n> [rest snipped]\n\n\nThe SELECT query-planning doesn't, where you can clearly see that a lot of time is spent amassing all children (find_all_inheritors) that could be avoided with true partitioning support.\n\n> Total: 433 samples\n> 111 25.6% 25.6% 111 25.6% _AllocSetAlloc\n> 79 18.2% 43.9% 124 28.6% _find_all_inheritors\n> 38 8.8% 52.7% 38 8.8% _lseek\n> 24 5.5% 58.2% 24 5.5% _read\n> 19 4.4% 62.6% 32 7.4% _new_list\n> 17 3.9% 66.5% 18 4.2% _get_tabstat_entry\n> 14 3.2% 69.7% 36 8.3% _MemoryContextAllocZeroAligned\n> 11 2.5% 72.3% 28 6.5% _MemoryContextAllocZero\n> 11 2.5% 74.8% 19 4.4% _systable_beginscan\n> 8 1.8% 76.7% 8 1.8% 0x00007fffffe007c5\n> 8 1.8% 78.5% 8 1.8% 0x00007fffffe00a2f\n> 8 1.8% 80.4% 32 7.4% _hash_search_with_hash_value\n> 7 1.6% 82.0% 7 1.6% _open\n> 6 1.4% 83.4% 6 1.4% 0x00007fffffe008c8\n> [rest snipped]\n\nKindest,\nJohn\n\n", "msg_date": "Mon, 06 Dec 2010 13:48:57 -0800", "msg_from": "John Papandriopoulos <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query-plan for partitioned UPDATE/DELETE slow and swaps\n\tvmem compared to SELECT" } ]
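A side note on the INSERT-trigger point John raises in this thread: with power-of-two ranges the target child table can be picked with a bit shift instead of a long IF/ELSIF chain. The sketch below is not from the thread; it assumes integer ids, children named ptest_N as in John's test setup, and 2^20 ids per child -- all illustrative choices, not what John actually ran.

CREATE OR REPLACE FUNCTION ptest_insert_trigger() RETURNS trigger AS $$
DECLARE
    child text;
BEGIN
    -- each child covers 2^20 ids, so the child number is simply id >> 20
    child := 'ptest_' || (NEW.id >> 20);
    EXECUTE 'INSERT INTO ' || quote_ident(child) || ' SELECT ($1).*' USING NEW;
    RETURN NULL;   -- suppress the insert into the parent table itself
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER ptest_insert
    BEFORE INSERT ON ptest
    FOR EACH ROW EXECUTE PROCEDURE ptest_insert_trigger();

The per-row work stays constant no matter how many children exist, which is what keeps the insert path cheap even with thousands of partitions.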
[ { "msg_contents": "Hi all,\n\nI have a table that stores all the page loads in my web application:\n\nshs-dev=# \\d log_event\n Table \"public.log_event\"\n Column | Type |\nModifiers\n-----------------+--------------------------+--------------------------------------------------------\n id | bigint | not null default\nnextval('log_event_id_seq'::regclass)\n user_id | integer |\n ip | inet | not null\n action_id | integer | not null\n object1_id | integer |\n object2_id | integer |\n event_timestamp | timestamp with time zone | not null\n data | text |\n comments | text |\nIndexes:\n \"log_event_pkey\" PRIMARY KEY, btree (id)\n \"log_event_action_id_idx\" btree (action_id)\n \"log_event_object1_idx\" btree (object1_id)\n \"log_event_object2_idx\" btree (object2_id)\n \"log_event_timestamp_idx\" btree (event_timestamp)\n \"log_event_user_id_idx\" btree (user_id)\nForeign-key constraints:\n \"log_event_action_id_fkey\" FOREIGN KEY (action_id) REFERENCES\nconfig.log_action(id)\nReferenced by:\n TABLE \"log_data\" CONSTRAINT \"log_data_event_id_fkey\" FOREIGN KEY\n(event_id) REFERENCES log_event(id) ON DELETE CASCADE DEFERRABLE\nINITIALLY DEFERRED\n TABLE \"log_report\" CONSTRAINT \"log_report_event_id_fkey\" FOREIGN\nKEY (event_id) REFERENCES log_event(id)\n\nshs-dev=# select count(*) from log_event;\n count\n---------\n 5755566\n\n\nFor each page load I first create an entry in that table, e.g.:\n\nINSERT INTO log_event (user_id, ip, action_id, object1_id, object2_id,\nevent_timestamp, comments) VALUES (1, '127.0.0.1', 96, null, null,\nNOW(), 'TEST');\n\nAfter that, I want to retrieve the data stored in log_event from a\ntrigger, e.g.:\n\nSELECT user_id FROM log_event WHERE id = CURRVAL('log_event_id_seq');\n\nThis way my insert-trigger knows who is creating the new row, while\nusing only one pg-user to query the database.\n\nThe problem is that this query is very slow because it refuses to use\nan index scan:\n\n\nshs-dev=# set enable_seqscan = off;\nSET\nshs-dev=# explain analyze SELECT user_id FROM log_event WHERE id =\nCURRVAL('log_event_id_seq');\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------\n Seq Scan on log_event (cost=10000000000.00..10000139202.07 rows=1\nwidth=4) (actual time=2086.272..2086.273 rows=1 loops=1)\n Filter: (id = currval('log_event_id_seq'::regclass))\n Total runtime: 2086.305 ms\n\n\nIf I specify one specific value, it's OK:\n\nshs-dev=# explain analyze SELECT user_id FROM log_event WHERE id =\n1283470192837401;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------\n Index Scan using log_event_pkey on log_event (cost=0.00..8.90 rows=1\nwidth=4) (actual time=0.034..0.034 rows=0 loops=1)\n Index Cond: (id = 1283470192837401::bigint)\n Total runtime: 0.056 ms\n\nIf I experiment with RANDOM, it's slow again:\n\nshs-dev=# explain analyze SELECT user_id FROM log_event WHERE id =\nRANDOM()::bigint;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------\n Seq Scan on log_event (cost=10000000000.00..10000153591.24 rows=1\nwidth=4) (actual time=1353.425..1353.425 rows=0 loops=1)\n Filter: (id = (random())::bigint)\n Total runtime: 1353.452 ms\n\nOn the other hand, for some undeterministic cases, it does run fast:\n(in this example the planner cannot predict what will be the value of\nthe filter condition)\n\nshs-dev=# 
explain analyze SELECT user_id FROM log_event WHERE id =\n(select id from artist where id > 1000 limit 1);\n QUERY\nPLAN\n-----------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using log_event_pkey on log_event (cost=0.08..8.98 rows=1\nwidth=4) (actual time=0.069..0.069 rows=0 loops=1)\n Index Cond: (id = $0)\n InitPlan 1 (returns $0)\n -> Limit (cost=0.00..0.08 rows=1 width=4) (actual\ntime=0.039..0.039 rows=1 loops=1)\n -> Index Scan using artist_pkey on artist\n(cost=0.00..3117.11 rows=40252 width=4) (actual time=0.038..0.038\nrows=1 loops=1)\n Index Cond: (id > 1000)\n\n\nI have no idea why in some cases the index scan is not considered.\nDoes anyone have an idea?\n\nThanks!\n\nKind regards,\nMathieu\n", "msg_date": "Sat, 4 Dec 2010 12:56:50 +0100", "msg_from": "Mathieu De Zutter <[email protected]>", "msg_from_op": true, "msg_subject": "Slow query to get last created row using CURRVAL" }, { "msg_contents": "On Sat, Dec 4, 2010 at 13:56, Mathieu De Zutter <[email protected]> wrote:\n> I have no idea why in some cases the index scan is not considered.\n> Does anyone have an idea?\n\nI guess that it's because the currval() function is volatile -- its\nvalue has to be tested for again each row.\n\nTry this instead:\nSELECT user_id FROM log_event WHERE id = (SELECT CURRVAL('log_event_id_seq'));\n\nThis will assure that there's only one call to currval().\n\nRegards,\nMarti\n", "msg_date": "Sat, 4 Dec 2010 14:35:33 +0200", "msg_from": "Marti Raudsepp <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query to get last created row using CURRVAL" }, { "msg_contents": "On Sat, Dec 4, 2010 at 1:35 PM, Marti Raudsepp <[email protected]> wrote:\n> On Sat, Dec 4, 2010 at 13:56, Mathieu De Zutter <[email protected]> wrote:\n>> I have no idea why in some cases the index scan is not considered.\n>> Does anyone have an idea?\n>\n> I guess that it's because the currval() function is volatile -- its\n> value has to be tested for again each row.\n>\n> Try this instead:\n> SELECT user_id FROM log_event WHERE id = (SELECT CURRVAL('log_event_id_seq'));\n>\n> This will assure that there's only one call to currval().\n\nOK, that makes a lot of sense. 
Your suggestion solves my problem.\n\nThanks!\n\nMathieu\n", "msg_date": "Sat, 4 Dec 2010 13:49:38 +0100", "msg_from": "Mathieu De Zutter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query to get last created row using CURRVAL" }, { "msg_contents": "2010/12/4 Mathieu De Zutter <[email protected]>\n\n>\n> For each page load I first create an entry in that table, e.g.:\n>\n> INSERT INTO log_event (user_id, ip, action_id, object1_id, object2_id,\n> event_timestamp, comments) VALUES (1, '127.0.0.1', 96, null, null,\n> NOW(), 'TEST');\n>\n> After that, I want to retrieve the data stored in log_event from a\n> trigger, e.g.:\n>\n> SELECT user_id FROM log_event WHERE id = CURRVAL('log_event_id_seq');\n>\n> This way my insert-trigger knows who is creating the new row, while\n> using only one pg-user to query the database.\n>\n> Please note that you can use next query to perform both insert and select:\n\nINSERT INTO log_event (user_id, ip, action_id, object1_id, object2_id,\nevent_timestamp, comments) VALUES (1, '127.0.0.1', 96, null, null,\nNOW(), 'TEST') returning user_id;\n-- \nBest regards,\n Vitalii Tymchyshyn\n\n2010/12/4 Mathieu De Zutter <[email protected]>\n\nFor each page load I first create an entry in that table, e.g.:\n\nINSERT INTO log_event (user_id, ip, action_id, object1_id, object2_id,\nevent_timestamp, comments) VALUES (1, '127.0.0.1', 96, null, null,\nNOW(), 'TEST');\n\nAfter that, I want to retrieve the data stored in log_event from a\ntrigger, e.g.:\n\nSELECT user_id FROM log_event WHERE id = CURRVAL('log_event_id_seq');\n\nThis way my insert-trigger knows who is creating the new row, while\nusing only one pg-user to query the database.Please note that you can use next query to perform both insert and select: INSERT INTO log_event (user_id, ip, action_id, object1_id, object2_id,\nevent_timestamp, comments) VALUES (1, '127.0.0.1', 96, null, null,NOW(), 'TEST') returning user_id;-- Best regards, Vitalii Tymchyshyn", "msg_date": "Sat, 4 Dec 2010 17:32:16 +0200", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query to get last created row using CURRVAL" } ]
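To recap the fix from this thread in one place, using the log_event table defined at the top (the third form is Vitalii's suggestion and skips the second query entirely):

-- currval() is volatile, so used directly it has to be evaluated against
-- every row and the plan degrades to a sequential scan:
SELECT user_id FROM log_event WHERE id = CURRVAL('log_event_id_seq');

-- Wrapped in a sub-select it is evaluated once (as an InitPlan) and the
-- primary-key index is used:
SELECT user_id FROM log_event WHERE id = (SELECT CURRVAL('log_event_id_seq'));

-- Or return the needed columns straight from the INSERT:
INSERT INTO log_event (user_id, ip, action_id, event_timestamp, comments)
VALUES (1, '127.0.0.1', 96, NOW(), 'TEST')
RETURNING user_id;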
[ { "msg_contents": "Markus Schulz 11/24/10 1:02 PM >>>\n \n> if i set \"from_collapse_limit\" (to merge the views) and\n> join_collapse_limit (to explode the explicit joins) high enough\n> (approx 32), all is fine (good performance). But other queries are\n> really slow in our environment (therefore it's no option to raise\n> the join_collapse_limit to a higher value)\n> \n> With defaults (8) for both, the performance is ugly\n \nOne option would be to create a different user for running queries\nwhich read from complex views such as this. \n \npostgres=# create user bob;\nCREATE ROLE\npostgres=# alter user bob set from_collapse_limit = 40;\nALTER ROLE\npostgres=# alter user bob set join_collapse_limit = 40;\nALTER ROLE\n \nLog in as bob, and your queries should run fine.\n \nNothing leapt out at me as an issue in your postgresql.conf except:\n \nmax_prepared_transactions = 20\n \nDo you actually use prepared transactions? (Sometimes people confuse\nthis with prepared statements, which are a completely different\nfeature.)\n \n-Kevin\n", "msg_date": "Sat, 04 Dec 2010 09:29:06 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: problem with from_collapse_limit and joined\n\t views" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Markus Schulz 11/24/10 1:02 PM >>>\n>> if i set \"from_collapse_limit\" (to merge the views) and\n>> join_collapse_limit (to explode the explicit joins) high enough\n>> (approx 32), all is fine (good performance). But other queries are\n>> really slow in our environment (therefore it's no option to raise\n>> the join_collapse_limit to a higher value)\n>> \n>> With defaults (8) for both, the performance is ugly\n \n> One option would be to create a different user for running queries\n> which read from complex views such as this. \n\nIf you don't want to change the collapse limits, the only other option\nis to restructure this specific query so that its syntactic structure\nis closer to the ideal join order. Look at the plan you get in the\ngood-performing case and re-order the join syntax to look like that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 04 Dec 2010 11:59:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problem with from_collapse_limit and joined views " }, { "msg_contents": "Am Samstag 04 Dezember 2010 schrieb Kevin Grittner:\n> One option would be to create a different user for running queries\n> which read from complex views such as this.\n> \n> postgres=# create user bob;\n> CREATE ROLE\n> postgres=# alter user bob set from_collapse_limit = 40;\n> ALTER ROLE\n> postgres=# alter user bob set join_collapse_limit = 40;\n> ALTER ROLE\n> \n> Log in as bob, and your queries should run fine.\n\nthanks, that was really an option for us to use a different user for \ncreating the reports.\n\n> Nothing leapt out at me as an issue in your postgresql.conf except:\n> \n> max_prepared_transactions = 20\n> \n> Do you actually use prepared transactions? 
(Sometimes people confuse\n> this with prepared statements, which are a completely different\n> feature.)\n\nYes, they are needed for our JPA-based J2EE application.\n\nregards\nmsc\n", "msg_date": "Sat, 4 Dec 2010 19:20:28 +0100", "msg_from": "Markus Schulz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problem with from_collapse_limit and joined views" }, { "msg_contents": "Am Samstag 04 Dezember 2010 schrieb Tom Lane:\n> \"Kevin Grittner\" <[email protected]> writes:\n...\n> > One option would be to create a different user for running queries\n> > which read from complex views such as this.\n> \n> If you don't want to change the collapse limits, the only other\n> option is to restructure this specific query so that its syntactic\n> structure is closer to the ideal join order. Look at the plan you\n> get in the good-performing case and re-order the join syntax to look\n> like that.\n\nNo, that doesn't work in this case.\nview1 and view2 are written with explicit joins and no better join order was\npossible. Each view works perfectly standalone.\nIn my example above I have rewritten view1 without explicit joins, only\nfor testing purposes. Without explicit joins I can get the optimal\nquery plan from a slightly higher from_collapse_limit (see workaround 2\nfrom my initial posting).\nIf both views use explicit joins, from_collapse_limit is useless\n(only join_collapse_limit is usable).\n\nThe problem exists only for \"view1 JOIN view2\": PostgreSQL doesn't\n\"see\" that an element of view2 contains an index access that could reduce the\ndata coming from view1. Only if it can collapse the join of both views\ninto one query plan can it \"see\" this. But for that I must raise the\nlimits.\n\nLooks like some improvement to the GEQO optimizer is needed here ;)\n\n\nregards\nmsc\n", "msg_date": "Sat, 4 Dec 2010 19:46:21 +0100", "msg_from": "Markus Schulz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problem with from_collapse_limit and joined views" } ]
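For readers landing on this thread later: the per-role settings Kevin shows can also be scoped more narrowly when a separate login is inconvenient. A sketch -- the role name and the value 32 are only examples, not recommendations:

-- Raise the limits for a single reporting transaction:
BEGIN;
SET LOCAL from_collapse_limit = 32;
SET LOCAL join_collapse_limit = 32;
-- run the problematic "view1 JOIN view2" report here
COMMIT;

-- Or attach the settings to a dedicated reporting role, as above:
CREATE ROLE report_user LOGIN;
ALTER ROLE report_user SET from_collapse_limit = 32;
ALTER ROLE report_user SET join_collapse_limit = 32;

-- Verify what a session actually ends up with:
SELECT name, setting, source
FROM pg_settings
WHERE name IN ('from_collapse_limit', 'join_collapse_limit');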
[ { "msg_contents": "Manual: http://www.postgresql.org/docs/9.0/static/runtime-config-wal.html#RUNTIME-CONFIG-WAL-SETTINGS\nRecent discussion:\nhttp://www.facebook.com/notes/mysql-at-facebook/group-commit-in-postgresql/465781235932\n\nIt is my understanding that group commit in PG works without the\ncommit_delay or commit_siblings being enabled. For many people coming\nfrom other databases, the existence of these GUC seems to suggest that\ngroup commit does not work without the being enabled.\n\nAre these setting useful, and if so how should they be tuned?\nIf they are generally are not useful, should these settings be removed?\n\n-- \nRob Wultsch\[email protected]\n", "msg_date": "Sun, 5 Dec 2010 13:40:28 -0700", "msg_from": "Rob Wultsch <[email protected]>", "msg_from_op": true, "msg_subject": "Group commit and commit delay/siblings" }, { "msg_contents": "On Mon, Dec 6, 2010 at 4:40 AM, Rob Wultsch <[email protected]> wrote:\n> Manual: http://www.postgresql.org/docs/9.0/static/runtime-config-wal.html#RUNTIME-CONFIG-WAL-SETTINGS\n> Recent discussion:\n> http://www.facebook.com/notes/mysql-at-facebook/group-commit-in-postgresql/465781235932\n>\n> It is my understanding that group commit in PG works without the\n> commit_delay or commit_siblings being enabled. For many people coming\n> from other databases, the existence of these GUC seems to suggest that\n> group commit does not work without the being enabled.\n>\n> Are these setting useful, and if so how should they be tuned?\n> If they are generally are not useful, should these settings be removed?\n>\n> --\n> Rob Wultsch\n> [email protected]\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHey Rob,\n\nI think I can explain this with the bit that I understand.\n\nWhen you try to commit a transaction it will sync the WAL buffer to\ndisk. Here an optimization is done that before it syncs the disks it\ntries to find all the WAL Records in the buffer completed and ready to\nsync and it absorbs all the syncs together and does the commit for all\nof them together.\n\nWhat the GUC parameter commit_delay adds that before it syncs it\nsleeps for that period and then does the sync. (NOTE: The transaction\nis not committed till it syncs so this adds that latency to all\ntransactions) The benefit is that when number of transactions are\nhigh, while its sleeping someone who committs will also sync its\nrecords and when it awakes it doesnt have to do its own sync or if it\ndoes it helps others.\n\nThe commit_siblings = 5 basically checks that it sleeps only when that\nmany backends are active. This I think is a very expensive check and I\nwould rather make commit_siblings=0 (which the current code does not\nsupport.. it only supports minimum of 1) The check is expensive\nirrespective of the settings .. But anyway here is the real kicker.\nIn all the tests I did with recent verions 8.4 and version 9.0 , it\nseems that the default behavior handles the load well enough and one\ndoes not have to use commit_delay at all. Since when the load is very\nhigh all of them are basically in sync phase and the desired thing\nhappens anyway.\n\nInfact using commit_delay will actually add the cost of doing\ncommit_siblings check and can hurt the performance by increasing CPU\nconsumption.. 
Doing commit_siblings check for every transaction is a\nkiller since it does not return after meeting the minimum backends and\ngoes through every backend to calculate the total number before\ncomparing with the minimum. This is probably why most people see a\ndrop in performance when using commit_delay compared to the default.\n\nAnyway I would recommended right now to stick with the default and\nnot really use it. It does the sync absorbtion well if you have two\nmany users (though not perfect).\n\nRegards,\nJignesh\n", "msg_date": "Mon, 6 Dec 2010 10:30:40 +0800", "msg_from": "Jignesh Shah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Group commit and commit delay/siblings" }, { "msg_contents": "On Sun, Dec 5, 2010 at 7:30 PM, Jignesh Shah <[email protected]> wrote:\n> The commit_siblings = 5 basically checks that it sleeps only when that\n> many backends are active. This I think is a very expensive check and I\n> would rather make commit_siblings=0 (which the current code does not\n> support.. it only supports minimum of 1) The check is expensive\n> irrespective of the settings .. But anyway here is the real kicker.\n> In all the tests I did with recent verions 8.4 and version 9.0 , it\n> seems that the default behavior handles the load well enough and one\n> does not have to use commit_delay at all. Since when the load is very\n> high all of them are basically in sync phase and the desired thing\n> happens anyway.\n>\n> Infact using commit_delay will actually add the cost of doing\n> commit_siblings check and can hurt the performance by increasing CPU\n> consumption.. Doing commit_siblings check for every transaction is a\n> killer since it does not return after meeting the minimum backends and\n> goes through every backend to calculate the total number before\n> comparing with the minimum. This is probably why most people see a\n> drop in performance when using commit_delay compared to the default.\n>\n> Anyway  I would recommended right now to stick with the default and\n> not really use it. It does the sync absorbtion well if you have two\n> many users (though not perfect).\n\nSounds like this setting should go away unless there is a very good\nreason to keep it.\n\n\n-- \nRob Wultsch\[email protected]\n", "msg_date": "Sun, 5 Dec 2010 19:47:33 -0700", "msg_from": "Rob Wultsch <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Group commit and commit delay/siblings" }, { "msg_contents": "On Mon, Dec 6, 2010 at 10:47 AM, Rob Wultsch <[email protected]> wrote:\n> On Sun, Dec 5, 2010 at 7:30 PM, Jignesh Shah <[email protected]> wrote:\n>> The commit_siblings = 5 basically checks that it sleeps only when that\n>> many backends are active. This I think is a very expensive check and I\n>> would rather make commit_siblings=0 (which the current code does not\n>> support.. it only supports minimum of 1) The check is expensive\n>> irrespective of the settings .. But anyway here is the real kicker.\n>> In all the tests I did with recent verions 8.4 and version 9.0 , it\n>> seems that the default behavior handles the load well enough and one\n>> does not have to use commit_delay at all. Since when the load is very\n>> high all of them are basically in sync phase and the desired thing\n>> happens anyway.\n>>\n>> Infact using commit_delay will actually add the cost of doing\n>> commit_siblings check and can hurt the performance by increasing CPU\n>> consumption.. 
Doing commit_siblings check for every transaction is a\n>> killer since it does not return after meeting the minimum backends and\n>> goes through every backend to calculate the total number before\n>> comparing with the minimum. This is probably why most people see a\n>> drop in performance when using commit_delay compared to the default.\n>>\n>> Anyway  I would recommended right now to stick with the default and\n>> not really use it. It does the sync absorbtion well if you have two\n>> many users (though not perfect).\n>\n> Sounds like this setting should go away unless there is a very good\n> reason to keep it.\n>\n>\n> --\n> Rob Wultsch\n> [email protected]\n>\n\nI would say commit_siblings should go away but maybe keep commit_delay\nfor a while. The advantage of keeping commit_delay is to do a rhythmic\nwrite patterns which can be used to control writes on WAL. It is\ndebatable but I had used it couple of times to control WAL writes.\n\nTo me commit_siblings is expensive during heavy users/load and should\nbe killed.\n\nMy 2 cents.\nRegards,\nJignesh\n", "msg_date": "Mon, 6 Dec 2010 11:03:28 +0800", "msg_from": "Jignesh Shah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Group commit and commit delay/siblings" }, { "msg_contents": "Jignesh Shah wrote:\n> The commit_siblings = 5 basically checks that it sleeps only when that\n> many backends are active. This I think is a very expensive check and I\n> would rather make commit_siblings=0 (which the current code does not\n> support.. it only supports minimum of 1)\n\nI just posted a message to the Facebook group sorting out the confusion \nin terminology there.\n\nThe code Jignesh is alluding to does this:\n\n if (CommitDelay > 0 && enableFsync &&\n CountActiveBackends() >= CommitSiblings)\n pg_usleep(CommitDelay);\n\nAnd the expensive part of the overhead beyond the delay itself is \nCountActiveBackends(), which iterates over the entire procArray \nstructure. Note that it doesn't bother acquiring ProcArrayLock for \nthat, as some small inaccuracy isn't really a problem for what it's \nusing the number for. And it ignores backends waiting on a lock too, as \nunlikely to commit in the near future.\n\nThe siblings count is the only thing that keeps this delay from kicking \nin on every single commit when the feature is turned on, which it is by \ndefault. I fear that a reworking in the direction Jignesh is suggesting \nhere, where that check was removed, would cripple situations where only \na single process was trying to get commits accomplished. \n\nAs for why this somewhat weird feature hasn't been removed yet, it's \nmainly because we have some benchmarks from Jignesh proving its value in \nthe hands of an expert. If you have a system with a really \nhigh-transaction rate, where you can expect that the server is \nconstantly busy and commits are being cached (and subsequently written \nto physical disk asyncronously), a brief pause after each commit helps \nchunk commits into the write cache as more efficient blocks. It seems a \nlittle counter-intuititive, but it does seem to work.\n\nThe number of people who are actually in that position are very few \nthough, so for the most part this parameter is just a magnet for people \nto set incorrectly because they don't understand it. 
With this \nadditional insight from Jignesh clearing up some of the questions I had \nabout this, I'm tempted to pull commit_siblings altogether, make \ncommit_delay default to 0, and update the docs to say something \nsuggesting \"this will slow down every commit you make; only increase it \nif you have a high commit rate system where that's necessary to get \nbetter commit chunking\".\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Sun, 05 Dec 2010 23:35:32 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Group commit and commit delay/siblings" }, { "msg_contents": "On Mon, Dec 6, 2010 at 12:35 PM, Greg Smith <[email protected]> wrote:\n> Jignesh Shah wrote:\n>>\n>> The commit_siblings = 5 basically checks that it sleeps only when that\n>> many backends are active. This I think is a very expensive check and I\n>> would rather make commit_siblings=0 (which the current code does not\n>> support.. it only supports minimum of 1)\n>\n> I just posted a message to the Facebook group sorting out the confusion in\n> terminology there.\n>\n> The code Jignesh is alluding to does this:\n>\n>       if (CommitDelay > 0 && enableFsync &&\n>           CountActiveBackends() >= CommitSiblings)\n>           pg_usleep(CommitDelay);\n>\n> And the expensive part of the overhead beyond the delay itself is\n> CountActiveBackends(), which iterates over the entire procArray structure.\n>  Note that it doesn't bother acquiring ProcArrayLock for that, as some small\n> inaccuracy isn't really a problem for what it's using the number for.  And\n> it ignores backends waiting on a lock too, as unlikely to commit in the near\n> future.\n>\n> The siblings count is the only thing that keeps this delay from kicking in\n> on every single commit when the feature is turned on, which it is by\n> default.  I fear that a reworking in the direction Jignesh is suggesting\n> here, where that check was removed, would cripple situations where only a\n> single process was trying to get commits accomplished.\n> As for why this somewhat weird feature hasn't been removed yet, it's mainly\n> because we have some benchmarks from Jignesh proving its value in the hands\n> of an expert.  If you have a system with a really high-transaction rate,\n> where you can expect that the server is constantly busy and commits are\n> being cached (and subsequently written to physical disk asyncronously), a\n> brief pause after each commit helps chunk commits into the write cache as\n> more efficient blocks.  It seems a little counter-intuititive, but it does\n> seem to work.\n>\n> The number of people who are actually in that position are very few though,\n> so for the most part this parameter is just a magnet for people to set\n> incorrectly because they don't understand it.  
With this additional insight\n> from Jignesh clearing up some of the questions I had about this, I'm tempted\n> to pull commit_siblings altogether, make commit_delay default to 0, and\n> update the docs to say something suggesting \"this will slow down every\n> commit you make; only increase it if you have a high commit rate system\n> where that's necessary to get better commit chunking\".\n>\n> --\n> Greg Smith   2ndQuadrant US    [email protected]   Baltimore, MD\n> PostgreSQL Training, Services and Support        www.2ndQuadrant.us\n> \"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n>\n>\n\nYes I agree with the plan here.. Take out commit_siblings, default\ncommit_delay to zero (which is already the case) and add a warning\nin the docs that it will slow down your commits (on an individual\nbasis).\n\nThe only reason I still want to keep commit_delay is to make it act as\na controller or drummer if you will.. ( if you have read the book:\n\"Theory of contstraints\" by Dr Eli Goldratt)\n\nThanks.\nRegards,\nJignesh\n", "msg_date": "Mon, 6 Dec 2010 15:27:10 +0800", "msg_from": "Jignesh Shah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Group commit and commit delay/siblings" }, { "msg_contents": "Greg Smith <[email protected]> writes:\n> And the expensive part of the overhead beyond the delay itself is \n> CountActiveBackends(), which iterates over the entire procArray \n> structure.\n\nI could have sworn we'd refactored that to something like\n\tbool ThereAreAtLeastNActiveBackends(int n)\nwhich could drop out of the loop as soon as it'd established what we\nreally need to know. In general it's unclear that this'd really save\nmuch, since in a large fraction of executions the answer would be\n\"no\", and then you can't drop out of the loop early, or at least not\nvery early. But it clearly wins when n == 0 since then you can just\nreturn true on sight.\n\n> As for why this somewhat weird feature hasn't been removed yet, it's \n> mainly because we have some benchmarks from Jignesh proving its value in \n> the hands of an expert.\n\nRemoval has been proposed several times, but as long as it's off by\ndefault, it's fairly harmless to leave it there. I rather expect\nit'll stay as it is until someone proposes something that actually works\nbetter. In particular I see no advantage in simply deleting some of the\nparameters to the existing code. I'd suggest that we just improve the\ncoding so that we don't scan ProcArray at all when commit_siblings is 0.\n\n(I do agree with improving the docs to warn people away from assuming\nthis is a knob to frob mindlessly.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 06 Dec 2010 12:55:09 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Group commit and commit delay/siblings " }, { "msg_contents": "On Tue, Dec 7, 2010 at 1:55 AM, Tom Lane <[email protected]> wrote:\n> Greg Smith <[email protected]> writes:\n>> And the expensive part of the overhead beyond the delay itself is\n>> CountActiveBackends(), which iterates over the entire procArray\n>> structure.\n>\n> I could have sworn we'd refactored that to something like\n>        bool ThereAreAtLeastNActiveBackends(int n)\n> which could drop out of the loop as soon as it'd established what we\n> really need to know.  In general it's unclear that this'd really save\n> much, since in a large fraction of executions the answer would be\n> \"no\", and then you can't drop out of the loop early, or at least not\n> very early.  
But it clearly wins when n == 0 since then you can just\n> return true on sight.\n>\n>> As for why this somewhat weird feature hasn't been removed yet, it's\n>> mainly because we have some benchmarks from Jignesh proving its value in\n>> the hands of an expert.\n>\n> Removal has been proposed several times, but as long as it's off by\n> default, it's fairly harmless to leave it there.  I rather expect\n> it'll stay as it is until someone proposes something that actually works\n> better.  In particular I see no advantage in simply deleting some of the\n> parameters to the existing code.  I'd suggest that we just improve the\n> coding so that we don't scan ProcArray at all when commit_siblings is 0.\n>\n> (I do agree with improving the docs to warn people away from assuming\n> this is a knob to frob mindlessly.)\n>\n>                        regards, tom lane\n>\n\nIn that case I propose that we support commit_siblings=0 which is not\ncurrently supported. Minimal value for commit_siblings is currently\n1. If we support commit_siblings=0 then it should short-circuit that\nfunction call which is often what I do in my tests with commit_delay.\n\nThanks.\nRegards,\nJignesh\n", "msg_date": "Tue, 7 Dec 2010 11:07:44 +0800", "msg_from": "Jignesh Shah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Group commit and commit delay/siblings" }, { "msg_contents": "Jignesh Shah wrote:\n> On Tue, Dec 7, 2010 at 1:55 AM, Tom Lane <[email protected]> wrote:\n> \n>> I could have sworn we'd refactored that to something like\n>> bool ThereAreAtLeastNActiveBackends(int n)\n>> which could drop out of the loop as soon as it'd established what we\n>> really need to know...I'd suggest that we just improve the\n>> coding so that we don't scan ProcArray at all when commit_siblings is 0.\n>>\n>> (I do agree with improving the docs to warn people away from assuming\n>> this is a knob to frob mindlessly.)\n>> \n> In that case I propose that we support commit_siblings=0 which is not\n> currently supported. Minimal value for commit_siblings is currently\n> 1. If we support commit_siblings=0 then it should short-circuit that\n> function call which is often what I do in my tests with commit_delay.\n> \n\nEverybody should be happy now: attached patch refactors the code to \nexit as soon as the siblings count is exceeded, short-circuits with no \nscanning of ProcArray if the minimum is 0, and allows setting the \nsiblings to 0 to enable that shortcut:\n\npostgres# select name,setting,min_val,max_val from pg_settings where \nname='commit_siblings';\n name | setting | min_val | max_val\n-----------------+---------+---------+---------\n commit_siblings | 5 | 0 | 1000\n\nIt also makes it clear in the docs that a) group commit happens even \nwithout this setting being touched, and b) it's unlikely to actually \nhelp anyone. Those are the two parts that seem to confuse people \nwhenever this comes up. Thanks to Rob and the rest of the Facebook \ncommentators for helping highlight exactly what was wrong with the way \nthose were written. 
(It almost makes up for the slight distaste I get \nfrom seeing \"Greg likes MySQL at Facebook\" on my Wall after joining in \nthat discussion)\n\nI can't rebuild the docs on the system I wrote this on at the moment; I \nhope I didn't break anything with my edits but didn't test that yet.\n\nI'll add this into the next CommitFest so we don't forget about it, but \nof course Jignesh is welcome to try this out at his convience to see if \nI've kept the behavior he wants while improving its downside.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us", "msg_date": "Mon, 06 Dec 2010 23:52:50 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Group commit and commit delay/siblings" }, { "msg_contents": "On Mon, 2010-12-06 at 23:52 -0500, Greg Smith wrote:\n> Jignesh Shah wrote:\n> > On Tue, Dec 7, 2010 at 1:55 AM, Tom Lane <[email protected]> wrote:\n> > \n> >> I could have sworn we'd refactored that to something like\n> >> bool ThereAreAtLeastNActiveBackends(int n)\n> >> which could drop out of the loop as soon as it'd established what we\n> >> really need to know...I'd suggest that we just improve the\n> >> coding so that we don't scan ProcArray at all when commit_siblings is 0.\n> >>\n> >> (I do agree with improving the docs to warn people away from assuming\n> >> this is a knob to frob mindlessly.)\n> >> \n> > In that case I propose that we support commit_siblings=0 which is not\n> > currently supported. Minimal value for commit_siblings is currently\n> > 1. If we support commit_siblings=0 then it should short-circuit that\n> > function call which is often what I do in my tests with commit_delay.\n> > \n> \n> Everybody should be happy now: attached patch refactors the code to \n> exit as soon as the siblings count is exceeded, short-circuits with no \n> scanning of ProcArray if the minimum is 0, and allows setting the \n> siblings to 0 to enable that shortcut:\n\nMinor patch, no downsides. Docs checked. Committed.\n\n-- \n Simon Riggs http://www.2ndQuadrant.com/books/\n PostgreSQL Development, 24x7 Support, Training and Services\n \n\n", "msg_date": "Wed, 08 Dec 2010 19:00:52 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Group commit and commit delay/siblings" } ]
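For anyone wanting to see where their own server stands after reading this thread: both GUCs are USERSET, so they can be inspected and experimented with in a single session. The values below are purely illustrative, and per the discussion above the default commit_delay = 0 is the right choice for almost everyone -- group commit happens regardless of these settings.

-- Current settings:
SELECT name, setting, unit, boot_val
FROM pg_settings
WHERE name IN ('commit_delay', 'commit_siblings',
               'synchronous_commit', 'fsync');

-- Confine an experiment to one benchmark session:
SET commit_delay = 10;      -- microseconds to sleep before flushing WAL
SET commit_siblings = 5;    -- only sleep if at least this many other transactions are open
-- ... run the high-commit-rate benchmark ...
RESET commit_delay;
RESET commit_siblings;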
[ { "msg_contents": "hello.\n\ni tested how are distributed values xmin,xmax on pages.\nin my tables . typically there are no more than 80 records\non pages.\n\nmaybe its possible to compress xmin & xmax values to\n1 byte/per record (+table of transactions per page)?\nits reduce the space when more than 1 record is\nfrom the same transaction.\n\n\nTesting query:\n\nSELECT\n (string_to_array(ctid::text,','))[1] as page\n ,count(*) as records\n ,array_upper(array_agg(distinct (xmin::text)),1) as trans\nFROM only\n \"Rejestr stacji do naprawy\"\ngroup by\n (string_to_array(ctid::text,','))[1]\norder by\n 3 desc\n\n------------\npasman\n", "msg_date": "Mon, 6 Dec 2010 15:30:34 +0100", "msg_from": "=?ISO-8859-2?Q?pasman_pasma=F1ski?= <[email protected]>", "msg_from_op": true, "msg_subject": "Strange optimization - xmin,xmax compression :)" }, { "msg_contents": "2010/12/6 pasman pasmański <[email protected]>:\n> hello.\n>\n> i tested how are distributed values xmin,xmax on pages.\n> in my tables . typically there are no more than 80 records\n> on pages.\n>\n> maybe its possible to compress xmin & xmax values to\n> 1 byte/per record (+table of transactions per page)?\n> its reduce the space when more than 1 record is\n> from the same transaction.\n\nNot a bad idea, but not easy to implement, I think.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Fri, 17 Dec 2010 21:46:55 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange optimization - xmin,xmax compression :)" }, { "msg_contents": "On Dec 17, 2010, at 8:46 PM, Robert Haas wrote:\n> 2010/12/6 pasman pasmański <[email protected]>:\n>> hello.\n>> \n>> i tested how are distributed values xmin,xmax on pages.\n>> in my tables . typically there are no more than 80 records\n>> on pages.\n>> \n>> maybe its possible to compress xmin & xmax values to\n>> 1 byte/per record (+table of transactions per page)?\n>> its reduce the space when more than 1 record is\n>> from the same transaction.\n> \n> Not a bad idea, but not easy to implement, I think.\n\nAnother option that would help even more for data warehousing would be storing the XIDs at the table level, because you'll typically have a very limited number of transactions per table.\n\nBut as Robert mentioned, this is not easy to implement. The community would probably need to see some pretty compelling performance numbers to even consider it.\n--\nJim C. Nasby, Database Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n\n\n", "msg_date": "Sun, 19 Dec 2010 12:22:01 -0600", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange optimization - xmin,xmax compression :)" } ]
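Out of curiosity, pasman's query can be extended to rough out what the proposed encoding might save per page. The arithmetic is only a back-of-the-envelope sketch: it assumes 4 bytes each for xmin and xmax today, 1 byte each under the proposal plus a 4-byte per-page slot per distinct transaction, and it only counts distinct xmin values; the table name is the one from the original post.

SELECT page,
       count(*)                               AS records,
       count(DISTINCT xm)                     AS distinct_xmins,
       count(*) * 8                           AS xmin_xmax_bytes_now,
       count(*) * 2 + count(DISTINCT xm) * 4  AS bytes_if_compressed
FROM (
    SELECT (string_to_array(ctid::text, ','))[1] AS page,
           xmin::text AS xm
    FROM ONLY "Rejestr stacji do naprawy"
) AS t
GROUP BY page
ORDER BY records DESC;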
[ { "msg_contents": "I have encountered a problem while restoring the database. There is a \ntable that contains XML data (BLOB), ~ 3 000 000 records, ~ 5.5Gb of \ndata. pg_restore has been running for a week without any considerable \nprogress. There are plenty of lines like these in the log:\n\npg_restore: processing item 3125397 BLOB 10001967\npg_restore: executing BLOB 10001967\n\nCPU usage is 100% always. The total database size is about 100 Gb and it \nrestores in an hour or so without BLOBs.\n", "msg_date": "Tue, 07 Dec 2010 16:36:59 +0800", "msg_from": "Vlad Arkhipov <[email protected]>", "msg_from_op": true, "msg_subject": "Slow BLOBs restoring" }, { "msg_contents": "I discovered this issue a bit more. -j option is slowing down BLOBs \nrestoring. It's about 1000x times slower if you specify this option. \nDoes anybody plan to fix it?\n> I have encountered a problem while restoring the database. There is a \n> table that contains XML data (BLOB), ~ 3 000 000 records, ~ 5.5Gb of \n> data. pg_restore has been running for a week without any considerable \n> progress. There are plenty of lines like these in the log:\n>\n> pg_restore: processing item 3125397 BLOB 10001967\n> pg_restore: executing BLOB 10001967\n>\n> CPU usage is 100% always. The total database size is about 100 Gb and \n> it restores in an hour or so without BLOBs.\n>\n\n", "msg_date": "Wed, 08 Dec 2010 16:50:05 +0800", "msg_from": "Vlad Arkhipov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow BLOBs restoring" }, { "msg_contents": "Vlad Arkhipov <[email protected]> writes:\n> I discovered this issue a bit more. -j option is slowing down BLOBs \n> restoring. It's about 1000x times slower if you specify this option. \n\nAre you by any chance restoring from an 8.3 or older pg_dump file made\non Windows? If so, it's a known issue.\n\n> Does anybody plan to fix it?\n\nNot without a complete reproducible example ... and not at all if it's\nthe known problem. The fix for that is to update pg_dump to 8.4 or\nlater.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 08 Dec 2010 09:46:21 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow BLOBs restoring " }, { "msg_contents": "08.12.2010 22:46, Tom Lane writes:\n> Are you by any chance restoring from an 8.3 or older pg_dump file made\n> on Windows? If so, it's a known issue.\n> \nNo, I tried Linux only.\n\n> Not without a complete reproducible example ... and not at all if it's\n> the known problem. The fix for that is to update pg_dump to 8.4 or\n> later.\n> \nI think you can reproduce it. First I created a database full of many \nBLOBs on Postres 8.4.5. Then I created a dump:\n\npg_dump -F c test > test.backup8\n\nIt took about 15 minutes. Then I tried to restore it on Postgres 8.\n\npg_restore -v -d test2 -j 2 test.backup8\n\nIt restored in 18 minutes. Then I restored it to Postgres 9.0.1, it took \n20 minutes. Then I created a dump there:\n\n/usr/pgsql-9.0/bin/pg_dump -F c test > test.backup9\n\nIt took 25 minutes. 
Finally I tried to restore it and got what I've \nalready described:\n\n/usr/pgsql-9.0/bin/pg_restore -v -d test2 -j 2 test.backup9\n\nHowever if I remove the option '-j', the database restores in 45 minutes.\n", "msg_date": "Thu, 09 Dec 2010 11:58:08 +0800", "msg_from": "Vlad Arkhipov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow BLOBs restoring" }, { "msg_contents": "Vlad Arkhipov <[email protected]> writes:\n> 08.12.2010 22:46, Tom Lane writes:\n>> Are you by any chance restoring from an 8.3 or older pg_dump file made\n>> on Windows?  If so, it's a known issue.\n\n> No, I tried Linux only.\n\nOK, then it's not the missing-data-offsets issue.\n\n> I think you can reproduce it. First I created a database full of many \n> BLOBs on Postres 8.4.5. Then I created a dump:\n\nOh, you should have said how many was \"many\".  I had tried with several\nthousand large blobs yesterday and didn't see any problem.  However,\nwith several hundred thousand small blobs, indeed it gets pretty slow\nas soon as you use -j.\n\noprofile shows all the time is going into reduce_dependencies during the\nfirst loop in restore_toc_entries_parallel (ie, before we've actually\nstarted doing anything in parallel).  The reason is that for each blob,\nwe're iterating through all of the several hundred thousand TOC entries,\nuselessly looking for anything that depends on the blob.  And to add\ninsult to injury, because the blobs are all marked as SECTION_PRE_DATA,\nwe don't get to parallelize at all.  I think we won't get to parallelize\nthe blob data restoration either, since all the blob data is hidden in a\nsingle TOC entry :-(\n\nSo the short answer is \"don't bother to use -j in a mostly-blobs restore,\nbecause it isn't going to help you in 9.0\".\n\nOne fairly simple, if ugly, thing we could do about this is skip calling\nreduce_dependencies during the first loop if the TOC object is a blob;\neffectively assuming that nothing could depend on a blob.  But that does\nnothing about the point that we're failing to parallelize blob\nrestoration.  Right offhand it seems hard to do much about that without\nsome changes to the archive representation of blobs.
Some things that\nmight be worth looking at for 9.1:\n\n* Add a flag to TOC objects saying \"this object has no dependencies\",\nto provide a generalized and principled way to skip the\nreduce_dependencies loop. This is only a good idea if pg_dump knows\nthat or can cheaply determine it at dump time, but I think it can.\n\n* Mark BLOB TOC entries as SECTION_DATA, or somehow otherwise make them\nparallelizable. Also break the BLOBS data item apart into an item per\nBLOB, so that that part's parallelizable. Maybe we should combine the\nmetadata and data for each blob into one TOC item --- if we don't, it\nseems like we need a dependency, which will put us back behind the\neight-ball. I think the reason it's like this is we didn't originally\nhave a separate TOC item per blob; but now that we added that to support\nper-blob ACL data, the monolithic BLOBS item seems pretty pointless.\n(Another thing that would have to be looked at here is the dependency\nbetween a BLOB and any BLOB COMMENT for it.)\n\nThoughts?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 09 Dec 2010 00:28:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow BLOBs restoring " }, { "msg_contents": "On Thu, Dec 9, 2010 at 12:28 AM, Tom Lane <[email protected]> wrote:\n> Vlad Arkhipov <[email protected]> writes:\n>> 08.12.2010 22:46, Tom Lane writes:\n>>> Are you by any chance restoring from an 8.3 or older pg_dump file made\n>>> on Windows?  If so, it's a known issue.\n>\n>> No, I tried Linux only.\n>\n> OK, then it's not the missing-data-offsets issue.\n>\n>> I think you can reproduce it. First I created a database full of many\n>> BLOBs on Postres 8.4.5. Then I created a dump:\n>\n> Oh, you should have said how many was \"many\".  I had tried with several\n> thousand large blobs yesterday and didn't see any problem.  However,\n> with several hundred thousand small blobs, indeed it gets pretty slow\n> as soon as you use -j.\n>\n> oprofile shows all the time is going into reduce_dependencies during the\n> first loop in restore_toc_entries_parallel (ie, before we've actually\n> started doing anything in parallel).  The reason is that for each blob,\n> we're iterating through all of the several hundred thousand TOC entries,\n> uselessly looking for anything that depends on the blob.  And to add\n> insult to injury, because the blobs are all marked as SECTION_PRE_DATA,\n> we don't get to parallelize at all.  I think we won't get to parallelize\n> the blob data restoration either, since all the blob data is hidden in a\n> single TOC entry :-(\n>\n> So the short answer is \"don't bother to use -j in a mostly-blobs restore,\n> becausw it isn't going to help you in 9.0\".\n>\n> One fairly simple, if ugly, thing we could do about this is skip calling\n> reduce_dependencies during the first loop if the TOC object is a blob;\n> effectively assuming that nothing could depend on a blob.  But that does\n> nothing about the point that we're failing to parallelize blob\n> restoration.  Right offhand it seems hard to do much about that without\n> some changes to the archive representation of blobs.  Some things that\n> might be worth looking at for 9.1:\n>\n> * Add a flag to TOC objects saying \"this object has no dependencies\",\n> to provide a generalized and principled way to skip the\n> reduce_dependencies loop.  
This is only a good idea if pg_dump knows\n> that or can cheaply determine it at dump time, but I think it can.\n>\n> * Mark BLOB TOC entries as SECTION_DATA, or somehow otherwise make them\n> parallelizable.  Also break the BLOBS data item apart into an item per\n> BLOB, so that that part's parallelizable.  Maybe we should combine the\n> metadata and data for each blob into one TOC item --- if we don't, it\n> seems like we need a dependency, which will put us back behind the\n> eight-ball.  I think the reason it's like this is we didn't originally\n> have a separate TOC item per blob; but now that we added that to support\n> per-blob ACL data, the monolithic BLOBS item seems pretty pointless.\n> (Another thing that would have to be looked at here is the dependency\n> between a BLOB and any BLOB COMMENT for it.)\n>\n> Thoughts?\n\nIs there any use case for restoring a BLOB but not the BLOB COMMENT or\nBLOB ACLs? Can we just smush everything together into one section?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Thu, 9 Dec 2010 08:05:33 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow BLOBs restoring" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Thu, Dec 9, 2010 at 12:28 AM, Tom Lane <[email protected]> wrote:\n>> * Mark BLOB TOC entries as SECTION_DATA, or somehow otherwise make them\n>> parallelizable. �Also break the BLOBS data item apart into an item per\n>> BLOB, so that that part's parallelizable. �Maybe we should combine the\n>> metadata and data for each blob into one TOC item --- if we don't, it\n>> seems like we need a dependency, which will put us back behind the\n>> eight-ball. �I think the reason it's like this is we didn't originally\n>> have a separate TOC item per blob; but now that we added that to support\n>> per-blob ACL data, the monolithic BLOBS item seems pretty pointless.\n>> (Another thing that would have to be looked at here is the dependency\n>> between a BLOB and any BLOB COMMENT for it.)\n\n> Is there any use case for restoring a BLOB but not the BLOB COMMENT or\n> BLOB ACLs? Can we just smush everything together into one section?\n\nThe ACLs are already part of the main TOC entry for the blob. As for\ncomments, I'd want to keep the handling of those the same as they are\nfor every other kind of object. But that just begs the question of why\ncomments are separate TOC entries in the first place. We could\neliminate this problem if they became fields of object entries across\nthe board. Which would be a non-backwards-compatible change in dump\nfile format, but doing anything about the other issues mentioned above\nwill require that anyway.\n\nI'm not certain however about whether it's safe to treat the\nobject-metadata aspects of a blob as SECTION_DATA rather than\nSECTION_PRE_DATA. That will take a bit of investigation. 
It seems like\nthere shouldn't be any fundamental reason for it not to work, but that\ndoesn't mean there's not any weird assumptions buried someplace in\npg_dump ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 09 Dec 2010 09:50:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow BLOBs restoring " }, { "msg_contents": "I wrote:\n> One fairly simple, if ugly, thing we could do about this is skip calling\n> reduce_dependencies during the first loop if the TOC object is a blob;\n> effectively assuming that nothing could depend on a blob. But that does\n> nothing about the point that we're failing to parallelize blob\n> restoration. Right offhand it seems hard to do much about that without\n> some changes to the archive representation of blobs. Some things that\n> might be worth looking at for 9.1:\n\n> * Add a flag to TOC objects saying \"this object has no dependencies\",\n> to provide a generalized and principled way to skip the\n> reduce_dependencies loop. This is only a good idea if pg_dump knows\n> that or can cheaply determine it at dump time, but I think it can.\n\nI had further ideas about this part of the problem. First, there's no\nneed for a file format change to fix this: parallel restore is already\ngroveling over all the dependencies in its fix_dependencies step, so it\ncould count them for itself easily enough. Second, the real problem\nhere is that reduce_dependencies processing is O(N^2) in the number of\nTOC objects. Skipping it for blobs, or even for all dependency-free\nobjects, doesn't make that very much better: the kind of people who\nreally need parallel restore are still likely to bump into unreasonable\nprocessing time. I think what we need to do is make fix_dependencies\nbuild a reverse lookup list of all the objects dependent on each TOC\nobject, so that the searching behavior in reduce_dependencies can be\neliminated outright. That will take O(N) time and O(N) extra space,\nwhich is a good tradeoff because you won't care if N is small, while if\nN is large you have got to have it anyway.\n\nBarring objections, I will do this and back-patch into 9.0. There is\nmaybe some case for trying to fix 8.4 as well, but since 8.4 didn't\nmake a separate TOC entry for each blob, it isn't as exposed to the\nproblem. We didn't back-patch the last round of efficiency hacks in\nthis area, so I'm thinking it's not necessary here either. Comments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 09 Dec 2010 10:05:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow BLOBs restoring " }, { "msg_contents": "On Thu, Dec 9, 2010 at 10:05 AM, Tom Lane <[email protected]> wrote:\n> I wrote:\n>> One fairly simple, if ugly, thing we could do about this is skip calling\n>> reduce_dependencies during the first loop if the TOC object is a blob;\n>> effectively assuming that nothing could depend on a blob.  But that does\n>> nothing about the point that we're failing to parallelize blob\n>> restoration.  Right offhand it seems hard to do much about that without\n>> some changes to the archive representation of blobs.  Some things that\n>> might be worth looking at for 9.1:\n>\n>> * Add a flag to TOC objects saying \"this object has no dependencies\",\n>> to provide a generalized and principled way to skip the\n>> reduce_dependencies loop.  
This is only a good idea if pg_dump knows\n>> that or can cheaply determine it at dump time, but I think it can.\n>\n> I had further ideas about this part of the problem.  First, there's no\n> need for a file format change to fix this: parallel restore is already\n> groveling over all the dependencies in its fix_dependencies step, so it\n> could count them for itself easily enough.  Second, the real problem\n> here is that reduce_dependencies processing is O(N^2) in the number of\n> TOC objects.  Skipping it for blobs, or even for all dependency-free\n> objects, doesn't make that very much better: the kind of people who\n> really need parallel restore are still likely to bump into unreasonable\n> processing time.  I think what we need to do is make fix_dependencies\n> build a reverse lookup list of all the objects dependent on each TOC\n> object, so that the searching behavior in reduce_dependencies can be\n> eliminated outright.  That will take O(N) time and O(N) extra space,\n> which is a good tradeoff because you won't care if N is small, while if\n> N is large you have got to have it anyway.\n>\n> Barring objections, I will do this and back-patch into 9.0.  There is\n> maybe some case for trying to fix 8.4 as well, but since 8.4 didn't\n> make a separate TOC entry for each blob, it isn't as exposed to the\n> problem.  We didn't back-patch the last round of efficiency hacks in\n> this area, so I'm thinking it's not necessary here either.  Comments?\n\nAh, that sounds like a much cleaner solution.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Thu, 9 Dec 2010 10:56:39 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow BLOBs restoring" }, { "msg_contents": "\n\nOn 12/09/2010 10:05 AM, Tom Lane wrote:\n> I think what we need to do is make fix_dependencies\n> build a reverse lookup list of all the objects dependent on each TOC\n> object, so that the searching behavior in reduce_dependencies can be\n> eliminated outright. That will take O(N) time and O(N) extra space,\n> which is a good tradeoff because you won't care if N is small, while if\n> N is large you have got to have it anyway.\n>\n> Barring objections, I will do this and back-patch into 9.0. There is\n> maybe some case for trying to fix 8.4 as well, but since 8.4 didn't\n> make a separate TOC entry for each blob, it isn't as exposed to the\n> problem. We didn't back-patch the last round of efficiency hacks in\n> this area, so I'm thinking it's not necessary here either. Comments?\n>\n> \t\t\t\n\n\nSound good. Re 8.4: at a pinch people could probably use the 9.0 \npg_restore with their 8.4 dump.\n\ncheers\n\nandrew\n", "msg_date": "Thu, 09 Dec 2010 11:01:41 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow BLOBs restoring" } ]
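A minimal sketch of the reverse-lookup idea discussed in the thread above (the proposed fix for the O(N^2) reduce_dependencies pass during parallel restore). This is not pg_dump source; all names here (TocEntry, fix_dependencies, reduce_dependencies, revdeps) are illustrative stand-ins, and the only assumption is that each TOC-like entry knows which dump ids it depends on.

```python
from collections import defaultdict

class TocEntry:
    """Illustrative stand-in for a TOC entry; not the real pg_dump struct."""
    def __init__(self, dump_id, depends_on=()):
        self.dump_id = dump_id
        self.depends_on = list(depends_on)       # dump ids this entry must wait for
        self.n_deps_left = len(self.depends_on)  # remaining unrestored dependencies

def fix_dependencies(entries):
    # One O(N) pass: map each dump id to the entries that depend on it,
    # so later steps never have to scan the whole TOC looking for dependents.
    revdeps = defaultdict(list)
    for e in entries:
        for dep in e.depends_on:
            revdeps[dep].append(e)
    return revdeps

def reduce_dependencies(finished, revdeps, ready):
    # Only visit entries known to depend on the finished item, instead of
    # iterating over every TOC entry per blob (the behaviour that made a
    # mostly-blobs restore quadratic).
    for e in revdeps.get(finished.dump_id, ()):
        e.n_deps_left -= 1
        if e.n_deps_left == 0:
            ready.append(e)

if __name__ == "__main__":
    blob = TocEntry("BLOB 10001967")
    data = TocEntry("BLOB DATA", depends_on=["BLOB 10001967"])
    entries = [blob, data]
    revdeps = fix_dependencies(entries)
    ready = [e for e in entries if e.n_deps_left == 0]  # the blob is restorable at once
    reduce_dependencies(blob, revdeps, ready)           # finishing it releases the data item
    print([e.dump_id for e in ready])                   # ['BLOB 10001967', 'BLOB DATA']
```

With hundreds of thousands of blob entries this replaces N full scans of the TOC with a single pass plus an O(1) lookup per finished item, which is exactly the tradeoff described in the thread: O(N) extra space for the reverse map in exchange for dropping the quadratic search.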
[ { "msg_contents": "We are in the process of deciding on how to proceed on a database upgrade.\nWe currently have MS SQL 2000 running on Windows 2003 (on my test server).\nI was shocked at the cost for MS SQL 2008 R2 for a new server (2 CPU\nlicense). I started comparing DB’s and came across postgresql. It seemed\nto be exactly what I was after. All of our programming is in ASP.net.\nSince I am running MSSQL 2000 I have no benefit for .Net integration, so it\nis not a concern.\n\n\n\nI ran a head to head test of MS SQL 2000 and Postgresql 9.0. Both are\nrunning on Windows 2003. What I found was quite surprising and I am\nwondering if anyone can point out what is going on here.\nHere is the test I ran.\nI created 2 tables, the main table had 5 fields with a serial ID field. The\nsecond table linked to table 1 for a state field.\n\nI had ASP.net via MSSQL create 1,000 records in the main table. Took 9.85\nseconds to complete.\nNext I had ASP.net via Postgresql create 1,000 records. Took .65625\nseconds.\nPostgresql smoked MS SQL server on that test.\n\n\n\nNext test is to use ASP.net and join all 1,000 rows with table 2 and then\ndisplay the text out.\n\nMS SQL took 0.76 seconds to display\nselect name,address,city,state,statename,stateid,other from pgtemp1 left\njoin pgtemp2 on state=stateid\n\n\n\nThen I did the same test via Postgresql and it took 8.85 seconds! I tried\nit again as I thought I did something wrong. I did a few tweaks such as\nincreasing the shared buffers. Still the best I could get it to was 7.5\nseconds. This is insanely slow compared to MSSQL 2000. What am I missing.\nHere is my SQL statement for postgresql:\nselect name,address,city,state,statename,stateid,other from pgtemp1 left\njoin pgtemp2 on state=stateid\n\n\n\nAny ideas on why the Postgres server is soooo much slower on the joins? I\nam trying to understand what is going on here so please don’t flame me. Any\nadvice is appreciated.\n\n\n\n\n\n*Thanks,\nTom Polak\nRockford Area Association of Realtors\n**\nThe information contained in this email message is intended only for the use\nof the individual or entity named. If the reader of this email is not the\nintended recipient or the employee or agent responsible for delivering it to\nthe intended recipient, you are hereby notified that any dissemination,\ndistribution or copying of this email is strictly prohibited. If you have\nreceived this email in error, please immediately notify us by telephone and\nreply email. Thank you.*\n\n*Although this email and any attachments are believed to be free of any\nviruses or other defects that might affect any computer system into which it\nis received and opened, it is the responsibility of the recipient to ensure\nthat it is free of viruses, and the Rockford Area Association of Realtors\nhereby disclaims any liability for any loss or damage that results.*", "msg_date": "Tue, 7 Dec 2010 11:34:25 -0600", "msg_from": "Tom Polak <[email protected]>", "msg_from_op": true, "msg_subject": "Compared MS SQL 2000 to Postgresql 9.0 on Windows" }, { "msg_contents": "Tom Polak <[email protected]> wrote:\n \n> the best I could get it to was 7.5 seconds.\n \n> select name,address,city,state,statename,stateid,other from\n> pgtemp1 left join pgtemp2 on state=stateid\n \nWe'd need a lot more information. 
Please read this and post again:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \nBe sure to include hardware info, postgresql.conf settings\n(excluding comments), table layouts including indexes and\nconstraints, and the results of:\n \nEXPLAIN ANALYZE select ...\n \n-Kevin\n", "msg_date": "Tue, 07 Dec 2010 12:11:53 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compared MS SQL 2000 to Postgresql 9.0 on\n\t Windows" }, { "msg_contents": "On 12/7/2010 11:34 AM, Tom Polak wrote:\n> We are in the process of deciding on how to proceed on a database\n> upgrade. We currently have MS SQL 2000 running on Windows 2003 (on my\n> test server). I was shocked at the cost for MS SQL 2008 R2 for a new\n> server (2 CPU license). I started comparing DB�s and came across\n> postgresql. It seemed to be exactly what I was after. All of our\n> programming is in ASP.net. Since I am running MSSQL 2000 I have no\n> benefit for .Net integration, so it is not a concern.\n>\n> I ran a head to head test of MS SQL 2000 and Postgresql 9.0. Both are\n> running on Windows 2003. What I found was quite surprising and I am\n> wondering if anyone can point out what is going on here.\n> Here is the test I ran.\n> I created 2 tables, the main table had 5 fields with a serial ID field.\n> The second table linked to table 1 for a state field.\n>\n> I had ASP.net via MSSQL create 1,000 records in the main table. Took\n> 9.85 seconds to complete.\n> Next I had ASP.net via Postgresql create 1,000 records. Took .65625\n> seconds.\n> Postgresql smoked MS SQL server on that test.\n\ndid you play with the postgresql.conf file? Maybe turn off fsync? I'd \nguess the above is mssql is flushing to disk while PG isnt.\n\n>\n> Next test is to use ASP.net and join all 1,000 rows with table 2 and\n> then display the text out.\n>\n> MS SQL took 0.76 seconds to display\n> select name,address,city,state,statename,stateid,other from pgtemp1 left\n> join pgtemp2 on state=stateid\n>\n> Then I did the same test via Postgresql and it took 8.85 seconds! I\n> tried it again as I thought I did something wrong. I did a few tweaks\n> such as increasing the shared buffers. Still the best I could get it to\n> was 7.5 seconds. This is insanely slow compared to MSSQL 2000. What am\n> I missing. Here is my SQL statement for postgresql:\n> select name,address,city,state,statename,stateid,other from pgtemp1 left\n> join pgtemp2 on state=stateid\n>\n> Any ideas on why the Postgres server is soooo much slower on the\n> joins? I am trying to understand what is going on here so please don�t\n> flame me. Any advice is appreciated.\n>\n\nDid you create an index? That'd be my first guess. Also, can you run \nthe sql from the command line client (psql) and see if it takes that \nlong? While your in psql, stick a 'explain analyze' infront of your \nquery, and let's see its output.\n\nAlso, as a fair warning: mssql doesn't really care about transactions, \nbut PG really does. Make sure all your code is properly starting and \ncommiting transactions.\n\n-Andy\n", "msg_date": "Tue, 07 Dec 2010 12:13:56 -0600", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compared MS SQL 2000 to Postgresql 9.0 on Windows" }, { "msg_contents": "On Tuesday 07 December 2010 18:34:25 Tom Polak wrote:\n> Then I did the same test via Postgresql and it took 8.85 seconds! I tried\n> it again as I thought I did something wrong. I did a few tweaks such as\n> increasing the shared buffers. 
Still the best I could get it to was 7.5\n> seconds. This is insanely slow compared to MSSQL 2000. What am I missing.\n> Here is my SQL statement for postgresql:\n> select name,address,city,state,statename,stateid,other from pgtemp1 left\n> join pgtemp2 on state=stateid\nI think you would at least provide the exact schema and possibly some example \ndata (pg_dump) to get us somewhere.\n\nI would suggest you post the output of EXPLAIN ANALYZE $yourquery - that gives \nus information about how that query was executed.\n\nGreetings,\n\nAndres\n", "msg_date": "Tue, 7 Dec 2010 19:20:36 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compared MS SQL 2000 to Postgresql 9.0 on Windows" }, { "msg_contents": "On 12/7/10 9:34 AM, Tom Polak wrote:\n> We are in the process of deciding on how to proceed on a database upgrade. We currently have MS SQL 2000 running on Windows 2003 (on my test server). I was shocked at the cost for MS SQL 2008 R2 for a new server (2 CPU license). I started comparing DB�s and came across postgresql. It seemed to be exactly what I was after. All of our programming is in ASP.net. Since I am running MSSQL 2000 I have no benefit for .Net integration, so it is not a concern.\n>\n> I ran a head to head test of MS SQL 2000 and Postgresql 9.0. Both are running on Windows 2003. What I found was quite surprising and I am wondering if anyone can point out what is going on here.\n> Here is the test I ran.\n> I created 2 tables, the main table had 5 fields with a serial ID field. The second table linked to table 1 for a state field.\n\nDid you run ANALYZE on the database after creating it and loading the data? If not, do it and try again (along with the other suggestions you'll get here). ANALYZE gathers the statistics that allow the planner to do its job. Without statistics, all bets are off.\n\nCraig\n\n> I had ASP.net via MSSQL create 1,000 records in the main table. Took 9.85 seconds to complete.\n> Next I had ASP.net via Postgresql create 1,000 records. Took .65625 seconds.\n> Postgresql smoked MS SQL server on that test.\n>\n> Next test is to use ASP.net and join all 1,000 rows with table 2 and then display the text out.\n>\n> MS SQL took 0.76 seconds to display\n> select name,address,city,state,statename,stateid,other from pgtemp1 left join pgtemp2 on state=stateid\n>\n> Then I did the same test via Postgresql and it took 8.85 seconds! I tried it again as I thought I did something wrong. I did a few tweaks such as increasing the shared buffers. Still the best I could get it to was 7.5 seconds. This is insanely slow compared to MSSQL 2000. What am I missing. Here is my SQL statement for postgresql:\n> select name,address,city,state,statename,stateid,other from pgtemp1 left join pgtemp2 on state=stateid\n>\n> Any ideas on why the Postgres server is soooo much slower on the joins? I am trying to understand what is going on here so please don�t flame me. Any advice is appreciated.\n>\n> *Thanks,\n> Tom Polak\n> Rockford Area Association of Realtors\n> */\n> The information contained in this email message is intended only for the use of the individual or entity named. If the reader of this email is not the intended recipient or the employee or agent responsible for delivering it to the intended recipient, you are hereby notified that any dissemination, distribution or copying of this email is strictly prohibited. If you have received this email in error, please immediately notify us by telephone and reply email. 
Thank you./\n>\n> /Although this email and any attachments are believed to be free of any viruses or other defects that might affect any computer system into which it is received and opened, it is the responsibility of the recipient to ensure that it is free of viruses, and the Rockford Area Association of Realtors hereby disclaims any liability for any loss or damage that results./\n>\n\n", "msg_date": "Tue, 07 Dec 2010 11:23:24 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compared MS SQL 2000 to Postgresql 9.0 on Windows" }, { "msg_contents": "On 12/7/2010 1:22 PM, Justin Pitts wrote:\n>>\n>> Also, as a fair warning: mssql doesn't really care about transactions, but\n>> PG really does. Make sure all your code is properly starting and commiting\n>> transactions.\n>>\n>> -Andy\n>\n> I do not understand that statement. Can you explain it a bit better?\n\nIn mssql you can write code that connects to the db, fire off updates \nand inserts, and then disconnects. I believe mssql will keep all your \nchanges, and the transaction stuff is done for you.\n\nIn PG the first statement you fire off (like an \"insert into\" for \nexample) will start a transaction. If you dont commit before you \ndisconnect that transaction will be rolled back. Even worse, if your \nprogram does not commit, but keeps the connection to the db open, the \ntransaction will stay open too.\n\nThere are differences in the way mssql and pg do transactions. mssql \nuses a transaction log and keeps current data in the table. In mssql if \nyou open a transaction and write a bunch of stuff, the table contains \nthat new stuff. Everyone can see it. (I think default transaction \nisolation level is read commited). But if you set your isolation level \nto something with repeatable read, then your program will block and have \nto wait on every little change to the table. (or, probably page.. I \nthink mssql has page level locking?)\n\nanyway, in PG, multiple versions of the same row are kept, and when you \nopen, and keep open a transaction, PG has to keep a version of the row \nfor every change that other people make. So a long lasting transaction \ncould create hundreds of versions of one row. Then when somebody goes \nto select against that table, it has to scan not only the rows, but \nevery version of every row!\n\nSo my point is, in PG, use transactions as they were meant to be used, \nas single atomic operations. Start, do some work, commit.\n\nmssql made it easy to ignore transactions by doing it for you. Ignoring \ntransaction in PG will hurt you.\n\nyou can google MVCC and \"postgres idle in transaction\" for more.\n\n-Andy\n", "msg_date": "Tue, 07 Dec 2010 13:43:21 -0600", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compared MS SQL 2000 to Postgresql 9.0 on Windows" }, { "msg_contents": "On Tue, Dec 7, 2010 at 11:43 AM, Andy Colson <[email protected]> wrote:\n\n> In PG the first statement you fire off (like an \"insert into\" for example)\n> will start a transaction.  If you dont commit before you disconnect that\n> transaction will be rolled back.  Even worse, if your program does not\n> commit, but keeps the connection to the db open, the transaction will stay\n> open too.\n\nHuh - is this new? I always thought that every statement was wrapped\nin its own transaction unless you explicitly start your own. 
So you\nshouldn't need to commit before closing a connection if you never\nopened a transaction to begin with.\n\n\n-- \nRegards,\nRichard Broersma Jr.\n", "msg_date": "Tue, 7 Dec 2010 11:56:51 -0800", "msg_from": "Richard Broersma <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compared MS SQL 2000 to Postgresql 9.0 on Windows" }, { "msg_contents": "On 07/12/2010 7:43 PM, Andy Colson wrote:\n> On 12/7/2010 1:22 PM, Justin Pitts wrote:\n>>>\n>>> Also, as a fair warning: mssql doesn't really care about \n>>> transactions, but\n>>> PG really does. Make sure all your code is properly starting and \n>>> commiting\n>>> transactions.\n>>>\n>>> -Andy\n>>\n>> I do not understand that statement. Can you explain it a bit better?\n>\n> In mssql you can write code that connects to the db, fire off updates \n> and inserts, and then disconnects. I believe mssql will keep all your \n> changes, and the transaction stuff is done for you.\n>\n> In PG the first statement you fire off (like an \"insert into\" for \n> example) will start a transaction. If you dont commit before you \n> disconnect that transaction will be rolled back. Even worse, if your \n> program does not commit, but keeps the connection to the db open, the \n> transaction will stay open too.\nAs far as I know both MS SQL and and Postgres work just the same as \nregards explicit and implicit (autocommit) transactions, only the \nunderlying storage/logging mechanisms are different.\n\nTransactions shouldn't make ay real difference to the select/join \nperformance being complained about though. It's already stated that the \ninsert performance of postgres far exceeds SQL Server, which is my \nexperience also.\n\nAs already suggested, until we see the exact table definitions including \nindexes etc. there's no real way to tell what the problem is. How many \nrows are in the second table? It really shouldn't take that much time to \nread 1000 rows unless you have a bizarrely slow hard disk.\n\nIt would be nice to eliminate any programmatic or driver influence too. \nHow does the SQL select execute in enterprise manager for mssql and psql \nor pgadmin for postgres?\n\nCheers,\nGary.\n\n", "msg_date": "Tue, 07 Dec 2010 19:58:37 +0000", "msg_from": "Gary Doades <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compared MS SQL 2000 to Postgresql 9.0 on Windows" }, { "msg_contents": "On Tue, Dec 07, 2010 at 11:56:51AM -0800, Richard Broersma wrote:\n> On Tue, Dec 7, 2010 at 11:43 AM, Andy Colson <[email protected]> wrote:\n> \n> > In PG the first statement you fire off (like an \"insert into\" for example)\n> > will start a transaction. ?If you dont commit before you disconnect that\n> > transaction will be rolled back. ?Even worse, if your program does not\n> > commit, but keeps the connection to the db open, the transaction will stay\n> > open too.\n> \n> Huh - is this new? I always thought that every statement was wrapped\n> in its own transaction unless you explicitly start your own. 
So you\n> shouldn't need to commit before closing a connection if you never\n> opened a transaction to begin with.\n> \n> \n> -- \n> Regards,\n> Richard Broersma Jr.\n> \n\nThe default of autocommit unless explicitly starting a transaction with\nBEGIN is the normal behavior that I have seen as well.\n\nCheers,\nKen\n", "msg_date": "Tue, 7 Dec 2010 14:10:28 -0600", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compared MS SQL 2000 to Postgresql 9.0 on Windows" }, { "msg_contents": "Tom Polak wrote:\n>\n> We are in the process of deciding on how to proceed on a database \n> upgrade. We currently have MS SQL 2000 running on Windows 2003 (on my \n> test server). I was shocked at the cost for MS SQL 2008 R2 for a new \n> server (2 CPU license). I started comparing DB�s and came across \n> postgresql. It seemed to be exactly what I was after. All of our \n> programming is in ASP.net. Since I am running MSSQL 2000 I have no \n> benefit for .Net integration, so it is not a concern.\n>\n> \n>\n> I ran a head to head test of MS SQL 2000 and Postgresql 9.0. Both are \n> running on Windows 2003. What I found was quite surprising and I am \n> wondering if anyone can point out what is going on here. \n> Here is the test I ran. \n> I created 2 tables, the main table had 5 fields with a serial ID \n> field. The second table linked to table 1 for a state field.\n>\n> I had ASP.net via MSSQL create 1,000 records in the main table. Took \n> 9.85 seconds to complete.\n> Next I had ASP.net via Postgresql create 1,000 records. Took .65625 \n> seconds.\n> Postgresql smoked MS SQL server on that test.\n>\n> \n>\n> Next test is to use ASP.net and join all 1,000 rows with table 2 and \n> then display the text out.\n>\n> MS SQL took 0.76 seconds to display\n> select name,address,city,state,statename,stateid,other from pgtemp1 \n> left join pgtemp2 on state=stateid\n>\n> \n>\n> Then I did the same test via Postgresql and it took 8.85 seconds! I \n> tried it again as I thought I did something wrong. I did a few tweaks \n> such as increasing the shared buffers. Still the best I could get it \n> to was 7.5 seconds. This is insanely slow compared to MSSQL 2000. \n> What am I missing. Here is my SQL statement for postgresql:\n> select name,address,city,state,statename,stateid,other from pgtemp1 \n> left join pgtemp2 on state=stateid\n>\n> \n>\n> Any ideas on why the Postgres server is soooo much slower on the \n> joins? I am trying to understand what is going on here so please \n> don�t flame me. Any advice is appreciated. \n>\n> \n>\n> \n>\n>\nAre all structures the same? Are all indexes the same? What does \n\"explain analyze\" tell you?\n\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n", "msg_date": "Tue, 07 Dec 2010 15:22:15 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compared MS SQL 2000 to Postgresql 9.0 on Windows" }, { "msg_contents": "On 12/7/2010 2:10 PM, Kenneth Marshall wrote:\n> On Tue, Dec 07, 2010 at 11:56:51AM -0800, Richard Broersma wrote:\n>> On Tue, Dec 7, 2010 at 11:43 AM, Andy Colson<[email protected]> wrote:\n>>\n>>> In PG the first statement you fire off (like an \"insert into\" for example)\n>>> will start a transaction. ?If you dont commit before you disconnect that\n>>> transaction will be rolled back. 
?Even worse, if your program does not\n>>> commit, but keeps the connection to the db open, the transaction will stay\n>>> open too.\n>>\n>> Huh - is this new? I always thought that every statement was wrapped\n>> in its own transaction unless you explicitly start your own. So you\n>> shouldn't need to commit before closing a connection if you never\n>> opened a transaction to begin with.\n>>\n>>\n>> --\n>> Regards,\n>> Richard Broersma Jr.\n>>\n>\n> The default of autocommit unless explicitly starting a transaction with\n> BEGIN is the normal behavior that I have seen as well.\n>\n> Cheers,\n> Ken\n\nCrikey! You're right. I need to be more careful with my assumptions.\n\nI maintain that people need to be more careful with pg transactions. \nI've seen several posts about \"idle in transaction\". But its not as bad \nas I made out. My confusion comes from the library I use to hit PG, \nwhich fires off a \"begin\" for me, and if I dont explicitly commit, it \ngets rolled back.\n\nsorry, it was confused between framework and PG.\n\n-Andy\n", "msg_date": "Tue, 07 Dec 2010 14:23:16 -0600", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compared MS SQL 2000 to Postgresql 9.0 on Windows" }, { "msg_contents": "What I was really after was a quick comparison between the two. I did not\ncreate anything special, just the two tables. One table SQL generated the\nrecords for me. I did not tweak anything after installing either system.\nThere was a primary key on the ID field of both tables, no indexes though\nin either system. The second table had 1 record in it. The hardware it\nis running on is fairly good, dual Xeon CPUs, 4 GB of RAM, Raid 5. Btw,\nthe cost for MS SQL 2008 R2 is ~$14,000 for 2 cpus,\nhttp://www.cdw.com/shop/products/default.aspx?EDC=2167810 . That is why I\nam pursuing this. 
:)\n\nHere is the ASP.net code that I was running\nDim starttime As Date = Date.Now\n Dim endtime As Date\n Dim reader As NpgsqlDataReader\n Dim output2 As String = \"\"\n\n\n Dim oConn As New\nNpgsqlConnection(\"Server=192.168.1.5;Port=5432;Userid=postgres;Password=12\n345;Protocol=3;SSL=false;Pooling=true;MinPoolSize=1;MaxPoolSize=20;Encodin\ng=UNICODE;Timeout=15;SslMode=Disable;Database=tomtemp\")\n oConn.Open()\n Dim x As Integer = 0\n 'For x = 0 To 1000 'uncomment to insert records.\n 'Dim command As New NpgsqlCommand(\"insert into pgtemp1(name,\naddress, city, state) values ('Tom\" & x & \"','123\" & x & \" main\nst','rockford',1) \", oConn) 'meant for loop to put in 1,000 records in\npgtemp1 table\n 'Dim command As New NpgsqlCommand(\"insert into pgtemp2(statename,\nstateid, other) values ('Illinois',1,'This is a lot of fun') \", oConn)\n'only sends 1 record into the table pgtemp2\n 'command.ExecuteNonQuery()\n 'Next\n\n 'join table and read 1000 rows.\n Dim command As New NpgsqlCommand(\"select\nname,address,city,state,statename,stateid,other from pgtemp1 left join\npgtemp2 on state=stateid\", oConn)\n reader = command.ExecuteReader()\n While reader.read()\n output2 += \"<tr><td>\" & reader(\"name\") & \"</td><td>\" &\nreader(\"address\") & \"</td><td>\" & reader(\"city\") & \"</td><td>\" &\nreader(\"statename\") & \"</td><td>\" & reader(\"other\") & \"</td></tr>\"\n End While\n oConn.Close()\n readeroutput.text =\n\"<table><tr><td>Name:</td><td>Address:</td><td>City:</td><td>State</td><td\n>Other</td></tr>\" & output2 & \"</table>\"\n\n endtime = Date.Now\n Dim runtime As String\n runtime = endtime.Subtract(starttime).TotalSeconds\n output.text = starttime.ToString & \" \" & runtime\n\nThe SQL is a straight convert from MS SQL code. I did not tweak either\nsystem.\n\n From EXPLAIN ANALYZE I can see the query ran much faster.\n\"Nested Loop Left Join (cost=0.00..138.04 rows=1001 width=1298) (actual\ntime=0.036..4.679 rows=1001 loops=1)\"\n\" Join Filter: (pgtemp1.state = pgtemp2.stateid)\"\n\" -> Seq Scan on pgtemp1 (cost=0.00..122.01 rows=1001 width=788)\n(actual time=0.010..0.764 rows=1001 loops=1)\"\n\" -> Materialize (cost=0.00..1.01 rows=1 width=510) (actual\ntime=0.000..0.001 rows=1 loops=1001)\"\n\" -> Seq Scan on pgtemp2 (cost=0.00..1.01 rows=1 width=510)\n(actual time=0.006..0.008 rows=1 loops=1)\"\n\"Total runtime: 5.128 ms\"\n\nThe general question comes down to, can I expect decent perfomance from\nPostgresql compared to MSSQL. I was hoping that Postgresql 9.0 beat MSSQL\n2000 since MS 2000 is over 10 years old.\n\nThanks,\nTom Polak\nRockford Area Association of Realtors\n815-395-6776 x203\n\nThe information contained in this email message is intended only for the\nuse of the individual or entity named. If the reader of this email is not\nthe intended recipient or the employee or agent responsible for delivering\nit to the intended recipient, you are hereby notified that any\ndissemination, distribution or copying of this email is strictly\nprohibited. If you have received this email in error, please immediately\nnotify us by telephone and reply email. 
Thank you.\n\nAlthough this email and any attachments are believed to be free of any\nviruses or other defects that might affect any computer system into which\nit is received and opened, it is the responsibility of the recipient to\nensure that it is free of viruses, and the Rockford Area Association of\nRealtors hereby disclaims any liability for any loss or damage that\nresults.\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Andy Colson\nSent: Tuesday, December 07, 2010 2:23 PM\nTo: Kenneth Marshall\nCc: Richard Broersma; Justin Pitts; [email protected]\nSubject: Re: [PERFORM] Compared MS SQL 2000 to Postgresql 9.0 on Windows\n\nOn 12/7/2010 2:10 PM, Kenneth Marshall wrote:\n> On Tue, Dec 07, 2010 at 11:56:51AM -0800, Richard Broersma wrote:\n>> On Tue, Dec 7, 2010 at 11:43 AM, Andy Colson<[email protected]>\nwrote:\n>>\n>>> In PG the first statement you fire off (like an \"insert into\" for\nexample)\n>>> will start a transaction. ?If you dont commit before you disconnect\nthat\n>>> transaction will be rolled back. ?Even worse, if your program does not\n>>> commit, but keeps the connection to the db open, the transaction will\nstay\n>>> open too.\n>>\n>> Huh - is this new? I always thought that every statement was wrapped\n>> in its own transaction unless you explicitly start your own. So you\n>> shouldn't need to commit before closing a connection if you never\n>> opened a transaction to begin with.\n>>\n>>\n>> --\n>> Regards,\n>> Richard Broersma Jr.\n>>\n>\n> The default of autocommit unless explicitly starting a transaction with\n> BEGIN is the normal behavior that I have seen as well.\n>\n> Cheers,\n> Ken\n\nCrikey! You're right. I need to be more careful with my assumptions.\n\nI maintain that people need to be more careful with pg transactions.\nI've seen several posts about \"idle in transaction\". But its not as bad\nas I made out. My confusion comes from the library I use to hit PG,\nwhich fires off a \"begin\" for me, and if I dont explicitly commit, it\ngets rolled back.\n\nsorry, it was confused between framework and PG.\n\n-Andy\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 7 Dec 2010 15:29:54 -0600", "msg_from": "Tom Polak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Compared MS SQL 2000 to Postgresql 9.0 on Windows" }, { "msg_contents": "On 12/7/10 1:29 PM, Tom Polak wrote:\n> What I was really after was a quick comparison between the two. I did not\n> create anything special, just the two tables. One table SQL generated the\n> records for me. I did not tweak anything after installing either system.\n\nThat's not a valid test. Postgres is NOT intended to be used out of the box. The default parameters aren't useful.\n\n> There was a primary key on the ID field of both tables, no indexes though\n> in either system. The second table had 1 record in it. The hardware it\n> is running on is fairly good, dual Xeon CPUs, 4 GB of RAM, Raid 5. Btw,\n> the cost for MS SQL 2008 R2 is ~$14,000 for 2 cpus,\n> http://www.cdw.com/shop/products/default.aspx?EDC=2167810 . That is why I\n> am pursuing this. 
:)\n>\n> Here is the ASP.net code that I was running\n> Dim starttime As Date = Date.Now\n> Dim endtime As Date\n> Dim reader As NpgsqlDataReader\n> Dim output2 As String = \"\"\n>\n>\n> Dim oConn As New\n> NpgsqlConnection(\"Server=192.168.1.5;Port=5432;Userid=postgres;Password=12\n> 345;Protocol=3;SSL=false;Pooling=true;MinPoolSize=1;MaxPoolSize=20;Encodin\n> g=UNICODE;Timeout=15;SslMode=Disable;Database=tomtemp\")\n> oConn.Open()\n> Dim x As Integer = 0\n> 'For x = 0 To 1000 'uncomment to insert records.\n> 'Dim command As New NpgsqlCommand(\"insert into pgtemp1(name,\n> address, city, state) values ('Tom\"& x& \"','123\"& x& \" main\n> st','rockford',1) \", oConn) 'meant for loop to put in 1,000 records in\n> pgtemp1 table\n> 'Dim command As New NpgsqlCommand(\"insert into pgtemp2(statename,\n> stateid, other) values ('Illinois',1,'This is a lot of fun') \", oConn)\n> 'only sends 1 record into the table pgtemp2\n> 'command.ExecuteNonQuery()\n> 'Next\n\nYou still haven't done an ANALYZE sql statement after filling your tables with data. You should execute \"analyze pgtemp1\" and \"analyze pgtemp2\" before you do any performance tests. Otherwise your results are meaningless.\n\nCraig\n\n>\n> 'join table and read 1000 rows.\n> Dim command As New NpgsqlCommand(\"select\n> name,address,city,state,statename,stateid,other from pgtemp1 left join\n> pgtemp2 on state=stateid\", oConn)\n> reader = command.ExecuteReader()\n> While reader.read()\n> output2 += \"<tr><td>\"& reader(\"name\")& \"</td><td>\"&\n> reader(\"address\")& \"</td><td>\"& reader(\"city\")& \"</td><td>\"&\n> reader(\"statename\")& \"</td><td>\"& reader(\"other\")& \"</td></tr>\"\n> End While\n> oConn.Close()\n> readeroutput.text =\n> \"<table><tr><td>Name:</td><td>Address:</td><td>City:</td><td>State</td><td\n>> Other</td></tr>\"& output2& \"</table>\"\n>\n> endtime = Date.Now\n> Dim runtime As String\n> runtime = endtime.Subtract(starttime).TotalSeconds\n> output.text = starttime.ToString& \" \"& runtime\n>\n> The SQL is a straight convert from MS SQL code. I did not tweak either\n> system.\n>\n>> From EXPLAIN ANALYZE I can see the query ran much faster.\n> \"Nested Loop Left Join (cost=0.00..138.04 rows=1001 width=1298) (actual\n> time=0.036..4.679 rows=1001 loops=1)\"\n> \" Join Filter: (pgtemp1.state = pgtemp2.stateid)\"\n> \" -> Seq Scan on pgtemp1 (cost=0.00..122.01 rows=1001 width=788)\n> (actual time=0.010..0.764 rows=1001 loops=1)\"\n> \" -> Materialize (cost=0.00..1.01 rows=1 width=510) (actual\n> time=0.000..0.001 rows=1 loops=1001)\"\n> \" -> Seq Scan on pgtemp2 (cost=0.00..1.01 rows=1 width=510)\n> (actual time=0.006..0.008 rows=1 loops=1)\"\n> \"Total runtime: 5.128 ms\"\n>\n> The general question comes down to, can I expect decent perfomance from\n> Postgresql compared to MSSQL. I was hoping that Postgresql 9.0 beat MSSQL\n> 2000 since MS 2000 is over 10 years old.\n>\n> Thanks,\n> Tom Polak\n> Rockford Area Association of Realtors\n> 815-395-6776 x203\n>\n> The information contained in this email message is intended only for the\n> use of the individual or entity named. If the reader of this email is not\n> the intended recipient or the employee or agent responsible for delivering\n> it to the intended recipient, you are hereby notified that any\n> dissemination, distribution or copying of this email is strictly\n> prohibited. If you have received this email in error, please immediately\n> notify us by telephone and reply email. 
Thank you.\n>\n> Although this email and any attachments are believed to be free of any\n> viruses or other defects that might affect any computer system into which\n> it is received and opened, it is the responsibility of the recipient to\n> ensure that it is free of viruses, and the Rockford Area Association of\n> Realtors hereby disclaims any liability for any loss or damage that\n> results.\n>\n>\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Andy Colson\n> Sent: Tuesday, December 07, 2010 2:23 PM\n> To: Kenneth Marshall\n> Cc: Richard Broersma; Justin Pitts; [email protected]\n> Subject: Re: [PERFORM] Compared MS SQL 2000 to Postgresql 9.0 on Windows\n>\n> On 12/7/2010 2:10 PM, Kenneth Marshall wrote:\n>> On Tue, Dec 07, 2010 at 11:56:51AM -0800, Richard Broersma wrote:\n>>> On Tue, Dec 7, 2010 at 11:43 AM, Andy Colson<[email protected]>\n> wrote:\n>>>\n>>>> In PG the first statement you fire off (like an \"insert into\" for\n> example)\n>>>> will start a transaction. ?If you dont commit before you disconnect\n> that\n>>>> transaction will be rolled back. ?Even worse, if your program does not\n>>>> commit, but keeps the connection to the db open, the transaction will\n> stay\n>>>> open too.\n>>>\n>>> Huh - is this new? I always thought that every statement was wrapped\n>>> in its own transaction unless you explicitly start your own. So you\n>>> shouldn't need to commit before closing a connection if you never\n>>> opened a transaction to begin with.\n>>>\n>>>\n>>> --\n>>> Regards,\n>>> Richard Broersma Jr.\n>>>\n>>\n>> The default of autocommit unless explicitly starting a transaction with\n>> BEGIN is the normal behavior that I have seen as well.\n>>\n>> Cheers,\n>> Ken\n>\n> Crikey! You're right. I need to be more careful with my assumptions.\n>\n> I maintain that people need to be more careful with pg transactions.\n> I've seen several posts about \"idle in transaction\". But its not as bad\n> as I made out. My confusion comes from the library I use to hit PG,\n> which fires off a \"begin\" for me, and if I dont explicitly commit, it\n> gets rolled back.\n>\n> sorry, it was confused between framework and PG.\n>\n> -Andy\n>\n\n", "msg_date": "Tue, 07 Dec 2010 13:53:41 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compared MS SQL 2000 to Postgresql 9.0 on Windows" }, { "msg_contents": "On 07/12/2010 9:29 PM, Tom Polak wrote:\n>\n> From EXPLAIN ANALYZE I can see the query ran much faster.\n> \"Nested Loop Left Join (cost=0.00..138.04 rows=1001 width=1298) (actual\n> time=0.036..4.679 rows=1001 loops=1)\"\n> \" Join Filter: (pgtemp1.state = pgtemp2.stateid)\"\n> \" -> Seq Scan on pgtemp1 (cost=0.00..122.01 rows=1001 width=788)\n> (actual time=0.010..0.764 rows=1001 loops=1)\"\n> \" -> Materialize (cost=0.00..1.01 rows=1 width=510) (actual\n> time=0.000..0.001 rows=1 loops=1001)\"\n> \" -> Seq Scan on pgtemp2 (cost=0.00..1.01 rows=1 width=510)\n> (actual time=0.006..0.008 rows=1 loops=1)\"\n> \"Total runtime: 5.128 ms\"\n>\n> The general question comes down to, can I expect decent perfomance from\n> Postgresql compared to MSSQL. I was hoping that Postgresql 9.0 beat MSSQL\n> 2000 since MS 2000 is over 10 years old.\n>\nSo postgres actually executed the select in around 5 miiliseconds. \nPretty good I would say. The problem therefore lies not with postgres \nitself, but what is done with the results afterwards? 
Assuming that this \nis pure local and therefore no network issues, perhaps there is a \nperformance issue in this case with the Npgsql driver? Someone who knows \nmore about this driver could perhaps shed some light on this?\n\nI have used .NET (C#) with postgres before, but only using the odbc \ndriver. Perhaps you could try that instead (using OdbcCommand, \nOdbcDataReader etc.).\n\nI mainly use ruby (jruby) with postgres both under linux and Windows, \nbut I can certainly process 1000 records of similar structure in well \nunder 1 second.\n\nCheers,\nGary.\n\n", "msg_date": "Tue, 07 Dec 2010 22:11:48 +0000", "msg_from": "Gary Doades <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compared MS SQL 2000 to Postgresql 9.0 on Windows" }, { "msg_contents": "Tom Polak <[email protected]> wrote:\n \n> I did not tweak anything after installing either system.\n \nPostgreSQL is set up with defaults such that it will start up and\nrun on the most ancient an underpowered system people are likely to\nhave lying around. It is expected that people will tune it for\nserious production use, although people often run for years before\nthey hit a case where the tuning makes enough of a difference that\nthey do something about it. For guidelines see this page:\n \nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n \nYou can get a quick comparison without doing any tuning, but it\nwon't tell you much about how something else compares to PostgreSQL\nwhen it is configured for production use.\n \n> The hardware it is running on is fairly good, dual Xeon CPUs, 4 GB\n> of RAM, Raid 5.\n \nFor comparison, I would set shared_buffers to at least 200 MB,\neffective_cache_size to 2 to 3 GB, and I would probably drop both\nseq_page_cost and random_page_cost to 0.1, unless you actually\nexpect to be using a database large enough that the active portion\nwon't be cached. (In that case, a test with tiny tables *really*\nmeans nothing, though.) There are other settings that will also\nhelp.\n \n> \"Nested Loop Left Join (cost=0.00..138.04 rows=1001 width=1298)\n> (actual time=0.036..4.679 rows=1001 loops=1)\"\n \n> \"Total runtime: 5.128 ms\"\n \nThe 0.036 ms is how long it took to produce the first row of the\nresult once it started running, 4.679 ms is the total run time, and\n5.128 includes miscellaneous other time, such as planning time. Of\ncourse, the EXPLAIN ANALYZE adds some overhead, so the actual run\ntime would normally be faster, and with tuning it might be still\nfaster.\n \n> The general question comes down to, can I expect decent perfomance\n> from Postgresql compared to MSSQL.\n \nThat has been my experience. There's something about your runtime\nenvironment which isn't playing well with PostgreSQL. If it were\nme, I would make sure that as little of my stack as possible\ndepended on products provided by anyone with an interest in seeing\nPostgreSQL look bad compared to the alternative. I can think of at\nleast one company with fourteen thousand reasons to do so.\n \n-Kevin\n", "msg_date": "Tue, 07 Dec 2010 16:39:59 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compared MS SQL 2000 to Postgresql 9.0 on\n\t Windows" }, { "msg_contents": "> The hardware it\n> is running on is fairly good, dual Xeon CPUs, 4 GB of RAM, Raid 5.\n\nFor a database you'd want to consider replacing the RAID1 with a RAID1 (or \nRAID10). RAID5 is slow for small random updates, which are common in \ndatabases. 
Since you probably have enough harddisks anyway, this won't \ncost you. Linux or freebsd would also be better choices for postgres \nrather than windows.\n\nAlso, as said, your issue looks very much like a problem in the way your \napplication communicates with postgres : if it takes postgres 5 ms to \nprocess the query and your application gets the result 8 seconds later, \nthere is a problem. Note that SQL Server probably takes just a few ms for \nsuch a simple query, too, so your not really benchmarking SQL server \neither.\n", "msg_date": "Thu, 09 Dec 2010 00:35:34 +0100", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compared MS SQL 2000 to Postgresql 9.0 on Windows" }, { "msg_contents": "So, I am back on this topic again.\nI have a related question, but this might be the correct thread (and\nplease let me know that). The boss is pressing the issue because of the\ncost of MSSQL.\n\nWhat kind of performance can I expect out of Postgres compare to MSSQL?\nLet's assume that Postgres is running on Cent OS x64 and MSSQL is running\non Windows 2008 x64, both are on identical hardware running RAID 5 (for\ndata redundancy/security), SAS drives 15k RPM, dual XEON Quad core CPUs,\n24 GB of RAM. I have searched around and I do not see anyone ever really\ncompare the two in terms of performance. I have learned from this thread\nthat Postgres needs a lot of configuration to perform the best.\n\nWe provide the MLS service to our members. Our data goes back to 1997 and\nnothing is ever deleted. Here is a general overview of our current MSSQL\nsetup. We have over 10GB of data in a couple of tables (no pictures are\nstored in SQL server). Our searches do a lot of joins to combine data to\ndisplay a listing, history, comparables, etc. We probably do 3 or 4 reads\nfor every write in the database.\n\nAny comparisons in terms of performance would be great. If not, how can I\nquickly truly compare the two systems myself without coding everything to\nwork for both? Thoughts? Opinions?\n\nThanks,\nTom Polak\nRockford Area Association of Realtors\n815-395-6776 x203\n\nThe information contained in this email message is intended only for the\nuse of the individual or entity named. If the reader of this email is not\nthe intended recipient or the employee or agent responsible for delivering\nit to the intended recipient, you are hereby notified that any\ndissemination, distribution or copying of this email is strictly\nprohibited. If you have received this email in error, please immediately\nnotify us by telephone and reply email. Thank you.\n\nAlthough this email and any attachments are believed to be free of any\nviruses or other defects that might affect any computer system into which\nit is received and opened, it is the responsibility of the recipient to\nensure that it is free of viruses, and the Rockford Area Association of\nRealtors hereby disclaims any liability for any loss or damage that\nresults.\n\n-----Original Message-----\nFrom: Pierre C [mailto:[email protected]]\nSent: Wednesday, December 08, 2010 5:36 PM\nTo: [email protected]; Tom Polak\nSubject: Re: [PERFORM] Compared MS SQL 2000 to Postgresql 9.0 on Windows\n\n> The hardware it\n> is running on is fairly good, dual Xeon CPUs, 4 GB of RAM, Raid 5.\n\nFor a database you'd want to consider replacing the RAID1 with a RAID1 (or\n\nRAID10). RAID5 is slow for small random updates, which are common in\ndatabases. Since you probably have enough harddisks anyway, this won't\ncost you. 
Linux or freebsd would also be better choices for postgres\nrather than windows.\n\nAlso, as said, your issue looks very much like a problem in the way your\napplication communicates with postgres : if it takes postgres 5 ms to\nprocess the query and your application gets the result 8 seconds later,\nthere is a problem. Note that SQL Server probably takes just a few ms for\n\nsuch a simple query, too, so your not really benchmarking SQL server\neither.\n", "msg_date": "Fri, 17 Dec 2010 11:08:28 -0600", "msg_from": "Tom Polak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Compared MS SQL 2000 to Postgresql 9.0 on Windows" }, { "msg_contents": "On 12/17/10 9:08 AM, Tom Polak wrote:\n> So, I am back on this topic again.\n> I have a related question, but this might be the correct thread (and\n> please let me know that). The boss is pressing the issue because of the\n> cost of MSSQL.\n\nYou need to analyze the total cost of the system. For the price of MSSQL and Windows, you can probably buy a couple more really nice servers, or one Really Big Server that would walk all over a Windows/MSSQL system of the same total cost (hardware+software).\n\nBut that said, if Postgres is properly tuned and your application tuned to make good use of Postgres' features, it will compare well with any modern database.\n\n> What kind of performance can I expect out of Postgres compare to MSSQL?\n> Let's assume that Postgres is running on Cent OS x64 and MSSQL is running\n> on Windows 2008 x64, both are on identical hardware running RAID 5 (for\n> data redundancy/security), SAS drives 15k RPM, dual XEON Quad core CPUs,\n> 24 GB of RAM.\n\nRAID5 is a Really Bad Idea for any database. It is S...L...O...W. It does NOT give better redundancy and security; RAID 10 with a battery-backed RAID controller card is massively better for performance and just as good for redundancy and security.\n\nCraig\n", "msg_date": "Fri, 17 Dec 2010 09:32:32 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compared MS SQL 2000 to Postgresql 9.0 on Windows" }, { "msg_contents": "On Fri, Dec 17, 2010 at 9:08 AM, Tom Polak <[email protected]> wrote:\n> Any comparisons in terms of performance would be great.  If not, how can I\n> quickly truly compare the two systems myself without coding everything to\n> work for both?  Thoughts? Opinions?\n\nI can only offer anecdotal information.\n\nIf you strictly have an OLTP workload, with lots of simultaneous\nconnections issuing queries across small chunks of data, then\nPostgreSQL would be a good match for SQL server.\n\nOn the other-hand, if some of your work load is OLAP with a few\nconnections issuing complicated queries across large chunks of data,\nthen PostgreSQL will not perform as well as SQL server. SQL server\ncan divide processing load of complicated queries across several\nprocessor, while PostgreSQL cannot.\n\nSo, I guess it depends upon your workload.\n\n\n-- \nRegards,\nRichard Broersma Jr.\n", "msg_date": "Fri, 17 Dec 2010 09:33:13 -0800", "msg_from": "Richard Broersma <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compared MS SQL 2000 to Postgresql 9.0 on Windows" }, { "msg_contents": "On Fri, Dec 17, 2010 at 10:08 AM, Tom Polak\n<[email protected]> wrote:\n> What kind of performance can I expect out of Postgres compare to MSSQL?\n\nYou should take any generalizations with a grain of salt. 
I suggest\nthat you do a POC.\n\n> Let's assume that Postgres is running on Cent OS x64 and MSSQL is running\n> on Windows 2008 x64, both are on identical hardware running RAID 5 (for\n> data redundancy/security), SAS drives 15k RPM, dual XEON Quad core CPUs,\n> 24 GB of RAM.\n\nRAID-5 = suckage for databases.\n\nThings to think about:\nHow big is your data set and how big is your working set?\nDo you have a raid card? Is it properly configured?\n\n\n-- \nRob Wultsch\[email protected]\n", "msg_date": "Fri, 17 Dec 2010 10:36:36 -0700", "msg_from": "Rob Wultsch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compared MS SQL 2000 to Postgresql 9.0 on Windows" }, { "msg_contents": "On Fri, Dec 17, 2010 at 12:08 PM, Tom Polak\n<[email protected]> wrote:\n> What kind of performance can I expect out of Postgres compare to MSSQL?\n> Let's assume that Postgres is running on Cent OS x64 and MSSQL is running\n> on Windows 2008 x64, both are on identical hardware running RAID 5 (for\n> data redundancy/security), SAS drives 15k RPM, dual XEON Quad core CPUs,\n> 24 GB of RAM.  I have searched around and I do not see anyone ever really\n> compare the two in terms of performance.  I have learned from this thread\n> that Postgres needs a lot of configuration to perform the best.\n\nI think this is a pretty difficult question to answer. There are\ncertainly people who are running databases on hardware like that -\neven databases much bigger than yours - on PostgreSQL - and getting\nacceptable performance. But it does take some work. In all fairness,\nI think that if you started on PostgreSQL and moved to MS SQL (or any\nother product), you'd probably need to make some adjustments going the\nother direction to get good performance, too. You're not going to\ncompare two major database systems across the board and find that one\nof them is just twice as fast, across the board. They have different\nadvantages and disadvantages. When you're using one product, you\nnaturally do things in a way that works well for that product, and\nmoving to a different product means starting over. Oh, putting this\nin a stored procedure was faster on MS SQL, but it's slower on\nPostgreSQL. Using a view here was terrible on MS SQL, but much faster\nunder PostgreSQL.\n\nThe real answer here is that anything could be true for your workload,\nand asking people on a mailing list to guess is a recipe for\ndisappointment. You probably need to do some real benchmarking, and\nPostgreSQL will be slower at first, and you'll tune it, and it's\nLIKELY that you'll be able to achieve parity, or close enough that\nit's worth it to save the $$$. But you won't really know until you\ntry it, I think.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Fri, 17 Dec 2010 12:37:40 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compared MS SQL 2000 to Postgresql 9.0 on Windows" }, { "msg_contents": "On 12/17/2010 11:08 AM, Tom Polak wrote:\n> So, I am back on this topic again.\n> I have a related question, but this might be the correct thread (and\n> please let me know that). 
The boss is pressing the issue because of the\n> cost of MSSQL.\n>\n> What kind of performance can I expect out of Postgres compare to MSSQL?\n> Let's assume that Postgres is running on Cent OS x64 and MSSQL is running\n> on Windows 2008 x64, both are on identical hardware running RAID 5 (for\n> data redundancy/security), SAS drives 15k RPM, dual XEON Quad core CPUs,\n> 24 GB of RAM. I have searched around and I do not see anyone ever really\n> compare the two in terms of performance. I have learned from this thread\n> that Postgres needs a lot of configuration to perform the best.\n>\n> We provide the MLS service to our members. Our data goes back to 1997 and\n> nothing is ever deleted. Here is a general overview of our current MSSQL\n> setup. We have over 10GB of data in a couple of tables (no pictures are\n> stored in SQL server). Our searches do a lot of joins to combine data to\n> display a listing, history, comparables, etc. We probably do 3 or 4 reads\n> for every write in the database.\n>\n> Any comparisons in terms of performance would be great. If not, how can I\n> quickly truly compare the two systems myself without coding everything to\n> work for both? Thoughts? Opinions?\n>\n> Thanks,\n> Tom Polak\n> Rockford Area Association of Realtors\n> 815-395-6776 x203\n>\n> The information contained in this email message is intended only for the\n> use of the individual or entity named. If the reader of this email is not\n> the intended recipient or the employee or agent responsible for delivering\n> it to the intended recipient, you are hereby notified that any\n> dissemination, distribution or copying of this email is strictly\n> prohibited. If you have received this email in error, please immediately\n> notify us by telephone and reply email. Thank you.\n>\n> Although this email and any attachments are believed to be free of any\n> viruses or other defects that might affect any computer system into which\n> it is received and opened, it is the responsibility of the recipient to\n> ensure that it is free of viruses, and the Rockford Area Association of\n> Realtors hereby disclaims any liability for any loss or damage that\n> results.\n\nMost of the time, the database is not the bottle neck. So find the spot \nwhere your current database IS the bottleneck. Then write a test that \nkinda matches that situation.\n\nLets say its 20 people doing an mls lookup at the exact same time, while \nand update is running in the background to copy in new data.\n\nThen write a simple test (I use perl for my simple tests) for both \ndatabases. If PG can hold up to your worst case situation, then maybe \nyou'll be alright.\n\nAlso: Are you pegged right now? Do you have slowness problems? Even \nif PG is a tad slower, will anybody even notice? Maybe its not worth \nworrying about? If your database isnt pegging the box, I'd bet you wont \neven notice a switch.\n\nThe other's that have answered have sound advice... but I thought I'd \nsay: I'm using raid-5! Gasp!\n\nIts true, I'm hosting maps with PostGIS, and the slowest part of the \nprocess is the arial imagery, which is HUGE. The database query's sit \naround 1% of my cpu. I needed the disk space for the imagery. The \nimagery code uses more cpu that PG does. The database is 98% read, \nthough, so my setup is different that yours.\n\nMy maps get 100K hits a day. The cpu's never use more than 20%. I'm \nrunning on a $350 computer, AMD Dual core, with 4 IDE disks in software \nraid-5. 
On Slackware Linux, of course!\n\n-Andy\n", "msg_date": "Fri, 17 Dec 2010 11:44:07 -0600", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compared MS SQL 2000 to Postgresql 9.0 on Windows" }, { "msg_contents": ">The real answer here is that anything could be true for your workload,\nand >asking people on a mailing list to guess is a recipe for\ndisappointment. >You probably need to do some real benchmarking, and\nPostgreSQL will be >slower at first, and you'll tune it, and it's LIKELY\nthat you'll be able to >achieve parity, or close enough that it's worth it\nto save the $$$. But >you won't really know until you try it, I think.\n\nThat is what I am really after. I know that it will be a lot of work, but\nat $15,000 for MSSQL server that is a lot of man hours. Before I invest a\nlot of time to do some real benchmarking I need to make sure it would be\nworth my time. I realize going into this that we will need to change\nalmost everything expect maybe the simplest Select statements.\n\n> How big is your data set and how big is your working set?\n> Do you have a raid card? Is it properly configured?\n\nThe data set can get large. Just think of a real estate listing. When we\ndisplay a Full View, EVERYTHING must be pulled from the database.\nSometimes we are talking about 75-100 fields if not more. We can have up\nto 300 members logged (we usually peak at about 30-50 requests per second)\nin the system at one time doing various tasks.\n\nThe servers would be running on a RAID hardware solution, so it would all\nbe offloaded from the CPU. I will have to check out RAID 10 for the next\nserver.\n\n\nThanks for all your help and opinions.\n\nThanks,\nTom Polak\nRockford Area Association of Realtors\n\nThe information contained in this email message is intended only for the\nuse of the individual or entity named. If the reader of this email is not\nthe intended recipient or the employee or agent responsible for delivering\nit to the intended recipient, you are hereby notified that any\ndissemination, distribution or copying of this email is strictly\nprohibited. If you have received this email in error, please immediately\nnotify us by telephone and reply email. Thank you.\n\nAlthough this email and any attachments are believed to be free of any\nviruses or other defects that might affect any computer system into which\nit is received and opened, it is the responsibility of the recipient to\nensure that it is free of viruses, and the Rockford Area Association of\nRealtors hereby disclaims any liability for any loss or damage that\nresults.\n\n\n-----Original Message-----\nFrom: Robert Haas [mailto:[email protected]]\nSent: Friday, December 17, 2010 11:38 AM\nTo: Tom Polak\nCc: [email protected]\nSubject: Re: [PERFORM] Compared MS SQL 2000 to Postgresql 9.0 on Windows\n\nOn Fri, Dec 17, 2010 at 12:08 PM, Tom Polak\n<[email protected]> wrote:\n> What kind of performance can I expect out of Postgres compare to MSSQL?\n> Let's assume that Postgres is running on Cent OS x64 and MSSQL is\nrunning\n> on Windows 2008 x64, both are on identical hardware running RAID 5 (for\n> data redundancy/security), SAS drives 15k RPM, dual XEON Quad core CPUs,\n> 24 GB of RAM.  I have searched around and I do not see anyone ever\nreally\n> compare the two in terms of performance.  I have learned from this\nthread\n> that Postgres needs a lot of configuration to perform the best.\n\nI think this is a pretty difficult question to answer. 
There are\ncertainly people who are running databases on hardware like that -\neven databases much bigger than yours - on PostgreSQL - and getting\nacceptable performance. But it does take some work. In all fairness,\nI think that if you started on PostgreSQL and moved to MS SQL (or any\nother product), you'd probably need to make some adjustments going the\nother direction to get good performance, too. You're not going to\ncompare two major database systems across the board and find that one\nof them is just twice as fast, across the board. They have different\nadvantages and disadvantages. When you're using one product, you\nnaturally do things in a way that works well for that product, and\nmoving to a different product means starting over. Oh, putting this\nin a stored procedure was faster on MS SQL, but it's slower on\nPostgreSQL. Using a view here was terrible on MS SQL, but much faster\nunder PostgreSQL.\n\nThe real answer here is that anything could be true for your workload,\nand asking people on a mailing list to guess is a recipe for\ndisappointment. You probably need to do some real benchmarking, and\nPostgreSQL will be slower at first, and you'll tune it, and it's\nLIKELY that you'll be able to achieve parity, or close enough that\nit's worth it to save the $$$. But you won't really know until you\ntry it, I think.\n\n--\nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Fri, 17 Dec 2010 11:49:42 -0600", "msg_from": "Tom Polak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Compared MS SQL 2000 to Postgresql 9.0 on Windows" }, { "msg_contents": "On 12/17/2010 11:37 AM, Robert Haas wrote:\n> On Fri, Dec 17, 2010 at 12:08 PM, Tom Polak\n> <[email protected]> wrote:\n\n> other direction to get good performance, too. You're not going to\n> compare two major database systems across the board and find that one\n> of them is just twice as fast, across the board. They have different\n> advantages and disadvantages. When you're using one product, you\n> naturally do things in a way that works well for that product, and\n> moving to a different product means starting over. Oh, putting this\n> in a stored procedure was faster on MS SQL, but it's slower on\n> PostgreSQL. Using a view here was terrible on MS SQL, but much faster\n> under PostgreSQL.\n>\n\nYeah, totally agree with that. Every database has its own personality, \nand you have to work with it. Its way. Dont expect one bit of code to \nwork great on all the different databases. You need 5 different bits of \ncode, one for each database.\n\nIn the end, can PG be fast? Yes. Very. But only when you treat is as \nPG. If you try to use PG as if it were mssql, you wont be a happy camper.\n\n-Andy\n", "msg_date": "Fri, 17 Dec 2010 11:50:03 -0600", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compared MS SQL 2000 to Postgresql 9.0 on Windows" }, { "msg_contents": "> If you strictly have an OLTP workload, with lots of simultaneous\n> connections issuing queries across small chunks of data, then\n> PostgreSQL would be a good match for SQL server.\n\nThis matches my observations. In fact, PostgreSQL's MVCC seems to work\nheavily in my favor in OLTP workloads.\n\n> On the other-hand, if some of your work load is OLAP with a few\n> connections issuing complicated queries across large chunks of data,\n> then PostgreSQL will not perform as well as SQL server.  
SQL server\n> can divide processing load of complicated queries across several\n> processor, while PostgreSQL cannot.\n\nWhile I agree with this in theory, it may or may not have a big impact\nin practice. If you're not seeing multi-cpu activity spike up on your\nMSSQL box during complex queries, you aren't likely to benefit much.\nYou can test by timing a query with and without a query hint of MAXDOP\n1\n\n select * from foo with (MAXDOP = 1)\n\nwhich limits it to one processor. If it runs just as fast on one\nprocessor, then this feature isn't something you'll miss.\n\nAnother set of features that could swing performance in MSSQL's favor\nare covering indexes and clustered indexes. You can sort-of get around\nclustered indexes being unavailable in PostgreSQL - especially on\nlow-churn tables, by scheduling CLUSTER commands. I've seen\ndiscussions recently that one or both of these features are being\nlooked at pretty closely for inclusion in PostgreSQL.\n", "msg_date": "Fri, 17 Dec 2010 14:53:48 -0500", "msg_from": "Justin Pitts <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compared MS SQL 2000 to Postgresql 9.0 on Windows" }, { "msg_contents": "On Fri, Dec 17, 2010 at 12:49 PM, Tom Polak\n<[email protected]> wrote:\n> That is what I am really after.  I know that it will be a lot of work, but\n> at $15,000 for MSSQL server that is a lot of man hours.  Before I invest a\n> lot of time to do some real benchmarking I need to make sure it would be\n> worth my time.  I realize going into this that we will need to change\n> almost everything expect maybe the simplest Select statements.\n\nI doubt it will be as bad as all that. I think you'll need to spend\nsome time getting the database configured properly (you can ask for\nhelp here, or buy support) and then I'd guess that much of it will\njust work. 60%? 80%? 95%? And then there will be some number of\nproblem cases that you'll need to spend time beating into submission.\nI've had really good luck with PG over the years, and in fact switched\nto it originally because I was having problems with another database\nand when I switched to PG they just... went away. Now your data set\nis bigger than the ones I've worked with, so that tends to make things\na bit more complicated, but the important thing is to have some\npatience and don't assume that any problems you run into are\ninsoluble. They probably aren't. Run EXPLAIN ANALYZE a lot, read the\ndocumentation, ask questions, and if all else fails pay somebody a few\nbucks to help you get through it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Fri, 17 Dec 2010 18:49:19 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compared MS SQL 2000 to Postgresql 9.0 on Windows" }, { "msg_contents": "On Fri, Dec 17, 2010 at 10:32 AM, Craig James\n<[email protected]> wrote:\n> RAID5 is a Really Bad Idea for any database.  It is S...L...O...W.  It does\n> NOT give better redundancy and security; RAID 10 with a battery-backed RAID\n> controller card is massively better for performance and just as good for\n> redundancy and security.\n\nThe real performance problem with RAID 5 won't show up until a drive\ndies and it starts rebuilding, at which point it's then WAYYYY slower,\nand while it's rebuilding you don't have redundancy. If you HAVE to\nuse stripes with redundancy, use RAID-6. 
It's no faster when working\nright, but with a single dead drive it's still ok on performance and\ncan rebuild at leisure since there's till redundancy in the system.\nBut really if you're running a db on anything other than RAID-10 you\nneed to reassess your priorities.\n", "msg_date": "Fri, 17 Dec 2010 19:06:15 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compared MS SQL 2000 to Postgresql 9.0 on Windows" }, { "msg_contents": "Hello Scott!\n\nFri, 17 Dec 2010 19:06:15 -0700, you wrote: \n\n > On Fri, Dec 17, 2010 at 10:32 AM, Craig James\n > <[email protected]> wrote:\n >> RAID5 is a Really Bad Idea for any database. �It is S...L...O...W. �It does\n >> NOT give better redundancy and security; RAID 10 with a battery-backed RAID\n >> controller card is massively better for performance and just as good for\n >> redundancy and security.\n\n > The real performance problem with RAID 5 won't show up until a drive\n > dies and it starts rebuilding\n\nI don't agree with that. RAID5 is very slow for random writes, since\nit needs to :\n\n1. Read a copy of the old sector you are writing (usually in cache, but\nnot always) ;\n\n2. Read a copy of the parity sector conresponding to it ;\n\n3. Recompute the parity ;\n\n4. Write the new data on the sector you are writing ;\n\n5. Write the new parity data.\n\nOperation 3. is fast, but that's still 2 reads and 2 writes for writing\na sector, and the writes have to come after the reads, so it can't even\nbe fully parallelised.\n\nAnd if the database has heavy indexes, any INSERT/UPDATE will trigger\nrandom writes to update the indexes. Increasing checkpointing interval\ncan group some of the random writes, but they'll still occur.\n\nA RAID controller with a lot of cache can mitigate the random write\nslowness, but with enough random writes, the cache will be saturated\nanyway.\n\nAs other people commented, RAID10 is much more efficient for databases,\neven if it \"costs\" a bit more (if you put 4 disks in RAID10, you've 2x\nthe capacity of one disk, if you put them in RAID5 you've 3x the\ncapacity of one disk).\n\n-- \nGa�l Le Mignot - [email protected]\nPilot Systems - 9, rue Desargues - 75011 Paris\nTel : +33 1 44 53 05 55 - www.pilotsystems.net\nG�rez vos contacts et vos newsletters : www.cockpit-mailing.com\n", "msg_date": "Sat, 18 Dec 2010 09:38:41 +0100", "msg_from": "Gael Le Mignot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compared MS SQL 2000 to Postgresql 9.0 on Windows" }, { "msg_contents": "2010/12/18 Gael Le Mignot <[email protected]>:\n> Hello Scott!\n>\n> Fri, 17 Dec 2010 19:06:15 -0700, you wrote:\n>\n>  > On Fri, Dec 17, 2010 at 10:32 AM, Craig James\n>  > <[email protected]> wrote:\n>  >> RAID5 is a Really Bad Idea for any database.  It is S...L...O...W.  It does\n>  >> NOT give better redundancy and security; RAID 10 with a battery-backed RAID\n>  >> controller card is massively better for performance and just as good for\n>  >> redundancy and security.\n>\n>  > The real performance problem with RAID 5 won't show up until a drive\n>  > dies and it starts rebuilding\n>\n> I don't  agree with that. RAID5 is  very slow for random  writes, since\n> it needs to :\n\nTrust me I'm well aware of how bad RAID 5 is for write performance.\nBut as bad as that is, when the array is degraded it's 100 times\nworse. For a lot of workloads, the meh-grade performance of a working\nRAID-5 is ok. \"Not a lot of write\" data warehousing often runs just\nfine on RAID-5. 
Until the array degrades. Then it's much much slower\nthan even a single drive would be.\n", "msg_date": "Sat, 18 Dec 2010 04:31:38 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compared MS SQL 2000 to Postgresql 9.0 on Windows" }, { "msg_contents": "\n> > The real performance problem with RAID 5 won't show up until a drive\n> > dies and it starts rebuilding\n>\n> I don't agree with that. RAID5 is very slow for random writes, since\n> it needs to :\n\n\"The real problem\" is when RAID5 loses a drive and goes from \"acceptable\" \nkind of slow, to \"someone's fired\" kind of slow. Then of course in the \nmiddle the rebuild, a bad sector is discovered in some place the \nfilesystem has never visited yet on one of the remaining drives, and all \nhell breaks loose.\n\nRAID6 is only one extra disk...\n", "msg_date": "Sat, 18 Dec 2010 14:55:26 +0100", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compared MS SQL 2000 to Postgresql 9.0 on Windows" } ]
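A minimal sketch of the CLUSTER workaround mentioned above, assuming a hypothetical listings table with a commonly searched city column (the names are illustrative, not taken from the thread):

CREATE INDEX listings_city_idx ON listings (city);

-- Physically reorder the heap by that index. PostgreSQL does not keep the
-- table clustered afterwards, and CLUSTER holds an exclusive lock while it
-- runs, so on low-churn tables this is typically re-run on a schedule.
CLUSTER listings USING listings_city_idx;
ANALYZE listings;

-- Then verify the effect, as recommended above.
EXPLAIN ANALYZE SELECT * FROM listings WHERE city = 'Rockford';

Whether re-clustering pays off depends on how well the physical order holds up between runs; the correlation column in pg_stats gives a rough measure of that.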
[ { "msg_contents": "\nPessoal, \n\nEstou com uma dúvida ao fazer Tunning no arquivo de configuração do\nPostgres. \n\nMinha aplicação tem varios acessos simultâneos, chegando picos de 2000 mil\nou até mais. Por ser uma aplicação Web fica dificil de se estipular o\n\"max_connections\", sem contar que o restante dos parâmetros faz dependencia\ncom este.\n\nTenho um servidor dedicado ao Postgre e gostaria de saber qual a melhor ou\numa sugestão de configuração, para esta máquina e aplicação.\n\nServidor DELL\nIntel 2 processadores 3.6 GHz Xeon\n4 GBs RAM\n2 HDs de 320 GB (RAID1)\nSistema Operacional Linux - CentOS \n\n[]'s\n-- \nView this message in context: http://postgresql.1045698.n5.nabble.com/Tunning-Postgres-tp3297619p3297619.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n", "msg_date": "Wed, 8 Dec 2010 08:53:21 -0800 (PST)", "msg_from": "salima <[email protected]>", "msg_from_op": true, "msg_subject": "Tunning Postgres" }, { "msg_contents": "2010/12/8 salima <[email protected]>:\n>\n> Pessoal,\n>\n> Estou com uma dúvida ao fazer Tunning no arquivo de configuração do\n> Postgres.\n>\n> Minha aplicação tem varios acessos simultâneos, chegando picos de 2000 mil\n> ou até mais. Por ser uma aplicação Web fica dificil de se estipular o\n> \"max_connections\", sem contar que o restante dos parâmetros faz dependencia\n> com este.\n>\n> Tenho um servidor dedicado ao Postgre e gostaria de saber qual a melhor ou\n> uma sugestão de configuração, para esta máquina e aplicação.\n>\n> Servidor DELL\n> Intel 2 processadores 3.6 GHz Xeon\n> 4 GBs RAM\n> 2 HDs de 320 GB (RAID1)\n> Sistema Operacional Linux - CentOS\n\nI think you'll need to post this message in English to get much help\nhere. Or you could try:\n\nhttps://listas.postgresql.org.br/cgi-bin/mailman/listinfo/pgbr-geral\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Mon, 13 Dec 2010 18:06:39 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tunning Postgres" }, { "msg_contents": "Try going through the archives first because your question probably\nhas been answered many times already (altho there is no definitive\nquestion as to what server postgresql would need to run to fit your\npurpose).\n\nAlso, this is English list. If you prefer to ask questions in\nBrazilian/Portuguese than try postgresql.org.br\n", "msg_date": "Tue, 14 Dec 2010 07:30:13 +0000", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tunning Postgres" } ]
[ { "msg_contents": "Can you help me understand how to optimize the following. There's a \nsubplan which in this case returns 3 rows,\nbut it is really expensive:\n\n\n\n=========================================================================\nexplain analyze SELECT contexts.context_key\nFROM contexts\n JOIN articles\n ON (articles.context_key=contexts.context_key)\nWHERE (contexts.parent_key = 392210\n OR contexts.context_key IN\n (SELECT collection_data.context_key\n FROM collection_data\n WHERE collection_data.collection_context_key = 392210)\n)\nAND articles.indexed\n;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=83054.41..443755.45 rows=261077 width=4) (actual \ntime=4362.143..6002.808 rows=28 loops=1)\n Hash Cond: (articles.context_key = contexts.context_key)\n -> Seq Scan on articles (cost=0.00..345661.91 rows=522136 width=4) \n(actual time=0.558..3953.002 rows=517356 loops=1)\n Filter: indexed\n -> Hash (cost=69921.25..69921.25 rows=800493 width=4) (actual \ntime=829.501..829.501 rows=31 loops=1)\n -> Seq Scan on contexts (cost=14.31..69921.25 rows=800493 \nwidth=4) (actual time=1.641..829.339 rows=31 loops=1)\n Filter: ((parent_key = 392210) OR (hashed subplan))\n SubPlan\n -> Index Scan using collection_data_context_key_index \non collection_data (cost=0.00..14.30 rows=6 width=4) (actual \ntime=0.018..0.023 rows=3 loops=1)\n Index Cond: (collection_context_key = 392210)\n Total runtime: 6002.976 ms\n(11 rows)\n\n\n=========================================================================\nexplain analyze SELECT contexts.context_key\nFROM contexts\n JOIN articles\n ON (articles.context_key=contexts.context_key)\nWHERE (contexts.parent_key = 392210 OR contexts.parent_key IN \n(392210,392210,395073,1304250))\nAND articles.indexed\n;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=14.35..1863.85 rows=94 width=4) (actual \ntime=0.098..1.038 rows=57 loops=1)\n -> Bitmap Heap Scan on contexts (cost=14.35..572.57 rows=288 \nwidth=4) (actual time=0.079..0.274 rows=59 loops=1)\n Recheck Cond: ((parent_key = 392210) OR (parent_key = ANY \n('{392210,392210,395073,1304250}'::integer[])))\n -> BitmapOr (cost=14.35..14.35 rows=288 width=0) (actual \ntime=0.066..0.066 rows=0 loops=1)\n -> Bitmap Index Scan on parent_key_idx \n(cost=0.00..3.07 rows=58 width=0) (actual time=0.028..0.028 rows=28 loops=1)\n Index Cond: (parent_key = 392210)\n -> Bitmap Index Scan on parent_key_idx \n(cost=0.00..11.13 rows=231 width=0) (actual time=0.035..0.035 rows=87 \nloops=1)\n Index Cond: (parent_key = ANY \n('{392210,392210,395073,1304250}'::integer[]))\n -> Index Scan using article_key_idx on articles (cost=0.00..4.47 \nrows=1 width=4) (actual time=0.007..0.008 rows=1 loops=59)\n Index Cond: (articles.context_key = contexts.context_key)\n Filter: articles.indexed\n Total runtime: 1.166 ms\n(12 rows)\n\nproduction=> explain analyze SELECT contexts.context_key\nFROM contexts\n JOIN articles\n ON (articles.context_key=contexts.context_key)\nWHERE (contexts.parent_key = 392210\n OR contexts.context_key IN\n (SELECT collection_data.context_key\n FROM collection_data\n WHERE collection_data.collection_context_key = 392210)\n)\nAND articles.indexed\n;\n\n\n=========================================================================\n# select 
version();\nPostgreSQL 8.3.4 on x86_64-pc-linux-gnu, compiled by GCC cc (GCC) 4.1.2 \n20061115 (prerelease) (Debian 4.1.1-21)\n\n", "msg_date": "Wed, 08 Dec 2010 10:53:58 -0800", "msg_from": "Bryce Nesbitt <[email protected]>", "msg_from_op": true, "msg_subject": "hashed subplan 5000x slower than two sequential operations" }, { "msg_contents": "Bryce,\n\nThe two queries are different:\n\nYou are looking for contexts.context_key in first query\n\nWHERE (contexts.parent_key = 392210\n OR contexts.context_key IN\n (SELECT collection_data.context_key\n FROM collection_data\n WHERE collection_data.collection_context_key = 392210)\n\n\nbut second query has context.parent_key\n\nWHERE (contexts.parent_key = 392210 OR contexts.parent_key IN\n(392210,392210,395073,1304250))\n\nIs the contexts.context_key an indexed field? contexts.parent_key certainly seems to be.\n\n\nHTH,\n\n\nShrirang Chitnis\nSr. Manager, Applications Development\nHOV Services\n\n\nOffice: (866) 808-0935 Ext: 39210\[email protected]\nwww.hovservices.com\n\nThe information contained in this message, including any attachments, is attorney privileged and/or confidential information intended only for the use of the individual or entity named as addressee. The review, dissemination, distribution or copying of this communication by or to anyone other than the intended addressee is strictly prohibited. If you have received this communication in error, please immediately notify the sender by replying to the message and destroy all copies of the original message.\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Bryce Nesbitt\nSent: Thursday, December 09, 2010 12:24 AM\nTo: [email protected]\nSubject: [PERFORM] hashed subplan 5000x slower than two sequential operations\n\nCan you help me understand how to optimize the following. 
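Regarding the index question above, one quick way to check is the standard pg_indexes view (present in 8.3); a sketch:

SELECT tablename, indexname, indexdef
FROM pg_indexes
WHERE tablename IN ('contexts', 'articles', 'collection_data');

(psql's \d contexts shows the same information per table.)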
There's a\nsubplan which in this case returns 3 rows,\nbut it is really expensive:\n\n\n\n=========================================================================\nexplain analyze SELECT contexts.context_key\nFROM contexts\n JOIN articles\n ON (articles.context_key=contexts.context_key)\nWHERE (contexts.parent_key = 392210\n OR contexts.context_key IN\n (SELECT collection_data.context_key\n FROM collection_data\n WHERE collection_data.collection_context_key = 392210)\n)\nAND articles.indexed\n;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=83054.41..443755.45 rows=261077 width=4) (actual\ntime=4362.143..6002.808 rows=28 loops=1)\n Hash Cond: (articles.context_key = contexts.context_key)\n -> Seq Scan on articles (cost=0.00..345661.91 rows=522136 width=4)\n(actual time=0.558..3953.002 rows=517356 loops=1)\n Filter: indexed\n -> Hash (cost=69921.25..69921.25 rows=800493 width=4) (actual\ntime=829.501..829.501 rows=31 loops=1)\n -> Seq Scan on contexts (cost=14.31..69921.25 rows=800493\nwidth=4) (actual time=1.641..829.339 rows=31 loops=1)\n Filter: ((parent_key = 392210) OR (hashed subplan))\n SubPlan\n -> Index Scan using collection_data_context_key_index\non collection_data (cost=0.00..14.30 rows=6 width=4) (actual\ntime=0.018..0.023 rows=3 loops=1)\n Index Cond: (collection_context_key = 392210)\n Total runtime: 6002.976 ms\n(11 rows)\n\n\n=========================================================================\nexplain analyze SELECT contexts.context_key\nFROM contexts\n JOIN articles\n ON (articles.context_key=contexts.context_key)\nWHERE (contexts.parent_key = 392210 OR contexts.parent_key IN\n(392210,392210,395073,1304250))\nAND articles.indexed\n;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=14.35..1863.85 rows=94 width=4) (actual\ntime=0.098..1.038 rows=57 loops=1)\n -> Bitmap Heap Scan on contexts (cost=14.35..572.57 rows=288\nwidth=4) (actual time=0.079..0.274 rows=59 loops=1)\n Recheck Cond: ((parent_key = 392210) OR (parent_key = ANY\n('{392210,392210,395073,1304250}'::integer[])))\n -> BitmapOr (cost=14.35..14.35 rows=288 width=0) (actual\ntime=0.066..0.066 rows=0 loops=1)\n -> Bitmap Index Scan on parent_key_idx\n(cost=0.00..3.07 rows=58 width=0) (actual time=0.028..0.028 rows=28 loops=1)\n Index Cond: (parent_key = 392210)\n -> Bitmap Index Scan on parent_key_idx\n(cost=0.00..11.13 rows=231 width=0) (actual time=0.035..0.035 rows=87\nloops=1)\n Index Cond: (parent_key = ANY\n('{392210,392210,395073,1304250}'::integer[]))\n -> Index Scan using article_key_idx on articles (cost=0.00..4.47\nrows=1 width=4) (actual time=0.007..0.008 rows=1 loops=59)\n Index Cond: (articles.context_key = contexts.context_key)\n Filter: articles.indexed\n Total runtime: 1.166 ms\n(12 rows)\n\nproduction=> explain analyze SELECT contexts.context_key\nFROM contexts\n JOIN articles\n ON (articles.context_key=contexts.context_key)\nWHERE (contexts.parent_key = 392210\n OR contexts.context_key IN\n (SELECT collection_data.context_key\n FROM collection_data\n WHERE collection_data.collection_context_key = 392210)\n)\nAND articles.indexed\n;\n\n\n=========================================================================\n# select version();\nPostgreSQL 8.3.4 on x86_64-pc-linux-gnu, compiled by GCC cc (GCC) 4.1.2\n20061115 
(prerelease) (Debian 4.1.1-21)\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 8 Dec 2010 14:05:48 -0500", "msg_from": "Shrirang Chitnis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hashed subplan 5000x slower than two sequential\n operations" }, { "msg_contents": "Shrirang Chitnis wrote:\n> Bryce,\n> The two queries are different:\n> \nAh, due to a mistake. The first version with the hashed subplan is from \nproduction.\nThe second version should have read:\n\n====================================================================================\nproduction=> SELECT collection_data.context_key FROM collection_data \nWHERE collection_data.collection_context_key = 392210;\n 392210\n 395073\n 1304250\nproduction=> explain analyze SELECT contexts.context_key\nFROM contexts\n JOIN articles\n ON (articles.context_key=contexts.context_key)\nWHERE (contexts.parent_key = 392210 OR contexts.context_key IN \n(392210,395073,1304250))\nAND articles.indexed\n;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=12.32..414.41 rows=20 width=4) (actual \ntime=0.112..0.533 rows=28 loops=1)\n -> Bitmap Heap Scan on contexts (cost=12.32..135.13 rows=62 \nwidth=4) (actual time=0.079..0.152 rows=31 loops=1)\n Recheck Cond: ((parent_key = 392210) OR (context_key = ANY \n('{392210,392210,395073,1304250}'::integer[])))\n -> BitmapOr (cost=12.32..12.32 rows=62 width=0) (actual \ntime=0.070..0.070 rows=0 loops=1)\n -> Bitmap Index Scan on parent_key_idx \n(cost=0.00..3.07 rows=58 width=0) (actual time=0.029..0.029 rows=28 loops=1)\n Index Cond: (parent_key = 392210)\n -> Bitmap Index Scan on contexts_pkey (cost=0.00..9.22 \nrows=4 width=0) (actual time=0.037..0.037 rows=4 loops=1)\n Index Cond: (context_key = ANY \n('{392210,392210,395073,1304250}'::integer[]))\n -> Index Scan using article_key_idx on articles (cost=0.00..4.49 \nrows=1 width=4) (actual time=0.007..0.008 rows=1 loops=31)\n Index Cond: (articles.context_key = contexts.context_key)\n Filter: articles.indexed\n Total runtime: 0.614 ms\n(12 rows)\n\n\n\n\n\n\n====================================================================================\nproduction=> explain analyze SELECT contexts.context_key\nFROM contexts\n JOIN articles\n ON (articles.context_key=contexts.context_key)\nWHERE (contexts.parent_key = 392210\n OR contexts.context_key IN\n (SELECT collection_data.context_key\n FROM collection_data\n WHERE collection_data.collection_context_key = 392210)\n)\nAND articles.indexed\n;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=83054.41..443755.45 rows=261077 width=4) (actual \ntime=3415.609..6737.863 rows=28 loops=1)\n Hash Cond: (articles.context_key = contexts.context_key)\n -> Seq Scan on articles (cost=0.00..345661.91 rows=522136 width=4) \n(actual time=0.038..4587.914 rows=517416 loops=1)\n Filter: indexed\n -> Hash (cost=69921.25..69921.25 rows=800493 width=4) (actual \ntime=926.965..926.965 rows=31 loops=1)\n -> Seq Scan on contexts (cost=14.31..69921.25 rows=800493 \nwidth=4) (actual time=2.113..926.794 rows=31 loops=1)\n Filter: ((parent_key = 392210) OR (hashed subplan))\n SubPlan\n -> Index Scan using 
collection_data_context_key_index \non collection_data (cost=0.00..14.30 rows=6 width=4) (actual \ntime=0.084..0.088 rows=3 loops=1)\n Index Cond: (collection_context_key = 392210)\n Total runtime: 6738.042 ms\n(11 rows)\n", "msg_date": "Wed, 08 Dec 2010 12:05:26 -0800", "msg_from": "Bryce Nesbitt <[email protected]>", "msg_from_op": true, "msg_subject": "Re: hashed subplan 5000x slower than two sequential operations" }, { "msg_contents": "Hello,\n\nare the table freshly analyzed, with a sufficient default_statistics_target ?\n\nYou may try to get a better plan while rewriting the query as an UNION to get rid of the OR clause.\nSomething like (not tested):\n\nSELECT contexts.context_key\nFROM contexts\n JOIN articles\n ON (articles.context_key=contexts.context_key)\nWHERE contexts.parent_key = 392210\nAND articles.indexed\n\n\nUNION\nSELECT context_key\nFROM\n(\n SELECT contexts.context_key\n FROM contexts JOIN collection_data ON ( contexts.context_key = collection_data .context_key)\n WHERE collection_data.collection_context_key = 392210)\n) foo JOIN articles ON (foo.context_key=contexts.context_key)\nWHERE articles.indexed\n;\n\n\nI've had one similar problem where there was no way for the planner to notice that the query would systematically return very few rows. Here, my last resort was to disable some planner methods within the given transaction.\n\nregards,\n\nMarc Mamin\n\n-----Ursprüngliche Nachricht-----\nVon: [email protected] im Auftrag von Shrirang Chitnis\nGesendet: Mi 12/8/2010 8:05\nAn: Bryce Nesbitt; [email protected]\nBetreff: Re: [PERFORM] hashed subplan 5000x slower than two sequential operations\n \nBryce,\n\nThe two queries are different:\n\nYou are looking for contexts.context_key in first query\n\nWHERE (contexts.parent_key = 392210\n OR contexts.context_key IN\n (SELECT collection_data.context_key\n FROM collection_data\n WHERE collection_data.collection_context_key = 392210)\n\n\nbut second query has context.parent_key\n\nWHERE (contexts.parent_key = 392210 OR contexts.parent_key IN\n(392210,392210,395073,1304250))\n\nIs the contexts.context_key an indexed field? contexts.parent_key certainly seems to be.\n\n\nHTH,\n\n\nShrirang Chitnis\nSr. Manager, Applications Development\nHOV Services\n\n\nOffice: (866) 808-0935 Ext: 39210\[email protected]\nwww.hovservices.com\n\nThe information contained in this message, including any attachments, is attorney privileged and/or confidential information intended only for the use of the individual or entity named as addressee. The review, dissemination, distribution or copying of this communication by or to anyone other than the intended addressee is strictly prohibited. If you have received this communication in error, please immediately notify the sender by replying to the message and destroy all copies of the original message.\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Bryce Nesbitt\nSent: Thursday, December 09, 2010 12:24 AM\nTo: [email protected]\nSubject: [PERFORM] hashed subplan 5000x slower than two sequential operations\n\nCan you help me understand how to optimize the following. 
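The UNION rewrite sketched above is marked (not tested); as written it has a stray closing parenthesis and references contexts outside the subquery. Under the same table and column assumptions it could look roughly like this:

SELECT contexts.context_key
FROM contexts
     JOIN articles ON (articles.context_key = contexts.context_key)
WHERE contexts.parent_key = 392210
  AND articles.indexed
UNION
SELECT contexts.context_key
FROM contexts
     JOIN collection_data
          ON (contexts.context_key = collection_data.context_key)
     JOIN articles ON (articles.context_key = contexts.context_key)
WHERE collection_data.collection_context_key = 392210
  AND articles.indexed;

The point, as noted above, is to give the planner two separately indexable branches instead of a single OR condition; UNION then removes any context that satisfies both. It still assumes fresh statistics (ANALYZE), and as a last resort the planner can be steered for one transaction with SET LOCAL, e.g. SET LOCAL enable_seqscan = off.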
There's a\nsubplan which in this case returns 3 rows,\nbut it is really expensive:\n\n\n\n=========================================================================\nexplain analyze SELECT contexts.context_key\nFROM contexts\n JOIN articles\n ON (articles.context_key=contexts.context_key)\nWHERE (contexts.parent_key = 392210\n OR contexts.context_key IN\n (SELECT collection_data.context_key\n FROM collection_data\n WHERE collection_data.collection_context_key = 392210)\n)\nAND articles.indexed\n;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=83054.41..443755.45 rows=261077 width=4) (actual\ntime=4362.143..6002.808 rows=28 loops=1)\n Hash Cond: (articles.context_key = contexts.context_key)\n -> Seq Scan on articles (cost=0.00..345661.91 rows=522136 width=4)\n(actual time=0.558..3953.002 rows=517356 loops=1)\n Filter: indexed\n -> Hash (cost=69921.25..69921.25 rows=800493 width=4) (actual\ntime=829.501..829.501 rows=31 loops=1)\n -> Seq Scan on contexts (cost=14.31..69921.25 rows=800493\nwidth=4) (actual time=1.641..829.339 rows=31 loops=1)\n Filter: ((parent_key = 392210) OR (hashed subplan))\n SubPlan\n -> Index Scan using collection_data_context_key_index\non collection_data (cost=0.00..14.30 rows=6 width=4) (actual\ntime=0.018..0.023 rows=3 loops=1)\n Index Cond: (collection_context_key = 392210)\n Total runtime: 6002.976 ms\n(11 rows)\n\n\n=========================================================================\nexplain analyze SELECT contexts.context_key\nFROM contexts\n JOIN articles\n ON (articles.context_key=contexts.context_key)\nWHERE (contexts.parent_key = 392210 OR contexts.parent_key IN\n(392210,392210,395073,1304250))\nAND articles.indexed\n;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=14.35..1863.85 rows=94 width=4) (actual\ntime=0.098..1.038 rows=57 loops=1)\n -> Bitmap Heap Scan on contexts (cost=14.35..572.57 rows=288\nwidth=4) (actual time=0.079..0.274 rows=59 loops=1)\n Recheck Cond: ((parent_key = 392210) OR (parent_key = ANY\n('{392210,392210,395073,1304250}'::integer[])))\n -> BitmapOr (cost=14.35..14.35 rows=288 width=0) (actual\ntime=0.066..0.066 rows=0 loops=1)\n -> Bitmap Index Scan on parent_key_idx\n(cost=0.00..3.07 rows=58 width=0) (actual time=0.028..0.028 rows=28 loops=1)\n Index Cond: (parent_key = 392210)\n -> Bitmap Index Scan on parent_key_idx\n(cost=0.00..11.13 rows=231 width=0) (actual time=0.035..0.035 rows=87\nloops=1)\n Index Cond: (parent_key = ANY\n('{392210,392210,395073,1304250}'::integer[]))\n -> Index Scan using article_key_idx on articles (cost=0.00..4.47\nrows=1 width=4) (actual time=0.007..0.008 rows=1 loops=59)\n Index Cond: (articles.context_key = contexts.context_key)\n Filter: articles.indexed\n Total runtime: 1.166 ms\n(12 rows)\n\nproduction=> explain analyze SELECT contexts.context_key\nFROM contexts\n JOIN articles\n ON (articles.context_key=contexts.context_key)\nWHERE (contexts.parent_key = 392210\n OR contexts.context_key IN\n (SELECT collection_data.context_key\n FROM collection_data\n WHERE collection_data.collection_context_key = 392210)\n)\nAND articles.indexed\n;\n\n\n=========================================================================\n# select version();\nPostgreSQL 8.3.4 on x86_64-pc-linux-gnu, compiled by GCC cc (GCC) 4.1.2\n20061115 
(prerelease) (Debian 4.1.1-21)\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n\n\n\nAW: [PERFORM] hashed subplan 5000x slower than two sequential operations\n\n\n\n\n\nHello,\n\nare the table freshly analyzed, with a sufficient default_statistics_target ?\n\nYou may try to get a better plan while rewriting the query as an UNION to get rid of the OR clause.\nSomething like (not tested):\n\nSELECT contexts.context_key\nFROM contexts\n     JOIN articles\n     ON (articles.context_key=contexts.context_key)\nWHERE contexts.parent_key = 392210\nAND articles.indexed\n\n\nUNION\nSELECT context_key\nFROM\n(\n  SELECT contexts.context_key\n  FROM contexts JOIN collection_data ON ( contexts.context_key = collection_data .context_key)\n  WHERE collection_data.collection_context_key = 392210)\n) foo JOIN articles ON (foo.context_key=contexts.context_key)\nWHERE articles.indexed\n;\n\n\nI've had one similar problem where there was no way for the planner to notice that the query would systematically return very few rows. Here, my last resort was to disable some planner methods within the given transaction.\n\nregards,\n\nMarc Mamin\n\n-----Ursprüngliche Nachricht-----\nVon: [email protected] im Auftrag von Shrirang Chitnis\nGesendet: Mi 12/8/2010 8:05\nAn: Bryce Nesbitt; [email protected]\nBetreff: Re: [PERFORM] hashed subplan 5000x slower than two sequential operations\n\nBryce,\n\nThe two queries are different:\n\nYou are looking for contexts.context_key in first query\n\nWHERE (contexts.parent_key = 392210\n      OR contexts.context_key IN\n         (SELECT collection_data.context_key\n         FROM collection_data\n          WHERE collection_data.collection_context_key = 392210)\n\n\nbut second query has context.parent_key\n\nWHERE (contexts.parent_key = 392210 OR contexts.parent_key IN\n(392210,392210,395073,1304250))\n\nIs the contexts.context_key an indexed field? contexts.parent_key certainly seems to be.\n\n\nHTH,\n\n\nShrirang Chitnis\nSr. Manager, Applications Development\nHOV Services\n\n\nOffice: (866) 808-0935 Ext: 39210\[email protected]\nwww.hovservices.com\n\nThe information contained in this message, including any attachments, is attorney privileged and/or confidential information intended only for the use of the individual or entity named as addressee.  The review, dissemination, distribution or copying of this communication by or to anyone other than the intended addressee is strictly prohibited.  If you have received this communication in error, please immediately notify the sender by replying to the message and destroy all copies of the original message.\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Bryce Nesbitt\nSent: Thursday, December 09, 2010 12:24 AM\nTo: [email protected]\nSubject: [PERFORM] hashed subplan 5000x slower than two sequential operations\n\nCan you help me understand how to optimize the following.  
There's a\nsubplan which in this case returns 3 rows,\nbut it is really expensive:\n\n\n\n=========================================================================\nexplain analyze SELECT contexts.context_key\nFROM contexts\n     JOIN articles\n     ON (articles.context_key=contexts.context_key)\nWHERE (contexts.parent_key = 392210\n      OR contexts.context_key IN\n         (SELECT collection_data.context_key\n         FROM collection_data\n          WHERE collection_data.collection_context_key = 392210)\n)\nAND articles.indexed\n;\n                                                                                QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n  Hash Join  (cost=83054.41..443755.45 rows=261077 width=4) (actual\ntime=4362.143..6002.808 rows=28 loops=1)\n    Hash Cond: (articles.context_key = contexts.context_key)\n    ->  Seq Scan on articles  (cost=0.00..345661.91 rows=522136 width=4)\n(actual time=0.558..3953.002 rows=517356 loops=1)\n          Filter: indexed\n    ->  Hash  (cost=69921.25..69921.25 rows=800493 width=4) (actual\ntime=829.501..829.501 rows=31 loops=1)\n          ->  Seq Scan on contexts  (cost=14.31..69921.25 rows=800493\nwidth=4) (actual time=1.641..829.339 rows=31 loops=1)\n                Filter: ((parent_key = 392210) OR (hashed subplan))\n                SubPlan\n                  ->  Index Scan using collection_data_context_key_index\non collection_data  (cost=0.00..14.30 rows=6 width=4) (actual\ntime=0.018..0.023 rows=3 loops=1)\n                        Index Cond: (collection_context_key = 392210)\n  Total runtime: 6002.976 ms\n(11 rows)\n\n\n=========================================================================\nexplain analyze SELECT contexts.context_key\nFROM contexts\n     JOIN articles\n     ON (articles.context_key=contexts.context_key)\nWHERE (contexts.parent_key = 392210 OR contexts.parent_key IN\n(392210,392210,395073,1304250))\nAND articles.indexed\n;\n                                                               QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------\n  Nested Loop  (cost=14.35..1863.85 rows=94 width=4) (actual\ntime=0.098..1.038 rows=57 loops=1)\n    ->  Bitmap Heap Scan on contexts  (cost=14.35..572.57 rows=288\nwidth=4) (actual time=0.079..0.274 rows=59 loops=1)\n          Recheck Cond: ((parent_key = 392210) OR (parent_key = ANY\n('{392210,392210,395073,1304250}'::integer[])))\n          ->  BitmapOr  (cost=14.35..14.35 rows=288 width=0) (actual\ntime=0.066..0.066 rows=0 loops=1)\n                ->  Bitmap Index Scan on parent_key_idx\n(cost=0.00..3.07 rows=58 width=0) (actual time=0.028..0.028 rows=28 loops=1)\n                      Index Cond: (parent_key = 392210)\n                ->  Bitmap Index Scan on parent_key_idx\n(cost=0.00..11.13 rows=231 width=0) (actual time=0.035..0.035 rows=87\nloops=1)\n                      Index Cond: (parent_key = ANY\n('{392210,392210,395073,1304250}'::integer[]))\n    ->  Index Scan using article_key_idx on articles  (cost=0.00..4.47\nrows=1 width=4) (actual time=0.007..0.008 rows=1 loops=59)\n          Index Cond: (articles.context_key = contexts.context_key)\n          Filter: articles.indexed\n  Total runtime: 1.166 ms\n(12 rows)\n\nproduction=> explain analyze SELECT contexts.context_key\nFROM contexts\n     JOIN articles\n     ON 
(articles.context_key=contexts.context_key)\nWHERE (contexts.parent_key = 392210\n      OR contexts.context_key IN\n         (SELECT collection_data.context_key\n         FROM collection_data\n          WHERE collection_data.collection_context_key = 392210)\n)\nAND articles.indexed\n;\n\n\n=========================================================================\n# select version();\nPostgreSQL 8.3.4 on x86_64-pc-linux-gnu, compiled by GCC cc (GCC) 4.1.2\n20061115 (prerelease) (Debian 4.1.1-21)\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 8 Dec 2010 21:06:12 +0100", "msg_from": "\"Marc Mamin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hashed subplan 5000x slower than two sequential operations" }, { "msg_contents": "Another point: would a conditionl index help ?\n\non articles (context_key) where indexed\n\nregards,\n\n-----Ursprüngliche Nachricht-----\nVon: [email protected] im Auftrag von Marc Mamin\nGesendet: Mi 12/8/2010 9:06\nAn: Shrirang Chitnis; Bryce Nesbitt; [email protected]\nBetreff: Re: [PERFORM] hashed subplan 5000x slower than two sequential operations\n \n\n\nHello,\n\nare the table freshly analyzed, with a sufficient default_statistics_target ?\n\nYou may try to get a better plan while rewriting the query as an UNION to get rid of the OR clause.\nSomething like (not tested):\n\nSELECT contexts.context_key\nFROM contexts\n JOIN articles\n ON (articles.context_key=contexts.context_key)\nWHERE contexts.parent_key = 392210\nAND articles.indexed\n\n\nUNION\nSELECT context_key\nFROM\n(\n SELECT contexts.context_key\n FROM contexts JOIN collection_data ON ( contexts.context_key = collection_data .context_key)\n WHERE collection_data.collection_context_key = 392210)\n) foo JOIN articles ON (foo.context_key=contexts.context_key)\nWHERE articles.indexed\n;\n\n\nI've had one similar problem where there was no way for the planner to notice that the query would systematically return very few rows. Here, my last resort was to disable some planner methods within the given transaction.\n\nregards,\n\nMarc Mamin\n\n-----Ursprüngliche Nachricht-----\nVon: [email protected] im Auftrag von Shrirang Chitnis\nGesendet: Mi 12/8/2010 8:05\nAn: Bryce Nesbitt; [email protected]\nBetreff: Re: [PERFORM] hashed subplan 5000x slower than two sequential operations\n \nBryce,\n\nThe two queries are different:\n\nYou are looking for contexts.context_key in first query\n\nWHERE (contexts.parent_key = 392210\n OR contexts.context_key IN\n (SELECT collection_data.context_key\n FROM collection_data\n WHERE collection_data.collection_context_key = 392210)\n\n\nbut second query has context.parent_key\n\nWHERE (contexts.parent_key = 392210 OR contexts.parent_key IN\n(392210,392210,395073,1304250))\n\nIs the contexts.context_key an indexed field? contexts.parent_key certainly seems to be.\n\n\nHTH,\n\n\nShrirang Chitnis\nSr. Manager, Applications Development\nHOV Services\n\n\nOffice: (866) 808-0935 Ext: 39210\[email protected]\nwww.hovservices.com\n\nThe information contained in this message, including any attachments, is attorney privileged and/or confidential information intended only for the use of the individual or entity named as addressee. 
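Spelled out, the conditional (partial) index suggested above would be something along these lines (the index name is illustrative):

CREATE INDEX articles_indexed_context_key_idx
    ON articles (context_key)
    WHERE indexed;

Since the plans earlier in the thread show a sequential scan over articles with Filter: indexed, a partial index like this stays small and can let the planner reach the indexed articles by context_key directly.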
The review, dissemination, distribution or copying of this communication by or to anyone other than the intended addressee is strictly prohibited. If you have received this communication in error, please immediately notify the sender by replying to the message and destroy all copies of the original message.\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Bryce Nesbitt\nSent: Thursday, December 09, 2010 12:24 AM\nTo: [email protected]\nSubject: [PERFORM] hashed subplan 5000x slower than two sequential operations\n\nCan you help me understand how to optimize the following. There's a\nsubplan which in this case returns 3 rows,\nbut it is really expensive:\n\n\n\n=========================================================================\nexplain analyze SELECT contexts.context_key\nFROM contexts\n JOIN articles\n ON (articles.context_key=contexts.context_key)\nWHERE (contexts.parent_key = 392210\n OR contexts.context_key IN\n (SELECT collection_data.context_key\n FROM collection_data\n WHERE collection_data.collection_context_key = 392210)\n)\nAND articles.indexed\n;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=83054.41..443755.45 rows=261077 width=4) (actual\ntime=4362.143..6002.808 rows=28 loops=1)\n Hash Cond: (articles.context_key = contexts.context_key)\n -> Seq Scan on articles (cost=0.00..345661.91 rows=522136 width=4)\n(actual time=0.558..3953.002 rows=517356 loops=1)\n Filter: indexed\n -> Hash (cost=69921.25..69921.25 rows=800493 width=4) (actual\ntime=829.501..829.501 rows=31 loops=1)\n -> Seq Scan on contexts (cost=14.31..69921.25 rows=800493\nwidth=4) (actual time=1.641..829.339 rows=31 loops=1)\n Filter: ((parent_key = 392210) OR (hashed subplan))\n SubPlan\n -> Index Scan using collection_data_context_key_index\non collection_data (cost=0.00..14.30 rows=6 width=4) (actual\ntime=0.018..0.023 rows=3 loops=1)\n Index Cond: (collection_context_key = 392210)\n Total runtime: 6002.976 ms\n(11 rows)\n\n\n=========================================================================\nexplain analyze SELECT contexts.context_key\nFROM contexts\n JOIN articles\n ON (articles.context_key=contexts.context_key)\nWHERE (contexts.parent_key = 392210 OR contexts.parent_key IN\n(392210,392210,395073,1304250))\nAND articles.indexed\n;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=14.35..1863.85 rows=94 width=4) (actual\ntime=0.098..1.038 rows=57 loops=1)\n -> Bitmap Heap Scan on contexts (cost=14.35..572.57 rows=288\nwidth=4) (actual time=0.079..0.274 rows=59 loops=1)\n Recheck Cond: ((parent_key = 392210) OR (parent_key = ANY\n('{392210,392210,395073,1304250}'::integer[])))\n -> BitmapOr (cost=14.35..14.35 rows=288 width=0) (actual\ntime=0.066..0.066 rows=0 loops=1)\n -> Bitmap Index Scan on parent_key_idx\n(cost=0.00..3.07 rows=58 width=0) (actual time=0.028..0.028 rows=28 loops=1)\n Index Cond: (parent_key = 392210)\n -> Bitmap Index Scan on parent_key_idx\n(cost=0.00..11.13 rows=231 width=0) (actual time=0.035..0.035 rows=87\nloops=1)\n Index Cond: (parent_key = ANY\n('{392210,392210,395073,1304250}'::integer[]))\n -> Index Scan using article_key_idx on articles (cost=0.00..4.47\nrows=1 width=4) (actual time=0.007..0.008 rows=1 loops=59)\n Index Cond: (articles.context_key = 
contexts.context_key)\n Filter: articles.indexed\n Total runtime: 1.166 ms\n(12 rows)\n\nproduction=> explain analyze SELECT contexts.context_key\nFROM contexts\n JOIN articles\n ON (articles.context_key=contexts.context_key)\nWHERE (contexts.parent_key = 392210\n OR contexts.context_key IN\n (SELECT collection_data.context_key\n FROM collection_data\n WHERE collection_data.collection_context_key = 392210)\n)\nAND articles.indexed\n;\n\n\n=========================================================================\n# select version();\nPostgreSQL 8.3.4 on x86_64-pc-linux-gnu, compiled by GCC cc (GCC) 4.1.2\n20061115 (prerelease) (Debian 4.1.1-21)\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n\n\n\n\nAW: [PERFORM] hashed subplan 5000x slower than two sequential operations\n\n\n\n\nAnother point: would a conditionl index help ?\n\non articles (context_key) where indexed\n\nregards,\n\n-----Ursprüngliche Nachricht-----\nVon: [email protected] im Auftrag von Marc Mamin\nGesendet: Mi 12/8/2010 9:06\nAn: Shrirang Chitnis; Bryce Nesbitt; [email protected]\nBetreff: Re: [PERFORM] hashed subplan 5000x slower than two sequential operations\n\n\n\nHello,\n\nare the table freshly analyzed, with a sufficient default_statistics_target ?\n\nYou may try to get a better plan while rewriting the query as an UNION to get rid of the OR clause.\nSomething like (not tested):\n\nSELECT contexts.context_key\nFROM contexts\n     JOIN articles\n     ON (articles.context_key=contexts.context_key)\nWHERE contexts.parent_key = 392210\nAND articles.indexed\n\n\nUNION\nSELECT context_key\nFROM\n(\n  SELECT contexts.context_key\n  FROM contexts JOIN collection_data ON ( contexts.context_key = collection_data .context_key)\n  WHERE collection_data.collection_context_key = 392210)\n) foo JOIN articles ON (foo.context_key=contexts.context_key)\nWHERE articles.indexed\n;\n\n\nI've had one similar problem where there was no way for the planner to notice that the query would systematically return very few rows. Here, my last resort was to disable some planner methods within the given transaction.\n\nregards,\n\nMarc Mamin\n\n-----Ursprüngliche Nachricht-----\nVon: [email protected] im Auftrag von Shrirang Chitnis\nGesendet: Mi 12/8/2010 8:05\nAn: Bryce Nesbitt; [email protected]\nBetreff: Re: [PERFORM] hashed subplan 5000x slower than two sequential operations\n\nBryce,\n\nThe two queries are different:\n\nYou are looking for contexts.context_key in first query\n\nWHERE (contexts.parent_key = 392210\n      OR contexts.context_key IN\n         (SELECT collection_data.context_key\n         FROM collection_data\n          WHERE collection_data.collection_context_key = 392210)\n\n\nbut second query has context.parent_key\n\nWHERE (contexts.parent_key = 392210 OR contexts.parent_key IN\n(392210,392210,395073,1304250))\n\nIs the contexts.context_key an indexed field? contexts.parent_key certainly seems to be.\n\n\nHTH,\n\n\nShrirang Chitnis\nSr. 
Manager, Applications Development\nHOV Services\n\n\nOffice: (866) 808-0935 Ext: 39210\[email protected]\nwww.hovservices.com\n\nThe information contained in this message, including any attachments, is attorney privileged and/or confidential information intended only for the use of the individual or entity named as addressee.  The review, dissemination, distribution or copying of this communication by or to anyone other than the intended addressee is strictly prohibited.  If you have received this communication in error, please immediately notify the sender by replying to the message and destroy all copies of the original message.\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Bryce Nesbitt\nSent: Thursday, December 09, 2010 12:24 AM\nTo: [email protected]\nSubject: [PERFORM] hashed subplan 5000x slower than two sequential operations\n\nCan you help me understand how to optimize the following.  There's a\nsubplan which in this case returns 3 rows,\nbut it is really expensive:\n\n\n\n=========================================================================\nexplain analyze SELECT contexts.context_key\nFROM contexts\n     JOIN articles\n     ON (articles.context_key=contexts.context_key)\nWHERE (contexts.parent_key = 392210\n      OR contexts.context_key IN\n         (SELECT collection_data.context_key\n         FROM collection_data\n          WHERE collection_data.collection_context_key = 392210)\n)\nAND articles.indexed\n;\n                                                                                QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n  Hash Join  (cost=83054.41..443755.45 rows=261077 width=4) (actual\ntime=4362.143..6002.808 rows=28 loops=1)\n    Hash Cond: (articles.context_key = contexts.context_key)\n    ->  Seq Scan on articles  (cost=0.00..345661.91 rows=522136 width=4)\n(actual time=0.558..3953.002 rows=517356 loops=1)\n          Filter: indexed\n    ->  Hash  (cost=69921.25..69921.25 rows=800493 width=4) (actual\ntime=829.501..829.501 rows=31 loops=1)\n          ->  Seq Scan on contexts  (cost=14.31..69921.25 rows=800493\nwidth=4) (actual time=1.641..829.339 rows=31 loops=1)\n                Filter: ((parent_key = 392210) OR (hashed subplan))\n                SubPlan\n                  ->  Index Scan using collection_data_context_key_index\non collection_data  (cost=0.00..14.30 rows=6 width=4) (actual\ntime=0.018..0.023 rows=3 loops=1)\n                        Index Cond: (collection_context_key = 392210)\n  Total runtime: 6002.976 ms\n(11 rows)\n\n\n=========================================================================\nexplain analyze SELECT contexts.context_key\nFROM contexts\n     JOIN articles\n     ON (articles.context_key=contexts.context_key)\nWHERE (contexts.parent_key = 392210 OR contexts.parent_key IN\n(392210,392210,395073,1304250))\nAND articles.indexed\n;\n                                                               QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------\n  Nested Loop  (cost=14.35..1863.85 rows=94 width=4) (actual\ntime=0.098..1.038 rows=57 loops=1)\n    ->  Bitmap Heap Scan on contexts  (cost=14.35..572.57 rows=288\nwidth=4) (actual time=0.079..0.274 rows=59 loops=1)\n          Recheck Cond: ((parent_key = 392210) OR (parent_key = 
ANY\n('{392210,392210,395073,1304250}'::integer[])))\n          ->  BitmapOr  (cost=14.35..14.35 rows=288 width=0) (actual\ntime=0.066..0.066 rows=0 loops=1)\n                ->  Bitmap Index Scan on parent_key_idx\n(cost=0.00..3.07 rows=58 width=0) (actual time=0.028..0.028 rows=28 loops=1)\n                      Index Cond: (parent_key = 392210)\n                ->  Bitmap Index Scan on parent_key_idx\n(cost=0.00..11.13 rows=231 width=0) (actual time=0.035..0.035 rows=87\nloops=1)\n                      Index Cond: (parent_key = ANY\n('{392210,392210,395073,1304250}'::integer[]))\n    ->  Index Scan using article_key_idx on articles  (cost=0.00..4.47\nrows=1 width=4) (actual time=0.007..0.008 rows=1 loops=59)\n          Index Cond: (articles.context_key = contexts.context_key)\n          Filter: articles.indexed\n  Total runtime: 1.166 ms\n(12 rows)\n\nproduction=> explain analyze SELECT contexts.context_key\nFROM contexts\n     JOIN articles\n     ON (articles.context_key=contexts.context_key)\nWHERE (contexts.parent_key = 392210\n      OR contexts.context_key IN\n         (SELECT collection_data.context_key\n         FROM collection_data\n          WHERE collection_data.collection_context_key = 392210)\n)\nAND articles.indexed\n;\n\n\n=========================================================================\n# select version();\nPostgreSQL 8.3.4 on x86_64-pc-linux-gnu, compiled by GCC cc (GCC) 4.1.2\n20061115 (prerelease) (Debian 4.1.1-21)\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 8 Dec 2010 21:12:23 +0100", "msg_from": "\"Marc Mamin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hashed subplan 5000x slower than two sequential operations" }, { "msg_contents": "Shrirang Chitnis <[email protected]> writes:\n> Bryce,\n> The two queries are different:\n\nI suspect the second one is a typo and not what he really wanted.\n\n> WHERE (contexts.parent_key = 392210\n> OR contexts.context_key IN\n> (SELECT collection_data.context_key\n> FROM collection_data\n> WHERE collection_data.collection_context_key = 392210)\n\nThe only really effective way the planner knows to optimize an\n\"IN (sub-SELECT)\" is to turn it into a semi-join, which is not possible\nhere because of the unrelated OR clause. You might consider replacing\nthis with a UNION of two scans of \"contexts\". 
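Roughly like this, untested, and reusing only the table and column names already in your query:\n\nSELECT contexts.context_key\nFROM contexts\n     JOIN articles\n     ON (articles.context_key=contexts.context_key)\nWHERE contexts.parent_key = 392210\nAND articles.indexed\nUNION\nSELECT contexts.context_key\nFROM contexts\n     JOIN articles\n     ON (articles.context_key=contexts.context_key)\nWHERE contexts.context_key IN\n      (SELECT collection_data.context_key\n      FROM collection_data\n      WHERE collection_data.collection_context_key = 392210)\nAND articles.indexed\n;\n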
(And yes, I know it'd be\nnicer if the planner did that for you.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 08 Dec 2010 15:12:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hashed subplan 5000x slower than two sequential operations " }, { "msg_contents": "2010/12/8 Tom Lane <[email protected]>:\n> Shrirang Chitnis <[email protected]> writes:\n>> Bryce,\n>> The two queries are different:\n>\n> I suspect the second one is a typo and not what he really wanted.\n>\n>> WHERE (contexts.parent_key = 392210\n>>       OR contexts.context_key IN\n>>          (SELECT collection_data.context_key\n>>          FROM collection_data\n>>           WHERE collection_data.collection_context_key = 392210)\n>\n> The only really effective way the planner knows to optimize an\n> \"IN (sub-SELECT)\" is to turn it into a semi-join, which is not possible\n> here because of the unrelated OR clause.  You might consider replacing\n> this with a UNION of two scans of \"contexts\".  (And yes, I know it'd be\n> nicer if the planner did that for you.)\n\nI remeber a similar case - 9 years ago.\n\nslow variant:\n\nWHERE pk = C1 OR pk IN (SELECT .. FROM .. WHERE some = C2)\n\nI had to rewrite to form\n\nWHERE pk IN (SELECT .. FROM WHERE some = C2 UNION ALL SELECT C1)\n\nRegards\n\nPavel Stehule\n\n\n>\n>                        regards, tom lane\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Wed, 8 Dec 2010 21:25:40 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hashed subplan 5000x slower than two sequential operations" }, { "msg_contents": "\n\n\n\n\nMarc Mamin wrote:\n\n\n\nAW: [PERFORM] hashed subplan 5000x slower than two sequential\noperations\n\n\nHello,\nare the table freshly analyzed, with a sufficient\ndefault_statistics_target ?\n\n\n\nautovacuum = on                            # Enable autovacuum\nsubprocess?  
'on' \nautovacuum_naptime = 5min         # time between autovacuum runs\ndefault_statistics_target = 150       # range 1-1000\n\n\n\n\n\nYou may try to get a better plan while rewriting the query as an UNION\nto get rid of the OR clause.\nSomething like (not tested):\n\n\nIt is way better\n\n\nEXPLAIN ANALYZE SELECT contexts.context_key\nFROM contexts\n    JOIN articles\n    ON (articles.context_key=contexts.context_key)\nWHERE (contexts.parent_key =\n392210)                                                          \nAND articles.indexed\n\nUNION\nSELECT collection_data.context_key\nFROM collection_data\nJOIN articles ON (articles.context_key=collection_data.context_key)\nWHERE collection_data.collection_context_key = 392210\nAND articles.indexed;\n\n                                                                                \nQUERY\nPLAN                                                                                \n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique  (cost=418.50..418.61 rows=22 width=4) (actual\ntime=0.582..0.671 rows=28 loops=1)\n   ->  Sort  (cost=418.50..418.55 rows=22 width=4) (actual\ntime=0.579..0.608 rows=28 loops=1)\n         Sort Key: contexts.context_key\n         Sort Method:  quicksort  Memory: 26kB\n         ->  Append  (cost=0.00..418.01 rows=22 width=4) (actual\ntime=0.042..0.524 rows=28 loops=1)\n               ->  Nested Loop  (cost=0.00..376.46 rows=19 width=4)\n(actual time=0.040..0.423 rows=28 loops=1)\n                     ->  Index Scan using parent_key_idx on\ncontexts  (cost=0.00..115.20 rows=58 width=4) (actual time=0.021..0.082\nrows=28 loops=1)\n                           Index Cond: (parent_key = 392210)\n                     ->  Index Scan using article_key_idx on\narticles  (cost=0.00..4.49 rows=1 width=4) (actual time=0.007..0.008\nrows=1 loops=28)\n                           Index Cond: (public.articles.context_key =\ncontexts.context_key)\n                           Filter: public.articles.indexed\n               ->  Nested Loop  (cost=0.00..41.32 rows=3 width=4)\n(actual time=0.043..0.043 rows=0 loops=1)\n                     ->  Index Scan using\ncollection_data_context_key_index on collection_data  (cost=0.00..14.30\nrows=6 width=4) (actual time=0.012..0.015 rows=3 loops=1)\n                           Index Cond: (collection_context_key = 392210)\n                     ->  Index Scan using article_key_idx on\narticles  (cost=0.00..4.49 rows=1 width=4) (actual time=0.006..0.006\nrows=0 loops=3)\n                           Index Cond: (public.articles.context_key =\ncollection_data.context_key)\n                           Filter: public.articles.indexed\n Total runtime: 0.812 ms\n\n\n\n\n\n\n", "msg_date": "Wed, 08 Dec 2010 12:31:53 -0800", "msg_from": "Bryce Nesbitt <[email protected]>", "msg_from_op": true, "msg_subject": "Re: hashed subplan 5000x slower than two sequential\n operations" }, { "msg_contents": "Marc Mamin wrote:\n>\n> Another point: would a conditionl index help ?\n> on articles (context_key) where indexed\n>\nno.\n\nproduction=> select count(*),indexed from articles group by indexed;\n count | indexed\n--------+---------\n 517433 | t\n 695814 | f\n", "msg_date": "Wed, 08 Dec 2010 12:33:57 -0800", "msg_from": "Bryce Nesbitt <[email protected]>", "msg_from_op": true, "msg_subject": "Re: hashed subplan 5000x slower than two sequential\n operations" }, { "msg_contents": "\n> Tom Lane 
wrote:\n>\n> The only really effective way the planner knows to optimize an\n> \"IN (sub-SELECT)\" is to turn it into a semi-join, which is not possible\n> here because of the unrelated OR clause. You might consider replacing\n> this with a UNION of two scans of \"contexts\". (And yes, I know it'd be\n> nicer if the planner did that for you.)\n\nIn moving our application from Oracle to Postgres, we've discovered that a\nlarge number of our reports fall into this category. If we rewrite them as\na UNION of two scans, it would be quite a big undertaking. Is there a way\nto tell the planner explicitly to use a semi-join (I may not grasp the\nconcepts here)? If not, would your advice be to hunker down and rewrite the\nqueries?\n-- \nView this message in context: http://postgresql.1045698.n5.nabble.com/hashed-subplan-5000x-slower-than-two-sequential-operations-tp3297790p3346652.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n", "msg_date": "Tue, 18 Jan 2011 10:56:59 -0800 (PST)", "msg_from": "masterchief <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hashed subplan 5000x slower than two sequential operations" }, { "msg_contents": "2011/1/18 masterchief <[email protected]>\n\n>\n> > Tom Lane wrote:\n> >\n> > The only really effective way the planner knows to optimize an\n> > \"IN (sub-SELECT)\" is to turn it into a semi-join, which is not possible\n> > here because of the unrelated OR clause. You might consider replacing\n> > this with a UNION of two scans of \"contexts\". (And yes, I know it'd be\n> > nicer if the planner did that for you.)\n>\n> In moving our application from Oracle to Postgres, we've discovered that a\n> large number of our reports fall into this category. If we rewrite them as\n> a UNION of two scans, it would be quite a big undertaking. Is there a way\n> to tell the planner explicitly to use a semi-join (I may not grasp the\n> concepts here)? If not, would your advice be to hunker down and rewrite\n> the\n> queries?\n>\n>\n You can try \"exists\" instead of \"in\". Postgresql likes exists better.\nAlternatively, you can do something like \"set enable_seqscan=false\". Note\nthat such set is more like a hammer, so should be avoided. If it is the only\nthing that helps, it can be set right before calling query and reset to\ndefault afterwards.\n--\n\nBest regards,\n Vitalii Tymchyshyn\n\n2011/1/18 masterchief <[email protected]>\n\n> Tom Lane wrote:\n>\n> The only really effective way the planner knows to optimize an\n> \"IN (sub-SELECT)\" is to turn it into a semi-join, which is not possible\n> here because of the unrelated OR clause.  You might consider replacing\n> this with a UNION of two scans of \"contexts\".  (And yes, I know it'd be\n> nicer if the planner did that for you.)\n\nIn moving our application from Oracle to Postgres, we've discovered that a\nlarge number of our reports fall into this category.  If we rewrite them as\na UNION of two scans, it would be quite a big undertaking.  Is there a way\nto tell the planner explicitly to use a semi-join (I may not grasp the\nconcepts here)?  If not, would your advice be to hunker down and rewrite the\nqueries?  You can try \"exists\" instead of \"in\". Postgresql likes exists better. Alternatively, you can do something like \"set enable_seqscan=false\". Note that such set is more like a hammer, so should be avoided. 
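As a sketch only (the SELECT here just stands in for the actual report query):\n\nSET enable_seqscan = false;\nSELECT ... ;   -- the slow report query\nRESET enable_seqscan;\n\n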
If it is the only thing that helps, it can be set right before calling query and reset to default afterwards.\n--Best regards, Vitalii Tymchyshyn", "msg_date": "Tue, 18 Jan 2011 23:29:40 +0200", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hashed subplan 5000x slower than two sequential operations" } ]
[ { "msg_contents": "I need to build a new high performance server to replace our current production database server.\nThe current server is a SuperMicro 1U with 2 RAID-1 containers (one for data, one for log, SAS - data is 600GB, Logs 144GB), 16GB of RAM, running 2 quad core processors (E5405 @ 2GHz), Adaptec 5405 Controller with BBU.  I am already having serious I/O bottlenecks with iostat -x showing extended periods where the disk subsystem on the data partition (the one with all the random i/o) at over 85% busy.  The system is running FreeBSD 7.2 amd64 and PostgreSQL 8.4.4 on amd64-portbld-freebsd7.2, compiled by GCC cc (GCC) 4.2.1 20070719  [FreeBSD], 64-bit.\nCurrently I have about 4GB of shared memory allocated to PostgreSQL.  Database is currently about 80GB, with about 60GB being in partitioned tables which get rotated nightly to purge old data (sort of like a circular buffer of historic data).\n\nI was looking at one of the machines which Aberdeen has (the X438), and was planning  on something along the lines of 96GB RAM with 16 SAS drives (15K).  If I create a RAID 10 (stripe of mirrors), leaving 2 hot spares, should I still place the logs in a separate RAID-1 mirror, or can they be left on the same RAID-10 container?\nOn the processor front, are there advantages to going to X series processors as opposed to the E series (especially since I am I/O bound)?  Is anyone running this type of hardware, specially on FreeBSD?  Any opinions, especially concerning the Areca controllers which they use?\n\nThe new box would ideally be built with the latest released version of FreeBSD, PG 9.x.  Also, is anyone running the 8.x series of FreeBSD with PG 9 in a high throughput production environment?  I will be upgrading one of our test servers in one week to this same configuration to test out, but just wanted to make sure there aren't any caveats others have experienced, especially as it pertains with the autovacuum not launching worker processes which I have experienced.\n\nBest regards,\n\nBenjamin \n", "msg_date": "Wed, 8 Dec 2010 16:03:43 -0700", "msg_from": "\"Benjamin Krajmalnik\" <[email protected]>", "msg_from_op": true, "msg_subject": "Hardware recommendations" }, { "msg_contents": "If you are IO-bound, you might want to consider using SSD.\n\nA single SSD could easily give you more IOPS than 16 15k SAS in RAID 10.\n \n--- On Wed, 12/8/10, Benjamin Krajmalnik <[email protected]> wrote:\n\n> From: Benjamin Krajmalnik <[email protected]>\n> Subject: [PERFORM] Hardware recommendations\n> To: [email protected]\n> Date: Wednesday, December 8, 2010, 6:03 PM\n> I need to build a new high\n> performance server to replace our current production\n> database server.\n> The current server is a SuperMicro 1U with 2 RAID-1\n> containers (one for data, one for log, SAS - data is 600GB,\n> Logs 144GB), 16GB of RAM, running 2 quad core processors\n> (E5405 @ 2GHz), Adaptec 5405 Controller with BBU.  I am\n> already having serious I/O bottlenecks with iostat -x\n> showing extended periods where the disk subsystem on the\n> data partition (the one with all the random i/o) at over 85%\n> busy.  The system is running FreeBSD 7.2 amd64 and\n> PostgreSQL 8.4.4 on amd64-portbld-freebsd7.2, compiled by\n> GCC cc (GCC) 4.2.1 20070719  [FreeBSD], 64-bit.\n> Currently I have about 4GB of shared memory allocated to\n> PostgreSQL.  
Database is currently about 80GB, with about\n> 60GB being in partitioned tables which get rotated nightly\n> to purge old data (sort of like a circular buffer of\n> historic data).\n> \n> I was looking at one of the machines which Aberdeen has\n> (the X438), and was planning  on something along the lines\n> of 96GB RAM with 16 SAS drives (15K).  If I create a RAID\n> 10 (stripe of mirrors), leaving 2 hot spares, should I still\n> place the logs in a separate RAID-1 mirror, or can they be\n> left on the same RAID-10 container?\n> On the processor front, are there advantages to going to X\n> series processors as opposed to the E series (especially\n> since I am I/O bound)?  Is anyone running this type of\n> hardware, specially on FreeBSD?  Any opinions, especially\n> concerning the Areca controllers which they use?\n> \n> The new box would ideally be built with the latest released\n> version of FreeBSD, PG 9.x.  Also, is anyone running the\n> 8.x series of FreeBSD with PG 9 in a high throughput\n> production environment?  I will be upgrading one of our\n> test servers in one week to this same configuration to test\n> out, but just wanted to make sure there aren't any caveats\n> others have experienced, especially as it pertains with the\n> autovacuum not launching worker processes which I have\n> experienced.\n> \n> Best regards,\n> \n> Benjamin \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\n \n", "msg_date": "Wed, 8 Dec 2010 15:26:57 -0800 (PST)", "msg_from": "Andy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations" }, { "msg_contents": "Ben,\r\n\r\nIt would help if you could tell us a bit more about the read/write mix and transaction requirements. *IF* you are heavy writes I would suggest moving off the RAID1 configuration to a RAID10 setup. I would highly suggest looking at SLC based solid state drives or if your budget has legs, look at fusionIO drives.\r\n\r\nWe currently have several setups with two FusionIO Duo cards that produce > 2GB second reads, and over 1GB/sec writes. They are pricey but, long term cheaper for me than putting SAN in place that can meet that sort of performance.\r\n\r\nIt all really depends on your workload:\r\n\r\nhttp://www.fusionio.com/products/iodrive/ - BEST in slot currently IMHO.\r\nhttp://www.intel.com/design/flash/nand/extreme/index.htm?wapkw=(X25-E) - not a bad alternative.\r\n\r\nThere are other SSD controllers on the market but I have experience with both so I can recommend both as well.\r\n\r\n- John\r\n\r\n\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Benjamin Krajmalnik\r\nSent: Wednesday, December 08, 2010 5:04 PM\r\nTo: [email protected]\r\nSubject: [PERFORM] Hardware recommendations\r\n\r\nI need to build a new high performance server to replace our current production database server.\r\nThe current server is a SuperMicro 1U with 2 RAID-1 containers (one for data, one for log, SAS - data is 600GB, Logs 144GB), 16GB of RAM, running 2 quad core processors (E5405 @ 2GHz), Adaptec 5405 Controller with BBU.  I am already having serious I/O bottlenecks with iostat -x showing extended periods where the disk subsystem on the data partition (the one with all the random i/o) at over 85% busy.  
The system is running FreeBSD 7.2 amd64 and PostgreSQL 8.4.4 on amd64-portbld-freebsd7.2, compiled by GCC cc (GCC) 4.2.1 20070719  [FreeBSD], 64-bit.\r\nCurrently I have about 4GB of shared memory allocated to PostgreSQL.  Database is currently about 80GB, with about 60GB being in partitioned tables which get rotated nightly to purge old data (sort of like a circular buffer of historic data).\r\n\r\nI was looking at one of the machines which Aberdeen has (the X438), and was planning  on something along the lines of 96GB RAM with 16 SAS drives (15K).  If I create a RAID 10 (stripe of mirrors), leaving 2 hot spares, should I still place the logs in a separate RAID-1 mirror, or can they be left on the same RAID-10 container?\r\nOn the processor front, are there advantages to going to X series processors as opposed to the E series (especially since I am I/O bound)?  Is anyone running this type of hardware, specially on FreeBSD?  Any opinions, especially concerning the Areca controllers which they use?\r\n\r\nThe new box would ideally be built with the latest released version of FreeBSD, PG 9.x.  Also, is anyone running the 8.x series of FreeBSD with PG 9 in a high throughput production environment?  I will be upgrading one of our test servers in one week to this same configuration to test out, but just wanted to make sure there aren't any caveats others have experienced, especially as it pertains with the autovacuum not launching worker processes which I have experienced.\r\n\r\nBest regards,\r\n\r\nBenjamin \r\n\r\n-- \r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\nThis communication is for informational purposes only. It is not\nintended as an offer or solicitation for the purchase or sale of\nany financial instrument or as an official confirmation of any\ntransaction. All market prices, data and other information are not\nwarranted as to completeness or accuracy and are subject to change\nwithout notice. Any comments or statements made herein do not\nnecessarily reflect those of JPMorgan Chase & Co., its subsidiaries\nand affiliates.\r\n\r\nThis transmission may contain information that is privileged,\nconfidential, legally privileged, and/or exempt from disclosure\nunder applicable law. If you are not the intended recipient, you\nare hereby notified that any disclosure, copying, distribution, or\nuse of the information contained herein (including any reliance\nthereon) is STRICTLY PROHIBITED. Although this transmission and any\nattachments are believed to be free of any virus or other defect\nthat might affect any computer system into which it is received and\nopened, it is the responsibility of the recipient to ensure that it\nis virus free and no responsibility is accepted by JPMorgan Chase &\nCo., its subsidiaries and affiliates, as applicable, for any loss\nor damage arising in any way from its use. If you received this\ntransmission in error, please immediately contact the sender and\ndestroy the material in its entirety, whether in electronic or hard\ncopy format. 
Thank you.\r\n\r\nPlease refer to http://www.jpmorgan.com/pages/disclosures for\ndisclosures relating to European legal entities.\n", "msg_date": "Wed, 8 Dec 2010 18:31:51 -0500", "msg_from": "John W Strange <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations" }, { "msg_contents": "On Thu, Dec 9, 2010 at 01:26, Andy <[email protected]> wrote:\n> If you are IO-bound, you might want to consider using SSD.\n>\n> A single SSD could easily give you more IOPS than 16 15k SAS in RAID 10.\n\nAre there any that don't risk your data on power loss, AND are cheaper\nthan SAS RAID 10?\n\nRegards,\nMarti\n", "msg_date": "Thu, 9 Dec 2010 02:02:54 +0200", "msg_from": "Marti Raudsepp <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations" }, { "msg_contents": "John,\n\nThe platform is a network monitoring system, so we have quite a lot of inserts/updates (every data point has at least one record insert as well as at least 3 record updates). The management GUI has a lot of selects. We are refactoring the database to some degree to aid in the performance, since the performance degradations are correlated to the number of users viewing the system GUI.\nMy biggest concern with SSD drives is their life expectancy, as well as our need for relatively high capacity. From a purely scalability perspective, this setup will need to support terabytes of data. I suppose I could use table spaces to use the most accessed data in SSD drives and the rest on regular drives.\nAs I stated, I am moving to RAID 10, and was just wondering if the logs should still be moved off to different spindles, or will leaving them on the RAID10 be fine and not affect performance.\n\n> -----Original Message-----\n> From: John W Strange [mailto:[email protected]]\n> Sent: Wednesday, December 08, 2010 4:32 PM\n> To: Benjamin Krajmalnik; [email protected]\n> Subject: RE: Hardware recommendations\n> \n> Ben,\n> \n> It would help if you could tell us a bit more about the read/write mix\n> and transaction requirements. *IF* you are heavy writes I would suggest\n> moving off the RAID1 configuration to a RAID10 setup. I would highly\n> suggest looking at SLC based solid state drives or if your budget has\n> legs, look at fusionIO drives.\n> \n> We currently have several setups with two FusionIO Duo cards that\n> produce > 2GB second reads, and over 1GB/sec writes. They are pricey\n> but, long term cheaper for me than putting SAN in place that can meet\n> that sort of performance.\n> \n> It all really depends on your workload:\n> \n> http://www.fusionio.com/products/iodrive/ - BEST in slot currently\n> IMHO.\n> http://www.intel.com/design/flash/nand/extreme/index.htm?wapkw=(X25-E)\n> - not a bad alternative.\n> \n> There are other SSD controllers on the market but I have experience\n> with both so I can recommend both as well.\n> \n> - John\n> \n> \n> \n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-performance-\n> [email protected]] On Behalf Of Benjamin Krajmalnik\n> Sent: Wednesday, December 08, 2010 5:04 PM\n> To: [email protected]\n> Subject: [PERFORM] Hardware recommendations\n> \n> I need to build a new high performance server to replace our current\n> production database server.\n> The current server is a SuperMicro 1U with 2 RAID-1 containers (one for\n> data, one for log, SAS - data is 600GB, Logs 144GB), 16GB of RAM,\n> running 2 quad core processors (E5405 @ 2GHz), Adaptec 5405 Controller\n> with BBU.  
I am already having serious I/O bottlenecks with iostat -x\n> showing extended periods where the disk subsystem on the data partition\n> (the one with all the random i/o) at over 85% busy.  The system is\n> running FreeBSD 7.2 amd64 and PostgreSQL 8.4.4 on amd64-portbld-\n> freebsd7.2, compiled by GCC cc (GCC) 4.2.1 20070719  [FreeBSD], 64-bit.\n> Currently I have about 4GB of shared memory allocated to PostgreSQL.\n> Database is currently about 80GB, with about 60GB being in partitioned\n> tables which get rotated nightly to purge old data (sort of like a\n> circular buffer of historic data).\n> \n> I was looking at one of the machines which Aberdeen has (the X438), and\n> was planning  on something along the lines of 96GB RAM with 16 SAS\n> drives (15K).  If I create a RAID 10 (stripe of mirrors), leaving 2 hot\n> spares, should I still place the logs in a separate RAID-1 mirror, or\n> can they be left on the same RAID-10 container?\n> On the processor front, are there advantages to going to X series\n> processors as opposed to the E series (especially since I am I/O\n> bound)?  Is anyone running this type of hardware, specially on\n> FreeBSD?  Any opinions, especially concerning the Areca controllers\n> which they use?\n> \n> The new box would ideally be built with the latest released version of\n> FreeBSD, PG 9.x.  Also, is anyone running the 8.x series of FreeBSD\n> with PG 9 in a high throughput production environment?  I will be\n> upgrading one of our test servers in one week to this same\n> configuration to test out, but just wanted to make sure there aren't\n> any caveats others have experienced, especially as it pertains with the\n> autovacuum not launching worker processes which I have experienced.\n> \n> Best regards,\n> \n> Benjamin\n> \n> --\n> Sent via pgsql-performance mailing list (pgsql-\n> [email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> This communication is for informational purposes only. It is not\n> intended as an offer or solicitation for the purchase or sale of\n> any financial instrument or as an official confirmation of any\n> transaction. All market prices, data and other information are not\n> warranted as to completeness or accuracy and are subject to change\n> without notice. Any comments or statements made herein do not\n> necessarily reflect those of JPMorgan Chase & Co., its subsidiaries\n> and affiliates.\n> \n> This transmission may contain information that is privileged,\n> confidential, legally privileged, and/or exempt from disclosure\n> under applicable law. If you are not the intended recipient, you\n> are hereby notified that any disclosure, copying, distribution, or\n> use of the information contained herein (including any reliance\n> thereon) is STRICTLY PROHIBITED. Although this transmission and any\n> attachments are believed to be free of any virus or other defect\n> that might affect any computer system into which it is received and\n> opened, it is the responsibility of the recipient to ensure that it\n> is virus free and no responsibility is accepted by JPMorgan Chase &\n> Co., its subsidiaries and affiliates, as applicable, for any loss\n> or damage arising in any way from its use. If you received this\n> transmission in error, please immediately contact the sender and\n> destroy the material in its entirety, whether in electronic or hard\n> copy format. 
Thank you.\n> \n> Please refer to http://www.jpmorgan.com/pages/disclosures for\n> disclosures relating to European legal entities.\n", "msg_date": "Wed, 8 Dec 2010 17:03:31 -0700", "msg_from": "\"Benjamin Krajmalnik\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hardware recommendations" }, { "msg_contents": "\n\n> > If you are IO-bound, you might want to consider using\n> SSD.\n> >\n> > A single SSD could easily give you more IOPS than 16\n> 15k SAS in RAID 10.\n> \n> Are there any that don't risk your data on power loss, AND\n> are cheaper\n> than SAS RAID 10?\n> \n\nVertex 2 Pro has a built-in supercapacitor to save data on power loss. It's spec'd at 50K IOPS and a 200GB one costs around $1,000.\n\n\n \n", "msg_date": "Wed, 8 Dec 2010 16:23:41 -0800 (PST)", "msg_from": "Andy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations" }, { "msg_contents": "On Wed, Dec 8, 2010 at 5:03 PM, Benjamin Krajmalnik <[email protected]> wrote:\n> John,\n>\n> The platform is a network monitoring system, so we have quite a lot of inserts/updates (every data point has at least one record insert as well as at least 3 record updates).  The management GUI has a lot of selects.  We are refactoring the database to some degree to aid in the performance, since the performance degradations are correlated to the number of users viewing the system GUI.\n\nScalability here may be better addressed by having something like hot\nread only slaves for the users who want to view data.\n\n> My biggest concern with SSD drives is their life expectancy,\n\nGenerally that's not a big issue, especially as the SSDs get larger.\nBeing able to survive a power loss without corruption is more of an\nissue, so if you go SSD get ones with a supercapacitor that can write\nout the data before power down.\n\n> as well as our need for relatively high capacity.\n\nAhhh, capacity is where SSDs start to lose out quickly. Cheap 10k SAS\ndrives and less so 15k drives are way less per gigabyte than SSDs, and\nyou can only fit so many SSDs onto a single controller / in a single\ncage before you're broke.\n\n>  From a purely scalability perspective, this setup will need to support terabytes of data.  I suppose I could use table spaces to use the most accessed data in SSD drives and the rest on regular drives.\n> As I stated, I am moving to RAID 10, and was just wondering if the logs should still be moved off to different spindles, or will leaving them on the RAID10 be fine and not affect performance.\n\nWith a battery backed caching RAID controller, it's more important\nthat you have the pg_xlog files on a different partition than on a\ndifferen RAID set. I.e. you can have one big RAID set, and set aside\nthe first 100G or so for pg_xlog. This has to do with fsync\nbehaviour. In linux this is a known issue, I'm not sure how much so\nit would be in BSD. But you should test for fsync contention.\n\nAs for the Areca controllers, I haven't tested them with the latest\ndrivers or firmware, but we would routinely get 180 to 460 days of\nuptime between lockups on our 1680s we installed 2.5 or so years ago.\nOf the two brand new LSI 8888 controllers we installed this summer,\nwe've had one fail already. However, the database didn't get\ncorrupted so not too bad. 
My preference still leans towards the\nAreca, but no RAID controller is perfect and infallible.\n\nPerformance wise the Areca is still faster than the LSI 8888, and the\nnewer faster LSI just didn't work with out quad 12 core AMD mobo.\nNote that all of that hardware was brand new, so things may have\nimproved by now. I have to say Aberdeen took great care of us in\ngetting the systems up and running.\n\nAs for CPUs, almost any modern CPU will do fine.\n", "msg_date": "Wed, 8 Dec 2010 19:28:38 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations" }, { "msg_contents": "\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Andy\nSent: Wednesday, December 08, 2010 5:24 PM\nTo: Marti Raudsepp\nCc: [email protected]; Benjamin Krajmalnik\nSubject: Re: [PERFORM] Hardware recommendations\n\n\n\n>> > If you are IO-bound, you might want to consider using\n>> SSD.\n>> >\n>> > A single SSD could easily give you more IOPS than 16\n>> 15k SAS in RAID 10.\n>> \n>> Are there any that don't risk your data on power loss, AND\n>> are cheaper\n>> than SAS RAID 10?\n>> \n\n>Vertex 2 Pro has a built-in supercapacitor to save data on power loss. It's\nspec'd at 50K IOPS and a 200GB one costs around $1,000.\n\n\nViking offers 6Gbps SAS physical connector SSD drives as well - with a super\ncapacitor.\n\nI have not seen any official pricing yet, but I would suspect it would be in\nthe same ballpark.\n\n I am currently begging to get some for eval. I will let everyone know if I\nswing that and can post numbers. \n\n-mark\n\n", "msg_date": "Wed, 8 Dec 2010 22:17:18 -0700", "msg_from": "\"mark\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations" }, { "msg_contents": "On Thu, Dec 9, 2010 at 04:28, Scott Marlowe <[email protected]> wrote:\n> On Wed, Dec 8, 2010 at 5:03 PM, Benjamin Krajmalnik <[email protected]> wrote:\n>> My biggest concern with SSD drives is their life expectancy,\n>\n> Generally that's not a big issue, especially as the SSDs get larger.\n> Being able to survive a power loss without corruption is more of an\n> issue, so if you go SSD get ones with a supercapacitor that can write\n> out the data before power down.\n\nI agree with Benjamin here. Even if you put multiple SSD drives into a\nRAID array, all the drives get approximately the same write load and\nthus will likely wear out and fail at the same time!\n\n> As for the Areca controllers, I haven't tested them with the latest\n> drivers or firmware, but we would routinely get 180 to 460 days of\n> uptime between lockups\n\nThat sucks! But does a BBU even help with SSDs? 
The flash eraseblock\nis larger than the RAID cache unit size anyway, so as far as I can\ntell, it might not save you in the case of a power loss.\n\nAny thoughts whether software RAID on SSD is a good idea?\n\nRegards,\nMarti\n", "msg_date": "Thu, 9 Dec 2010 14:09:28 +0200", "msg_from": "Marti Raudsepp <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations" }, { "msg_contents": "If you are worried about wearing out the SSD's long term get a larger SSD and create the partition smaller than the disk, this will reduce the write amplification and extend the life of the drive.\r\n\r\nTRIM support also helps lower write amplification issues by not requiring as many pages to do the writes, and improve performance as well!\r\n\r\nAs a test I bought 4 cheap 40GB drives in a raid0 software stripe, I have run it for almost a year now with a lot of random IO. I portioned them as 30GB drives, leaving an extra 25% spare area to reduce the write amplification, I can still get over 600MB/sec on these for a whopping cost of $400 and a little of my time.\r\n\r\nSSD's can be very useful, but you have to be aware of the shortcomings and how to overcome them.\r\n\r\n- John\r\n\r\n-----Original Message-----\r\nFrom: Marti Raudsepp [mailto:[email protected]] \r\nSent: Thursday, December 09, 2010 6:09 AM\r\nTo: Scott Marlowe\r\nCc: Benjamin Krajmalnik; John W Strange; [email protected]\r\nSubject: Re: [PERFORM] Hardware recommendations\r\n\r\nOn Thu, Dec 9, 2010 at 04:28, Scott Marlowe <[email protected]> wrote:\r\n> On Wed, Dec 8, 2010 at 5:03 PM, Benjamin Krajmalnik <[email protected]> wrote:\r\n>> My biggest concern with SSD drives is their life expectancy,\r\n>\r\n> Generally that's not a big issue, especially as the SSDs get larger.\r\n> Being able to survive a power loss without corruption is more of an\r\n> issue, so if you go SSD get ones with a supercapacitor that can write\r\n> out the data before power down.\r\n\r\nI agree with Benjamin here. Even if you put multiple SSD drives into a\r\nRAID array, all the drives get approximately the same write load and\r\nthus will likely wear out and fail at the same time!\r\n\r\n> As for the Areca controllers, I haven't tested them with the latest\r\n> drivers or firmware, but we would routinely get 180 to 460 days of\r\n> uptime between lockups\r\n\r\nThat sucks! But does a BBU even help with SSDs? The flash eraseblock\r\nis larger than the RAID cache unit size anyway, so as far as I can\r\ntell, it might not save you in the case of a power loss.\r\n\r\nAny thoughts whether software RAID on SSD is a good idea?\r\n\r\nRegards,\r\nMarti\r\n\r\nThis communication is for informational purposes only. It is not\nintended as an offer or solicitation for the purchase or sale of\nany financial instrument or as an official confirmation of any\ntransaction. All market prices, data and other information are not\nwarranted as to completeness or accuracy and are subject to change\nwithout notice. Any comments or statements made herein do not\nnecessarily reflect those of JPMorgan Chase & Co., its subsidiaries\nand affiliates.\r\n\r\nThis transmission may contain information that is privileged,\nconfidential, legally privileged, and/or exempt from disclosure\nunder applicable law. If you are not the intended recipient, you\nare hereby notified that any disclosure, copying, distribution, or\nuse of the information contained herein (including any reliance\nthereon) is STRICTLY PROHIBITED. 
Although this transmission and any\nattachments are believed to be free of any virus or other defect\nthat might affect any computer system into which it is received and\nopened, it is the responsibility of the recipient to ensure that it\nis virus free and no responsibility is accepted by JPMorgan Chase &\nCo., its subsidiaries and affiliates, as applicable, for any loss\nor damage arising in any way from its use. If you received this\ntransmission in error, please immediately contact the sender and\ndestroy the material in its entirety, whether in electronic or hard\ncopy format. Thank you.\r\n\r\nPlease refer to http://www.jpmorgan.com/pages/disclosures for\ndisclosures relating to European legal entities.", "msg_date": "Thu, 9 Dec 2010 20:57:17 -0500", "msg_from": "John W Strange <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations" }, { "msg_contents": "On Wed, Dec 8, 2010 at 3:03 PM, Benjamin Krajmalnik <[email protected]> wrote:\n> I need to build a new high performance server to replace our current production database server.\n\n\nWe run FreeBSD 8.1 with PG 8.4 (soon to upgrade to PG 9). Hardware is:\n\nSupermicro 2u 6026T-NTR+\n2x Intel Xeon E5520 Nehalem 2.26GHz Quad-Core (8 cores total), 48GB RAM\n\nWe use ZFS and use SSDs for both the log device and L2ARC. All disks\nand SSDs are behind a 3ware with BBU in single disk mode. This has\ngiven us the capacity of the spinning disks with (mostly) the\nperformance of the SSDs.\n\nThe main issue we've had is that if the server is rebooted performance\nis horrible for a few minutes until the various memory and ZFS caches\nare warmed up. Luckily, that doesn't happen very often.\n", "msg_date": "Thu, 9 Dec 2010 22:38:00 -0800", "msg_from": "alan bryan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations" }, { "msg_contents": "\n\n> We use ZFS and use SSDs for both the log device and\n> L2ARC.  All disks\n> and SSDs are behind a 3ware with BBU in single disk\n> mode.  \n\nOut of curiosity why do you put your log on SSD? Log is all sequential IOs, an area in which SSD is not any faster than HDD. So I'd think putting log on SSD wouldn't give you any performance boost.\n\n\n \n", "msg_date": "Fri, 10 Dec 2010 05:58:24 -0800 (PST)", "msg_from": "Andy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations" }, { "msg_contents": "On 10-12-2010 14:58 Andy wrote:\n>> We use ZFS and use SSDs for both the log device and L2ARC. All\n>> disks and SSDs are behind a 3ware with BBU in single disk mode.\n>\n> Out of curiosity why do you put your log on SSD? Log is all\n> sequential IOs, an area in which SSD is not any faster than HDD. So\n> I'd think putting log on SSD wouldn't give you any performance\n> boost.\n\nThe \"common knowledge\" you based that comment on, may actually not be \nvery up-to-date anymore. 
Current consumer-grade SSD's can achieve up to \n200MB/sec when writing sequentially and they can probably do that a lot \nmore consistent than a hard disk.\n\nHave a look here: http://www.anandtech.com/show/2829/21\nThe sequential writes-graphs consistently put several SSD's at twice the \nperformance of the VelociRaptor 300GB 10k rpm disk and that's a test \nfrom over a year old, current SSD's have increased in performance, \nwhereas I'm not so sure there was much improvement in platter based \ndisks lately?\n\nApart from that, I'd guess that log-devices benefit from reduced latencies.\n\nIts actually the recommended approach from Sun to add a pair of (small \nSLC-based) ssd log devices to increase performance (especially for \nnfs-scenario's where a lot of synchonous writes occur) and they offer it \nas an option for most of their \"Unified Storage\" appliances.\n\nBest regards,\n\nArjen\n", "msg_date": "Fri, 10 Dec 2010 18:57:29 +0100", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations" }, { "msg_contents": "\nOn 10-12-2010 18:57 Arjen van der Meijden wrote:\n> Have a look here: http://www.anandtech.com/show/2829/21\n> The sequential writes-graphs consistently put several SSD's at twice the\n> performance of the VelociRaptor 300GB 10k rpm disk and that's a test\n> from over a year old, current SSD's have increased in performance,\n> whereas I'm not so sure there was much improvement in platter based\n> disks lately?\n\nHere's a more recent test:\nhttp://www.anandtech.com/show/4020/ocz-vertex-plus-preview-introducing-the-indilinx-martini/3\n\nThat shows several consumer grade SSD's and a 600GB VelociRaptor, its \n200+ vs 140MB/sec. I'm not sure how recent 15k rpm sas disks would do, \nnor do I know how recent server grade SSD's would behave. But if we \nassume similar gains for both, its still in favor of SSD's :-)\n\nBest regards,\n\nArjen\n", "msg_date": "Fri, 10 Dec 2010 19:05:43 +0100", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations" }, { "msg_contents": "On Fri, Dec 10, 2010 at 11:05 AM, Arjen van der Meijden\n<[email protected]> wrote:\n>\n> On 10-12-2010 18:57 Arjen van der Meijden wrote:\n>>\n>> Have a look here: http://www.anandtech.com/show/2829/21\n>> The sequential writes-graphs consistently put several SSD's at twice the\n>> performance of the VelociRaptor 300GB 10k rpm disk and that's a test\n>> from over a year old, current SSD's have increased in performance,\n>> whereas I'm not so sure there was much improvement in platter based\n>> disks lately?\n>\n> Here's a more recent test:\n> http://www.anandtech.com/show/4020/ocz-vertex-plus-preview-introducing-the-indilinx-martini/3\n>\n> That shows several consumer grade SSD's and a 600GB VelociRaptor, its 200+\n> vs 140MB/sec. I'm not sure how recent 15k rpm sas disks would do, nor do I\n> know how recent server grade SSD's would behave. But if we assume similar\n> gains for both, its still in favor of SSD's :-)\n\nThe latest Seagate Cheetahs (15k.7) can do 122 to 204 depending on\nwhat part of the drive you're writing to.\n", "msg_date": "Fri, 10 Dec 2010 11:08:01 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations" }, { "msg_contents": "\n> The \"common knowledge\" you based that comment on, may\n> actually not be very up-to-date anymore. 
Current\n> consumer-grade SSD's can achieve up to 200MB/sec when\n> writing sequentially and they can probably do that a lot\n> more consistent than a hard disk.\n> \n> Have a look here: http://www.anandtech.com/show/2829/21\n> The sequential writes-graphs consistently put several SSD's\n> at twice the performance of the VelociRaptor 300GB 10k rpm\n> disk and that's a test from over a year old, current SSD's\n> have increased in performance, whereas I'm not so sure there\n> was much improvement in platter based disks lately?\n\nThe sequential IO performance of SSD may be twice faster than HDD, but the random IO performance of SSD is at least an order of magnitude faster. I'd think it'd make more sense to take advantage of SSD's greatest strength, which is random IO.\n\nThe same website you linked, anandtech, also benchmarked various configurations of utilizing SSD: http://www.anandtech.com/show/2739/11\n\nAccording to their benchmarks putting logs on SSD results in no performance improvements, while putting data on SSD leads to massive improvement. \n\nThey used MySQL for the benchmarks. So perhaps Postgresql is different in this regard?\n\n\n\n \n", "msg_date": "Fri, 10 Dec 2010 11:27:28 -0800 (PST)", "msg_from": "Andy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations" }, { "msg_contents": "John W Strange wrote:\n> http://www.fusionio.com/products/iodrive/ - BEST in slot currently IMHO.\n> http://www.intel.com/design/flash/nand/extreme/index.htm?wapkw=(X25-E) - not a bad alternative.\n> \n\nThe FusionIO drives are OK, so long as you don't mind the possibility \nthat your system will be down for >15 minutes after any unexpected \ncrash. They can do a pretty time consuming verification process on the \nnext boot if you didn't shut the server down properly before mounting.\n\nIntel's drives have been so bad about respecting OS cache flush calls \nthat I can't recommend them for any PostgreSQL use, due to their \ntendency for the database to get corrupted in the same sort of \npost-crash situation. See \nhttp://wiki.postgresql.org/wiki/Reliable_Writes for more background. If \nyou get one of the models that allows being setup for reliability, those \nare so slow that's not even worth the trouble.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Sat, 11 Dec 2010 04:10:45 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations" }, { "msg_contents": "Benjamin Krajmalnik wrote:\n> I am already having serious I/O bottlenecks with iostat -x showing extended periods where the disk subsystem on the data partition (the one with all the random i/o) at over 85% busy. The system is running FreeBSD 7.2 amd64 and PostgreSQL 8.4.4 on amd64-portbld-freebsd7.2, compiled by GCC cc (GCC) 4.2.1 20070719 [FreeBSD], 64-bit.\n> Currently I have about 4GB of shared memory allocated to PostgreSQL. Database is currently about 80GB, with about 60GB being in partitioned tables which get rotated nightly to purge old data (sort of like a circular buffer of historic data).\n> \n\nWhat sort of total read/write rates are you seeing when iostat is \nshowing the system 85% busy? That's a useful number to note as an \nestimate of just how random the workload is.\n\nHave you increased checkpoint parameters like checkpoint_segments? 
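Something in this neighborhood is a common starting point for a write-heavy box (the numbers are only an illustrative sketch, not tuned to your exact workload):\n\ncheckpoint_segments = 64\ncheckpoint_timeout = 15min\ncheckpoint_completion_target = 0.9\n\n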
You \nneed to avoid having checkpoints too often if you're going to try and \nuse 4GB of memory for shared_buffers.\n\n> I was looking at one of the machines which Aberdeen has (the X438), and was planning on something along the lines of 96GB RAM with 16 SAS drives (15K). If I create a RAID 10 (stripe of mirrors), leaving 2 hot spares, should I still place the logs in a separate RAID-1 mirror, or can they be left on the same RAID-10 container?\n> \n\nIt's nice to put the logs onto a separate disk because it lets you \nmeasure exactly how much I/O is going to them, relative to the \ndatabase. It's not really necessary though; with 14 disks you'll be at \nthe range where you can mix them together and things should still be fine.\n\n\n> On the processor front, are there advantages to going to X series processors as opposed to the E series (especially since I am I/O bound)? Is anyone running this type of hardware, specially on FreeBSD? Any opinions, especially concerning the Areca controllers which they use?\n> \n\nIt sounds like you should be saving your hardware dollars for more RAM \nand disks, not getting faster procesors. The Areca controllers are fast \nand pretty reliable under Linux. I'm not aware of anyone using them for \nPostgreSQL in production on FreeBSD. Aberdeen may have enough customers \ndoing that to give you a good opinion on how stable that is likely to \nbe; they're pretty straight as vendors go. You'd want to make sure to \nstress test that hardware/software combo as early as possible \nregardless, it's generally a good idea and you wouldn't be running a \nreally popular combination.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Sat, 11 Dec 2010 04:18:12 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations" }, { "msg_contents": "\n\n> -----Original Message-----\n> From: Greg Smith [mailto:[email protected]]\n> Sent: Saturday, December 11, 2010 2:18 AM\n> To: Benjamin Krajmalnik\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Hardware recommendations\n> \n> \n> What sort of total read/write rates are you seeing when iostat is\n> showing the system 85% busy? That's a useful number to note as an\n> estimate of just how random the workload is.\n> \n\nI did a vacuum full of the highly bloated, constantly accessed tables,\nwhich has improved the situation significantly. I am not seeing over\n75% busy right now, but these are some values for the high busy\npresently:\n\n71% 344 w/s 7644 kw/s\n81% 392 w/s 8880 kw/s\n79% 393 w/s 9526 kw/s\n75% 443 w/s 10245 kw/s\n80% 436 w/s 10157 kw/s\n76% 392 w/s 8438 kw/s\n\n\n\n\n> Have you increased checkpoint parameters like checkpoint_segments?\nYou\n> need to avoid having checkpoints too often if you're going to try and\n> use 4GB of memory for shared_buffers.\n> \n\nYes, I have it configured at 1024 checkpoint_segments, 5min timeout,0.9\ncompiostat -x 5letion_target\n> \n> It's nice to put the logs onto a separate disk because it lets you\n> measure exactly how much I/O is going to them, relative to the\n> database. It's not really necessary though; with 14 disks you'll be\nat\n> the range where you can mix them together and things should still be\n> fine.\n> \n\nThx. 
I will place them in their own RAID1 (or mirror if I end up going\nto ZFS)\n\n> \n> > On the processor front, are there advantages to going to X series\n> processors as opposed to the E series (especially since I am I/O\n> bound)? Is anyone running this type of hardware, specially on\nFreeBSD?\n> Any opinions, especially concerning the Areca controllers which they\n> use?\n> >\n> \n> It sounds like you should be saving your hardware dollars for more RAM\n> and disks, not getting faster procesors. The Areca controllers are\n> fast\n> and pretty reliable under Linux. I'm not aware of anyone using them\n> for\n> PostgreSQL in production on FreeBSD. Aberdeen may have enough\n> customers\n> doing that to give you a good opinion on how stable that is likely to\n> be; they're pretty straight as vendors go. You'd want to make sure to\n> stress test that hardware/software combo as early as possible\n> regardless, it's generally a good idea and you wouldn't be running a\n> really popular combination.\n> \n\nThx. That was my overall plan - that's why I am opting for drives,\ncache on the controller, and memory.\n\n> --\n> Greg Smith 2ndQuadrant US [email protected] Baltimore, MD\n> PostgreSQL Training, Services and Support www.2ndQuadrant.us\n> \"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Mon, 13 Dec 2010 13:45:10 -0700", "msg_from": "\"Benjamin Krajmalnik\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hardware recommendations" }, { "msg_contents": "\n\n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-performance-\n> [email protected]] On Behalf Of Benjamin Krajmalnik\n> Sent: Monday, December 13, 2010 1:45 PM\n> To: Greg Smith\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Hardware recommendations\n> \n> \n> \n> > -----Original Message-----\n> > From: Greg Smith [mailto:[email protected]]\n> > Sent: Saturday, December 11, 2010 2:18 AM\n> > To: Benjamin Krajmalnik\n> > Cc: [email protected]\n> > Subject: Re: [PERFORM] Hardware recommendations\n> >\n \n> > Have you increased checkpoint parameters like checkpoint_segments?\n> You\n> > need to avoid having checkpoints too often if you're going to try and\n> > use 4GB of memory for shared_buffers.\n> >\n> \n> Yes, I have it configured at 1024 checkpoint_segments, 5min timeout,0.9\n> compiostat -x 5letion_target\n\n\nI would consider bumping that checkpoint timeout duration to a bit longer\nand see if that helps any if you are still looking for knobs to fiddle with.\n\n\nYMMV. \n\n-Mark\n\n\n\n\n", "msg_date": "Mon, 13 Dec 2010 20:29:16 -0700", "msg_from": "\"mark\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations" } ]
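A quick way to sanity-check the checkpoint-related settings discussed in this thread on a running server is to query pg_settings; this is only a generic sketch (the setting names are the standard 8.3/8.4-era ones, and the values returned are simply whatever the server is actually configured with):

    -- show the knobs discussed above: buffer cache and checkpoint pacing
    SELECT name, setting, unit
      FROM pg_settings
     WHERE name IN ('shared_buffers', 'checkpoint_segments',
                    'checkpoint_timeout', 'checkpoint_completion_target');

If checkpoint_timeout is then raised as suggested, that is a postgresql.conf change (for example checkpoint_timeout = 15min) followed by a configuration reload.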
[ { "msg_contents": "Sent from my android device.\n\n-----Original Message-----\nFrom: Benjamin Krajmalnik <[email protected]>\nTo: [email protected]\nSent: Wed, 08 Dec 2010 17:14\nSubject: [PERFORM] Hardware recommendations\n\nReceived: from mx2.hub.org [200.46.204.254] by mail.pengdows.com with SMTP (EHLO mx2.hub.org)\n (ArGoSoft Mail Server Pro for WinNT/2000/XP, Version 1.8 (1.8.9.4)); Wed, 8 Dec 2010 23:14:07\nReceived: from postgresql.org (mail.postgresql.org [200.46.204.86])\n\tby mx2.hub.org (Postfix) with ESMTP id C1EAD3EAD610;\n\tWed, 8 Dec 2010 19:16:09 -0400 (AST)\nReceived: from maia.hub.org (maia-3.hub.org [200.46.204.243])\n\tby mail.postgresql.org (Postfix) with ESMTP id BEF461337B83\n\tfor <[email protected]>; Wed, 8 Dec 2010 19:16:02 -0400 (AST)\nReceived: from mail.postgresql.org ([200.46.204.86])\n by maia.hub.org (mx1.hub.org [200.46.204.243]) (amavisd-maia, port 10024)\n with ESMTP id 69961-09\n for <[email protected]>;\n Wed, 8 Dec 2010 23:15:55 +0000 (UTC)\nX-Greylist: delayed 00:12:11.193596 by SQLgrey-1.7.6\nReceived: from mail.illumen.com (unknown [64.207.29.137])\n\tby mail.postgresql.org (Postfix) with ESMTP id 69A021337B8C\n\tfor <[email protected]>; Wed, 8 Dec 2010 19:15:55 -0400 (AST)\nX-MimeOLE: Produced By Microsoft Exchange V6.5\nContent-class: urn:content-classes:message\nMIME-Version: 1.0\nContent-Type: text/plain;\n\tcharset=\"iso-8859-1\"\nContent-Transfer-Encoding: quoted-printable\nSubject: [PERFORM] Hardware recommendations\nDate: Wed, 8 Dec 2010 16:03:43 -0700\nMessage-ID: <[email protected]>\nIn-Reply-To: <[email protected]>\nX-MS-Has-Attach:\nX-MS-TNEF-Correlator:\nThread-Topic: Hardware recommendations\nThread-Index: AcuXJy2x5aJ1UxfPTAK6bTXXH/raOgAABuAQ\nReferences: <[email protected]>\nFrom: \"Benjamin Krajmalnik\" <[email protected]>\nTo: <[email protected]>\nX-Virus-Scanned: Maia Mailguard 1.0.1\nX-Spam-Status: No, hits.107 tagged_above0 required=5\n testsºYES_00.9, RDNS_NONE=0.793\nX-Spam-Level:\nX-Mailing-List: pgsql-performance\nList-Archive: <http://archives.postgresql.org/pgsql-performance>\nList-Help: <mailto:[email protected]?body=help>\nList-ID: <pgsql-performance.postgresql.org>\nList-Owner: <mailto:[email protected]>\nList-Post: <mailto:pgsql-per\n\nSent from my android device.\n-----Original Message-----\nFrom: Benjamin Krajmalnik <[email protected]>\nTo: [email protected]\nSent: Wed, 08 Dec 2010 17:14\nSubject: [PERFORM] Hardware recommendations\nReceived: from mx2.hub.org [200.46.204.254] by mail.pengdows.com with SMTP (EHLO mx2.hub.org)\n  (ArGoSoft Mail Server Pro for WinNT/2000/XP, Version 1.8 (1.8.9.4)); Wed, 8 Dec 2010 23:14:07\nReceived: from postgresql.org (mail.postgresql.org [200.46.204.86])\n\tby mx2.hub.org (Postfix) with ESMTP id C1EAD3EAD610;\n\tWed,  8 Dec 2010 19:16:09 -0400 (AST)\nReceived: from maia.hub.org (maia-3.hub.org [200.46.204.243])\n\tby mail.postgresql.org (Postfix) with ESMTP id BEF461337B83\n\tfor <[email protected]>; Wed,  8 Dec 2010 19:16:02 -0400 (AST)\nReceived: from mail.postgresql.org ([200.46.204.86])\n by maia.hub.org (mx1.hub.org [200.46.204.243]) (amavisd-maia, port 10024)\n with ESMTP id 69961-09\n for <[email protected]>;\n Wed,  8 Dec 2010 23:15:55 +0000 (UTC)\nX-Greylist: delayed 00:12:11.193596 by SQLgrey-1.7.6\nReceived: from mail.illumen.com (unknown [64.207.29.137])\n\tby mail.postgresql.org (Postfix) with ESMTP id 69A021337B8C\n\tfor <[email protected]>; Wed,  8 Dec 2010 19:15:55 -0400 (AST)\nX-MimeOLE: Produced By Microsoft Exchange V6.5\nContent-class: 
urn:content-classes:message\nMIME-Version: 1.0\nContent-Type: text/plain;\n\tcharset=\"iso-8859-1\"\nContent-Transfer-Encoding: quoted-printable\nSubject: [PERFORM] Hardware recommendations\nDate: Wed, 8 Dec 2010 16:03:43 -0700\nMessage-ID: <[email protected]>\nIn-Reply-To: <[email protected]>\nX-MS-Has-Attach:\nX-MS-TNEF-Correlator:\nThread-Topic: Hardware recommendations\nThread-Index: AcuXJy2x5aJ1UxfPTAK6bTXXH/raOgAABuAQ\nReferences: <[email protected]>\nFrom: \"Benjamin Krajmalnik\" <[email protected]>\nTo: <[email protected]>\nX-Virus-Scanned: Maia Mailguard 1.0.1\nX-Spam-Status: No, hits.107 tagged_above0 required=5\n testsºYES_00.9, RDNS_NONE=0.793\nX-Spam-Level:\nX-Mailing-List: pgsql-performance\nList-Archive: <http://archives.postgresql.org/pgsql-performance>\nList-Help: <mailto:[email protected]?body=help>\nList-ID: <pgsql-performance.postgresql.org>\nList-Owner: <mailto:[email protected]>\nList-Post: <mailto:pgsql-per", "msg_date": "Wed, 8 Dec 2010 17:27:12 -0600 (CST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Hardware recommendations" } ]
[ { "msg_contents": "Sent from my android device.\n\n-----Original Message-----\nFrom: Benjamin Krajmalnik <[email protected]>\nTo: [email protected]\nSent: Wed, 08 Dec 2010 17:14\nSubject: [PERFORM] Hardware recommendations\n\nReceived: from mx2.hub.org [200.46.204.254] by mail.pengdows.com with SMTP (EHLO mx2.hub.org)\n (ArGoSoft Mail Server Pro for WinNT/2000/XP, Version 1.8 (1.8.9.4)); Wed, 8 Dec 2010 23:14:07\nReceived: from postgresql.org (mail.postgresql.org [200.46.204.86])\n\tby mx2.hub.org (Postfix) with ESMTP id C1EAD3EAD610;\n\tWed, 8 Dec 2010 19:16:09 -0400 (AST)\nReceived: from maia.hub.org (maia-3.hub.org [200.46.204.243])\n\tby mail.postgresql.org (Postfix) with ESMTP id BEF461337B83\n\tfor <[email protected]>; Wed, 8 Dec 2010 19:16:02 -0400 (AST)\nReceived: from mail.postgresql.org ([200.46.204.86])\n by maia.hub.org (mx1.hub.org [200.46.204.243]) (amavisd-maia, port 10024)\n with ESMTP id 69961-09\n for <[email protected]>;\n Wed, 8 Dec 2010 23:15:55 +0000 (UTC)\nX-Greylist: delayed 00:12:11.193596 by SQLgrey-1.7.6\nReceived: from mail.illumen.com (unknown [64.207.29.137])\n\tby mail.postgresql.org (Postfix) with ESMTP id 69A021337B8C\n\tfor <[email protected]>; Wed, 8 Dec 2010 19:15:55 -0400 (AST)\nX-MimeOLE: Produced By Microsoft Exchange V6.5\nContent-class: urn:content-classes:message\nMIME-Version: 1.0\nContent-Type: text/plain;\n\tcharset=\"iso-8859-1\"\nContent-Transfer-Encoding: quoted-printable\nSubject: [PERFORM] Hardware recommendations\nDate: Wed, 8 Dec 2010 16:03:43 -0700\nMessage-ID: <[email protected]>\nIn-Reply-To: <[email protected]>\nX-MS-Has-Attach:\nX-MS-TNEF-Correlator:\nThread-Topic: Hardware recommendations\nThread-Index: AcuXJy2x5aJ1UxfPTAK6bTXXH/raOgAABuAQ\nReferences: <[email protected]>\nFrom: \"Benjamin Krajmalnik\" <[email protected]>\nTo: <[email protected]>\nX-Virus-Scanned: Maia Mailguard 1.0.1\nX-Spam-Status: No, hits.107 tagged_above0 required=5\n testsºYES_00.9, RDNS_NONE=0.793\nX-Spam-Level:\nX-Mailing-List: pgsql-performance\nList-Archive: <http://archives.postgresql.org/pgsql-performance>\nList-Help: <mailto:[email protected]?body=help>\nList-ID: <pgsql-performance.postgresql.org>\nList-Owner: <mailto:[email protected]>\nList-Post: <mailto:pgsql-per\n\nSent from my android device.\n-----Original Message-----\nFrom: Benjamin Krajmalnik <[email protected]>\nTo: [email protected]\nSent: Wed, 08 Dec 2010 17:14\nSubject: [PERFORM] Hardware recommendations\nReceived: from mx2.hub.org [200.46.204.254] by mail.pengdows.com with SMTP (EHLO mx2.hub.org)\n  (ArGoSoft Mail Server Pro for WinNT/2000/XP, Version 1.8 (1.8.9.4)); Wed, 8 Dec 2010 23:14:07\nReceived: from postgresql.org (mail.postgresql.org [200.46.204.86])\n\tby mx2.hub.org (Postfix) with ESMTP id C1EAD3EAD610;\n\tWed,  8 Dec 2010 19:16:09 -0400 (AST)\nReceived: from maia.hub.org (maia-3.hub.org [200.46.204.243])\n\tby mail.postgresql.org (Postfix) with ESMTP id BEF461337B83\n\tfor <[email protected]>; Wed,  8 Dec 2010 19:16:02 -0400 (AST)\nReceived: from mail.postgresql.org ([200.46.204.86])\n by maia.hub.org (mx1.hub.org [200.46.204.243]) (amavisd-maia, port 10024)\n with ESMTP id 69961-09\n for <[email protected]>;\n Wed,  8 Dec 2010 23:15:55 +0000 (UTC)\nX-Greylist: delayed 00:12:11.193596 by SQLgrey-1.7.6\nReceived: from mail.illumen.com (unknown [64.207.29.137])\n\tby mail.postgresql.org (Postfix) with ESMTP id 69A021337B8C\n\tfor <[email protected]>; Wed,  8 Dec 2010 19:15:55 -0400 (AST)\nX-MimeOLE: Produced By Microsoft Exchange V6.5\nContent-class: 
urn:content-classes:message\nMIME-Version: 1.0\nContent-Type: text/plain;\n\tcharset=\"iso-8859-1\"\nContent-Transfer-Encoding: quoted-printable\nSubject: [PERFORM] Hardware recommendations\nDate: Wed, 8 Dec 2010 16:03:43 -0700\nMessage-ID: <[email protected]>\nIn-Reply-To: <[email protected]>\nX-MS-Has-Attach:\nX-MS-TNEF-Correlator:\nThread-Topic: Hardware recommendations\nThread-Index: AcuXJy2x5aJ1UxfPTAK6bTXXH/raOgAABuAQ\nReferences: <[email protected]>\nFrom: \"Benjamin Krajmalnik\" <[email protected]>\nTo: <[email protected]>\nX-Virus-Scanned: Maia Mailguard 1.0.1\nX-Spam-Status: No, hits.107 tagged_above0 required=5\n testsºYES_00.9, RDNS_NONE=0.793\nX-Spam-Level:\nX-Mailing-List: pgsql-performance\nList-Archive: <http://archives.postgresql.org/pgsql-performance>\nList-Help: <mailto:[email protected]?body=help>\nList-ID: <pgsql-performance.postgresql.org>\nList-Owner: <mailto:[email protected]>\nList-Post: <mailto:pgsql-per", "msg_date": "Wed, 8 Dec 2010 17:27:12 -0600 (CST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Hardware recommendations" } ]
[ { "msg_contents": "Sent from my android device.\n\n-----Original Message-----\nFrom: Benjamin Krajmalnik <[email protected]>\nTo: [email protected]\nSent: Wed, 08 Dec 2010 17:14\nSubject: [PERFORM] Hardware recommendations\n\nReceived: from mx2.hub.org [200.46.204.254] by mail.pengdows.com with SMTP (EHLO mx2.hub.org)\n (ArGoSoft Mail Server Pro for WinNT/2000/XP, Version 1.8 (1.8.9.4)); Wed, 8 Dec 2010 23:14:07\nReceived: from postgresql.org (mail.postgresql.org [200.46.204.86])\n\tby mx2.hub.org (Postfix) with ESMTP id C1EAD3EAD610;\n\tWed, 8 Dec 2010 19:16:09 -0400 (AST)\nReceived: from maia.hub.org (maia-3.hub.org [200.46.204.243])\n\tby mail.postgresql.org (Postfix) with ESMTP id BEF461337B83\n\tfor <[email protected]>; Wed, 8 Dec 2010 19:16:02 -0400 (AST)\nReceived: from mail.postgresql.org ([200.46.204.86])\n by maia.hub.org (mx1.hub.org [200.46.204.243]) (amavisd-maia, port 10024)\n with ESMTP id 69961-09\n for <[email protected]>;\n Wed, 8 Dec 2010 23:15:55 +0000 (UTC)\nX-Greylist: delayed 00:12:11.193596 by SQLgrey-1.7.6\nReceived: from mail.illumen.com (unknown [64.207.29.137])\n\tby mail.postgresql.org (Postfix) with ESMTP id 69A021337B8C\n\tfor <[email protected]>; Wed, 8 Dec 2010 19:15:55 -0400 (AST)\nX-MimeOLE: Produced By Microsoft Exchange V6.5\nContent-class: urn:content-classes:message\nMIME-Version: 1.0\nContent-Type: text/plain;\n\tcharset=\"iso-8859-1\"\nContent-Transfer-Encoding: quoted-printable\nSubject: [PERFORM] Hardware recommendations\nDate: Wed, 8 Dec 2010 16:03:43 -0700\nMessage-ID: <[email protected]>\nIn-Reply-To: <[email protected]>\nX-MS-Has-Attach:\nX-MS-TNEF-Correlator:\nThread-Topic: Hardware recommendations\nThread-Index: AcuXJy2x5aJ1UxfPTAK6bTXXH/raOgAABuAQ\nReferences: <[email protected]>\nFrom: \"Benjamin Krajmalnik\" <[email protected]>\nTo: <[email protected]>\nX-Virus-Scanned: Maia Mailguard 1.0.1\nX-Spam-Status: No, hits.107 tagged_above0 required=5\n testsºYES_00.9, RDNS_NONE=0.793\nX-Spam-Level:\nX-Mailing-List: pgsql-performance\nList-Archive: <http://archives.postgresql.org/pgsql-performance>\nList-Help: <mailto:[email protected]?body=help>\nList-ID: <pgsql-performance.postgresql.org>\nList-Owner: <mailto:[email protected]>\nList-Post: <mailto:pgsql-per\n\nSent from my android device.\n-----Original Message-----\nFrom: Benjamin Krajmalnik <[email protected]>\nTo: [email protected]\nSent: Wed, 08 Dec 2010 17:14\nSubject: [PERFORM] Hardware recommendations\nReceived: from mx2.hub.org [200.46.204.254] by mail.pengdows.com with SMTP (EHLO mx2.hub.org)\n  (ArGoSoft Mail Server Pro for WinNT/2000/XP, Version 1.8 (1.8.9.4)); Wed, 8 Dec 2010 23:14:07\nReceived: from postgresql.org (mail.postgresql.org [200.46.204.86])\n\tby mx2.hub.org (Postfix) with ESMTP id C1EAD3EAD610;\n\tWed,  8 Dec 2010 19:16:09 -0400 (AST)\nReceived: from maia.hub.org (maia-3.hub.org [200.46.204.243])\n\tby mail.postgresql.org (Postfix) with ESMTP id BEF461337B83\n\tfor <[email protected]>; Wed,  8 Dec 2010 19:16:02 -0400 (AST)\nReceived: from mail.postgresql.org ([200.46.204.86])\n by maia.hub.org (mx1.hub.org [200.46.204.243]) (amavisd-maia, port 10024)\n with ESMTP id 69961-09\n for <[email protected]>;\n Wed,  8 Dec 2010 23:15:55 +0000 (UTC)\nX-Greylist: delayed 00:12:11.193596 by SQLgrey-1.7.6\nReceived: from mail.illumen.com (unknown [64.207.29.137])\n\tby mail.postgresql.org (Postfix) with ESMTP id 69A021337B8C\n\tfor <[email protected]>; Wed,  8 Dec 2010 19:15:55 -0400 (AST)\nX-MimeOLE: Produced By Microsoft Exchange V6.5\nContent-class: 
urn:content-classes:message\nMIME-Version: 1.0\nContent-Type: text/plain;\n\tcharset=\"iso-8859-1\"\nContent-Transfer-Encoding: quoted-printable\nSubject: [PERFORM] Hardware recommendations\nDate: Wed, 8 Dec 2010 16:03:43 -0700\nMessage-ID: <[email protected]>\nIn-Reply-To: <[email protected]>\nX-MS-Has-Attach:\nX-MS-TNEF-Correlator:\nThread-Topic: Hardware recommendations\nThread-Index: AcuXJy2x5aJ1UxfPTAK6bTXXH/raOgAABuAQ\nReferences: <[email protected]>\nFrom: \"Benjamin Krajmalnik\" <[email protected]>\nTo: <[email protected]>\nX-Virus-Scanned: Maia Mailguard 1.0.1\nX-Spam-Status: No, hits.107 tagged_above0 required=5\n testsºYES_00.9, RDNS_NONE=0.793\nX-Spam-Level:\nX-Mailing-List: pgsql-performance\nList-Archive: <http://archives.postgresql.org/pgsql-performance>\nList-Help: <mailto:[email protected]?body=help>\nList-ID: <pgsql-performance.postgresql.org>\nList-Owner: <mailto:[email protected]>\nList-Post: <mailto:pgsql-per", "msg_date": "Wed, 8 Dec 2010 17:27:12 -0600 (CST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Hardware recommendations" } ]
[ { "msg_contents": "Sent from my android device.\n\n-----Original Message-----\nFrom: Benjamin Krajmalnik <[email protected]>\nTo: [email protected]\nSent: Wed, 08 Dec 2010 17:14\nSubject: [PERFORM] Hardware recommendations\n\nReceived: from mx2.hub.org [200.46.204.254] by mail.pengdows.com with SMTP (EHLO mx2.hub.org)\n (ArGoSoft Mail Server Pro for WinNT/2000/XP, Version 1.8 (1.8.9.4)); Wed, 8 Dec 2010 23:14:07\nReceived: from postgresql.org (mail.postgresql.org [200.46.204.86])\n\tby mx2.hub.org (Postfix) with ESMTP id C1EAD3EAD610;\n\tWed, 8 Dec 2010 19:16:09 -0400 (AST)\nReceived: from maia.hub.org (maia-3.hub.org [200.46.204.243])\n\tby mail.postgresql.org (Postfix) with ESMTP id BEF461337B83\n\tfor <[email protected]>; Wed, 8 Dec 2010 19:16:02 -0400 (AST)\nReceived: from mail.postgresql.org ([200.46.204.86])\n by maia.hub.org (mx1.hub.org [200.46.204.243]) (amavisd-maia, port 10024)\n with ESMTP id 69961-09\n for <[email protected]>;\n Wed, 8 Dec 2010 23:15:55 +0000 (UTC)\nX-Greylist: delayed 00:12:11.193596 by SQLgrey-1.7.6\nReceived: from mail.illumen.com (unknown [64.207.29.137])\n\tby mail.postgresql.org (Postfix) with ESMTP id 69A021337B8C\n\tfor <[email protected]>; Wed, 8 Dec 2010 19:15:55 -0400 (AST)\nX-MimeOLE: Produced By Microsoft Exchange V6.5\nContent-class: urn:content-classes:message\nMIME-Version: 1.0\nContent-Type: text/plain;\n\tcharset=\"iso-8859-1\"\nContent-Transfer-Encoding: quoted-printable\nSubject: [PERFORM] Hardware recommendations\nDate: Wed, 8 Dec 2010 16:03:43 -0700\nMessage-ID: <[email protected]>\nIn-Reply-To: <[email protected]>\nX-MS-Has-Attach:\nX-MS-TNEF-Correlator:\nThread-Topic: Hardware recommendations\nThread-Index: AcuXJy2x5aJ1UxfPTAK6bTXXH/raOgAABuAQ\nReferences: <[email protected]>\nFrom: \"Benjamin Krajmalnik\" <[email protected]>\nTo: <[email protected]>\nX-Virus-Scanned: Maia Mailguard 1.0.1\nX-Spam-Status: No, hits.107 tagged_above0 required=5\n testsºYES_00.9, RDNS_NONE=0.793\nX-Spam-Level:\nX-Mailing-List: pgsql-performance\nList-Archive: <http://archives.postgresql.org/pgsql-performance>\nList-Help: <mailto:[email protected]?body=help>\nList-ID: <pgsql-performance.postgresql.org>\nList-Owner: <mailto:[email protected]>\nList-Post: <mailto:pgsql-per\n\nSent from my android device.\n-----Original Message-----\nFrom: Benjamin Krajmalnik <[email protected]>\nTo: [email protected]\nSent: Wed, 08 Dec 2010 17:14\nSubject: [PERFORM] Hardware recommendations\nReceived: from mx2.hub.org [200.46.204.254] by mail.pengdows.com with SMTP (EHLO mx2.hub.org)\n  (ArGoSoft Mail Server Pro for WinNT/2000/XP, Version 1.8 (1.8.9.4)); Wed, 8 Dec 2010 23:14:07\nReceived: from postgresql.org (mail.postgresql.org [200.46.204.86])\n\tby mx2.hub.org (Postfix) with ESMTP id C1EAD3EAD610;\n\tWed,  8 Dec 2010 19:16:09 -0400 (AST)\nReceived: from maia.hub.org (maia-3.hub.org [200.46.204.243])\n\tby mail.postgresql.org (Postfix) with ESMTP id BEF461337B83\n\tfor <[email protected]>; Wed,  8 Dec 2010 19:16:02 -0400 (AST)\nReceived: from mail.postgresql.org ([200.46.204.86])\n by maia.hub.org (mx1.hub.org [200.46.204.243]) (amavisd-maia, port 10024)\n with ESMTP id 69961-09\n for <[email protected]>;\n Wed,  8 Dec 2010 23:15:55 +0000 (UTC)\nX-Greylist: delayed 00:12:11.193596 by SQLgrey-1.7.6\nReceived: from mail.illumen.com (unknown [64.207.29.137])\n\tby mail.postgresql.org (Postfix) with ESMTP id 69A021337B8C\n\tfor <[email protected]>; Wed,  8 Dec 2010 19:15:55 -0400 (AST)\nX-MimeOLE: Produced By Microsoft Exchange V6.5\nContent-class: 
urn:content-classes:message\nMIME-Version: 1.0\nContent-Type: text/plain;\n\tcharset=\"iso-8859-1\"\nContent-Transfer-Encoding: quoted-printable\nSubject: [PERFORM] Hardware recommendations\nDate: Wed, 8 Dec 2010 16:03:43 -0700\nMessage-ID: <[email protected]>\nIn-Reply-To: <[email protected]>\nX-MS-Has-Attach:\nX-MS-TNEF-Correlator:\nThread-Topic: Hardware recommendations\nThread-Index: AcuXJy2x5aJ1UxfPTAK6bTXXH/raOgAABuAQ\nReferences: <[email protected]>\nFrom: \"Benjamin Krajmalnik\" <[email protected]>\nTo: <[email protected]>\nX-Virus-Scanned: Maia Mailguard 1.0.1\nX-Spam-Status: No, hits.107 tagged_above0 required=5\n testsºYES_00.9, RDNS_NONE=0.793\nX-Spam-Level:\nX-Mailing-List: pgsql-performance\nList-Archive: <http://archives.postgresql.org/pgsql-performance>\nList-Help: <mailto:[email protected]?body=help>\nList-ID: <pgsql-performance.postgresql.org>\nList-Owner: <mailto:[email protected]>\nList-Post: <mailto:pgsql-per", "msg_date": "Wed, 8 Dec 2010 17:27:12 -0600 (CST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Hardware recommendations" } ]
[ { "msg_contents": "Sent from my android device.\n\n-----Original Message-----\nFrom: Benjamin Krajmalnik <[email protected]>\nTo: [email protected]\nSent: Wed, 08 Dec 2010 17:14\nSubject: [PERFORM] Hardware recommendations\n\nReceived: from mx2.hub.org [200.46.204.254] by mail.pengdows.com with SMTP (EHLO mx2.hub.org)\n (ArGoSoft Mail Server Pro for WinNT/2000/XP, Version 1.8 (1.8.9.4)); Wed, 8 Dec 2010 23:14:07\nReceived: from postgresql.org (mail.postgresql.org [200.46.204.86])\n\tby mx2.hub.org (Postfix) with ESMTP id C1EAD3EAD610;\n\tWed, 8 Dec 2010 19:16:09 -0400 (AST)\nReceived: from maia.hub.org (maia-3.hub.org [200.46.204.243])\n\tby mail.postgresql.org (Postfix) with ESMTP id BEF461337B83\n\tfor <[email protected]>; Wed, 8 Dec 2010 19:16:02 -0400 (AST)\nReceived: from mail.postgresql.org ([200.46.204.86])\n by maia.hub.org (mx1.hub.org [200.46.204.243]) (amavisd-maia, port 10024)\n with ESMTP id 69961-09\n for <[email protected]>;\n Wed, 8 Dec 2010 23:15:55 +0000 (UTC)\nX-Greylist: delayed 00:12:11.193596 by SQLgrey-1.7.6\nReceived: from mail.illumen.com (unknown [64.207.29.137])\n\tby mail.postgresql.org (Postfix) with ESMTP id 69A021337B8C\n\tfor <[email protected]>; Wed, 8 Dec 2010 19:15:55 -0400 (AST)\nX-MimeOLE: Produced By Microsoft Exchange V6.5\nContent-class: urn:content-classes:message\nMIME-Version: 1.0\nContent-Type: text/plain;\n\tcharset=\"iso-8859-1\"\nContent-Transfer-Encoding: quoted-printable\nSubject: [PERFORM] Hardware recommendations\nDate: Wed, 8 Dec 2010 16:03:43 -0700\nMessage-ID: <[email protected]>\nIn-Reply-To: <[email protected]>\nX-MS-Has-Attach:\nX-MS-TNEF-Correlator:\nThread-Topic: Hardware recommendations\nThread-Index: AcuXJy2x5aJ1UxfPTAK6bTXXH/raOgAABuAQ\nReferences: <[email protected]>\nFrom: \"Benjamin Krajmalnik\" <[email protected]>\nTo: <[email protected]>\nX-Virus-Scanned: Maia Mailguard 1.0.1\nX-Spam-Status: No, hits.107 tagged_above0 required=5\n testsºYES_00.9, RDNS_NONE=0.793\nX-Spam-Level:\nX-Mailing-List: pgsql-performance\nList-Archive: <http://archives.postgresql.org/pgsql-performance>\nList-Help: <mailto:[email protected]?body=help>\nList-ID: <pgsql-performance.postgresql.org>\nList-Owner: <mailto:[email protected]>\nList-Post: <mailto:pgsql-per\n\nSent from my android device.\n-----Original Message-----\nFrom: Benjamin Krajmalnik <[email protected]>\nTo: [email protected]\nSent: Wed, 08 Dec 2010 17:14\nSubject: [PERFORM] Hardware recommendations\nReceived: from mx2.hub.org [200.46.204.254] by mail.pengdows.com with SMTP (EHLO mx2.hub.org)\n  (ArGoSoft Mail Server Pro for WinNT/2000/XP, Version 1.8 (1.8.9.4)); Wed, 8 Dec 2010 23:14:07\nReceived: from postgresql.org (mail.postgresql.org [200.46.204.86])\n\tby mx2.hub.org (Postfix) with ESMTP id C1EAD3EAD610;\n\tWed,  8 Dec 2010 19:16:09 -0400 (AST)\nReceived: from maia.hub.org (maia-3.hub.org [200.46.204.243])\n\tby mail.postgresql.org (Postfix) with ESMTP id BEF461337B83\n\tfor <[email protected]>; Wed,  8 Dec 2010 19:16:02 -0400 (AST)\nReceived: from mail.postgresql.org ([200.46.204.86])\n by maia.hub.org (mx1.hub.org [200.46.204.243]) (amavisd-maia, port 10024)\n with ESMTP id 69961-09\n for <[email protected]>;\n Wed,  8 Dec 2010 23:15:55 +0000 (UTC)\nX-Greylist: delayed 00:12:11.193596 by SQLgrey-1.7.6\nReceived: from mail.illumen.com (unknown [64.207.29.137])\n\tby mail.postgresql.org (Postfix) with ESMTP id 69A021337B8C\n\tfor <[email protected]>; Wed,  8 Dec 2010 19:15:55 -0400 (AST)\nX-MimeOLE: Produced By Microsoft Exchange V6.5\nContent-class: 
urn:content-classes:message\nMIME-Version: 1.0\nContent-Type: text/plain;\n\tcharset=\"iso-8859-1\"\nContent-Transfer-Encoding: quoted-printable\nSubject: [PERFORM] Hardware recommendations\nDate: Wed, 8 Dec 2010 16:03:43 -0700\nMessage-ID: <[email protected]>\nIn-Reply-To: <[email protected]>\nX-MS-Has-Attach:\nX-MS-TNEF-Correlator:\nThread-Topic: Hardware recommendations\nThread-Index: AcuXJy2x5aJ1UxfPTAK6bTXXH/raOgAABuAQ\nReferences: <[email protected]>\nFrom: \"Benjamin Krajmalnik\" <[email protected]>\nTo: <[email protected]>\nX-Virus-Scanned: Maia Mailguard 1.0.1\nX-Spam-Status: No, hits.107 tagged_above0 required=5\n testsºYES_00.9, RDNS_NONE=0.793\nX-Spam-Level:\nX-Mailing-List: pgsql-performance\nList-Archive: <http://archives.postgresql.org/pgsql-performance>\nList-Help: <mailto:[email protected]?body=help>\nList-ID: <pgsql-performance.postgresql.org>\nList-Owner: <mailto:[email protected]>\nList-Post: <mailto:pgsql-per", "msg_date": "Wed, 8 Dec 2010 17:27:12 -0600 (CST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Hardware recommendations" } ]
[ { "msg_contents": "Sent from my android device.\n\n-----Original Message-----\nFrom: Benjamin Krajmalnik <[email protected]>\nTo: [email protected]\nSent: Wed, 08 Dec 2010 17:14\nSubject: [PERFORM] Hardware recommendations\n\nReceived: from mx2.hub.org [200.46.204.254] by mail.pengdows.com with SMTP (EHLO mx2.hub.org)\n (ArGoSoft Mail Server Pro for WinNT/2000/XP, Version 1.8 (1.8.9.4)); Wed, 8 Dec 2010 23:14:07\nReceived: from postgresql.org (mail.postgresql.org [200.46.204.86])\n\tby mx2.hub.org (Postfix) with ESMTP id C1EAD3EAD610;\n\tWed, 8 Dec 2010 19:16:09 -0400 (AST)\nReceived: from maia.hub.org (maia-3.hub.org [200.46.204.243])\n\tby mail.postgresql.org (Postfix) with ESMTP id BEF461337B83\n\tfor <[email protected]>; Wed, 8 Dec 2010 19:16:02 -0400 (AST)\nReceived: from mail.postgresql.org ([200.46.204.86])\n by maia.hub.org (mx1.hub.org [200.46.204.243]) (amavisd-maia, port 10024)\n with ESMTP id 69961-09\n for <[email protected]>;\n Wed, 8 Dec 2010 23:15:55 +0000 (UTC)\nX-Greylist: delayed 00:12:11.193596 by SQLgrey-1.7.6\nReceived: from mail.illumen.com (unknown [64.207.29.137])\n\tby mail.postgresql.org (Postfix) with ESMTP id 69A021337B8C\n\tfor <[email protected]>; Wed, 8 Dec 2010 19:15:55 -0400 (AST)\nX-MimeOLE: Produced By Microsoft Exchange V6.5\nContent-class: urn:content-classes:message\nMIME-Version: 1.0\nContent-Type: text/plain;\n\tcharset=\"iso-8859-1\"\nContent-Transfer-Encoding: quoted-printable\nSubject: [PERFORM] Hardware recommendations\nDate: Wed, 8 Dec 2010 16:03:43 -0700\nMessage-ID: <[email protected]>\nIn-Reply-To: <[email protected]>\nX-MS-Has-Attach:\nX-MS-TNEF-Correlator:\nThread-Topic: Hardware recommendations\nThread-Index: AcuXJy2x5aJ1UxfPTAK6bTXXH/raOgAABuAQ\nReferences: <[email protected]>\nFrom: \"Benjamin Krajmalnik\" <[email protected]>\nTo: <[email protected]>\nX-Virus-Scanned: Maia Mailguard 1.0.1\nX-Spam-Status: No, hits.107 tagged_above0 required=5\n testsºYES_00.9, RDNS_NONE=0.793\nX-Spam-Level:\nX-Mailing-List: pgsql-performance\nList-Archive: <http://archives.postgresql.org/pgsql-performance>\nList-Help: <mailto:[email protected]?body=help>\nList-ID: <pgsql-performance.postgresql.org>\nList-Owner: <mailto:[email protected]>\nList-Post: <mailto:pgsql-per\n\nSent from my android device.\n-----Original Message-----\nFrom: Benjamin Krajmalnik <[email protected]>\nTo: [email protected]\nSent: Wed, 08 Dec 2010 17:14\nSubject: [PERFORM] Hardware recommendations\nReceived: from mx2.hub.org [200.46.204.254] by mail.pengdows.com with SMTP (EHLO mx2.hub.org)\n  (ArGoSoft Mail Server Pro for WinNT/2000/XP, Version 1.8 (1.8.9.4)); Wed, 8 Dec 2010 23:14:07\nReceived: from postgresql.org (mail.postgresql.org [200.46.204.86])\n\tby mx2.hub.org (Postfix) with ESMTP id C1EAD3EAD610;\n\tWed,  8 Dec 2010 19:16:09 -0400 (AST)\nReceived: from maia.hub.org (maia-3.hub.org [200.46.204.243])\n\tby mail.postgresql.org (Postfix) with ESMTP id BEF461337B83\n\tfor <[email protected]>; Wed,  8 Dec 2010 19:16:02 -0400 (AST)\nReceived: from mail.postgresql.org ([200.46.204.86])\n by maia.hub.org (mx1.hub.org [200.46.204.243]) (amavisd-maia, port 10024)\n with ESMTP id 69961-09\n for <[email protected]>;\n Wed,  8 Dec 2010 23:15:55 +0000 (UTC)\nX-Greylist: delayed 00:12:11.193596 by SQLgrey-1.7.6\nReceived: from mail.illumen.com (unknown [64.207.29.137])\n\tby mail.postgresql.org (Postfix) with ESMTP id 69A021337B8C\n\tfor <[email protected]>; Wed,  8 Dec 2010 19:15:55 -0400 (AST)\nX-MimeOLE: Produced By Microsoft Exchange V6.5\nContent-class: 
urn:content-classes:message\nMIME-Version: 1.0\nContent-Type: text/plain;\n\tcharset=\"iso-8859-1\"\nContent-Transfer-Encoding: quoted-printable\nSubject: [PERFORM] Hardware recommendations\nDate: Wed, 8 Dec 2010 16:03:43 -0700\nMessage-ID: <[email protected]>\nIn-Reply-To: <[email protected]>\nX-MS-Has-Attach:\nX-MS-TNEF-Correlator:\nThread-Topic: Hardware recommendations\nThread-Index: AcuXJy2x5aJ1UxfPTAK6bTXXH/raOgAABuAQ\nReferences: <[email protected]>\nFrom: \"Benjamin Krajmalnik\" <[email protected]>\nTo: <[email protected]>\nX-Virus-Scanned: Maia Mailguard 1.0.1\nX-Spam-Status: No, hits.107 tagged_above0 required=5\n testsºYES_00.9, RDNS_NONE=0.793\nX-Spam-Level:\nX-Mailing-List: pgsql-performance\nList-Archive: <http://archives.postgresql.org/pgsql-performance>\nList-Help: <mailto:[email protected]?body=help>\nList-ID: <pgsql-performance.postgresql.org>\nList-Owner: <mailto:[email protected]>\nList-Post: <mailto:pgsql-per", "msg_date": "Wed, 8 Dec 2010 17:27:12 -0600 (CST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Hardware recommendations" } ]
[ { "msg_contents": "Is there any performance penalty when I use ODBC library vs using libpq?\n\n Best Regards,\nDivakar\n\n\n\n \nIs there any performance penalty when I use ODBC library vs using libpq? Best Regards,Divakar", "msg_date": "Wed, 8 Dec 2010 20:31:30 -0800 (PST)", "msg_from": "Divakar Singh <[email protected]>", "msg_from_op": true, "msg_subject": "libpq vs ODBC" }, { "msg_contents": ",--- You/Divakar (Wed, 8 Dec 2010 20:31:30 -0800 (PST)) ----*\n| Is there any performance penalty when I use ODBC library vs using libpq?\n\nIn general, yes.\n\nIn degenerate cases when most of the work happens in the server, no.\n\nYou need to measure in the contents of your specific application.\n\n-- Alex -- [email protected] --\n", "msg_date": "Thu, 09 Dec 2010 00:01:17 -0500", "msg_from": "Alex Goncharov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq vs ODBC" }, { "msg_contents": "So it means there will be visible impact if the nature of DB interaction is DB \ninsert/select. We do that mostly in my app.\nPerformance difference would be negligible if the query is server intensive \nwhere execution time is far more than time taken by e.g. communication interface \nor transaction handling.\nAm I right?\n\n Best Regards,\nDivakar\n\n\n\n\n________________________________\nFrom: Alex Goncharov <[email protected]>\nTo: Divakar Singh <[email protected]>\nCc: [email protected]\nSent: Thu, December 9, 2010 10:31:17 AM\nSubject: Re: [PERFORM] libpq vs ODBC\n\n,--- You/Divakar (Wed, 8 Dec 2010 20:31:30 -0800 (PST)) ----*\n| Is there any performance penalty when I use ODBC library vs using libpq?\n\nIn general, yes.\n\nIn degenerate cases when most of the work happens in the server, no.\n\nYou need to measure in the contents of your specific application.\n\n-- Alex -- [email protected] --\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n \nSo it means there will be visible impact if the nature of DB interaction is DB insert/select. We do that mostly in my app.Performance difference would be negligible if the query is server intensive where execution time is far more than time taken by e.g. communication interface or transaction handling.Am I right? Best Regards,DivakarFrom: Alex Goncharov <[email protected]>To: Divakar Singh <[email protected]>Cc: [email protected]: Thu, December 9, 2010 10:31:17 AMSubject: Re: [PERFORM] libpq vs ODBC,--- You/Divakar (Wed, 8 Dec 2010 20:31:30 -0800 (PST)) ----*| Is there any performance penalty when I use ODBC library vs using libpq?In general, yes.In degenerate cases when most of the work happens in the server, no.You need to measure in the contents of your specific application.-- Alex -- [email protected] ---- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 8 Dec 2010 21:17:22 -0800 (PST)", "msg_from": "Divakar Singh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: libpq vs ODBC" }, { "msg_contents": ",--- You/Divakar (Wed, 8 Dec 2010 21:17:22 -0800 (PST)) ----*\n| So it means there will be visible impact if the nature of DB interaction is DB \n| insert/select. We do that mostly in my app.\n\nYou can't say a \"visible impact\" unless you can measure it in your\nspecific application.\n\nLet's say ODBC takes 10 times of .001 sec for libpq. 
Is this a\n\"visible impact\"?\n\n| Performance difference would be negligible if the query is server intensive \n| where execution time is far more than time taken by e.g. communication interface \n| or transaction handling.\n| Am I right?\n\nYou've got to measure -- there are too many variables to give you the\nanswer you are trying to get.\n\nTo a different question, \"Would I use ODBC to work with PostgreSQL if\nI had the option of using libpq?\", I'd certainly answer, \"No\".\n\nYou'd need to have the option of using libpq, though. ODBC takes care\nof a lot of difficult details for you, and libpq's higher performance\nmay turn out to be a loss for you, in your specific situation.\n\n-- Alex -- [email protected] --\n\n", "msg_date": "Thu, 09 Dec 2010 00:51:26 -0500", "msg_from": "Alex Goncharov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq vs ODBC" }, { "msg_contents": "hmm\nIf I understand it correctly you argument is valid from performance point of \nview.\nBut in practical scenarios, it would make more sense to do ODBC if the \ndifference is only 5% or so, because it opens up so many choices of databases \nfor me.\nDo we have some published data in this area.\n\n\n Best Regards,\nDivakar\n\n\n\n\n________________________________\nFrom: Alex Goncharov <[email protected]>\nTo: Divakar Singh <[email protected]>\nCc: [email protected]; [email protected]\nSent: Thu, December 9, 2010 11:21:26 AM\nSubject: Re: [PERFORM] libpq vs ODBC\n\n,--- You/Divakar (Wed, 8 Dec 2010 21:17:22 -0800 (PST)) ----*\n| So it means there will be visible impact if the nature of DB interaction is DB \n\n| insert/select. We do that mostly in my app.\n\nYou can't say a \"visible impact\" unless you can measure it in your\nspecific application.\n\nLet's say ODBC takes 10 times of .001 sec for libpq. Is this a\n\"visible impact\"?\n\n| Performance difference would be negligible if the query is server intensive \n| where execution time is far more than time taken by e.g. communication \ninterface \n\n| or transaction handling.\n| Am I right?\n\nYou've got to measure -- there are too many variables to give you the\nanswer you are trying to get.\n\nTo a different question, \"Would I use ODBC to work with PostgreSQL if\nI had the option of using libpq?\", I'd certainly answer, \"No\".\n\nYou'd need to have the option of using libpq, though. ODBC takes care\nof a lot of difficult details for you, and libpq's higher performance\nmay turn out to be a loss for you, in your specific situation.\n\n-- Alex -- [email protected] --\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n \nhmmIf I understand it correctly you argument is valid from performance point of view.But in practical scenarios, it would make more sense to do ODBC if the difference is only 5% or so, because it opens up so many choices of databases for me.Do we have some published data in this area. Best Regards,DivakarFrom: Alex Goncharov <[email protected]>To: Divakar Singh <[email protected]>Cc:\n [email protected]; [email protected]: Thu, December 9, 2010 11:21:26 AMSubject: Re: [PERFORM] libpq vs ODBC,--- You/Divakar (Wed, 8 Dec 2010 21:17:22 -0800 (PST)) ----*| So it means there will be visible impact if the nature of DB interaction is DB | insert/select. 
We do that mostly in my app.You can't say a \"visible impact\" unless you can measure it in yourspecific application.Let's say ODBC takes 10 times of .001 sec for libpq.  Is this a\"visible impact\"?| Performance difference would be negligible if the query is server intensive | where execution time is far more than time taken by e.g. communication interface | or transaction handling.| Am I right?You've got to measure -- there are too many variables to give you theanswer you are trying\n to get.To a different question, \"Would I use ODBC to work with PostgreSQL ifI had the option of using libpq?\", I'd certainly answer, \"No\".You'd need to have the option of using libpq, though.  ODBC takes careof a lot of difficult details for you, and libpq's higher performancemay turn out to be a loss for you, in your specific situation.-- Alex -- [email protected] ---- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 8 Dec 2010 22:39:36 -0800 (PST)", "msg_from": "Divakar Singh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: libpq vs ODBC" }, { "msg_contents": "Hello\n\n2010/12/9 Divakar Singh <[email protected]>:\n> hmm\n> If I understand it correctly you argument is valid from performance point of\n> view.\n> But in practical scenarios, it would make more sense to do ODBC if the\n> difference is only 5% or so, because it opens up so many choices of\n> databases for me.\n> Do we have some published data in this area.\n>\n\nIt's depend on your environment - VB or VBA has not native drivers, so\nyou have to use a ODBC. The overhead from ODBC or ADO or ADO.NET for\nalmost task unsignificant. So people use it. The performance problems\ncan be detected in some special tasks - and then is necessary to use a\nstored procedures.\n\nRegards\n\nPavel Stehule\n\n>\n> Best Regards,\n> Divakar\n>\n> ________________________________\n> From: Alex Goncharov <[email protected]>\n> To: Divakar Singh <[email protected]>\n> Cc: [email protected]; [email protected]\n> Sent: Thu, December 9, 2010 11:21:26 AM\n> Subject: Re: [PERFORM] libpq vs ODBC\n>\n> ,--- You/Divakar (Wed, 8 Dec 2010 21:17:22 -0800 (PST)) ----*\n> | So it means there will be visible impact if the nature of DB interaction\n> is DB\n> | insert/select. We do that mostly in my app.\n>\n> You can't say a \"visible impact\" unless you can measure it in your\n> specific application.\n>\n> Let's say ODBC takes 10 times of .001 sec for libpq.  Is this a\n> \"visible impact\"?\n>\n> | Performance difference would be negligible if the query is server\n> intensive\n> | where execution time is far more than time taken by e.g. communication\n> interface\n> | or transaction handling.\n> | Am I right?\n>\n> You've got to measure -- there are too many variables to give you the\n> answer you are trying to get.\n>\n> To a different question, \"Would I use ODBC to work with PostgreSQL if\n> I had the option of using libpq?\", I'd certainly answer, \"No\".\n>\n> You'd need to have the option of using libpq, though.  
ODBC takes care\n> of a lot of difficult details for you, and libpq's higher performance\n> may turn out to be a loss for you, in your specific situation.\n>\n> -- Alex -- [email protected] --\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n", "msg_date": "Thu, 9 Dec 2010 07:57:04 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq vs ODBC" }, { "msg_contents": "On Thu, 09 Dec 2010 06:51:26 +0100, Alex Goncharov\n<[email protected]> wrote:\n\n> ,--- You/Divakar (Wed, 8 Dec 2010 21:17:22 -0800 (PST)) ----*\n> | So it means there will be visible impact if the nature of DB \n> interaction is DB\n> | insert/select. We do that mostly in my app.\n>\n> You can't say a \"visible impact\" unless you can measure it in your\n> specific application.\n>\n> Let's say ODBC takes 10 times of .001 sec for libpq. Is this a\n> \"visible impact\"?\n\nWell you have to consider server and client resources separately. If you \nwaste a bit of CPU time on the client by using a suboptimal driver, that \nmay be a problem, or not. It you waste server resources, that is much more \nlikely to be a problem, because it is multiplied by the number of clients. \nI don't know about the specifics of ODBC performance, but for instance \nphp's PDO driver's handling of prepared statements with postgres comes up \nas an example of what not to do.\n", "msg_date": "Fri, 10 Dec 2010 03:32:24 +0100", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq vs ODBC" } ]
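Since the thread above boils down to "measure the driver overhead in your own application", one low-effort way to separate statement parse/plan cost from per-call driver cost, whichever interface is used, is to time a server-side prepared statement executed many times against the same statement sent as one-off SQL. This is a minimal, self-contained sketch; the temp table and the prepared-statement name are made up purely for illustration:

    -- throwaway table so the sketch runs anywhere
    CREATE TEMP TABLE t (id integer, payload text);

    -- parse/plan once on the server
    PREPARE ins (integer, text) AS
        INSERT INTO t (id, payload) VALUES ($1, $2);

    -- execute repeatedly with only parameter values sent
    EXECUTE ins(1, 'row one');
    EXECUTE ins(2, 'row two');

    DEALLOCATE ins;

Running a few thousand EXECUTEs from psql with \timing gives a baseline; repeating the same loop through ODBC and through libpq then shows how much of the elapsed time comes from the driver rather than the server.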
[ { "msg_contents": "Hi,\n\nI have a performance trouble with UNION query\n\n\nFirst I have this view :\n\n SELECT a,b,c FROM table1, table2 WHERE jointure AND condition\n\n Result : 150 000 records in 1~2 s\n\n\n\nThen, I adding an UNION into the same view :\n\n SELECT a,b,c FROM table1, table2 WHERE jointure AND condition\n UNION\n SELECT a,b,c FROM table3\n\n Result : 150 200 records in 6~7 s\n\n\nWhy, do I have bad performance only for 200 adding records ?\n\nThanks\n\n*SGBD : Postgres 8.3 et 8.4*\n\n\n\n\n\n\nHi,\n\n\nI have a performance trouble with UNION query\n\n\nFirst I have this view :\n\n    SELECT a,b,c FROM table1, table2 WHERE jointure AND condition\n\n    Result : 150\n000 records\nin 1~2 s\n\n\n\nThen,\nI adding an UNION into the same view :\n\n   \nSELECT a,b,c FROM table1, table2 WHERE jointure AND condition\n    UNION\n    SELECT a,b,c FROM table3\n\n    Result : 150\n200\nrecords in 6~7 s\n\n\nWhy, do I have bad performance only for 200 adding records ?\n\nThanks\n\nSGBD\n: Postgres 8.3 et 8.4", "msg_date": "Thu, 09 Dec 2010 11:52:14 +0100", "msg_from": "Olivier Pala <[email protected]>", "msg_from_op": true, "msg_subject": "UNION and bad performance" }, { "msg_contents": "Hello,\n\n \n\nUNION will remove all duplicates, so that the result additionally\nrequires to be sorted.\n\nAnyway, for performance issues, you should always start investigation\nwith explain analyze .\n\nregards,\n\n \n\nMarc Mamin\n\n \n\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Olivier\nPala\nSent: Donnerstag, 9. Dezember 2010 11:52\nTo: [email protected]\nCc: Olivier Pala\nSubject: [PERFORM] UNION and bad performance\n\n \n\nHi, \n\nI have a performance trouble with UNION query\n\n\nFirst I have this view :\n\n SELECT a,b,c FROM table1, table2 WHERE jointure AND condition\n\n Result : 150 000 records in 1~2 s\n\n\n\nThen, I adding an UNION into the same view :\n\n SELECT a,b,c FROM table1, table2 WHERE jointure AND condition\n UNION\n SELECT a,b,c FROM table3\n\n Result : 150 200 records in 6~7 s\n\n\nWhy, do I have bad performance only for 200 adding records ?\n\nThanks\n\nSGBD : Postgres 8.3 et 8.4 \n\n\n\n\n\n\n\n\n\n\n\nHello,\n \nUNION will remove all duplicates, so that the result\nadditionally requires to be sorted.\nAnyway,  for performance issues, you should always start\ninvestigation with explain analyze .\nregards,\n \nMarc Mamin\n \n\n\nFrom:\[email protected]\n[mailto:[email protected]] On Behalf Of Olivier\nPala\nSent: Donnerstag, 9. 
Dezember 2010 11:52\nTo: [email protected]\nCc: Olivier Pala\nSubject: [PERFORM] UNION and bad performance\n\n\n \nHi, \n\nI have a performance trouble with UNION query\n\n\nFirst I have this view :\n\n    SELECT a,b,c FROM table1,\ntable2 WHERE jointure AND condition\n\n    Result : 150 000 records in 1~2\ns\n\n\n\nThen, I adding an UNION into the same view :\n\n    SELECT a,b,c FROM table1,\ntable2 WHERE jointure AND condition\n    UNION\n    SELECT a,b,c FROM table3\n\n    Result : 150 200 records in 6~7\ns\n\n\nWhy, do I have bad performance only for 200 adding\nrecords ?\n\nThanks\n\nSGBD : Postgres 8.3 et 8.4", "msg_date": "Sat, 11 Dec 2010 11:27:59 +0100", "msg_from": "\"Marc Mamin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UNION and bad performance" }, { "msg_contents": "Marc Mamin <[email protected]> wrote:\n\n> Hello,\n> \n> \n> \n> UNION will remove all duplicates, so that the result additionally requires to\n> be sorted.\n\nRight, to avoid the SORT and UNIQUE - operation you can use UNION ALL\n\n\n> \n> Anyway, for performance issues, you should always start investigation with\n> explain analyze .\n\nACK.\n\n Arguments to support bottom-posting...\n\n A: Because we read from top to bottom, left to right.\n Q: Why should I start my reply below the quoted text?\n\n A: Because it messes up the order in which people normally read text.\n Q: Why is top-posting such a bad thing?\n\n A: The lost context.\n Q: What makes top-posted replies harder to read than bottom-posted?\n\n A: Yes.\n Q: Should I trim down the quoted part of an email to which I'm replying?\n\n\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. 
N 51.05082�, E 13.56889�\n", "msg_date": "Sat, 11 Dec 2010 14:45:47 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UNION and bad performance" }, { "msg_contents": "> UNION will remove all duplicates, so that the result additionally requires to\n> be sorted.\n\n>Right, to avoid the SORT and UNIQUE - operation you can use UNION ALL\n\n\nby the way maybe apply hashing to calculate UNION be better ?\n\n\n------------\npasman\n", "msg_date": "Sun, 12 Dec 2010 14:12:59 +0100", "msg_from": "=?ISO-8859-2?Q?pasman_pasma=F1ski?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UNION and bad performance" }, { "msg_contents": "2010/12/12 pasman pasmański <[email protected]>:\n>> UNION will remove all duplicates, so that the result additionally requires to\n>> be sorted.\n>\n>>Right, to avoid the SORT and UNIQUE - operation you can use UNION ALL\n>\n>\n> by the way maybe apply hashing to calculate UNION be better ?\n\nThe planner already considers such plans.\n\nrhaas=# explain select a from generate_series(1,100) a union select a\nfrom generate_series(1,100) a;\n QUERY PLAN\n--------------------------------------------------------------------------------------\n HashAggregate (cost=45.00..65.00 rows=2000 width=4)\n -> Append (cost=0.00..40.00 rows=2000 width=4)\n -> Function Scan on generate_series a (cost=0.00..10.00\nrows=1000 width=4)\n -> Function Scan on generate_series a (cost=0.00..10.00\nrows=1000 width=4)\n(4 rows)\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Mon, 20 Dec 2010 13:57:09 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UNION and bad performance" }, { "msg_contents": ">> rhaas=# explain select a from generate_series(1,100) a union select a\n>> from generate_series(1,100) a;\n>> QUERY PLAN\n>> --------------------------------------------------------------------------------------\n>> HashAggregate (cost=45.00..65.00 rows=2000 width=4)\n>> -> Append (cost=0.00..40.00 rows=2000 width=4)\n\n\nWhy in this case the estimated number of rows is 2000? Is it standard\nplanner behavior?\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/UNION-and-bad-performance-tp3301375p5806445.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 8 Jun 2014 06:58:55 -0700 (PDT)", "msg_from": "pinker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UNION and bad performance" }, { "msg_contents": "pinker wrote\n>>> rhaas=# explain select a from generate_series(1,100) a union select a\n>>> from generate_series(1,100) a;\n>>> QUERY PLAN\n>>> --------------------------------------------------------------------------------------\n>>> HashAggregate (cost=45.00..65.00 rows=2000 width=4)\n>>> -> Append (cost=0.00..40.00 rows=2000 width=4)\n> \n> \n> Why in this case the estimated number of rows is 2000? Is it standard\n> planner behavior?\n\nhttp://www.postgresql.org/docs/9.1/static/sql-createfunction.html\n\nNote the \"ROWS\" property.\n\nFunctions are black-boxes to the planner so it has no means of estimating a\nrow count. 
So a set returning function uses 1,000 and all others use 1.\n\nDetermining \"COST\" is similarly problematic.\n\nDavid J.\n\n\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/UNION-and-bad-performance-tp3301375p5806450.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 8 Jun 2014 08:53:27 -0700 (PDT)", "msg_from": "David G Johnston <[email protected]>", "msg_from_op": false, "msg_subject": "Re: UNION and bad performance" } ]
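The UNION ALL suggestion above is easy to see in a self-contained way with generate_series standing in for the original tables (row counts chosen to mirror the 150,000 + 200 case from the first post; this is a sketch, not the poster's actual schema):

    -- UNION: duplicates are removed, so an aggregate/sort step is added
    EXPLAIN ANALYZE
    SELECT a FROM generate_series(1, 150000) a
    UNION
    SELECT a FROM generate_series(1, 200) a;

    -- UNION ALL: rows are simply appended
    EXPLAIN ANALYZE
    SELECT a FROM generate_series(1, 150000) a
    UNION ALL
    SELECT a FROM generate_series(1, 200) a;

The UNION plan carries a HashAggregate (or Sort plus Unique) over all ~150,200 rows to eliminate duplicates, which the UNION ALL plan omits entirely; that duplicate-elimination step is where the thread places the extra few seconds.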
[ { "msg_contents": "Hi all,\n\nI notice that when restoring a DB on a laptop with an SDD, typically postgres is maxing out a CPU - even during a COPY. I wonder, what is postgres usually doing with the CPU? I would have thought the disk would usually be the bottleneck in the DB, but occasionally it's not. We're embarking on a new DB server project and it'd be helpful to understand where the CPU is likely to be the bottleneck.\n\nCheers,\n\n--Royce\n\n", "msg_date": "Mon, 13 Dec 2010 13:43:10 +1100", "msg_from": "Royce Ausburn <[email protected]>", "msg_from_op": true, "msg_subject": "CPU bound" }, { "msg_contents": "On 12/13/2010 10:43 AM, Royce Ausburn wrote:\n> Hi all,\n>\n> I notice that when restoring a DB on a laptop with an SDD, typically postgres is maxing out a CPU - even during a COPY. I wonder, what is postgres usually doing with the CPU? I would have thought the disk would usually be the bottleneck in the DB, but occasionally it's not. We're embarking on a new DB server project and it'd be helpful to understand where the CPU is likely to be the bottleneck.\n\nA few thoughts:\n\n- Pg isn't capable of using more than one core for a single task, so if \nyou have one really big job, you'll more easily start struggling on CPU. \nRestores appear to be a pain point here, though recent work has been \ndone to address that.\n\n- Even with pg_dump/pg_restore's parallel restore, you can't be using \nmore than one core to do work for a single COPY or other individual \noperation. You can only parallelize down to the table level at the moment.\n\n- Pg's design has always focused on rotating media. It can make sense to \ntrade increased CPU costs for reduced I/O when disk storage is slower \nrelative to CPU/RAM. There aren't, AFAIK, many controls beyond the \nrandom/seq io knobs to get Pg to try to save CPU at the cost of more I/O \nwhen opportunities to do so appear.\n\n- Pg's CPU load depends a lot on the data types and table structures in \nuse. What're your tables like? Do they have indexes added at the end, or \nare they created with indexes then populated with rows? The former is \nMUCH faster. Are they full of NUMERIC fields? Those seem to be \nincredibly slow compared to int/float/etc, which is hardly surprising \ngiven their storage and how they work.\n\n--\nCraig Ringer\n", "msg_date": "Mon, 13 Dec 2010 19:10:37 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU bound" }, { "msg_contents": "On 12/12/10 6:43 PM, Royce Ausburn wrote:\n> Hi all,\n> \n> I notice that when restoring a DB on a laptop with an SDD, typically postgres is maxing out a CPU - even during a COPY. I wonder, what is postgres usually doing with the CPU? I would have thought the disk would usually be the bottleneck in the DB, but occasionally it's not. We're embarking on a new DB server project and it'd be helpful to understand where the CPU is likely to be the bottleneck.\n\nThat's pretty normal; as soon as you get decent disk, especially\nsomething like an SSD with a RAM cache, you become CPU-bound. COPY does\na LOT of parsing and data manipulation. 
Index building, of course, is\nalmost pure CPU if you have a decent amount of RAM available.\n\nIf you're restoring from a pg_dump file, and have several cores\navailable, I suggest using parallel pg_restore.\n\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Mon, 13 Dec 2010 10:59:26 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU bound" }, { "msg_contents": "Thanks guys - interesting. \n\n\nOn 14/12/2010, at 5:59 AM, Josh Berkus wrote:\n\n> On 12/12/10 6:43 PM, Royce Ausburn wrote:\n>> Hi all,\n>> \n>> I notice that when restoring a DB on a laptop with an SDD, typically postgres is maxing out a CPU - even during a COPY. I wonder, what is postgres usually doing with the CPU? I would have thought the disk would usually be the bottleneck in the DB, but occasionally it's not. We're embarking on a new DB server project and it'd be helpful to understand where the CPU is likely to be the bottleneck.\n> \n> That's pretty normal; as soon as you get decent disk, especially\n> something like an SSD with a RAM cache, you become CPU-bound. COPY does\n> a LOT of parsing and data manipulation. Index building, of course, is\n> almost pure CPU if you have a decent amount of RAM available.\n> \n> If you're restoring from a pg_dump file, and have several cores\n> available, I suggest using parallel pg_restore.\n> \n> \n> -- \n> -- Josh Berkus\n> PostgreSQL Experts Inc.\n> http://www.pgexperts.com\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Tue, 14 Dec 2010 08:03:54 +1100", "msg_from": "Royce Ausburn <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CPU bound" }, { "msg_contents": ">>>>> \"RA\" == Royce Ausburn <[email protected]> writes:\n\nRA> I notice that when restoring a DB on a laptop with an SDD,\nRA> typically postgres is maxing out a CPU - even during a COPY.\n\nThe time the CPUs spend waiting on system RAM shows up as CPU\ntime, not as Wait time. It could be just that the SSD is fast\nenough that the RAM is now the bottleneck, although parsing\nand text<=>binary conversions (especially for integers, reals\nand anything stored as an integer) also can be CPU-intensive.\n\n-JimC\n-- \nJames Cloos <[email protected]> OpenPGP: 1024D/ED7DAEA6\n", "msg_date": "Sun, 19 Dec 2010 19:57:48 -0500", "msg_from": "James Cloos <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU bound" }, { "msg_contents": "On 12/19/2010 7:57 PM, James Cloos wrote:\n>>>>>> \"RA\" == Royce Ausburn<[email protected]> writes:\n> RA> I notice that when restoring a DB on a laptop with an SDD,\n> RA> typically postgres is maxing out a CPU - even during a COPY.\n>\n> The time the CPUs spend waiting on system RAM shows up as CPU\n> time, not as Wait time. It could be just that the SSD is fast\n> enough that the RAM is now the bottleneck, although parsing\n> and text<=>binary conversions (especially for integers, reals\n> and anything stored as an integer) also can be CPU-intensive.\n>\n> -JimC\n\nGood time accounting is the most compelling reason for having a wait \nevent interface, like Oracle. Without the wait event interface, one \ncannot really tell where the time is spent, at least not without \nprofiling the database code, which is not an option for a production \ndatabase.\n\n-- \nMladen Gogala\nSr. 
Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com\n\n", "msg_date": "Mon, 20 Dec 2010 01:47:30 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU bound" }, { "msg_contents": ">>>>> \"MG\" == Mladen Gogala <[email protected]> writes:\n\nMG> Good time accounting is the most compelling reason for having a wait\nMG> event interface, like Oracle. Without the wait event interface, one\nMG> cannot really tell where the time is spent, at least not without\nMG> profiling the database code, which is not an option for a production\nMG> database.\n\nAnd how exactly, given that the kernel does not know whether the CPU is\nactive or waiting on ram, could an application do so?\n\n-JimC\n-- \nJames Cloos <[email protected]> OpenPGP: 1024D/ED7DAEA6\n", "msg_date": "Mon, 20 Dec 2010 10:33:26 -0500", "msg_from": "James Cloos <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU bound" }, { "msg_contents": "On Mon, Dec 20, 2010 at 10:33:26AM -0500, James Cloos wrote:\n> >>>>> \"MG\" == Mladen Gogala <[email protected]> writes:\n> \n> MG> Good time accounting is the most compelling reason for having a wait\n> MG> event interface, like Oracle. Without the wait event interface, one\n> MG> cannot really tell where the time is spent, at least not without\n> MG> profiling the database code, which is not an option for a production\n> MG> database.\n> \n> And how exactly, given that the kernel does not know whether the CPU is\n> active or waiting on ram, could an application do so?\n> \n\nExactly. I have only seen this data from hardware emulators. It would\nbe nice to have... :)\n\nKen\n", "msg_date": "Mon, 20 Dec 2010 09:48:47 -0600", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU bound" }, { "msg_contents": "On 2010-12-20 15:48, Kenneth Marshall wrote:\n>> And how exactly, given that the kernel does not know whether the CPU is\n>> active or waiting on ram, could an application do so?\n>>\n>\n> Exactly. I have only seen this data from hardware emulators. It would\n> be nice to have... :)\n\nThere's no reason that the cpu hardware couldn't gather such, and\nIMHO it's be dead useful, at least at the outermost cache level\n(preferably separately at each level). But people have trouble\nunderstanding vmstat already....\n\nNote that dtrace *can* get to the cpu performance counters,\njust that the kernel doesn't routinely account for all that info\nper-process as routine. I'd expect IBM to have equivalent\nfacilities.\n\n-- \nJeremy\n", "msg_date": "Mon, 20 Dec 2010 17:59:42 +0000", "msg_from": "Jeremy Harris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU bound" }, { "msg_contents": "On 12/20/2010 10:33 AM, James Cloos wrote:\n>\n> And how exactly, given that the kernel does not know whether the CPU is\n> active or waiting on ram, could an application do so?\n>\n> -JimC\nThat particular aspect will remain hidden, it's a domain of the hardware\narchitecture. Nevertheless, there are things like waiting on I/O or\nwaiting on lock, that would be extremely useful.\n\n\n-- \nMladen Gogala\nSr. 
Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com\n\n", "msg_date": "Tue, 21 Dec 2010 06:34:27 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU bound" }, { "msg_contents": "On Dec 20, 2010, at 12:47 AM, Mladen Gogala wrote:\n> Good time accounting is the most compelling reason for having a wait event interface, like Oracle. Without the wait event interface, one cannot really tell where the time is spent, at least not without profiling the database code, which is not an option for a production database.\n\nOut of curiosity, have you tried using the information that Postgres exposes to dtrace? I suspect it comes close to what you can get directly out of Oracle...\n--\nJim C. Nasby, Database Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n\n\n", "msg_date": "Sun, 2 Jan 2011 15:57:01 -0600", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU bound" }, { "msg_contents": "Jim Nasby wrote:\n> On Dec 20, 2010, at 12:47 AM, Mladen Gogala wrote:\n> \n>> Good time accounting is the most compelling reason for having a wait event interface, like Oracle. Without the wait event interface, one cannot really tell where the time is spent, at least not without profiling the database code, which is not an option for a production database.\n>> \n>\n> Out of curiosity, have you tried using the information that Postgres exposes to dtrace? I suspect it comes close to what you can get directly out of Oracle...\n> --\n> Jim C. Nasby, Database Architect [email protected]\n> 512.569.9461 (cell) http://jim.nasby.net\n>\n>\n>\n> \nNo, I haven't but I looked it in the documentation. I surmise, however, \nthat I will have to build my software with \"--enable-dtrace\", which is \nnot enabled by default. This certainly looks promising.\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n", "msg_date": "Mon, 03 Jan 2011 11:17:18 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU bound" }, { "msg_contents": "Has anyone had a chance to recompile and try larger a larger blocksize than 8192 with pSQL 8.4.x? I'm finally getting around to tuning some FusionIO drives that we are setting up. We are looking to setup 4 fusionIO drives per server, and then use pgpooler to scale them to 3 servers so that we can scale up to 72 processors. I'm almost done with the configuration to start heavily testing but wanted to know if anyone has really messed with the blocksize options?\r\n\r\nIf anyone has done load balancing with pgpooler I'd love to hear their experience with it as well. 
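For anyone wanting to confirm what their own server was built with before experimenting, the compiled-in block size can be read straight from SQL; a small sketch, nothing here is specific to FusionIO:

SHOW block_size;                        -- 8192 unless the server was compiled with a different size
SELECT current_setting('block_size');   -- same value, usable inside queries or scripts

Changing it means rebuilding PostgreSQL (the --with-blocksize configure option, as far as I recall) and re-running initdb, so it is worth checking the current value before planning any tests around it.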
I have attached the randwrite performance test below, as you can see going from 8K -> 32K -> 1M blocksize the drives really start to move.\r\n\r\nThanks,\r\n- John\r\n\r\n[v025554@athenaash05 /]$ fio --filename=/fusionIO/export1/test1 --size=25G --bs=8k --direct=1 --rw=randwrite --numjobs=8 --runtime=30 --group_reporting --name=file1\r\nfile1: (g=0): rw=randwrite, bs=8K-8K/8K-8K, ioengine=sync, iodepth=1\r\n...\r\nfile1: (g=0): rw=randwrite, bs=8K-8K/8K-8K, ioengine=sync, iodepth=1\r\nStarting 8 processes\r\nJobs: 8 (f=8): [wwwwwwww] [100.0% done] [0K/138M /s] [0/17K iops] [eta 00m:00s]\r\nfile1: (groupid=0, jobs=8): err= 0: pid=23287\r\n write: io=3,819MB, bw=127MB/s, iops=16,292, runt= 30001msec\r\n clat (usec): min=41, max=1,835K, avg=268.42, stdev=3627.29\r\n bw (KB/s) : min= 4, max=142304, per=15.13%, avg=19714.31, stdev=8364.40\r\n cpu : usr=0.16%, sys=3.13%, ctx=1123544, majf=0, minf=176\r\n IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%\r\n submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%\r\n complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%\r\n issued r/w: total=0/488779, short=0/0\r\n lat (usec): 50=14.81%, 100=58.17%, 250=0.28%, 500=22.33%, 750=3.67%\r\n lat (usec): 1000=0.16%\r\n lat (msec): 2=0.35%, 4=0.09%, 10=0.11%, 20=0.01%, 50=0.01%\r\n lat (msec): 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%\r\n lat (msec): 2000=0.01%\r\n\r\nRun status group 0 (all jobs):\r\n WRITE: io=3,819MB, aggrb=127MB/s, minb=130MB/s, maxb=130MB/s, mint=30001msec, maxt=30001msec\r\n\r\nDisk stats (read/write):\r\n md0: ios=0/514993, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%\r\n fiod: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=nan%\r\n fioc: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=nan%\r\n fiob: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=nan%\r\n fioa: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=nan%\r\n[v025554@athenaash05 /]$ fio --filename=/fusionIO/export1/test1 --size=25G --bs=32k --direct=1 --rw=randwrite --numjobs=8 --runtime=30 --group_reporting --name=file1\r\nfile1: (g=0): rw=randwrite, bs=32K-32K/32K-32K, ioengine=sync, iodepth=1\r\n...\r\nfile1: (g=0): rw=randwrite, bs=32K-32K/32K-32K, ioengine=sync, iodepth=1\r\nStarting 8 processes\r\nJobs: 8 (f=8): [wwwwwwww] [100.0% done] [0K/343M /s] [0/11K iops] [eta 00m:00s]\r\nfile1: (groupid=0, jobs=8): err= 0: pid=23835\r\n write: io=9,833MB, bw=328MB/s, iops=10,487, runt= 30002msec\r\n clat (usec): min=64, max=227K, avg=349.31, stdev=1517.64\r\n bw (KB/s) : min= 883, max=171712, per=16.25%, avg=54548.49, stdev=13973.76\r\n cpu : usr=0.18%, sys=2.82%, ctx=669138, majf=0, minf=176\r\n IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%\r\n submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%\r\n complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%\r\n issued r/w: total=0/314659, short=0/0\r\n lat (usec): 100=84.14%, 250=8.73%, 500=0.07%, 750=2.45%, 1000=3.19%\r\n lat (msec): 2=0.29%, 4=0.17%, 10=0.22%, 20=0.23%, 50=0.31%\r\n lat (msec): 100=0.13%, 250=0.05%\r\n\r\nRun status group 0 (all jobs):\r\n WRITE: io=9,833MB, aggrb=328MB/s, minb=336MB/s, maxb=336MB/s, mint=30002msec, maxt=30002msec\r\n\r\nDisk stats (read/write):\r\n md0: ios=0/455522, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%\r\n fiod: ios=0/0, merge=0/0, ticks=0/0, 
in_queue=0, util=nan%\r\n fioc: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=nan%\r\n fiob: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=nan%\r\n fioa: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=nan%\r\n[v025554@athenaash05 /]$ fio --filename=/fusionIO/export1/test1 --size=25G --bs=1M --direct=1 --rw=randwrite --numjobs=8 --runtime=30 --group_reporting --name=file1\r\nfile1: (g=0): rw=randwrite, bs=1M-1M/1M-1M, ioengine=sync, iodepth=1\r\n...\r\nfile1: (g=0): rw=randwrite, bs=1M-1M/1M-1M, ioengine=sync, iodepth=1\r\nStarting 8 processes\r\nJobs: 8 (f=8): [wwwwwwww] [100.0% done] [0K/912M /s] [0/890 iops] [eta 00m:00s]\r\nfile1: (groupid=0, jobs=8): err= 0: pid=24877\r\n write: io=25,860MB, bw=862MB/s, iops=861, runt= 30004msec\r\n clat (usec): min=456, max=83,766, avg=5599.02, stdev=2026.74\r\n bw (KB/s) : min=28603, max=216966, per=11.93%, avg=105311.22, stdev=10668.06\r\n cpu : usr=0.06%, sys=2.03%, ctx=91888, majf=0, minf=176\r\n IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%\r\n submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%\r\n complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%\r\n issued r/w: total=0/25860, short=0/0\r\n lat (usec): 500=12.74%, 750=20.60%, 1000=7.37%\r\n lat (msec): 2=3.12%, 4=6.57%, 10=26.95%, 20=21.37%, 50=1.12%\r\n lat (msec): 100=0.14%\r\n\r\nRun status group 0 (all jobs):\r\n WRITE: io=25,860MB, aggrb=862MB/s, minb=883MB/s, maxb=883MB/s, mint=30004msec, maxt=30004msec\r\n\r\nDisk stats (read/write):\r\n md0: ios=0/500382, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%\r\n fiod: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=nan%\r\n fioc: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=nan%\r\n fiob: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=nan%\r\n fioa: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=nan%\r\n\r\nThis communication is for informational purposes only. It is not\nintended as an offer or solicitation for the purchase or sale of\nany financial instrument or as an official confirmation of any\ntransaction. All market prices, data and other information are not\nwarranted as to completeness or accuracy and are subject to change\nwithout notice. Any comments or statements made herein do not\nnecessarily reflect those of JPMorgan Chase & Co., its subsidiaries\nand affiliates.\r\n\r\nThis transmission may contain information that is privileged,\nconfidential, legally privileged, and/or exempt from disclosure\nunder applicable law. If you are not the intended recipient, you\nare hereby notified that any disclosure, copying, distribution, or\nuse of the information contained herein (including any reliance\nthereon) is STRICTLY PROHIBITED. Although this transmission and any\nattachments are believed to be free of any virus or other defect\nthat might affect any computer system into which it is received and\nopened, it is the responsibility of the recipient to ensure that it\nis virus free and no responsibility is accepted by JPMorgan Chase &\nCo., its subsidiaries and affiliates, as applicable, for any loss\nor damage arising in any way from its use. If you received this\ntransmission in error, please immediately contact the sender and\ndestroy the material in its entirety, whether in electronic or hard\ncopy format. 
Thank you.\r\n\r\nPlease refer to http://www.jpmorgan.com/pages/disclosures for\ndisclosures relating to European legal entities.\n", "msg_date": "Mon, 3 Jan 2011 19:33:18 -0500", "msg_from": "\"Strange, John W\" <[email protected]>", "msg_from_op": false, "msg_subject": "Question: BlockSize > 8192 with FusionIO" }, { "msg_contents": "Strange, John W wrote:\n> Has anyone had a chance to recompile and try larger a larger blocksize than 8192 with pSQL 8.4.x?\n\nWhile I haven't done the actual experiment you're asking about, the \nproblem working against you here is how WAL data is used to protect \nagainst partial database writes. See the documentation for \nfull_page_writes at \nhttp://www.postgresql.org/docs/current/static/runtime-config-wal.html \nBecause full size copies of the blocks have to get written there, \nattempts to chunk writes into larger pieces end up requiring a \ncorrespondingly larger volume of writes to protect against partial \nwrites to those pages. You might get a nice efficiency gain on the read \nside, but the situation when under a heavy write load (the main thing \nyou have to be careful about with these SSDs) is much less clear.\n\nI wouldn't draw any conclusions whatsoever from what fio says about \nthis; it's pretty useless IMHO for simulating anything like a real \ndatabase workload. I don't even use that utility anymore, as I found it \njust wasted my time compared with moving straight onto something that \ntries to act to like a database application simulation. You might try \nrunning pgbench with the database scale set to large enough that the \nresulting database is large relative to total system RAM instead.\n\nP.S. Make sure you put the FusionIO drives under a heavy write load and \npower down the server hard, so you can see what happens if there's a \nreal-world crash. Recovery time to remount the drives in that situation \nis the main drawback of their design. It does the right thing to \nprotect your data as far as I know, but the recovery can be quite time \nintensive.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Mon, 03 Jan 2011 21:13:27 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question: BlockSize > 8192 with FusionIO" }, { "msg_contents": "On Mon, Jan 3, 2011 at 9:13 PM, Greg Smith <[email protected]> wrote:\n> Strange, John W wrote:\n>>\n>> Has anyone had a chance to recompile and try larger a larger blocksize\n>> than 8192 with pSQL 8.4.x?\n>\n> While I haven't done the actual experiment you're asking about, the problem\n> working against you here is how WAL data is used to protect against partial\n> database writes.  See the documentation for full_page_writes at\n> http://www.postgresql.org/docs/current/static/runtime-config-wal.html\n>  Because full size copies of the blocks have to get written there, attempts\n> to chunk writes into larger pieces end up requiring a correspondingly larger\n> volume of writes to protect against partial writes to those pages.  You\n> might get a nice efficiency gain on the read side, but the situation when\n> under a heavy write load (the main thing you have to be careful about with\n> these SSDs) is much less clear.\n\nmost flash drives, especially mlc flash, use huge blocks anyways on\nphysical level. 
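Picking up Greg's pgbench suggestion a little further up, two quick checks from SQL help keep such a test honest: whether full_page_writes is actually on, and whether the generated database really is larger than RAM. A small sketch, run against whichever database pgbench initialised:

SHOW full_page_writes;
SELECT pg_size_pretty(pg_database_size(current_database()));   -- compare this against physical RAM

As a rough rule of thumb each pgbench scale unit adds on the order of 15 MB of data, so the scale factor needed to exceed memory can be estimated up front.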
the numbers claimed here\n(http://www.fusionio.com/products/iodrive/) (141k write iops) are\nsimply not believable without write buffering. i didn't see any note\nof how fault tolerance is maintained through the buffer (anyone\nknow?).\n\nassuming they do buffer, i would expect a smaller blocksize would be\nbetter/easier on the ssd, since this would mean less gross writing,\nhigher maximum throughput, and less wear and tear on the flash; the\nadvantages of the larger blocksize are very hardware driven and\nalready managed by the controller.\n\nif they don't buffer (again, I'm very skeptical this is the case), a\nlarger block size, possibly even a much larger block size (like 256k)\nwould be an interesting test. i'm pretty skeptical about the fusion\ni/o product generally, because i don't think the sata interface is a\nbottleneck save for read caching, and the o/s is already buffering\nreads. the storage medium is still the bottleneck for the most part\n(even if it's much faster at certain things). note fusion is still\nuseful for some things, but the nice is narrower than it looks on the\nsurface.\n\nmerlin\n", "msg_date": "Tue, 4 Jan 2011 11:48:59 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question: BlockSize > 8192 with FusionIO" }, { "msg_contents": "\nOn Jan 4, 2011, at 8:48 AM, Merlin Moncure wrote:\n\n> \n> most flash drives, especially mlc flash, use huge blocks anyways on\n> physical level. the numbers claimed here\n> (http://www.fusionio.com/products/iodrive/) (141k write iops) are\n> simply not believable without write buffering. i didn't see any note\n> of how fault tolerance is maintained through the buffer (anyone\n> know?).\n\nFusionIO buffers. They have capacitors onboard to protect against crashing and power failure. They passed our crash attempts to corrupt writes to them before we put them into production, for whatever that's worth, but they do take a long time to come back online after an unclean shutdown.", "msg_date": "Tue, 4 Jan 2011 10:36:53 -0800", "msg_from": "Ben Chobot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question: BlockSize > 8192 with FusionIO" }, { "msg_contents": "This has gotten a lot better with the 2.x drivers as well.\r\n\r\nI'm completely aware of the FusionIO and it's advantages/disadvantages.. I'm currently getting the following pgbench results but still only hitting the array for about 800MB/sec, short of the 3GB/sec that it's capable of. This is simply a trash DB for us to store results in for short periods of time. If something bad was to happen we can regenerate the results. So performance with limited risk is what we are looking to achieve. 
\r\n\r\nasgprod@ASH01_riskresults $ pgbench -v -j 4 -t 200000 -c 16 -h localhost -p 4410 pgbench_10000\r\nstarting vacuum...end.\r\nstarting vacuum pgbench_accounts...end.\r\ntransaction type: TPC-B (sort of)\r\nscaling factor: 10000\r\nquery mode: simple\r\nnumber of clients: 16\r\nnumber of threads: 4\r\nnumber of transactions per client: 200000\r\nnumber of transactions actually processed: 3200000/3200000\r\ntps = 16783.841042 (including connections establishing)\r\ntps = 16785.592722 (excluding connections establishing)\r\n\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Ben Chobot\r\nSent: Tuesday, January 04, 2011 12:37 PM\r\nTo: Merlin Moncure\r\nCc: [email protected] Performance\r\nSubject: Re: [PERFORM] Question: BlockSize > 8192 with FusionIO\r\n\r\n\r\nOn Jan 4, 2011, at 8:48 AM, Merlin Moncure wrote:\r\n\r\n> \r\n> most flash drives, especially mlc flash, use huge blocks anyways on\r\n> physical level. the numbers claimed here\r\n> (http://www.fusionio.com/products/iodrive/) (141k write iops) are\r\n> simply not believable without write buffering. i didn't see any note\r\n> of how fault tolerance is maintained through the buffer (anyone\r\n> know?).\r\n\r\nFusionIO buffers. They have capacitors onboard to protect against crashing and power failure. They passed our crash attempts to corrupt writes to them before we put them into production, for whatever that's worth, but they do take a long time to come back online after an unclean shutdown.\r\n-- \r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\nThis communication is for informational purposes only. It is not\nintended as an offer or solicitation for the purchase or sale of\nany financial instrument or as an official confirmation of any\ntransaction. All market prices, data and other information are not\nwarranted as to completeness or accuracy and are subject to change\nwithout notice. Any comments or statements made herein do not\nnecessarily reflect those of JPMorgan Chase & Co., its subsidiaries\nand affiliates.\r\n\r\nThis transmission may contain information that is privileged,\nconfidential, legally privileged, and/or exempt from disclosure\nunder applicable law. If you are not the intended recipient, you\nare hereby notified that any disclosure, copying, distribution, or\nuse of the information contained herein (including any reliance\nthereon) is STRICTLY PROHIBITED. Although this transmission and any\nattachments are believed to be free of any virus or other defect\nthat might affect any computer system into which it is received and\nopened, it is the responsibility of the recipient to ensure that it\nis virus free and no responsibility is accepted by JPMorgan Chase &\nCo., its subsidiaries and affiliates, as applicable, for any loss\nor damage arising in any way from its use. If you received this\ntransmission in error, please immediately contact the sender and\ndestroy the material in its entirety, whether in electronic or hard\ncopy format. 
Thank you.\r\n\r\nPlease refer to http://www.jpmorgan.com/pages/disclosures for\ndisclosures relating to European legal entities.\n", "msg_date": "Tue, 4 Jan 2011 14:01:28 -0500", "msg_from": "\"Strange, John W\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question: BlockSize > 8192 with FusionIO" }, { "msg_contents": "Test,\r\n\r\nSorry trying to fix why my email is getting formatted to bits when posting to the list.\r\n\r\n- John\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Strange, John W\r\nSent: Tuesday, January 04, 2011 1:01 PM\r\nTo: Ben Chobot; Merlin Moncure\r\nCc: [email protected] Performance\r\nSubject: Re: [PERFORM] Question: BlockSize > 8192 with FusionIO\r\n\r\nThis has gotten a lot better with the 2.x drivers as well.\r\n\r\n\r\n\r\nI'm completely aware of the FusionIO and it's advantages/disadvantages.. I'm currently getting the following pgbench results but still only hitting the array for about 800MB/sec, short of the 3GB/sec that it's capable of. This is simply a trash DB for us to store results in for short periods of time. If something bad was to happen we can regenerate the results. So performance with limited risk is what we are looking to achieve. \r\n\r\n\r\n\r\nasgprod@ASH01_riskresults $ pgbench -v -j 4 -t 200000 -c 16 -h localhost -p 4410 pgbench_10000\r\n\r\nstarting vacuum...end.\r\n\r\nstarting vacuum pgbench_accounts...end.\r\n\r\ntransaction type: TPC-B (sort of)\r\n\r\nscaling factor: 10000\r\n\r\nquery mode: simple\r\n\r\nnumber of clients: 16\r\n\r\nnumber of threads: 4\r\n\r\nnumber of transactions per client: 200000\r\n\r\nnumber of transactions actually processed: 3200000/3200000\r\n\r\ntps = 16783.841042 (including connections establishing)\r\n\r\ntps = 16785.592722 (excluding connections establishing)\r\n\r\n\r\n\r\n\r\n\r\n-----Original Message-----\r\n\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Ben Chobot\r\n\r\nSent: Tuesday, January 04, 2011 12:37 PM\r\n\r\nTo: Merlin Moncure\r\n\r\nCc: [email protected] Performance\r\n\r\nSubject: Re: [PERFORM] Question: BlockSize > 8192 with FusionIO\r\n\r\n\r\n\r\n\r\n\r\nOn Jan 4, 2011, at 8:48 AM, Merlin Moncure wrote:\r\n\r\n\r\n\r\n> \r\n\r\n> most flash drives, especially mlc flash, use huge blocks anyways on\r\n\r\n> physical level. the numbers claimed here\r\n\r\n> (http://www.fusionio.com/products/iodrive/) (141k write iops) are\r\n\r\n> simply not believable without write buffering. i didn't see any note\r\n\r\n> of how fault tolerance is maintained through the buffer (anyone\r\n\r\n> know?).\r\n\r\n\r\n\r\nFusionIO buffers. They have capacitors onboard to protect against crashing and power failure. They passed our crash attempts to corrupt writes to them before we put them into production, for whatever that's worth, but they do take a long time to come back online after an unclean shutdown.\r\n\r\n-- \r\n\r\nSent via pgsql-performance mailing list ([email protected])\r\n\r\nTo make changes to your subscription:\r\n\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\n\r\nThis communication is for informational purposes only. It is not\r\nintended as an offer or solicitation for the purchase or sale of\r\nany financial instrument or as an official confirmation of any\r\ntransaction. All market prices, data and other information are not\r\nwarranted as to completeness or accuracy and are subject to change\r\nwithout notice. 
Any comments or statements made herein do not\r\nnecessarily reflect those of JPMorgan Chase & Co., its subsidiaries\r\nand affiliates.\r\n\r\n\r\n\r\nThis transmission may contain information that is privileged,\r\nconfidential, legally privileged, and/or exempt from disclosure\r\nunder applicable law. If you are not the intended recipient, you\r\nare hereby notified that any disclosure, copying, distribution, or\r\nuse of the information contained herein (including any reliance\r\nthereon) is STRICTLY PROHIBITED. Although this transmission and any\r\nattachments are believed to be free of any virus or other defect\r\nthat might affect any computer system into which it is received and\r\nopened, it is the responsibility of the recipient to ensure that it\r\nis virus free and no responsibility is accepted by JPMorgan Chase &\r\nCo., its subsidiaries and affiliates, as applicable, for any loss\r\nor damage arising in any way from its use. If you received this\r\ntransmission in error, please immediately contact the sender and\r\ndestroy the material in its entirety, whether in electronic or hard\r\ncopy format. Thank you.\r\n\r\n\r\n\r\nPlease refer to http://www.jpmorgan.com/pages/disclosures for\r\ndisclosures relating to European legal entities.\r\n\r\n-- \r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\nThis communication is for informational purposes only. It is not\nintended as an offer or solicitation for the purchase or sale of\nany financial instrument or as an official confirmation of any\ntransaction. All market prices, data and other information are not\nwarranted as to completeness or accuracy and are subject to change\nwithout notice. Any comments or statements made herein do not\nnecessarily reflect those of JPMorgan Chase & Co., its subsidiaries\nand affiliates.\r\n\r\nThis transmission may contain information that is privileged,\nconfidential, legally privileged, and/or exempt from disclosure\nunder applicable law. If you are not the intended recipient, you\nare hereby notified that any disclosure, copying, distribution, or\nuse of the information contained herein (including any reliance\nthereon) is STRICTLY PROHIBITED. Although this transmission and any\nattachments are believed to be free of any virus or other defect\nthat might affect any computer system into which it is received and\nopened, it is the responsibility of the recipient to ensure that it\nis virus free and no responsibility is accepted by JPMorgan Chase &\nCo., its subsidiaries and affiliates, as applicable, for any loss\nor damage arising in any way from its use. If you received this\ntransmission in error, please immediately contact the sender and\ndestroy the material in its entirety, whether in electronic or hard\ncopy format. 
Thank you.\r\n\r\nPlease refer to http://www.jpmorgan.com/pages/disclosures for\ndisclosures relating to European legal entities.\n", "msg_date": "Tue, 4 Jan 2011 15:31:21 -0500", "msg_from": "\"Strange, John W\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question: BlockSize > 8192 with FusionIO" }, { "msg_contents": "\nOn Jan 4, 2011, at 8:48 AM, Merlin Moncure wrote:\n\n> On Mon, Jan 3, 2011 at 9:13 PM, Greg Smith <[email protected]> wrote:\n>> Strange, John W wrote:\n>>> \n>>> Has anyone had a chance to recompile and try larger a larger blocksize\n>>> than 8192 with pSQL 8.4.x?\n>> \n>> While I haven't done the actual experiment you're asking about, the problem\n>> working against you here is how WAL data is used to protect against partial\n>> database writes. See the documentation for full_page_writes at\n>> http://www.postgresql.org/docs/current/static/runtime-config-wal.html\n>> Because full size copies of the blocks have to get written there, attempts\n>> to chunk writes into larger pieces end up requiring a correspondingly larger\n>> volume of writes to protect against partial writes to those pages. You\n>> might get a nice efficiency gain on the read side, but the situation when\n>> under a heavy write load (the main thing you have to be careful about with\n>> these SSDs) is much less clear.\n> \n> most flash drives, especially mlc flash, use huge blocks anyways on\n> physical level. the numbers claimed here\n> (http://www.fusionio.com/products/iodrive/) (141k write iops) are\n> simply not believable without write buffering. i didn't see any note\n> of how fault tolerance is maintained through the buffer (anyone\n> know?).\n\n\nFlash may have very large erase blocks -- 4k to 16M, but you can write to it at much smaller block sizes sequentially.\n\nIt has to delete a block in bulk, but it can write to an erased block bit by bit, sequentially (512 or 4096 bytes typically, but some is 8k and 16k).\n\nOlder MLC NAND flash could be written to at a couple bytes at a time -- but drives today incorporate too much EEC and use larger chunks to do that. The minimum write size now is caused by the EEC requirements and not the physical NAND flash requirements. \n\nSo, buffering isn't that big of a requirement with the current LBA > Physical translations which change all writes -- random or not -- to sequential writes in one erase block.\n But performance if waiting for the write to complete will not be all that good, especially with MLC. Turn off the buffer on an Intel SLC drive for example, and write IOPS is cut by 1/3 or more -- to 'only' 1000 or so iops.", "msg_date": "Tue, 4 Jan 2011 22:41:20 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question: BlockSize > 8192 with FusionIO" } ]
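Since John describes this as a disposable results store that can be regenerated, a narrower way to trade durability for speed, without weakening crash safety for every other database in the cluster, is to relax synchronous_commit only where it is wanted. A sketch, using the database name from his pgbench output:

ALTER DATABASE pgbench_10000 SET synchronous_commit = off;   -- applies to new sessions on that database
-- or only for a single bulk-loading session:
SET synchronous_commit = off;

A recently committed transaction can then be lost after a crash, but the database itself stays consistent, which fits the "performance with limited risk" goal better than disabling fsync outright.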
[ { "msg_contents": "How can you tell when your indexes are starting to get bloated and when you need to rebuild them. I haven't seen a quick way to tell and not sure if it's being tracked.\r\n\r\n_______________________________________________________________________________________________\r\n| John W. Strange | Investment Bank | Global Commodities Technology \r\n| J.P. Morgan | 700 Louisiana, 11th Floor | T: 713-236-4122 | C: 281-744-6476 | F: 713 236-3333\r\n| [email protected] | jpmorgan.com\r\n\r\nThis communication is for informational purposes only. It is not\nintended as an offer or solicitation for the purchase or sale of\nany financial instrument or as an official confirmation of any\ntransaction. All market prices, data and other information are not\nwarranted as to completeness or accuracy and are subject to change\nwithout notice. Any comments or statements made herein do not\nnecessarily reflect those of JPMorgan Chase & Co., its subsidiaries\nand affiliates.\r\n\r\nThis transmission may contain information that is privileged,\nconfidential, legally privileged, and/or exempt from disclosure\nunder applicable law. If you are not the intended recipient, you\nare hereby notified that any disclosure, copying, distribution, or\nuse of the information contained herein (including any reliance\nthereon) is STRICTLY PROHIBITED. Although this transmission and any\nattachments are believed to be free of any virus or other defect\nthat might affect any computer system into which it is received and\nopened, it is the responsibility of the recipient to ensure that it\nis virus free and no responsibility is accepted by JPMorgan Chase &\nCo., its subsidiaries and affiliates, as applicable, for any loss\nor damage arising in any way from its use. If you received this\ntransmission in error, please immediately contact the sender and\ndestroy the material in its entirety, whether in electronic or hard\ncopy format. Thank you.\r\n\r\nPlease refer to http://www.jpmorgan.com/pages/disclosures for\ndisclosures relating to European legal entities.\n", "msg_date": "Tue, 14 Dec 2010 09:47:49 -0500", "msg_from": "John W Strange <[email protected]>", "msg_from_op": true, "msg_subject": "Index Bloat - how to tell?" }, { "msg_contents": "I have used this in the past ... 
run this against the database that you want to inspect.\n\n\nSELECT\n current_database(), schemaname, tablename, /*reltuples::bigint, relpages::bigint, otta,*/\n ROUND(CASE WHEN otta=0 THEN 0.0 ELSE sml.relpages/otta::numeric END,1) AS tbloat,\n CASE WHEN relpages < otta THEN 0 ELSE bs*(sml.relpages-otta)::bigint END AS wastedbytes,\n iname, /*ituples::bigint, ipages::bigint, iotta,*/\n ROUND(CASE WHEN iotta=0 OR ipages=0 THEN 0.0 ELSE ipages/iotta::numeric END,1) AS ibloat,\n CASE WHEN ipages < iotta THEN 0 ELSE bs*(ipages-iotta) END AS wastedibytes\nFROM (\n SELECT\n schemaname, tablename, cc.reltuples, cc.relpages, bs,\n CEIL((cc.reltuples*((datahdr+ma-\n (CASE WHEN datahdr%ma=0 THEN ma ELSE datahdr%ma END))+nullhdr2+4))/(bs-20::float)) AS otta,\n COALESCE(c2.relname,'?') AS iname, COALESCE(c2.reltuples,0) AS ituples, COALESCE(c2.relpages,0) AS ipages,\n COALESCE(CEIL((c2.reltuples*(datahdr-12))/(bs-20::float)),0) AS iotta -- very rough approximation, assumes all cols\n FROM (\n SELECT\n ma,bs,schemaname,tablename,\n (datawidth+(hdr+ma-(case when hdr%ma=0 THEN ma ELSE hdr%ma END)))::numeric AS datahdr,\n (maxfracsum*(nullhdr+ma-(case when nullhdr%ma=0 THEN ma ELSE nullhdr%ma END))) AS nullhdr2\n FROM (\n SELECT\n schemaname, tablename, hdr, ma, bs,\n SUM((1-null_frac)*avg_width) AS datawidth,\n MAX(null_frac) AS maxfracsum,\n hdr+(\n SELECT 1+count(*)/8\n FROM pg_stats s2\n WHERE null_frac<>0 AND s2.schemaname = s.schemaname AND s2.tablename = s.tablename\n ) AS nullhdr\n FROM pg_stats s, (\n SELECT\n (SELECT current_setting('block_size')::numeric) AS bs,\n CASE WHEN substring(v,12,3) IN ('8.0','8.1','8.2') THEN 27 ELSE 23 END AS hdr,\n CASE WHEN v ~ 'mingw32' THEN 8 ELSE 4 END AS ma\n FROM (SELECT version() AS v) AS foo\n ) AS constants\n GROUP BY 1,2,3,4,5\n ) AS foo\n ) AS rs\n JOIN pg_class cc ON cc.relname = rs.tablename\n JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND nn.nspname = rs.schemaname AND nn.nspname <> 'information_schema'\n LEFT JOIN pg_index i ON indrelid = cc.oid\n LEFT JOIN pg_class c2 ON c2.oid = i.indexrelid\n) AS sml\nORDER BY wastedbytes DESC\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of John W Strange\nSent: Tuesday, December 14, 2010 8:48 AM\nTo: [email protected]\nSubject: [PERFORM] Index Bloat - how to tell?\n\nHow can you tell when your indexes are starting to get bloated and when you need to rebuild them. I haven't seen a quick way to tell and not sure if it's being tracked.\n\n\n\n_______________________________________________________________________________________________\n\n| John W. Strange | Investment Bank | Global Commodities Technology \n\n| J.P. Morgan | 700 Louisiana, 11th Floor | T: 713-236-4122 | C: 281-744-6476 | F: 713 236-3333\n\n| [email protected] | jpmorgan.com\n\n\n\nThis communication is for informational purposes only. It is not\nintended as an offer or solicitation for the purchase or sale of\nany financial instrument or as an official confirmation of any\ntransaction. All market prices, data and other information are not\nwarranted as to completeness or accuracy and are subject to change\nwithout notice. Any comments or statements made herein do not\nnecessarily reflect those of JPMorgan Chase & Co., its subsidiaries\nand affiliates.\n\n\n\nThis transmission may contain information that is privileged,\nconfidential, legally privileged, and/or exempt from disclosure\nunder applicable law. 
If you are not the intended recipient, you\nare hereby notified that any disclosure, copying, distribution, or\nuse of the information contained herein (including any reliance\nthereon) is STRICTLY PROHIBITED. Although this transmission and any\nattachments are believed to be free of any virus or other defect\nthat might affect any computer system into which it is received and\nopened, it is the responsibility of the recipient to ensure that it\nis virus free and no responsibility is accepted by JPMorgan Chase &\nCo., its subsidiaries and affiliates, as applicable, for any loss\nor damage arising in any way from its use. If you received this\ntransmission in error, please immediately contact the sender and\ndestroy the material in its entirety, whether in electronic or hard\ncopy format. Thank you.\n\n\n\nPlease refer to http://www.jpmorgan.com/pages/disclosures for\ndisclosures relating to European legal entities.\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 14 Dec 2010 08:54:26 -0600", "msg_from": "\"Plugge, Joe R.\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index Bloat - how to tell?" }, { "msg_contents": "Can you explain this query a bit? It isn't at all clear to me.\n\n\nPlugge, Joe R. wrote:\n> I have used this in the past ... run this against the database that you want to inspect.\n>\n>\n> SELECT\n> current_database(), schemaname, tablename, /*reltuples::bigint, relpages::bigint, otta,*/\n> ROUND(CASE WHEN otta=0 THEN 0.0 ELSE sml.relpages/otta::numeric END,1) AS tbloat,\n> CASE WHEN relpages < otta THEN 0 ELSE bs*(sml.relpages-otta)::bigint END AS wastedbytes,\n> iname, /*ituples::bigint, ipages::bigint, iotta,*/\n> ROUND(CASE WHEN iotta=0 OR ipages=0 THEN 0.0 ELSE ipages/iotta::numeric END,1) AS ibloat,\n> CASE WHEN ipages < iotta THEN 0 ELSE bs*(ipages-iotta) END AS wastedibytes\n> FROM (\n> SELECT\n> schemaname, tablename, cc.reltuples, cc.relpages, bs,\n> CEIL((cc.reltuples*((datahdr+ma-\n> (CASE WHEN datahdr%ma=0 THEN ma ELSE datahdr%ma END))+nullhdr2+4))/(bs-20::float)) AS otta,\n> COALESCE(c2.relname,'?') AS iname, COALESCE(c2.reltuples,0) AS ituples, COALESCE(c2.relpages,0) AS ipages,\n> COALESCE(CEIL((c2.reltuples*(datahdr-12))/(bs-20::float)),0) AS iotta -- very rough approximation, assumes all cols\n> FROM (\n> SELECT\n> ma,bs,schemaname,tablename,\n> (datawidth+(hdr+ma-(case when hdr%ma=0 THEN ma ELSE hdr%ma END)))::numeric AS datahdr,\n> (maxfracsum*(nullhdr+ma-(case when nullhdr%ma=0 THEN ma ELSE nullhdr%ma END))) AS nullhdr2\n> FROM (\n> SELECT\n> schemaname, tablename, hdr, ma, bs,\n> SUM((1-null_frac)*avg_width) AS datawidth,\n> MAX(null_frac) AS maxfracsum,\n> hdr+(\n> SELECT 1+count(*)/8\n> FROM pg_stats s2\n> WHERE null_frac<>0 AND s2.schemaname = s.schemaname AND s2.tablename = s.tablename\n> ) AS nullhdr\n> FROM pg_stats s, (\n> SELECT\n> (SELECT current_setting('block_size')::numeric) AS bs,\n> CASE WHEN substring(v,12,3) IN ('8.0','8.1','8.2') THEN 27 ELSE 23 END AS hdr,\n> CASE WHEN v ~ 'mingw32' THEN 8 ELSE 4 END AS ma\n> FROM (SELECT version() AS v) AS foo\n> ) AS constants\n> GROUP BY 1,2,3,4,5\n> ) AS foo\n> ) AS rs\n> JOIN pg_class cc ON cc.relname = rs.tablename\n> JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND nn.nspname = rs.schemaname AND nn.nspname <> 'information_schema'\n> LEFT JOIN pg_index i ON indrelid = cc.oid\n> LEFT JOIN pg_class c2 ON c2.oid = i.indexrelid\n> ) AS sml\n> 
ORDER BY wastedbytes DESC\n>\n> -----Original Message-----\n> From: [email protected] [mailto:[email protected]] On Behalf Of John W Strange\n> Sent: Tuesday, December 14, 2010 8:48 AM\n> To: [email protected]\n> Subject: [PERFORM] Index Bloat - how to tell?\n>\n> How can you tell when your indexes are starting to get bloated and when you need to rebuild them. I haven't seen a quick way to tell and not sure if it's being tracked.\n>\n>\n>\n> _______________________________________________________________________________________________\n>\n> | John W. Strange | Investment Bank | Global Commodities Technology \n>\n> | J.P. Morgan | 700 Louisiana, 11th Floor | T: 713-236-4122 | C: 281-744-6476 | F: 713 236-3333\n>\n> | [email protected] | jpmorgan.com\n>\n>\n>\n> This communication is for informational purposes only. It is not\n> intended as an offer or solicitation for the purchase or sale of\n> any financial instrument or as an official confirmation of any\n> transaction. All market prices, data and other information are not\n> warranted as to completeness or accuracy and are subject to change\n> without notice. Any comments or statements made herein do not\n> necessarily reflect those of JPMorgan Chase & Co., its subsidiaries\n> and affiliates.\n>\n>\n>\n> This transmission may contain information that is privileged,\n> confidential, legally privileged, and/or exempt from disclosure\n> under applicable law. If you are not the intended recipient, you\n> are hereby notified that any disclosure, copying, distribution, or\n> use of the information contained herein (including any reliance\n> thereon) is STRICTLY PROHIBITED. Although this transmission and any\n> attachments are believed to be free of any virus or other defect\n> that might affect any computer system into which it is received and\n> opened, it is the responsibility of the recipient to ensure that it\n> is virus free and no responsibility is accepted by JPMorgan Chase &\n> Co., its subsidiaries and affiliates, as applicable, for any loss\n> or damage arising in any way from its use. If you received this\n> transmission in error, please immediately contact the sender and\n> destroy the material in its entirety, whether in electronic or hard\n> copy format. Thank you.\n>\n>\n>\n> Please refer to http://www.jpmorgan.com/pages/disclosures for\n> disclosures relating to European legal entities.\n>\n> \n\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n", "msg_date": "Tue, 14 Dec 2010 11:21:53 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index Bloat - how to tell?" }, { "msg_contents": "There is a plugin called pgstattuple which can be quite informative ....\nhowever, it actually does a full scan of the table / index files, which may\nbe a bit invasive depending on your environment and load.\n\nhttp://www.postgresql.org/docs/current/static/pgstattuple.html\n\nIt's in the contrib (at least for 8.4), and so you have to import its\nfunctions into your schema using the script in the contrib directory.\n\nCheers\nDave\n\nOn Tue, Dec 14, 2010 at 8:54 AM, Plugge, Joe R. <[email protected]> wrote:\n\n> I have used this in the past ... 
run this against the database that you\n> want to inspect.\n>\n>\n> SELECT\n> current_database(), schemaname, tablename, /*reltuples::bigint,\n> relpages::bigint, otta,*/\n> ROUND(CASE WHEN otta=0 THEN 0.0 ELSE sml.relpages/otta::numeric END,1) AS\n> tbloat,\n> CASE WHEN relpages < otta THEN 0 ELSE bs*(sml.relpages-otta)::bigint END\n> AS wastedbytes,\n> iname, /*ituples::bigint, ipages::bigint, iotta,*/\n> ROUND(CASE WHEN iotta=0 OR ipages=0 THEN 0.0 ELSE ipages/iotta::numeric\n> END,1) AS ibloat,\n> CASE WHEN ipages < iotta THEN 0 ELSE bs*(ipages-iotta) END AS wastedibytes\n> FROM (\n> SELECT\n> schemaname, tablename, cc.reltuples, cc.relpages, bs,\n> CEIL((cc.reltuples*((datahdr+ma-\n> (CASE WHEN datahdr%ma=0 THEN ma ELSE datahdr%ma\n> END))+nullhdr2+4))/(bs-20::float)) AS otta,\n> COALESCE(c2.relname,'?') AS iname, COALESCE(c2.reltuples,0) AS ituples,\n> COALESCE(c2.relpages,0) AS ipages,\n> COALESCE(CEIL((c2.reltuples*(datahdr-12))/(bs-20::float)),0) AS iotta --\n> very rough approximation, assumes all cols\n> FROM (\n> SELECT\n> ma,bs,schemaname,tablename,\n> (datawidth+(hdr+ma-(case when hdr%ma=0 THEN ma ELSE hdr%ma\n> END)))::numeric AS datahdr,\n> (maxfracsum*(nullhdr+ma-(case when nullhdr%ma=0 THEN ma ELSE\n> nullhdr%ma END))) AS nullhdr2\n> FROM (\n> SELECT\n> schemaname, tablename, hdr, ma, bs,\n> SUM((1-null_frac)*avg_width) AS datawidth,\n> MAX(null_frac) AS maxfracsum,\n> hdr+(\n> SELECT 1+count(*)/8\n> FROM pg_stats s2\n> WHERE null_frac<>0 AND s2.schemaname = s.schemaname AND\n> s2.tablename = s.tablename\n> ) AS nullhdr\n> FROM pg_stats s, (\n> SELECT\n> (SELECT current_setting('block_size')::numeric) AS bs,\n> CASE WHEN substring(v,12,3) IN ('8.0','8.1','8.2') THEN 27 ELSE 23\n> END AS hdr,\n> CASE WHEN v ~ 'mingw32' THEN 8 ELSE 4 END AS ma\n> FROM (SELECT version() AS v) AS foo\n> ) AS constants\n> GROUP BY 1,2,3,4,5\n> ) AS foo\n> ) AS rs\n> JOIN pg_class cc ON cc.relname = rs.tablename\n> JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND nn.nspname =\n> rs.schemaname AND nn.nspname <> 'information_schema'\n> LEFT JOIN pg_index i ON indrelid = cc.oid\n> LEFT JOIN pg_class c2 ON c2.oid = i.indexrelid\n> ) AS sml\n> ORDER BY wastedbytes DESC\n>\n> -----Original Message-----\n> From: [email protected] [mailto:\n> [email protected]] On Behalf Of John W Strange\n> Sent: Tuesday, December 14, 2010 8:48 AM\n> To: [email protected]\n> Subject: [PERFORM] Index Bloat - how to tell?\n>\n> How can you tell when your indexes are starting to get bloated and when you\n> need to rebuild them. I haven't seen a quick way to tell and not sure if\n> it's being tracked.\n>\n>\n>\n>\n> _______________________________________________________________________________________________\n>\n> | John W. Strange | Investment Bank | Global Commodities Technology\n>\n> | J.P. Morgan | 700 Louisiana, 11th Floor | T: 713-236-4122 | C:\n> 281-744-6476 | F: 713 236-3333\n>\n> | [email protected] | jpmorgan.com\n>\n>\n>\n> This communication is for informational purposes only. It is not\n> intended as an offer or solicitation for the purchase or sale of\n> any financial instrument or as an official confirmation of any\n> transaction. All market prices, data and other information are not\n> warranted as to completeness or accuracy and are subject to change\n> without notice. 
Any comments or statements made herein do not\n> necessarily reflect those of JPMorgan Chase & Co., its subsidiaries\n> and affiliates.\n>\n>\n>\n> This transmission may contain information that is privileged,\n> confidential, legally privileged, and/or exempt from disclosure\n> under applicable law. If you are not the intended recipient, you\n> are hereby notified that any disclosure, copying, distribution, or\n> use of the information contained herein (including any reliance\n> thereon) is STRICTLY PROHIBITED. Although this transmission and any\n> attachments are believed to be free of any virus or other defect\n> that might affect any computer system into which it is received and\n> opened, it is the responsibility of the recipient to ensure that it\n> is virus free and no responsibility is accepted by JPMorgan Chase &\n> Co., its subsidiaries and affiliates, as applicable, for any loss\n> or damage arising in any way from its use. If you received this\n> transmission in error, please immediately contact the sender and\n> destroy the material in its entirety, whether in electronic or hard\n> copy format. Thank you.\n>\n>\n>\n> Please refer to http://www.jpmorgan.com/pages/disclosures for\n> disclosures relating to European legal entities.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nThere is a plugin called pgstattuple which can be quite informative .... however, it actually does a full scan of the table / index files, which may be a bit invasive depending on your environment and load.http://www.postgresql.org/docs/current/static/pgstattuple.html\nIt's in the contrib (at least for 8.4), and so you have to import its functions into your schema using the script in the contrib directory. CheersDaveOn Tue, Dec 14, 2010 at 8:54 AM, Plugge, Joe R. <[email protected]> wrote:\nI have used this in the past ... 
run this against the database that you want to inspect.\n\n\nSELECT\n  current_database(), schemaname, tablename, /*reltuples::bigint, relpages::bigint, otta,*/\n  ROUND(CASE WHEN otta=0 THEN 0.0 ELSE sml.relpages/otta::numeric END,1) AS tbloat,\n  CASE WHEN relpages < otta THEN 0 ELSE bs*(sml.relpages-otta)::bigint END AS wastedbytes,\n  iname, /*ituples::bigint, ipages::bigint, iotta,*/\n  ROUND(CASE WHEN iotta=0 OR ipages=0 THEN 0.0 ELSE ipages/iotta::numeric END,1) AS ibloat,\n  CASE WHEN ipages < iotta THEN 0 ELSE bs*(ipages-iotta) END AS wastedibytes\nFROM (\n  SELECT\n    schemaname, tablename, cc.reltuples, cc.relpages, bs,\n    CEIL((cc.reltuples*((datahdr+ma-\n      (CASE WHEN datahdr%ma=0 THEN ma ELSE datahdr%ma END))+nullhdr2+4))/(bs-20::float)) AS otta,\n    COALESCE(c2.relname,'?') AS iname, COALESCE(c2.reltuples,0) AS ituples, COALESCE(c2.relpages,0) AS ipages,\n    COALESCE(CEIL((c2.reltuples*(datahdr-12))/(bs-20::float)),0) AS iotta -- very rough approximation, assumes all cols\n  FROM (\n    SELECT\n      ma,bs,schemaname,tablename,\n      (datawidth+(hdr+ma-(case when hdr%ma=0 THEN ma ELSE hdr%ma END)))::numeric AS datahdr,\n      (maxfracsum*(nullhdr+ma-(case when nullhdr%ma=0 THEN ma ELSE nullhdr%ma END))) AS nullhdr2\n    FROM (\n      SELECT\n        schemaname, tablename, hdr, ma, bs,\n        SUM((1-null_frac)*avg_width) AS datawidth,\n        MAX(null_frac) AS maxfracsum,\n        hdr+(\n          SELECT 1+count(*)/8\n          FROM pg_stats s2\n          WHERE null_frac<>0 AND s2.schemaname = s.schemaname AND s2.tablename = s.tablename\n        ) AS nullhdr\n      FROM pg_stats s, (\n        SELECT\n          (SELECT current_setting('block_size')::numeric) AS bs,\n          CASE WHEN substring(v,12,3) IN ('8.0','8.1','8.2') THEN 27 ELSE 23 END AS hdr,\n          CASE WHEN v ~ 'mingw32' THEN 8 ELSE 4 END AS ma\n        FROM (SELECT version() AS v) AS foo\n      ) AS constants\n      GROUP BY 1,2,3,4,5\n    ) AS foo\n  ) AS rs\n  JOIN pg_class cc ON cc.relname = rs.tablename\n  JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND nn.nspname = rs.schemaname AND nn.nspname <> 'information_schema'\n  LEFT JOIN pg_index i ON indrelid = cc.oid\n  LEFT JOIN pg_class c2 ON c2.oid = i.indexrelid\n) AS sml\nORDER BY wastedbytes DESC\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of John W Strange\n\nSent: Tuesday, December 14, 2010 8:48 AM\nTo: [email protected]\nSubject: [PERFORM] Index Bloat - how to tell?\n\nHow can you tell when your indexes are starting to get bloated and when you need to rebuild them.  I haven't seen a quick way to tell and not sure if it's being tracked.\n\n\n\n_______________________________________________________________________________________________\n\n| John W. Strange | Investment Bank | Global Commodities Technology\n\n| J.P. Morgan | 700 Louisiana, 11th Floor | T: 713-236-4122 | C: 281-744-6476 | F: 713 236-3333\n\n| [email protected] | jpmorgan.com\n\n\n\nThis communication is for informational purposes only. It is not\nintended as an offer or solicitation for the purchase or sale of\nany financial instrument or as an official confirmation of any\ntransaction. All market prices, data and other information are not\nwarranted as to completeness or accuracy and are subject to change\nwithout notice. 
Any comments or statements made herein do not\nnecessarily reflect those of JPMorgan Chase & Co., its subsidiaries\nand affiliates.\n\n\n\nThis transmission may contain information that is privileged,\nconfidential, legally privileged, and/or exempt from disclosure\nunder applicable law. If you are not the intended recipient, you\nare hereby notified that any disclosure, copying, distribution, or\nuse of the information contained herein (including any reliance\nthereon) is STRICTLY PROHIBITED. Although this transmission and any\nattachments are believed to be free of any virus or other defect\nthat might affect any computer system into which it is received and\nopened, it is the responsibility of the recipient to ensure that it\nis virus free and no responsibility is accepted by JPMorgan Chase &\nCo., its subsidiaries and affiliates, as applicable, for any loss\nor damage arising in any way from its use. If you received this\ntransmission in error, please immediately contact the sender and\ndestroy the material in its entirety, whether in electronic or hard\ncopy format. Thank you.\n\n\n\nPlease refer to http://www.jpmorgan.com/pages/disclosures for\ndisclosures relating to European legal entities.\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 14 Dec 2010 14:12:11 -0600", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index Bloat - how to tell?" }, { "msg_contents": "On 15/12/10 09:12, Dave Crooke wrote:\n> There is a plugin called pgstattuple which can be quite informative \n> .... however, it actually does a full scan of the table / index files, \n> which may be a bit invasive depending on your environment and load.\n>\n> http://www.postgresql.org/docs/current/static/pgstattuple.html\n>\n> It's in the contrib (at least for 8.4), and so you have to import its \n> functions into your schema using the script in the contrib directory.\n>\n\nIf you are using 8.4 or later, try the Freespacemap module:\n\nhttp://www.postgresql.org/docs/current/static/pgfreespacemap.html\n\nI tend to run this query:\n\n SELECT oid::regclass,\n pg_relation_size(oid)/(1024*1024) AS mb,\n sum(free)/(1024*1024) AS free_mb\n FROM\n (SELECT oid, (pg_freespace(oid)).avail AS free\n FROM pg_class) AS a\n GROUP BY a.oid ORDER BY free_mb DESC;\n\n\nto show up potentially troublesome amounts of bloat.\n\nregards\n\nMark\n\n\n\n\n\n\n\n On 15/12/10 09:12, Dave Crooke wrote:\n \n\n There is a plugin called pgstattuple which can be quite\n informative .... however, it actually does a full scan of the\n table / index files, which may be a bit invasive depending on your\n environment and load.\n\nhttp://www.postgresql.org/docs/current/static/pgstattuple.html\n\n It's in the contrib (at least for 8.4), and so you have to import\n its functions into your schema using the script in the contrib\n directory. 
\n\n\n\n If you are using 8.4 or later, try the Freespacemap module:\n\nhttp://www.postgresql.org/docs/current/static/pgfreespacemap.html\n\n I tend to run this query:\n\n SELECT oid::regclass, \n pg_relation_size(oid)/(1024*1024) AS mb,\n sum(free)/(1024*1024) AS free_mb \n FROM \n (SELECT oid, (pg_freespace(oid)).avail AS free \n FROM pg_class) AS a \n GROUP BY a.oid ORDER BY free_mb DESC;\n\n\n to show up potentially troublesome amounts of bloat. \n\n regards\n\n Mark", "msg_date": "Wed, 15 Dec 2010 11:20:38 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index Bloat - how to tell?" }, { "msg_contents": "Dave Crooke wrote:\n> There is a plugin called pgstattuple which can be quite informative \n> .... however, it actually does a full scan of the table / index files, \n> which may be a bit invasive depending on your environment and load.\n>\n> http://www.postgresql.org/docs/current/static/pgstattuple.html\n>\n> It's in the contrib (at least for 8.4), and so you have to import its \n> functions into your schema using the script in the contrib directory.\n>\n> Cheers\n> Dave\nI tried it with one of my databases:\n\n\ntesttrack=# select * from pgstatindex('public.defects_pkey');\n version | tree_level | index_size | root_block_no | internal_pages | \nleaf_pages | empty_pages | deleted_pages | avg_leaf_density | \nleaf_fragmentation\n \n---------+------------+------------+---------------+----------------+------------+-------------+---------------+------------------+-------------------\n-\n 2 | 1 | 827392 | 3 | 0 \n| 100 | 0 | 0 | 70.12 \n| 22\n(1 row)\n\n\nWhat is \"leaf_fragmentation\"? How is it defined? I wasn't able to find \nout any definition of that number. How is it calculated. I verified that \nrunning reindex makes it 0:\n\n\ntesttrack=# reindex table public.defects;\nREINDEX\ntesttrack=# select * from pgstatindex('public.defects_pkey');\n version | tree_level | index_size | root_block_no | internal_pages | \nleaf_pages | empty_pages | deleted_pages | avg_leaf_density | \nleaf_fragmentation\n \n---------+------------+------------+---------------+----------------+------------+-------------+---------------+------------------+-------------------\n-\n 2 | 1 | 647168 | 3 | 0 \n| 78 | 0 | 0 | 89.67 \n| 0\n(1 row)\n\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n", "msg_date": "Thu, 16 Dec 2010 14:27:08 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index Bloat - how to tell?" }, { "msg_contents": "On Thu, Dec 16, 2010 at 2:27 PM, Mladen Gogala\n<[email protected]> wrote:\n> What is \"leaf_fragmentation\"? How is it defined? I wasn't able to find out\n> any definition of that number. How is it calculated. 
I verified that running\n> reindex makes it 0:\n\nWell, according to the code:\n\n /*\n * If the next leaf is on an earlier block, it means a\n * fragmentation.\n */\n if (opaque->btpo_next != P_NONE &&\nopaque->btpo_next < blkno)\n indexStat.fragments++;\n\nAnd then the final value is calculated thus:\n\n snprintf(values[j++], 32, \"%.2f\", (double)\nindexStat.fragments / (double) indexStat.leaf_pages * 100.0);\n\nThis doesn't really match my definition of the word \"fragmentation\", though...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Fri, 17 Dec 2010 21:32:48 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index Bloat - how to tell?" }, { "msg_contents": "Robert Haas wrote:\n>\n> This doesn't really match my definition of the word \"fragmentation\", though...\n>\n> \nSame here. However, I did run \"reindex\" on one table and this indicator \ndid drop to 0. I will shoot an email to the author, he's probably \nsmarter than me and will be able to provide a reasonable explanation.\n\n-- \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com \n\n", "msg_date": "Fri, 17 Dec 2010 22:56:01 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index Bloat - how to tell?" } ]
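A sketch, not taken from the thread above: the pgstatindex() call that was run by hand on public.defects_pkey can be applied to every btree index in a schema, so avg_leaf_density and leaf_fragmentation can be scanned in one pass. It assumes the pgstattuple module discussed above is installed; the schema name 'public' is only an example.

SELECT c.oid::regclass AS index_name,
       (pgstatindex(c.oid::regclass::text)).avg_leaf_density,
       (pgstatindex(c.oid::regclass::text)).leaf_fragmentation
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'i'                                            -- indexes only
  AND n.nspname = 'public'                                       -- example schema
  AND c.relam = (SELECT oid FROM pg_am WHERE amname = 'btree')   -- pgstatindex handles btree indexes only
ORDER BY 2;

Given the doubts about leaf_fragmentation raised in the replies, avg_leaf_density (which rose from 70.12 to 89.67 after the REINDEX shown above) is probably the more telling column to watch, alongside the pg_freespacemap query Mark posted.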
[ { "msg_contents": "I have a table in Postgresql 9.0.1 as folllows:\n\n Table \"public.crmentity\"\n Column | Type | Modifiers\n--------------+-----------------------------+--------------------\n crmid | integer | not null\n smcreatorid | integer | not null default 0\n smownerid | integer | not null default 0\n modifiedby | integer | not null default 0\n setype | character varying(30) | not null\n description | text |\n createdtime | timestamp without time zone | not null\n modifiedtime | timestamp without time zone | not null\n viewedtime | timestamp without time zone |\n status | character varying(50) |\n version | integer | not null default 0\n presence | integer | default 1\n deleted | integer | not null default 0\nIndexes:\n \"crmentity_pkey\" PRIMARY KEY, btree (crmid)\n \"crmentity_createdtime_idx\" btree (createdtime)\n \"crmentity_modifiedby_idx\" btree (modifiedby)\n \"crmentity_modifiedtime_idx\" btree (modifiedtime)\n \"crmentity_smcreatorid_idx\" btree (smcreatorid)\n \"crmentity_smownerid_idx\" btree (smownerid)\n \"ftx_crmentity_descr\" gin (to_tsvector('english'::regconfig,\nreplace(description, '<!--'::text, '<!-'::text)))\n \"crmentity_deleted_idx\" btree (deleted)\n \"crmentity_setype_idx\" btree (setype)\nReferenced by:\n TABLE \"service\" CONSTRAINT \"fk_1_service\" FOREIGN KEY (serviceid)\nREFERENCES crmentity(crmid) ON DELETE CASCADE\n TABLE \"_cc2crmentity\" CONSTRAINT \"fk__cc2crmentity_crmentity\" FOREIGN\nKEY (crm_id) REFERENCES crmentity(crmid) ON UPDATE CASCADE ON DELETE CASCADE\n\n\nEXPLAIN ANALYZE on this table:\n\nexplain analyze\nselect *\nFROM crmentity\nwhere crmentity.deleted=0 and crmentity.setype='Emails'\n\n Index Scan using crmentity_setype_idx on crmentity (cost=0.00..1882.76\nrows=55469 width=301) (actual time=0.058..158.564 rows=79193 loops=1)\n Index Cond: ((setype)::text = 'Emails'::text)\n Filter: (deleted = 0)\n Total runtime: 231.256 ms\n(4 rows)\n\nMy question is why \"crmentity_setype_idx\" index is being used only.\n\"crmentity_deleted_idx\" index is not using.\n\nAny idea please.\n\nI have a table in  Postgresql 9.0.1 as folllows:                 Table \"public.crmentity\"    Column    |            Type             |     Modifiers      --------------+-----------------------------+--------------------\n crmid        | integer                     | not null smcreatorid  | integer                     | not null default 0 smownerid    | integer                     | not null default 0 modifiedby   | integer                     | not null default 0\n setype       | character varying(30)       | not null description  | text                        |  createdtime  | timestamp without time zone | not null modifiedtime | timestamp without time zone | not null\n viewedtime   | timestamp without time zone |  status       | character varying(50)       |  version      | integer                     | not null default 0 presence     | integer                     | default 1\n deleted      | integer                     | not null default 0Indexes:    \"crmentity_pkey\" PRIMARY KEY, btree (crmid)    \"crmentity_createdtime_idx\" btree (createdtime)\n    \"crmentity_modifiedby_idx\" btree (modifiedby)    \"crmentity_modifiedtime_idx\" btree (modifiedtime)    \"crmentity_smcreatorid_idx\" btree (smcreatorid)\n    \"crmentity_smownerid_idx\" btree (smownerid)    \"ftx_crmentity_descr\" gin (to_tsvector('english'::regconfig, replace(description, '<!--'::text, '<!-'::text)))\n    \"crmentity_deleted_idx\" btree (deleted)    \"crmentity_setype_idx\" btree 
(setype)Referenced by:    TABLE \"service\" CONSTRAINT \"fk_1_service\" FOREIGN KEY (serviceid) REFERENCES crmentity(crmid) ON DELETE CASCADE\n    TABLE \"_cc2crmentity\" CONSTRAINT \"fk__cc2crmentity_crmentity\" FOREIGN KEY (crm_id) REFERENCES crmentity(crmid) ON UPDATE CASCADE ON DELETE CASCADEEXPLAIN ANALYZE on this table:\nexplain analyzeselect *FROM crmentity where  crmentity.deleted=0 and crmentity.setype='Emails' \n Index Scan using crmentity_setype_idx on crmentity  (cost=0.00..1882.76 rows=55469 width=301) (actual time=0.058..158.564 rows=79193 loops=1)   Index Cond: ((setype)::text = 'Emails'::text)\n   Filter: (deleted = 0) Total runtime: 231.256 ms(4 rows)My question is why \"crmentity_setype_idx\" index is being used only. \"crmentity_deleted_idx\" index is not using.\nAny idea please.", "msg_date": "Wed, 15 Dec 2010 12:56:32 +0600", "msg_from": "AI Rumman <[email protected]>", "msg_from_op": true, "msg_subject": "only one index is using, why?" }, { "msg_contents": "On Wed, Dec 15, 2010 at 08:56, AI Rumman <[email protected]> wrote:\n> My question is why \"crmentity_setype_idx\" index is being used only.\n> \"crmentity_deleted_idx\" index is not using.\n> Any idea please.\n\nBecause the planner determined that the cost of scanning *two* indexes\nand combining the results is more expensive than scanning one index\nand filtering the results afterwards.\n\nLooks like your query could use a composite index on both columns:\n(deleted, setype)\nOr a partial index: (setype) WHERE deleted=0\n\nRegards,\nMarti\n", "msg_date": "Wed, 15 Dec 2010 10:52:03 +0200", "msg_from": "Marti Raudsepp <[email protected]>", "msg_from_op": false, "msg_subject": "Re: only one index is using, why?" } ]
[ { "msg_contents": "Hi, all. I'm trying to query table:\n\nEXPLAIN SELECT \"v\".\"id\", \"v\".\"title\" FROM \"video\" AS \"v\"\nWHERE (v.active) (v.fts @@\n'dexter:A|season:A|seri:A|декстер:A|качество:A|сезон:A|серия:A'::tsquery and\nv.id <> 500563 )\nORDER BY COALESCE(ts_rank_cd( '{0.1, 0.2, 0.7, 1.0}', v.fts,\n'dexter:A|season:A|seri:A|декстер:A|качество:A|сезон:A|серия:A'::tsquery),\n1) DESC,\n v.views DESC\nLIMIT 6\n\nHere's the query that gets all related items, where fts is tsvector field\nwith index on it (CREATE INDEX idx_video_fts ON video USING gin (fts);)\nearlier i tried gist, but results are the same.\n\nAnd here's what i got:\n\n\"Limit (cost=98169.89..98169.90 rows=6 width=284)\"\n\" -> Sort (cost=98169.89..98383.16 rows=85311 width=284)\"\n\" Sort Key: (COALESCE(ts_rank_cd('{0.1,0.2,0.7,1}'::real[], fts, '( (\n( ( ( ''dexter'':A | ''season'':A ) | ''seri'':A ) | ''декстер'':A ) |\n''качество'':A ) | ''сезон'':A ) | ''серия'':A'::tsquery), 1::real)), views\"\n\" -> Seq Scan on video v (cost=0.00..96640.70 rows=85311\nwidth=284)\"\n\" Filter: (active AND (fts @@ '( ( ( ( ( ''dexter'':A |\n''season'':A ) | ''seri'':A ) | ''декстер'':A ) | ''качество'':A ) |\n''сезон'':A ) | ''серия'':A'::tsquery) AND (id <> 500563))\"\n\nAs you can see the query doesn't use index. If I drop \"or\" sentences for the\nquery, it will, but I do need them. I'm using PostgreSQL 9.0.\nWhat should I do? The query is really too slow.\n\nHi, all. I'm trying to query table:EXPLAIN SELECT \"v\".\"id\", \"v\".\"title\" FROM \"video\" AS \"v\"WHERE (v.active) (v.fts @@ 'dexter:A|season:A|seri:A|декстер:A|качество:A|сезон:A|серия:A'::tsquery and v.id <> 500563 ) \nORDER BY COALESCE(ts_rank_cd( '{0.1, 0.2, 0.7, 1.0}', v.fts, 'dexter:A|season:A|seri:A|декстер:A|качество:A|сезон:A|серия:A'::tsquery), 1) DESC,                   v.views DESC LIMIT 6\nHere's the query that gets all related items, where fts is tsvector field with index on it (CREATE INDEX idx_video_fts ON video USING gin (fts);) earlier i tried gist, but results are the same.\nAnd here's what i got:\"Limit  (cost=98169.89..98169.90 rows=6 width=284)\"\"  ->  Sort  (cost=98169.89..98383.16 rows=85311 width=284)\"\n\"        Sort Key: (COALESCE(ts_rank_cd('{0.1,0.2,0.7,1}'::real[], fts, '( ( ( ( ( ''dexter'':A | ''season'':A ) | ''seri'':A ) | ''декстер'':A ) | ''качество'':A ) | ''сезон'':A ) | ''серия'':A'::tsquery), 1::real)), views\"\n\"        ->  Seq Scan on video v  (cost=0.00..96640.70 rows=85311 width=284)\"\"              Filter: (active AND (fts @@ '( ( ( ( ( ''dexter'':A | ''season'':A ) | ''seri'':A ) | ''декстер'':A ) | ''качество'':A ) | ''сезон'':A ) | ''серия'':A'::tsquery) AND (id <> 500563))\"\nAs you can see the query doesn't use index. If I drop \"or\" sentences for the query, it will, but I do need them. I'm using PostgreSQL 9.0.What should I do? The query is really too slow.", "msg_date": "Wed, 15 Dec 2010 19:56:33 +0600", "msg_from": "Rauan Maemirov <[email protected]>", "msg_from_op": true, "msg_subject": "Problems with FTS" } ]
[ { "msg_contents": "I wrote a test program in C++ using libpq. It works as follows (pseudo code):\n\nfor ( int loop = 0; loop < 1000; ++loop ) {\n PQexec(\"BEGIN\");\n const char* sql = \"INSERT INTO pg_perf_test (id, text) VALUES($1,$2)\";\n PQprepare(m_conn, \"stmtid\",sql,0,NULL);\n for ( int i = 0; i < 1000; ++i )\n // Set values etc.\n PQexecPrepared(m_conn,...);\n }\n PQexec(\"DEALLOCATE stmtid\");\n PQexec(\"COMMIT\");\n}\n\nI measured the duration of every loop of the outer for-loop resulting in an average of 450 ms (per 1000 data sets insert)\n\nAfter that, I wrote a test program in Java using JDBC. It works as follows:\n\nfor ( int loops = 0; loops < 1000; ++i) {\n String sql = \"INSERT INTO pq_perf_test (id,text) VALUES (?,?)\";\n PreparedStatement stmt = con.prepareStatement(sql);\n for (int i = 0; i < 1000; ++i ) {\n // Set values etc.\n stmt.addBatch();\n }\n stmt.executeBatch();\n con.commit();\n stmt.close();\n}\n\nI measured the duration of every loop of the outer for-loop resulting in an average of 100 ms (per 1000 data sets insert)\n\nThis means that accessing PostgreSQL by JDBC is about 4-5 times faster than using libpq.\n\nComparable results have been measured with analog update and delete statements.\n\nI need to enhance the performance of my C++ code. Is there any possibility in libpq to reach the performance of JDBC for INSERT, UPDATE and DELETE statements (I have no chance to use COPY statements)? I didn't find anything comparable to PreparedStatement.executeBatch() in libpq.\n\nBest regards,\nWerner Scholtes\n\n\n\n\n\nI wrote a test program in C++ using libpq. It works as follows (pseudo code): for ( int loop = 0; loop < 1000; ++loop ) {   PQexec(\"BEGIN\");   const char* sql = \"INSERT INTO pg_perf_test (id, text) VALUES($1,$2)\";   PQprepare(m_conn, \"stmtid\",sql,0,NULL);   for ( int i = 0; i < 1000; ++i )       // Set values etc.      PQexecPrepared(m_conn,…);   }   PQexec(\"DEALLOCATE stmtid\");   PQexec(\"COMMIT\");   } I measured the duration of every loop of the outer for-loop resulting in an average of 450 ms (per 1000 data sets insert) After that, I wrote a test program in Java using JDBC. It works as follows: for ( int loops = 0; loops < 1000; ++i) {   String sql = \"INSERT INTO pq_perf_test (id,text) VALUES (?,?)\";   PreparedStatement stmt = con.prepareStatement(sql);   for (int i = 0; i < 1000; ++i ) {      // Set values etc.      stmt.addBatch();   }   stmt.executeBatch();   con.commit();   stmt.close();} I measured the duration of every loop of the outer for-loop resulting in an average of 100 ms (per 1000 data sets insert) This means that accessing PostgreSQL by JDBC is about 4-5 times faster than using libpq.  Comparable  results have been measured with analog update and delete statements.  I need to enhance the performance of my C++ code. Is there any possibility in libpq to reach the performance of JDBC for INSERT, UPDATE and DELETE statements (I have no chance to use COPY statements)? I didn't find anything comparable to PreparedStatement.executeBatch() in libpq. 
Best regards,Werner Scholtes", "msg_date": "Wed, 15 Dec 2010 15:51:55 +0100", "msg_from": "Werner Scholtes <[email protected]>", "msg_from_op": true, "msg_subject": "performance libpq vs JDBC" }, { "msg_contents": "Can you trying writing libpq program using COPY functions?\nI hope it will be better than prepared statements.\n\n Best Regards,\nDivakar\n\n\n\n\n________________________________\nFrom: Werner Scholtes <[email protected]>\nTo: \"[email protected]\" <[email protected]>\nSent: Wed, December 15, 2010 8:21:55 PM\nSubject: [PERFORM] performance libpq vs JDBC\n\n\nI wrote a test program in C++ using libpq. It works as follows (pseudo code):\n \nfor( int loop = 0; loop < 1000; ++loop ) {\n PQexec(\"BEGIN\");\n const char* sql = \"INSERT INTO pg_perf_test (id, text) VALUES($1,$2)\";\n PQprepare(m_conn,\"stmtid\",sql,0,NULL);\n for ( int i = 0; i < 1000; ++i ) \n // Set values etc.\n PQexecPrepared(m_conn,…);\n }\n PQexec(\"DEALLOCATE stmtid\");\n PQexec(\"COMMIT\"); \n}\n \nI measured the duration of every loop of the outer for-loop resulting in an \naverage of 450 ms (per 1000 data sets insert)\n \nAfter that, I wrote a test program in Java using JDBC. It works as follows:\n \nfor( intloops = 0; loops < 1000; ++i) {\n String sql = \"INSERT INTO pq_perf_test (id,text) VALUES (?,?)\";\n PreparedStatement stmt = con.prepareStatement(sql);\n for(inti = 0; i < 1000; ++i ) {\n // Set values etc.\n stmt.addBatch();\n }\n stmt.executeBatch();\n con.commit();\n stmt.close();\n}\n \nI measured the duration of every loop of the outer for-loop resulting in an \naverage of 100 ms (per 1000 data sets insert)\n \nThis means that accessing PostgreSQL by JDBC is about 4-5 times faster than \nusing libpq. \n\n \nComparable results have been measured with analog update and delete statements. \n\n \nI need to enhance the performance of my C++ code. Is there any possibility in \nlibpq to reach the performance of JDBC for INSERT, UPDATE and DELETE statements \n(I have no chance to use COPY statements)? I didn't find anything comparable to \nPreparedStatement.executeBatch() in libpq.\n \nBest regards,\nWerner Scholtes\n\n\n \nCan you trying writing libpq program using COPY functions?I hope it will be better than prepared statements. Best Regards,DivakarFrom: Werner Scholtes <[email protected]>To: \"[email protected]\" <[email protected]>Sent: Wed, December 15, 2010 8:21:55 PMSubject: [PERFORM] performance libpq vs JDBCI wrote a test program in C++ using libpq. It works as follows (pseudo code):  for ( int loop = 0; loop < 1000; ++loop ) {   PQexec(\"BEGIN\");   const char* sql = \"INSERT INTO pg_perf_test (id, text) VALUES($1,$2)\";   PQprepare(m_conn, \"stmtid\",sql,0,NULL);   for ( int i = 0; i < 1000; ++i )       // Set values etc.      PQexecPrepared(m_conn,…);   }   PQexec(\"DEALLOCATE stmtid\");   PQexec(\"COMMIT\");   }  I measured the duration of every loop of the outer for-loop resulting in an average of 450 ms (per 1000 data sets insert)  After that, I wrote a test program in Java using JDBC. It works as follows:  for ( int loops = 0; loops < 1000; ++i) {   String sql = \"INSERT INTO pq_perf_test (id,text) VALUES (?,?)\";   PreparedStatement stmt = con.prepareStatement(sql);   for (int i = 0; i < 1000; ++i ) {      // Set values etc.      
stmt.addBatch();   }   stmt.executeBatch();   con.commit();   stmt.close();}  I measured the duration of every loop of the outer for-loop resulting in an average of 100 ms (per 1000 data sets insert)  This means that accessing PostgreSQL by JDBC is about 4-5 times faster than using libpq.  Comparable  results have been measured with analog update and delete statements.  I need to enhance the performance of my C++ code. Is there any possibility in libpq to reach the performance of JDBC for INSERT, UPDATE and DELETE statements (I have no chance to use COPY statements)? I didn't find anything comparable to\n PreparedStatement.executeBatch() in libpq.  Best regards,Werner Scholtes", "msg_date": "Thu, 16 Dec 2010 00:10:39 -0800 (PST)", "msg_from": "Divakar Singh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance libpq vs JDBC" }, { "msg_contents": "Unfortunately I cannot use COPY funtion, since I need the performance of JDBC for update and delete statements in C++ libpq-program as well.\r\n\r\nI wonder how JDBC PreparedStatement.addBatch() and PreparedStatement.executeBatch() work. They need to have a more efficient protocol to send bulks of parameter sets for one prepared statement as batch in one network transmission to the server. As far as I could see PQexecPrepared does not allow to send more than one parameter set (parameters for one row) in one call. So libpq sends 1000 times one single row to the server where JDBC sends 1 time 1000 rows, which is much more efficient.\r\n\r\nI assume that the wire protocol of PostgreSQL allows to transmit multiple rows at once, but libpq doesn't have an interface to access it. Is that right?\r\n\r\nVon: Divakar Singh [mailto:[email protected]]\r\nGesendet: Donnerstag, 16. Dezember 2010 09:11\r\nAn: Werner Scholtes; [email protected]\r\nBetreff: Re: [PERFORM] performance libpq vs JDBC\r\n\r\nCan you trying writing libpq program using COPY functions?\r\nI hope it will be better than prepared statements.\r\n\r\nBest Regards,\r\nDivakar\r\n\r\n\r\n________________________________\r\nFrom: Werner Scholtes <[email protected]>\r\nTo: \"[email protected]\" <[email protected]>\r\nSent: Wed, December 15, 2010 8:21:55 PM\r\nSubject: [PERFORM] performance libpq vs JDBC\r\n\r\n\r\nI wrote a test program in C++ using libpq. It works as follows (pseudo code):\r\n\r\nfor ( int loop = 0; loop < 1000; ++loop ) {\r\n PQexec(\"BEGIN\");\r\n const char* sql = \"INSERT INTO pg_perf_test (id, text) VALUES($1,$2)\";\r\n PQprepare(m_conn, \"stmtid\",sql,0,NULL);\r\n for ( int i = 0; i < 1000; ++i )\r\n // Set values etc.\r\n PQexecPrepared(m_conn,…);\r\n }\r\n PQexec(\"DEALLOCATE stmtid\");\r\n PQexec(\"COMMIT\");\r\n}\r\n\r\nI measured the duration of every loop of the outer for-loop resulting in an average of 450 ms (per 1000 data sets insert)\r\n\r\nAfter that, I wrote a test program in Java using JDBC. 
It works as follows:\r\n\r\nfor ( int loops = 0; loops < 1000; ++i) {\r\n String sql = \"INSERT INTO pq_perf_test (id,text) VALUES (?,?)\";\r\n PreparedStatement stmt = con.prepareStatement(sql);\r\n for (int i = 0; i < 1000; ++i ) {\r\n // Set values etc.\r\n stmt.addBatch();\r\n }\r\n stmt.executeBatch();\r\n con.commit();\r\n stmt.close();\r\n}\r\n\r\nI measured the duration of every loop of the outer for-loop resulting in an average of 100 ms (per 1000 data sets insert)\r\n\r\nThis means that accessing PostgreSQL by JDBC is about 4-5 times faster than using libpq.\r\n\r\nComparable results have been measured with analog update and delete statements.\r\n\r\nI need to enhance the performance of my C++ code. Is there any possibility in libpq to reach the performance of JDBC for INSERT, UPDATE and DELETE statements (I have no chance to use COPY statements)? I didn't find anything comparable to PreparedStatement.executeBatch() in libpq.\r\n\r\nBest regards,\r\nWerner Scholtes\r\n\r\n\r\n\r\n\r\n\r\n\nUnfortunately I cannot use COPY funtion, since I need the performance of JDBC for update and delete statements in C++ libpq-program as well. I wonder how JDBC  PreparedStatement.addBatch() and PreparedStatement.executeBatch() work. They need to have a more efficient protocol to send bulks of parameter sets for one prepared statement as batch in one network transmission to the server. As far as I could see PQexecPrepared does not allow to send more than one parameter set (parameters for one row) in one call. So libpq sends 1000 times one single row to the server where JDBC sends 1 time 1000 rows, which is much more efficient. I assume that the wire protocol of PostgreSQL allows to transmit multiple rows at once, but libpq doesn't have an interface to access it. Is that right?  Von: Divakar Singh [mailto:[email protected]] Gesendet: Donnerstag, 16. Dezember 2010 09:11An: Werner Scholtes; [email protected]: Re: [PERFORM] performance libpq vs JDBC Can you trying writing libpq program using COPY functions?I hope it will be better than prepared statements. Best Regards,Divakar  From: Werner Scholtes <[email protected]>To: \"[email protected]\" <[email protected]>Sent: Wed, December 15, 2010 8:21:55 PMSubject: [PERFORM] performance libpq vs JDBCI wrote a test program in C++ using libpq. It works as follows (pseudo code): for ( int loop = 0; loop < 1000; ++loop ) {   PQexec(\"BEGIN\");   const char* sql = \"INSERT INTO pg_perf_test (id, text) VALUES($1,$2)\";   PQprepare(m_conn, \"stmtid\",sql,0,NULL);   for ( int i = 0; i < 1000; ++i )       // Set values etc.      PQexecPrepared(m_conn,…);   }   PQexec(\"DEALLOCATE stmtid\");   PQexec(\"COMMIT\");   } I measured the duration of every loop of the outer for-loop resulting in an average of 450 ms (per 1000 data sets insert) After that, I wrote a test program in Java using JDBC. It works as follows: for ( int loops = 0; loops < 1000; ++i) {   String sql = \"INSERT INTO pq_perf_test (id,text) VALUES (?,?)\";   PreparedStatement stmt = con.prepareStatement(sql);   for (int i = 0; i < 1000; ++i ) {      // Set values etc.      stmt.addBatch();   }   stmt.executeBatch();   con.commit();   stmt.close();} I measured the duration of every loop of the outer for-loop resulting in an average of 100 ms (per 1000 data sets insert) This means that accessing PostgreSQL by JDBC is about 4-5 times faster than using libpq.  Comparable  results have been measured with analog update and delete statements.  I need to enhance the performance of my C++ code. 
Is there any possibility in libpq to reach the performance of JDBC for INSERT, UPDATE and DELETE statements (I have no chance to use COPY statements)? I didn't find anything comparable to PreparedStatement.executeBatch() in libpq. Best regards,Werner Scholtes", "msg_date": "Thu, 16 Dec 2010 10:21:53 +0100", "msg_from": "Werner Scholtes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance libpq vs JDBC" }, { "msg_contents": "If you have all records before issuing Insert, you can do it like: insert into \nxxx values (a,b,c), (d,e,f), ......;\nan example: http://kaiv.wordpress.com/2007/07/19/faster-insert-for-multiple-rows\n\n\n Best Regards,\nDivakar\n\n\n\n\n________________________________\nFrom: Werner Scholtes <[email protected]>\nTo: Divakar Singh <[email protected]>; \"[email protected]\" \n<[email protected]>\nSent: Thu, December 16, 2010 2:51:53 PM\nSubject: RE: [PERFORM] performance libpq vs JDBC\n\n\nUnfortunately I cannot use COPY funtion, since I need the performance of JDBC \nfor update and delete statements in C++ libpq-program as well.\n \nI wonder how JDBC PreparedStatement.addBatch() and \nPreparedStatement.executeBatch() work. They need to have a more efficient \nprotocol to send bulks of parameter sets for one prepared statement as batch in \none network transmission to the server. As far as I could see PQexecPrepared \ndoes not allow to send more than one parameter set (parameters for one row) in \none call. So libpq sends 1000 times one single row to the server where JDBC \nsends 1 time 1000 rows, which is much more efficient.\n \nI assume that the wire protocol of PostgreSQL allows to transmit multiple rows \nat once, but libpq doesn't have an interface to access it. Is that right? \n\n \nVon:Divakar Singh [mailto:[email protected]] \nGesendet: Donnerstag, 16. Dezember 2010 09:11\nAn: Werner Scholtes; [email protected]\nBetreff: Re: [PERFORM] performance libpq vs JDBC\n \nCan you trying writing libpq program using COPY functions?\nI hope it will be better than prepared statements.\n \nBest Regards,\nDivakar\n \n \n\n________________________________\n\nFrom:Werner Scholtes <[email protected]>\nTo: \"[email protected]\" <[email protected]>\nSent: Wed, December 15, 2010 8:21:55 PM\nSubject: [PERFORM] performance libpq vs JDBC\n\n\n\nI wrote a test program in C++ using libpq. It works as follows (pseudo code):\n \nfor( int loop = 0; loop < 1000; ++loop ) {\n PQexec(\"BEGIN\");\n const char* sql = \"INSERT INTO pg_perf_test (id, text) VALUES($1,$2)\";\n PQprepare(m_conn,\"stmtid\",sql,0,NULL);\n for ( int i = 0; i < 1000; ++i ) \n // Set values etc.\n PQexecPrepared(m_conn,…);\n }\n PQexec(\"DEALLOCATE stmtid\");\n PQexec(\"COMMIT\"); \n}\n \nI measured the duration of every loop of the outer for-loop resulting in an \naverage of 450 ms (per 1000 data sets insert)\n \nAfter that, I wrote a test program in Java using JDBC. It works as follows:\n \nfor( intloops = 0; loops < 1000; ++i) {\n String sql = \"INSERT INTO pq_perf_test (id,text) VALUES (?,?)\";\n PreparedStatement stmt = con.prepareStatement(sql);\n for(inti = 0; i < 1000; ++i ) {\n // Set values etc.\n stmt.addBatch();\n }\n stmt.executeBatch();\n con.commit();\n stmt.close();\n}\n \nI measured the duration of every loop of the outer for-loop resulting in an \naverage of 100 ms (per 1000 data sets insert)\n \nThis means that accessing PostgreSQL by JDBC is about 4-5 times faster than \nusing libpq. \n\n \nComparable results have been measured with analog update and delete statements. 
\n\n \nI need to enhance the performance of my C++ code. Is there any possibility in \nlibpq to reach the performance of JDBC for INSERT, UPDATE and DELETE statements \n(I have no chance to use COPY statements)? I didn't find anything comparable to \nPreparedStatement.executeBatch() in libpq.\n \nBest regards,\nWerner Scholtes\n\n\n \nIf you have all records before issuing Insert, you can do it like: insert into xxx values (a,b,c), (d,e,f), ......;an example: http://kaiv.wordpress.com/2007/07/19/faster-insert-for-multiple-rows Best Regards,DivakarFrom: Werner Scholtes <[email protected]>To: Divakar Singh <[email protected]>; \"[email protected]\"\n <[email protected]>Sent: Thu, December 16, 2010 2:51:53 PMSubject: RE: [PERFORM] performance libpq vs JDBCUnfortunately I cannot use COPY funtion, since I need the performance of JDBC for update and delete statements in C++ libpq-program as well.  I wonder how JDBC  PreparedStatement.addBatch() and PreparedStatement.executeBatch() work. They need to have a more efficient protocol to send bulks of parameter sets for one prepared statement as batch in one network transmission to the server. As far as I could see PQexecPrepared does not allow to send more than one parameter set (parameters for one row) in one call. So libpq sends 1000 times one single row to the server where JDBC sends 1 time 1000 rows,\n which is much more efficient.  I assume that the wire protocol of PostgreSQL allows to transmit multiple rows at once, but libpq doesn't have an interface to access it. Is that right?  Von: Divakar Singh [mailto:[email protected]] Gesendet: Donnerstag, 16. Dezember 2010 09:11An: Werner Scholtes; [email protected]: Re: [PERFORM] performance libpq vs JDBC  Can you trying writing libpq program using COPY functions?I hope it will be better than prepared statements. Best Regards,Divakar    From: Werner Scholtes <[email protected]>To: \"[email protected]\" <[email protected]>Sent: Wed, December 15, 2010 8:21:55 PMSubject: [PERFORM] performance libpq vs JDBCI wrote a test program in C++ using libpq. It works as follows (pseudo code): for ( int loop = 0; loop < 1000; ++loop ) {   PQexec(\"BEGIN\");   const char* sql = \"INSERT INTO pg_perf_test (id, text) VALUES($1,$2)\";   PQprepare(m_conn, \"stmtid\",sql,0,NULL);   for ( int i = 0; i < 1000; ++i )       // Set values etc.      PQexecPrepared(m_conn,…);   }   PQexec(\"DEALLOCATE stmtid\");   PQexec(\"COMMIT\");   } I measured the duration of every loop of the outer for-loop resulting in an average of 450 ms (per 1000 data sets insert) After that, I wrote a test program in Java using JDBC. It works as follows: for ( int loops = 0; loops < 1000; ++i) {   String sql = \"INSERT INTO pq_perf_test (id,text) VALUES (?,?)\";   PreparedStatement stmt = con.prepareStatement(sql);   for (int i = 0; i < 1000; ++i ) {      // Set values etc.      stmt.addBatch();   }   stmt.executeBatch();   con.commit();   stmt.close();} I measured the duration of every loop of the outer for-loop resulting in an average of 100 ms (per 1000 data sets insert) This means that accessing PostgreSQL by JDBC is about 4-5 times faster than using libpq.  Comparable  results have been measured with analog update and delete statements.  I need to enhance the performance of my C++ code. Is there any possibility in libpq to reach the performance of JDBC for INSERT, UPDATE and DELETE statements (I have no chance to use COPY statements)? I didn't find anything comparable to PreparedStatement.executeBatch() in libpq. 
Best regards,Werner Scholtes", "msg_date": "Thu, 16 Dec 2010 01:37:32 -0800 (PST)", "msg_from": "Divakar Singh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance libpq vs JDBC" }, { "msg_contents": "What about update and delete? In case of an update I have all records to be updated and in case of an delete I have all primary key values of records to be deleted.\r\n\r\nVon: [email protected] [mailto:[email protected]] Im Auftrag von Divakar Singh\r\nGesendet: Donnerstag, 16. Dezember 2010 10:38\r\nAn: Werner Scholtes; [email protected]\r\nBetreff: Re: [PERFORM] performance libpq vs JDBC\r\n\r\nIf you have all records before issuing Insert, you can do it like: insert into xxx values (a,b,c), (d,e,f), ......;\r\nan example: http://kaiv.wordpress.com/2007/07/19/faster-insert-for-multiple-rows\r\n\r\nBest Regards,\r\nDivakar\r\n\r\n\r\n________________________________\r\nFrom: Werner Scholtes <[email protected]>\r\nTo: Divakar Singh <[email protected]>; \"[email protected]\" <[email protected]>\r\nSent: Thu, December 16, 2010 2:51:53 PM\r\nSubject: RE: [PERFORM] performance libpq vs JDBC\r\n\r\n\r\nUnfortunately I cannot use COPY funtion, since I need the performance of JDBC for update and delete statements in C++ libpq-program as well.\r\n\r\nI wonder how JDBC PreparedStatement.addBatch() and PreparedStatement.executeBatch() work. They need to have a more efficient protocol to send bulks of parameter sets for one prepared statement as batch in one network transmission to the server. As far as I could see PQexecPrepared does not allow to send more than one parameter set (parameters for one row) in one call. So libpq sends 1000 times one single row to the server where JDBC sends 1 time 1000 rows, which is much more efficient.\r\n\r\nI assume that the wire protocol of PostgreSQL allows to transmit multiple rows at once, but libpq doesn't have an interface to access it. Is that right?\r\n\r\nVon: Divakar Singh [mailto:[email protected]]\r\nGesendet: Donnerstag, 16. Dezember 2010 09:11\r\nAn: Werner Scholtes; [email protected]\r\nBetreff: Re: [PERFORM] performance libpq vs JDBC\r\n\r\nCan you trying writing libpq program using COPY functions?\r\nI hope it will be better than prepared statements.\r\n\r\nBest Regards,\r\nDivakar\r\n\r\n\r\n________________________________\r\nFrom: Werner Scholtes <[email protected]>\r\nTo: \"[email protected]\" <[email protected]>\r\nSent: Wed, December 15, 2010 8:21:55 PM\r\nSubject: [PERFORM] performance libpq vs JDBC\r\n\r\nI wrote a test program in C++ using libpq. It works as follows (pseudo code):\r\n\r\nfor ( int loop = 0; loop < 1000; ++loop ) {\r\n PQexec(\"BEGIN\");\r\n const char* sql = \"INSERT INTO pg_perf_test (id, text) VALUES($1,$2)\";\r\n PQprepare(m_conn, \"stmtid\",sql,0,NULL);\r\n for ( int i = 0; i < 1000; ++i )\r\n // Set values etc.\r\n PQexecPrepared(m_conn,…);\r\n }\r\n PQexec(\"DEALLOCATE stmtid\");\r\n PQexec(\"COMMIT\");\r\n}\r\n\r\nI measured the duration of every loop of the outer for-loop resulting in an average of 450 ms (per 1000 data sets insert)\r\n\r\nAfter that, I wrote a test program in Java using JDBC. 
It works as follows:\r\n\r\nfor ( int loops = 0; loops < 1000; ++i) {\r\n String sql = \"INSERT INTO pq_perf_test (id,text) VALUES (?,?)\";\r\n PreparedStatement stmt = con.prepareStatement(sql);\r\n for (int i = 0; i < 1000; ++i ) {\r\n // Set values etc.\r\n stmt.addBatch();\r\n }\r\n stmt.executeBatch();\r\n con.commit();\r\n stmt.close();\r\n}\r\n\r\nI measured the duration of every loop of the outer for-loop resulting in an average of 100 ms (per 1000 data sets insert)\r\n\r\nThis means that accessing PostgreSQL by JDBC is about 4-5 times faster than using libpq.\r\n\r\nComparable results have been measured with analog update and delete statements.\r\n\r\nI need to enhance the performance of my C++ code. Is there any possibility in libpq to reach the performance of JDBC for INSERT, UPDATE and DELETE statements (I have no chance to use COPY statements)? I didn't find anything comparable to PreparedStatement.executeBatch() in libpq.\r\n\r\nBest regards,\r\nWerner Scholtes\r\n\r\n\r\n\r\n\r\n\r\n\r\n\nWhat about update and delete? In case of an update I have all records to be updated and in case of an delete I have all primary key values of records to be deleted.  Von: [email protected] [mailto:[email protected]] Im Auftrag von Divakar SinghGesendet: Donnerstag, 16. Dezember 2010 10:38An: Werner Scholtes; [email protected]: Re: [PERFORM] performance libpq vs JDBC If you have all records before issuing Insert, you can do it like: insert into xxx values (a,b,c), (d,e,f), ......;an example: http://kaiv.wordpress.com/2007/07/19/faster-insert-for-multiple-rows Best Regards,Divakar  From: Werner Scholtes <[email protected]>To: Divakar Singh <[email protected]>; \"[email protected]\" <[email protected]>Sent: Thu, December 16, 2010 2:51:53 PMSubject: RE: [PERFORM] performance libpq vs JDBCUnfortunately I cannot use COPY funtion, since I need the performance of JDBC for update and delete statements in C++ libpq-program as well. I wonder how JDBC  PreparedStatement.addBatch() and PreparedStatement.executeBatch() work. They need to have a more efficient protocol to send bulks of parameter sets for one prepared statement as batch in one network transmission to the server. As far as I could see PQexecPrepared does not allow to send more than one parameter set (parameters for one row) in one call. So libpq sends 1000 times one single row to the server where JDBC sends 1 time 1000 rows, which is much more efficient. I assume that the wire protocol of PostgreSQL allows to transmit multiple rows at once, but libpq doesn't have an interface to access it. Is that right?  Von: Divakar Singh [mailto:[email protected]] Gesendet: Donnerstag, 16. Dezember 2010 09:11An: Werner Scholtes; [email protected]: Re: [PERFORM] performance libpq vs JDBC Can you trying writing libpq program using COPY functions?I hope it will be better than prepared statements. Best Regards,Divakar  From: Werner Scholtes <[email protected]>To: \"[email protected]\" <[email protected]>Sent: Wed, December 15, 2010 8:21:55 PMSubject: [PERFORM] performance libpq vs JDBCI wrote a test program in C++ using libpq. It works as follows (pseudo code): for ( int loop = 0; loop < 1000; ++loop ) {   PQexec(\"BEGIN\");   const char* sql = \"INSERT INTO pg_perf_test (id, text) VALUES($1,$2)\";   PQprepare(m_conn, \"stmtid\",sql,0,NULL);   for ( int i = 0; i < 1000; ++i )       // Set values etc.      
PQexecPrepared(m_conn,…);   }   PQexec(\"DEALLOCATE stmtid\");   PQexec(\"COMMIT\");   } I measured the duration of every loop of the outer for-loop resulting in an average of 450 ms (per 1000 data sets insert) After that, I wrote a test program in Java using JDBC. It works as follows: for ( int loops = 0; loops < 1000; ++i) {   String sql = \"INSERT INTO pq_perf_test (id,text) VALUES (?,?)\";   PreparedStatement stmt = con.prepareStatement(sql);   for (int i = 0; i < 1000; ++i ) {      // Set values etc.      stmt.addBatch();   }   stmt.executeBatch();   con.commit();   stmt.close();} I measured the duration of every loop of the outer for-loop resulting in an average of 100 ms (per 1000 data sets insert) This means that accessing PostgreSQL by JDBC is about 4-5 times faster than using libpq.  Comparable  results have been measured with analog update and delete statements.  I need to enhance the performance of my C++ code. Is there any possibility in libpq to reach the performance of JDBC for INSERT, UPDATE and DELETE statements (I have no chance to use COPY statements)? I didn't find anything comparable to PreparedStatement.executeBatch() in libpq. Best regards,Werner Scholtes", "msg_date": "Thu, 16 Dec 2010 10:41:36 +0100", "msg_from": "Werner Scholtes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance libpq vs JDBC" }, { "msg_contents": "Update and delete are the operations which affect more than 1 row in general.\nThe only thing is that the criteria has to be the same for all rows.\nIf you have different criteria for different rows in case of update or delete, \nyou will have to fire 2 queries.\n\nI mean, if you want to do\n1. delete from xyz where a = 1\nand\n2. delete from xyz where a = 2\nThen you will have to run query 2 times.\n\n Best Regards,\nDivakar\n\n\n\n\n________________________________\nFrom: Werner Scholtes <[email protected]>\nTo: Divakar Singh <[email protected]>; \"[email protected]\" \n<[email protected]>\nSent: Thu, December 16, 2010 3:11:36 PM\nSubject: Re: [PERFORM] performance libpq vs JDBC\n\n\nWhat about update and delete? In case of an update I have all records to be \nupdated and in case of an delete I have all primary key values of records to be \ndeleted. \n\n \nVon:[email protected] \n[mailto:[email protected]] Im Auftrag von Divakar Singh\nGesendet: Donnerstag, 16. Dezember 2010 10:38\nAn: Werner Scholtes; [email protected]\nBetreff: Re: [PERFORM] performance libpq vs JDBC\n \nIf you have all records before issuing Insert, you can do it like: insert into \nxxx values (a,b,c), (d,e,f), ......;\nan example: http://kaiv.wordpress.com/2007/07/19/faster-insert-for-multiple-rows\n \nBest Regards,\nDivakar\n \n \n\n________________________________\n\nFrom:Werner Scholtes <[email protected]>\nTo: Divakar Singh <[email protected]>; \"[email protected]\" \n<[email protected]>\nSent: Thu, December 16, 2010 2:51:53 PM\nSubject: RE: [PERFORM] performance libpq vs JDBC\n\n\n\nUnfortunately I cannot use COPY funtion, since I need the performance of JDBC \nfor update and delete statements in C++ libpq-program as well.\n \nI wonder how JDBC PreparedStatement.addBatch() and \nPreparedStatement.executeBatch() work. They need to have a more efficient \nprotocol to send bulks of parameter sets for one prepared statement as batch in \none network transmission to the server. As far as I could see PQexecPrepared \ndoes not allow to send more than one parameter set (parameters for one row) in \none call. 
So libpq sends 1000 times one single row to the server where JDBC \nsends 1 time 1000 rows, which is much more efficient.\n \nI assume that the wire protocol of PostgreSQL allows to transmit multiple rows \nat once, but libpq doesn't have an interface to access it. Is that right? \n\n \nVon:Divakar Singh [mailto:[email protected]] \nGesendet: Donnerstag, 16. Dezember 2010 09:11\nAn: Werner Scholtes; [email protected]\nBetreff: Re: [PERFORM] performance libpq vs JDBC\n \nCan you trying writing libpq program using COPY functions?\nI hope it will be better than prepared statements.\n \nBest Regards,\nDivakar\n \n \n\n________________________________\n\nFrom:Werner Scholtes <[email protected]>\nTo: \"[email protected]\" <[email protected]>\nSent: Wed, December 15, 2010 8:21:55 PM\nSubject: [PERFORM] performance libpq vs JDBC\n\n\nI wrote a test program in C++ using libpq. It works as follows (pseudo code):\n \nfor( int loop = 0; loop < 1000; ++loop ) {\n PQexec(\"BEGIN\");\n const char* sql = \"INSERT INTO pg_perf_test (id, text) VALUES($1,$2)\";\n PQprepare(m_conn,\"stmtid\",sql,0,NULL);\n for ( int i = 0; i < 1000; ++i ) \n // Set values etc.\n PQexecPrepared(m_conn,…);\n }\n PQexec(\"DEALLOCATE stmtid\");\n PQexec(\"COMMIT\"); \n}\n \nI measured the duration of every loop of the outer for-loop resulting in an \naverage of 450 ms (per 1000 data sets insert)\n \nAfter that, I wrote a test program in Java using JDBC. It works as follows:\n \nfor( intloops = 0; loops < 1000; ++i) {\n String sql = \"INSERT INTO pq_perf_test (id,text) VALUES (?,?)\";\n PreparedStatement stmt = con.prepareStatement(sql);\n for(inti = 0; i < 1000; ++i ) {\n // Set values etc.\n stmt.addBatch();\n }\n stmt.executeBatch();\n con.commit();\n stmt.close();\n}\n \nI measured the duration of every loop of the outer for-loop resulting in an \naverage of 100 ms (per 1000 data sets insert)\n \nThis means that accessing PostgreSQL by JDBC is about 4-5 times faster than \nusing libpq. \n\n \nComparable results have been measured with analog update and delete statements. \n\n \nI need to enhance the performance of my C++ code. Is there any possibility in \nlibpq to reach the performance of JDBC for INSERT, UPDATE and DELETE statements \n(I have no chance to use COPY statements)? I didn't find anything comparable to \nPreparedStatement.executeBatch() in libpq.\n \nBest regards,\nWerner Scholtes\n\n\n \nUpdate and delete are the operations which affect more than 1 row in general.The only thing is that the criteria has to be the same for all rows.If you have different criteria for different rows in case of update or delete, you will have to fire 2 queries.I mean, if you want to do1. delete from xyz where a = 1and2. delete from xyz where a = 2Then you will have to run query 2 times. Best Regards,DivakarFrom: Werner Scholtes <[email protected]>To: Divakar Singh <[email protected]>; \"[email protected]\" <[email protected]>Sent: Thu, December 16, 2010 3:11:36 PMSubject: Re: [PERFORM] performance libpq vs JDBCWhat about update and delete? In case of an update I have all records to be updated and in case of an delete I have all primary key values of records to be deleted.  Von: [email protected] [mailto:[email protected]] Im Auftrag von Divakar SinghGesendet: Donnerstag, 16. 
Dezember 2010 10:38An: Werner Scholtes; [email protected]: Re: [PERFORM] performance libpq vs JDBC  If you have all records before issuing Insert, you can do it like: insert into xxx values (a,b,c), (d,e,f), ......;an example: http://kaiv.wordpress.com/2007/07/19/faster-insert-for-multiple-rows Best Regards,Divakar    From: Werner Scholtes <[email protected]>To: Divakar Singh <[email protected]>; \"[email protected]\" <[email protected]>Sent: Thu, December 16, 2010 2:51:53 PMSubject: RE: [PERFORM] performance libpq vs JDBCUnfortunately I cannot use COPY funtion, since I need the performance of JDBC for update and delete statements in C++ libpq-program as well. I wonder how JDBC  PreparedStatement.addBatch() and PreparedStatement.executeBatch() work. They need to have a more efficient protocol to send bulks of parameter sets for one prepared statement as batch in one network transmission to the server. As far as I could see PQexecPrepared does not allow to send more than one parameter set (parameters for one row) in one call. So libpq sends 1000 times one single row to the server where JDBC sends 1 time 1000 rows, which is much more efficient. I assume that the wire\n protocol of PostgreSQL allows to transmit multiple rows at once, but libpq doesn't have an interface to access it. Is that right?  Von: Divakar Singh [mailto:[email protected]] Gesendet: Donnerstag, 16. Dezember 2010 09:11An: Werner Scholtes; [email protected]: Re: [PERFORM] performance libpq vs JDBC Can you\n trying writing libpq program using COPY functions?I hope it will be better than prepared statements. Best Regards,Divakar  From: Werner Scholtes <[email protected]>To: \"[email protected]\" <[email protected]>Sent: Wed, December 15, 2010 8:21:55 PMSubject: [PERFORM] performance libpq vs\n JDBCI wrote a test program in C++ using libpq. It works as follows (pseudo code): for ( int loop = 0; loop < 1000; ++loop ) {   PQexec(\"BEGIN\");   const char* sql = \"INSERT INTO pg_perf_test (id, text) VALUES($1,$2)\";   PQprepare(m_conn, \"stmtid\",sql,0,NULL);   for ( int i = 0; i < 1000; ++i )       // Set values etc.      PQexecPrepared(m_conn,…);   }   PQexec(\"DEALLOCATE stmtid\");   PQexec(\"COMMIT\");   } I measured the duration of every loop of the outer for-loop resulting in an average of 450 ms (per 1000 data sets insert) After that, I wrote a test program in Java using JDBC. It works as follows: for ( int loops = 0; loops < 1000; ++i) {   String sql = \"INSERT INTO pq_perf_test (id,text) VALUES (?,?)\";   PreparedStatement stmt = con.prepareStatement(sql);   for (int i = 0; i < 1000; ++i ) {      // Set values etc.      stmt.addBatch();   }   stmt.executeBatch();   con.commit();   stmt.close();} I measured the duration of every loop of the outer for-loop resulting in an average of 100 ms (per 1000 data sets insert) This means that accessing PostgreSQL by JDBC is about 4-5 times faster than using libpq.  Comparable  results have been measured with analog update and delete statements.  I need to enhance the performance of my C++ code. Is\n there any possibility in libpq to reach the performance of JDBC for INSERT, UPDATE and DELETE statements (I have no chance to use COPY statements)? I didn't find anything comparable to PreparedStatement.executeBatch() in libpq. 
Best regards,Werner Scholtes", "msg_date": "Thu, 16 Dec 2010 01:48:36 -0800 (PST)", "msg_from": "Divakar Singh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance libpq vs JDBC" }, { "msg_contents": "On 16/12/10 09:21, Werner Scholtes wrote:\n> I assume that the wire protocol of PostgreSQL allows to transmit\n> multiple rows at once, but libpq doesn't have an interface to access it.\n> Is that right?\n\nSounds wrong to me. The libpq client is the default reference \nimplementation of the protocol. If there were large efficiencies that \ncould be copied, they would be.\n\nAnyway - you don't need to assume what's in the protocol. It's \ndocumented here:\n http://www.postgresql.org/docs/9.0/static/protocol.html\n\nI'd stick wireshark or some other network analyser on the two sessions - \nsee exactly what is different.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 16 Dec 2010 12:14:32 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance libpq vs JDBC" }, { "msg_contents": "Thanks a lot for your advice. I found the difference: My Java program sends one huge SQL string containing 1000 INSERT statements separated by ';' (without using prepared statements at all!), whereas my C++ program sends one INSERT statement with parameters to be prepared and after that 1000 times parameters. Now I refactured my C++ program to send also 1000 INSERT statements in one call to PQexec and reached the same performance as my Java program.\r\n\r\nI just wonder why anyone should use prepared statements at all?\r\n\r\n> -----Ursprüngliche Nachricht-----\r\n> Von: [email protected] [mailto:pgsql-performance-\r\n> [email protected]] Im Auftrag von Richard Huxton\r\n> Gesendet: Donnerstag, 16. Dezember 2010 13:15\r\n> An: Werner Scholtes\r\n> Cc: Divakar Singh; [email protected]\r\n> Betreff: Re: [PERFORM] performance libpq vs JDBC\r\n> \r\n> On 16/12/10 09:21, Werner Scholtes wrote:\r\n> > I assume that the wire protocol of PostgreSQL allows to transmit\r\n> > multiple rows at once, but libpq doesn't have an interface to access\r\n> it.\r\n> > Is that right?\r\n> \r\n> Sounds wrong to me. The libpq client is the default reference\r\n> implementation of the protocol. If there were large efficiencies that\r\n> could be copied, they would be.\r\n> \r\n> Anyway - you don't need to assume what's in the protocol. It's\r\n> documented here:\r\n> http://www.postgresql.org/docs/9.0/static/protocol.html\r\n> \r\n> I'd stick wireshark or some other network analyser on the two sessions\r\n> -\r\n> see exactly what is different.\r\n> \r\n> --\r\n> Richard Huxton\r\n> Archonet Ltd\r\n> \r\n> --\r\n> Sent via pgsql-performance mailing list (pgsql-\r\n> [email protected])\r\n> To make changes to your subscription:\r\n> http://www.postgresql.org/mailpref/pgsql-performance\r\n", "msg_date": "Thu, 16 Dec 2010 13:28:51 +0100", "msg_from": "Werner Scholtes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance libpq vs JDBC" }, { "msg_contents": "On 16/12/10 12:28, Werner Scholtes wrote:\n> Thanks a lot for your advice. I found the difference: My Java program\n> sends one huge SQL string containing 1000 INSERT statements separated\n> by ';' (without using prepared statements at all!), whereas my C++\n> program sends one INSERT statement with parameters to be prepared and\n> after that 1000 times parameters. 
Now I refactured my C++ program to\n> send also 1000 INSERT statements in one call to PQexec and reached\n> the same performance as my Java program.\n\nSo - it was the network round-trip overhead. Like Divakar suggested, \nCOPY or VALUES (),(),() would work too.\n\nYou mention multiple updates/deletes too. Perhaps the cleanest and \nfastest method would be to build a TEMP table containing IDs/values \nrequired and join against that for your updates/deletes.\n\n> I just wonder why anyone should use prepared statements at all?\n\nNot everything is a simple INSERT. Preparing saves planning-time on \nrepeated SELECTs. It also provides some SQL injection safety since you \nprovide parameters rather than building a SQL string.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 16 Dec 2010 12:37:50 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance libpq vs JDBC" }, { "msg_contents": "On Thu, Dec 16, 2010 at 7:14 AM, Richard Huxton <[email protected]> wrote:\n> On 16/12/10 09:21, Werner Scholtes wrote:\n>>\n>> I assume that the wire protocol of PostgreSQL allows to transmit\n>> multiple rows at once, but libpq doesn't have an interface to access it.\n>> Is that right?\n>\n> Sounds wrong to me. The libpq client is the default reference implementation\n> of the protocol. If there were large efficiencies that could be copied, they\n> would be.\n>\n> Anyway - you don't need to assume what's in the protocol. It's documented\n> here:\n>  http://www.postgresql.org/docs/9.0/static/protocol.html\n>\n> I'd stick wireshark or some other network analyser on the two sessions - see\n> exactly what is different.\n\nThere is only one explanation for the difference: they are slamming\ndata across the wire without waiting for the result. libpq queries\nare synchronous: you send a query, wait for the result. This means\nfor very simple queries like the above you can become network bound.\n\nIn C/C++ you can work around this using a couple of different methods.\n COPY of course is the fastest, but extremely limiting in what it can\ndo. We developed libpqtypes (I love talking about libpqtypes) to deal\nwith this problem. In the attached example, it stacks data into an\narray in the client, sends it to the server which unnests and inserts\nit. The attached example inserts a million rows in about 11 seconds\non my workstation (client side prepare could knock that down to 8 or\nso).\n\nIf you need to do something fancy, the we typically create a receiving\nfunction on the server in plpgsql which unnests() the result and makes\ndecisions, etc. 
This is extremely powerful and you can compose and\nsend very rich data to/from postgres in a single query.\n\nmerlin\n\n#include \"libpq-fe.h\"\n#include \"libpqtypes.h\"\n\n#define INS_COUNT 1000000\n\nint main()\n{\n int i;\n\n PGconn *conn = PQconnectdb(\"dbname=pg9\");\n PGresult *res;\n if(PQstatus(conn) != CONNECTION_OK)\n {\n printf(\"bad connection\");\n return -1;\n }\n\n PQtypesRegister(conn);\n\n PGregisterType type = {\"ins_test\", NULL, NULL};\n PQregisterComposites(conn, &type, 1);\n\n PGparam *p = PQparamCreate(conn);\n PGarray arr;\n arr.param = PQparamCreate(conn);\n arr.ndims = 0;\n\n PGparam *t = PQparamCreate(conn);\n\n for(i=0; i<INS_COUNT; i++)\n {\n PGint4 a=i;\n PGtext b = \"some_text\";\n PGtimestamp c;\n PGbytea d;\n\n d.len = 8;\n d.data = b;\n\n c.date.isbc = 0;\n c.date.year = 2000;\n c.date.mon = 0;\n c.date.mday = 19;\n c.time.hour = 10;\n c.time.min = 41;\n c.time.sec = 6;\n c.time.usec = 0;\n c.time.gmtoff = -18000;\n\n PQputf(t, \"%int4 %text %timestamptz %bytea\", a, b, &c, &d);\n PQputf(arr.param, \"%ins_test\", t);\n PQparamReset(t);\n }\n\n if(!PQputf(p, \"%ins_test[]\", &arr))\n {\n printf(\"putf failed: %s\\n\", PQgeterror());\n return -1;\n }\n res = PQparamExec(conn, p, \"insert into ins_test select * from\nunnest($1) r(a, b, c, d)\", 1);\n\n if(!res)\n {\n printf(\"got %s\\n\", PQgeterror());\n return -1;\n }\n PQclear(res);\n PQparamClear(p);\n PQfinish(conn);\n}\n", "msg_date": "Thu, 16 Dec 2010 10:09:21 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance libpq vs JDBC" } ]
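A plain-SQL sketch of the two batching patterns this thread converges on (the table and column names follow the pg_perf_test example in the first message; the temp-table variant only illustrates the suggestion above and is not code from the thread):

-- One round trip for many rows: a single multi-row VALUES list
INSERT INTO pg_perf_test (id, text)
VALUES (1, 'row 1'),
       (2, 'row 2'),
       (3, 'row 3');   -- the client builds this list up to the desired batch size

-- Batched UPDATE/DELETE: stage the keys and values once, then join
CREATE TEMP TABLE batch (id integer PRIMARY KEY, text text);
-- fill "batch" with one multi-row INSERT (or COPY), then:
UPDATE pg_perf_test p SET text = b.text FROM batch b WHERE p.id = b.id;
DELETE FROM pg_perf_test p USING batch b WHERE p.id = b.id;

Either way the point made above still applies: the speedup comes from sending the whole batch in one network round trip instead of one synchronous statement per row.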
[ { "msg_contents": "\nIs there a way force the db to re-evaluate its execution plan for a FK \nwithout bouncing the DB?\n\n PostgreSQL 8.1.17\n\nIn our latest release our developers have implemented some new foreign \nkeys but forgot to create indexes on these keys.\n\nThe problem surfaced at one of our client installs where a maintenance \nDELETE query was running for over 24 hrs. We have since then identified \nthe missing indexes and have sent the client a script to create them, \nbut in our testing we could not been able to get postgres to use the new \nindex for the FK cascade delete without bouncing the database.\n\nHere is an example of an added fk but missing index....\n\nALTER TABLE scheduled_job_arg ADD CONSTRAINT sjr_scheduled_job_id_fk\n FOREIGN KEY (scheduled_job_id) REFERENCES scheduled_job (id)\n ON UPDATE CASCADE ON DELETE CASCADE;\n\nThanks in Advance,\nEric\n\n\n", "msg_date": "Thu, 16 Dec 2010 07:12:03 -0500", "msg_from": "Eric Comeau <[email protected]>", "msg_from_op": true, "msg_subject": "How to get FK to use new index without restarting the database" }, { "msg_contents": "Hello,\n> Is there a way force the db to re-evaluate its execution plan for a FK \n> without bouncing the DB?\n> \n> PostgreSQL 8.1.17\n> \n> In our latest release our developers have implemented some new foreign \n> keys but forgot to create indexes on these keys.\n> \n> The problem surfaced at one of our client installs where a maintenance \n> DELETE query was running for over 24 hrs. We have since then identified \n> the missing indexes and have sent the client a script to create them, \n> but in our testing we could not been able to get postgres to use the new \n\n> index for the FK cascade delete without bouncing the database.\nDid you try analyze? May be it will help.\nhttp://www.postgresql.org/docs/9.0/static/sql-analyze.html \n\nRegards,\nJayadevan\n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only for \nthe person to whom it is addressed and may contain confidential and/or \nprivileged material. If you have received this e-mail in error, kindly \ncontact the sender and destroy all copies of the original communication. \nIBS makes no warranty, express or implied, nor guarantees the accuracy, \nadequacy or completeness of the information contained in this email or any \nattachment and is not liable for any errors, defects, omissions, viruses \nor for resultant loss or damage, if any, direct or indirect.\"\n\n\n\n\n\n", "msg_date": "Thu, 16 Dec 2010 18:04:45 +0530", "msg_from": "Jayadevan M <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to get FK to use new index without restarting the\n database" }, { "msg_contents": "On 16/12/10 12:12, Eric Comeau wrote:\n>\n> The problem surfaced at one of our client installs where a maintenance\n> DELETE query was running for over 24 hrs. We have since then identified\n> the missing indexes and have sent the client a script to create them,\n> but in our testing we could not been able to get postgres to use the new\n> index for the FK cascade delete without bouncing the database.\n\nWell, an ongoing DELETE isn't going to see a new index. 
I'd have thought \na new connection should though.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 16 Dec 2010 12:39:39 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to get FK to use new index without restarting the\n database" }, { "msg_contents": "On 10-12-16 07:34 AM, Jayadevan M wrote:\n> Hello,\n>> Is there a way force the db to re-evaluate its execution plan for a FK\n>> without bouncing the DB?\n>>\n>> PostgreSQL 8.1.17\n>>\n>> In our latest release our developers have implemented some new foreign\n>> keys but forgot to create indexes on these keys.\n>>\n>> The problem surfaced at one of our client installs where a maintenance\n>> DELETE query was running for over 24 hrs. We have since then identified\n>> the missing indexes and have sent the client a script to create them,\n>> but in our testing we could not been able to get postgres to use the new\n>\n>> index for the FK cascade delete without bouncing the database.\n> Did you try analyze? May be it will help.\n> http://www.postgresql.org/docs/9.0/static/sql-analyze.html\n\nYes we did. Thanks for the suggestion.\n\n>\n> Regards,\n> Jayadevan\n>\n>\n>\n>\n>\n> DISCLAIMER:\n>\n> \"The information in this e-mail and any attachment is intended only for\n> the person to whom it is addressed and may contain confidential and/or\n> privileged material. If you have received this e-mail in error, kindly\n> contact the sender and destroy all copies of the original communication.\n> IBS makes no warranty, express or implied, nor guarantees the accuracy,\n> adequacy or completeness of the information contained in this email or any\n> attachment and is not liable for any errors, defects, omissions, viruses\n> or for resultant loss or damage, if any, direct or indirect.\"\n>\n>\n>\n>\n>\n>\n\n", "msg_date": "Thu, 16 Dec 2010 07:55:46 -0500", "msg_from": "Eric Comeau <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to get FK to use new index without restarting the\n database" }, { "msg_contents": "Eric Comeau <[email protected]> writes:\n> Is there a way force the db to re-evaluate its execution plan for a FK \n> without bouncing the DB?\n\n> PostgreSQL 8.1.17\n\nYou don't need to bounce the whole DB, but you will need to start fresh\nsessions. We didn't add automatic invalidation of those plans until 8.3.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 16 Dec 2010 11:27:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to get FK to use new index without restarting the database " }, { "msg_contents": "On 10-12-16 11:27 AM, Tom Lane wrote: \n\n\tEric Comeau <[email protected]> <mailto:[email protected]> writes:\n\t> Is there a way force the db to re-evaluate its execution plan for a FK\n\t> without bouncing the DB?\n\t\n\t> PostgreSQL 8.1.17\n\t\n\tYou don't need to bounce the whole DB, but you will need to start fresh\n\tsessions. 
We didn't add automatic invalidation of those plans until 8.3.\n\t\n\t regards, tom lane\n\t\n\n\nWe confirmed that disconnecting and reconnecting resolves the issue.\n\nThanks to all that helped.\n\nEric\n\n\n\n\n\n\n\n\n On 10-12-16 11:27 AM, Tom Lane wrote:\n \n\n\nRe: [PERFORM] How to get FK to use new index without\n restarting the database \n\nEric Comeau <[email protected]> writes:\n > Is there a way force the db to re-evaluate its execution\n plan for a FK\n > without bouncing the DB?\n\n >   PostgreSQL 8.1.17\n\n You don't need to bounce the whole DB, but you will need to\n start fresh\n sessions.  We didn't add automatic invalidation of those plans\n until 8.3.\n\n                         regards, tom lane\n\n\n\n We confirmed that disconnecting and reconnecting resolves the issue.\n\n Thanks to all that helped.\n\n Eric", "msg_date": "Thu, 16 Dec 2010 10:46:26 -0600", "msg_from": "\"Eric Comeau\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to get FK to use new index without restarting the database" }, { "msg_contents": "On 10-12-16 11:27 AM, Tom Lane wrote:\n> Eric Comeau<[email protected]> writes:\n>> Is there a way force the db to re-evaluate its execution plan for a FK\n>> without bouncing the DB?\n>\n>> PostgreSQL 8.1.17\n>\n> You don't need to bounce the whole DB, but you will need to start fresh\n> sessions. We didn't add automatic invalidation of those plans until 8.3.\n>\n> \t\t\tregards, tom lane\n>\n\nWe confirmed that disconnecting and reconnecting resolves the issue.\n\nThanks to all that helped.\n\nI replied to Tom and the list yesterday from my e-mail, but I don't see \nmy reply here, so it must be stuck in the ether somewhere....\n\nEric\n", "msg_date": "Fri, 17 Dec 2010 01:53:56 -0500", "msg_from": "Eric Comeau <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to get FK to use new index without restarting the\n database" } ]
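A sketch of the fix this thread converges on, using the example constraint from the first message; the index name is illustrative.

-- Index the referencing column so the cascaded DELETE/UPDATE fired by the FK
-- can use an index scan instead of scanning scheduled_job_arg once per
-- deleted parent row:
CREATE INDEX scheduled_job_arg_scheduled_job_id_idx
    ON scheduled_job_arg (scheduled_job_id);

ANALYZE scheduled_job_arg;

On 8.1 and 8.2 the cached referential-integrity plans are only rebuilt by new sessions, so existing connections must reconnect before the index is used; from 8.3 onwards the plans are invalidated automatically, as noted above. On 8.2 and later the index can also be built with CREATE INDEX CONCURRENTLY to avoid blocking writers during the build.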
[ { "msg_contents": "\nDear Friends,\n I have a requirement for running more that 15000 queries per second.\nCan you please tell what all are the postgres parameters needs to be changed\nto achieve this. \n Already I have 17GB RAM and dual core processor and this machine is\ndedicated for database operation. \n-- \nView this message in context: http://postgresql.1045698.n5.nabble.com/postgres-performance-tunning-tp3307846p3307846.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n", "msg_date": "Thu, 16 Dec 2010 04:33:07 -0800 (PST)", "msg_from": "selvi88 <[email protected]>", "msg_from_op": true, "msg_subject": "postgres performance tunning" }, { "msg_contents": "> Dear Friends,\n> I have a requirement for running more that 15000 queries per \n> second.\n> Can you please tell what all are the postgres parameters needs to be \n> changed\n> to achieve this.\n> Already I have 17GB RAM and dual core processor and this machine \n> is dedicated for database operation.\n\nThat depends on your queries : for simple things like \"SELECT * FROM table \nWHERE primary_key = constant\", no problem, a desktop dual core will do \nit...\nSo, please provide more details...\n", "msg_date": "Fri, 17 Dec 2010 11:07:27 +0100", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres performance tunning" }, { "msg_contents": "\nMy requirement is more than 15 thousand queries will run,\nIt will be 5000 updates and 5000 insert and rest will be select.\n\nEach query will be executed in each psql client, (let say for 15000 queries\n15000 thousand psql connections will be made).\n\nSince the connections are more for me the performance is low, I have tested\nit with pgbench tool.\n\nConfigurations,\nRAM\t : 17.9GB \nCPU\t : 64-bit 2 cores each 5346 bogomips\n\nPostgres Configurations,\nShared Memory Required (shmmax) : 1720320000 bytes\nWal Buffers : 1024KB\nMaintenance work mem : 1024MB\nEffective Cache Size : 9216MB\nWork Memory : 32MB\nShared Buffer : 1536MB\n-- \nView this message in context: http://postgresql.1045698.n5.nabble.com/postgres-performance-tunning-tp3307846p3309251.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n", "msg_date": "Fri, 17 Dec 2010 02:48:42 -0800 (PST)", "msg_from": "selvi88 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres performance tunning" }, { "msg_contents": "selvi88 wrote:\n> I have a requirement for running more that 15000 queries per second.\n> Can you please tell what all are the postgres parameters needs to be changed\n> to achieve this. \n> Already I have 17GB RAM and dual core processor and this machine is\n> dedicated for database operation. \n> \n\nYou can find a parameter tuning guide at \nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server that may \nhelp you out.\n\nYou are unlikely to hit 15K queries/second with a dual core processor. \nWhen I run really trivial queries using the pgbench program to simulate \ndatabase activity, that normally gives about 7K queries/second/core. My \ndual-core laptop will do 13K/second for example. And real-world queries \ntend to be a bit more intensive than that. I would normally expect that \na quad-core system would be needed to reach 15K even with trivial \nqueries; my quad-core server at home will do 28K queries/second running \npgbench. 
If your individual cores are really fast, you might just make it.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Fri, 17 Dec 2010 08:20:19 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres performance tunning" }, { "msg_contents": "On Thu, Dec 16, 2010 at 14:33, selvi88 <[email protected]> wrote:\n>        I have a requirement for running more that 15000 queries per second.\n> Can you please tell what all are the postgres parameters needs to be changed\n> to achieve this.\n\nYou have not told us anything about what sort of queries they are or\nyou're trying to do. PostgreSQL is not the solution to all database\nproblems. If all you have is a dual-core machine then other software\ncan possibly make better use of the available hardware.\n\nFirst of all, if they're mostly read-only queries, you should use a\ncaching layer (like memcache) in front of PostgreSQL. And you can use\nreplication to spread the load across multiple machines (but you will\nget some latency until the updates fully propagate to slaves).\n\nIf they're write queries, memory databases (like Redis), or disk\ndatabases specifically optimized for writes (like Cassandra) might be\nmore applicable.\n\nAlternatively, if you can tolerate some latency, use message queuing\nmiddleware like RabbitMQ to queue up a larger batch and send updates\nto PostgreSQL in bulk.\n\nAs for optimizing PostgreSQL itself, if you have a high connection\nchurn then you will need connection pooling middleware in front --\nsuch as pgbouncer or pgpool. But avoiding reconnections is a better\nidea. Also, use prepared queries to avoid parsing overheads for every\nquery.\n\nObviously all of these choices involve tradeoffs and caveats, in terms\nof safety, consistency, latency and application complexity.\n\nRegards,\nMarti\n", "msg_date": "Fri, 17 Dec 2010 16:01:50 +0200", "msg_from": "Marti Raudsepp <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres performance tunning" }, { "msg_contents": "On Thu, Dec 16, 2010 at 7:33 AM, selvi88 <[email protected]> wrote:\n>\n> Dear Friends,\n>        I have a requirement for running more that 15000 queries per second.\n> Can you please tell what all are the postgres parameters needs to be changed\n> to achieve this.\n>        Already I have 17GB RAM and dual core processor and this machine is\n> dedicated for database operation.\n\n15k tps is doable on cheap hardware if they are read only, and\ntrivial. if you are writing, you are going to need some fancy\nstorage. each disk drive can do about 100-300 tps depending on the\nspeed of the drive and other factors (you can enhance this\nsignificantly by relaxing sync requirements). a single dual core is\nnot going to cut it though -- you should bank on 4 cores at least.\n\nplease describe the problem you are trying to solve in more detail.\n15k tps can be trivially done, or could require a massive engineering\neffort. 
it really depends on what you are trying to do.\n\nmerlin\n", "msg_date": "Fri, 17 Dec 2010 17:13:07 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres performance tunning" }, { "msg_contents": "\n\nThanks for ur suggestion, already I have gone through that url, with that\nhelp I was able to make my configuration to work for 5K queries/second.\nThe parameters I changed was shared_buffer, work_mem, maintenance_work_mem\nand effective_cache.\nStill I was not able to reach my target.\n\nCan u kindly tell me ur postgres configurations thereby I can get some idea\nout of it.\n\n\n-- \nView this message in context: http://postgresql.1045698.n5.nabble.com/postgres-performance-tunning-tp3307846p3310337.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n", "msg_date": "Sat, 18 Dec 2010 01:34:42 -0800 (PST)", "msg_from": "selvi88 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres performance tunning" }, { "msg_contents": "On Fri, Dec 17, 2010 at 07:48, selvi88 <[email protected]> wrote:\n\n>\n> My requirement is more than 15 thousand queries will run,\n> It will be 5000 updates and 5000 insert and rest will be select.\n>\n>\nWhat IO system are you running Postgres on? With that kind of writes you\nshould be really focusing on your storage solution.\n\n\n> Each query will be executed in each psql client, (let say for 15000 queries\n> 15000 thousand psql connections will be made).\n>\n>\nYou will benefit from a connection pooler. Try fiddling with\nmaximum_connections till you hit a sweet spot. Probably you should start\nwith 20 connections and go up till you see your tps decrease.\n\nStill, without deeply looking into your storage I wonder if you'll ever\nreach your TPS objective.\n\nOn Fri, Dec 17, 2010 at 07:48, selvi88 <[email protected]> wrote:\n\nMy requirement is more than 15 thousand queries will run,\nIt will be 5000 updates and 5000 insert and rest will be select.\nWhat IO system are you running Postgres on? With that kind of writes you should be really focusing on your storage solution.  \n\nEach query will be executed in each psql client, (let say for 15000 queries\n15000 thousand psql connections will be made).\nYou will benefit from a connection pooler. Try fiddling with maximum_connections till you hit a sweet spot. Probably you should start with 20 connections and go up till you see your tps decrease.\nStill, without deeply looking into your storage I wonder if you'll ever reach your TPS objective.", "msg_date": "Mon, 20 Dec 2010 12:50:13 -0300", "msg_from": "Fernando Hevia <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres performance tunning" }, { "msg_contents": "On Sat, Dec 18, 2010 at 2:34 AM, selvi88 <[email protected]> wrote:\n>\n>\n> Thanks for ur suggestion, already I have gone through that url, with that\n> help I was able to make my configuration to work for 5K queries/second.\n> The parameters I changed was shared_buffer, work_mem, maintenance_work_mem\n> and effective_cache.\n> Still I was not able to reach my target.\n>\n> Can u kindly tell me ur postgres configurations thereby I can get some idea\n> out of it.\n\nI already posted this on stack overflow, but you'll need more machine\nthan you have to do that. 
Specifically you'll need either to go to\nSSD hard drives (which have some issues with power loss data\ncorruption unless you spend the bucks on them with a super capacitor\nto make sure their write caches can flush on power loss) or a half\ndozen to a dozen or so spinning 15k drives with a battery backed\ncontroller.\n\nI can sustain about 5,000 transactions per second on a machine with 8\ncores (2 years old) and 14 15k seagate hard drives.\n", "msg_date": "Mon, 20 Dec 2010 10:26:56 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres performance tunning" }, { "msg_contents": "Scott Marlowe wrote:\n> I can sustain about 5,000 transactions per second on a machine with 8\n> cores (2 years old) and 14 15k seagate hard drives.\n> \n\nRight. You can hit 2 to 3000/second with a relatively inexpensive \nsystem, so long as you have a battery-backed RAID controller and a few \nhard drives. Doing 5K writes/second is going to take a giant pile of \nhard drive or SSDs to pull off. There is no possible way to meet the \nperformance objectives here without a lot more cores in the server and \nsome pretty beefy storage too.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Mon, 20 Dec 2010 12:49:16 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres performance tunning" }, { "msg_contents": "On Mon, Dec 20, 2010 at 10:49 AM, Greg Smith <[email protected]> wrote:\n> Scott Marlowe wrote:\n>>\n>> I can sustain about 5,000 transactions per second on a machine with 8\n>> cores (2 years old) and 14 15k seagate hard drives.\n>>\n>\n> Right.  You can hit 2 to 3000/second with a relatively inexpensive system,\n> so long as you have a battery-backed RAID controller and a few hard drives.\n>  Doing 5K writes/second is going to take a giant pile of hard drive or SSDs\n> to pull off.  There is no possible way to meet the performance objectives\n> here without a lot more cores in the server and some pretty beefy storage\n> too.\n\nAnd it gets expensive fast as you need more and more tps capability.\nThose machines listed up there were $10k two years ago. Their\nreplacements are $25k machines with 48 cores, 128G RAM and 34 15k hard\ndrives, and they get about 8k tps. Note that due to the nature of\nthese machines' jobs they are NOT tuned heavily towards tps in real\nlife, but more for handling a bunch of little reads and few big writes\nand reads simultaneously. The new machines are much more than 30 or\n40% faster in real world testing, for our workload they're about 10x\nas fast, since we were CPU bound before with 8 cores.\n", "msg_date": "Mon, 20 Dec 2010 11:19:59 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres performance tunning" }, { "msg_contents": "On Mon, Dec 20, 2010 at 12:49 PM, Greg Smith <[email protected]> wrote:\n> Scott Marlowe wrote:\n>> I can sustain about 5,000 transactions per second on a machine with 8\n>> cores (2 years old) and 14 15k seagate hard drives.\n>\n> Right.  You can hit 2 to 3000/second with a relatively inexpensive system,\n> so long as you have a battery-backed RAID controller and a few hard drives.\n>  Doing 5K writes/second is going to take a giant pile of hard drive or SSDs\n> to pull off.  
There is no possible way to meet the performance objectives\n> here without a lot more cores in the server and some pretty beefy storage\n> too.\n\nIs this with synchronous_commit on, or off?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Thu, 6 Jan 2011 16:31:54 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres performance tunning" }, { "msg_contents": "On Thu, Jan 6, 2011 at 2:31 PM, Robert Haas <[email protected]> wrote:\n> On Mon, Dec 20, 2010 at 12:49 PM, Greg Smith <[email protected]> wrote:\n>> Scott Marlowe wrote:\n>>> I can sustain about 5,000 transactions per second on a machine with 8\n>>> cores (2 years old) and 14 15k seagate hard drives.\n>>\n>> Right.  You can hit 2 to 3000/second with a relatively inexpensive system,\n>> so long as you have a battery-backed RAID controller and a few hard drives.\n>>  Doing 5K writes/second is going to take a giant pile of hard drive or SSDs\n>> to pull off.  There is no possible way to meet the performance objectives\n>> here without a lot more cores in the server and some pretty beefy storage\n>> too.\n>\n> Is this with synchronous_commit on, or off?\n\nOff. It doesn't seem to make a lot of difference one you're running\non a good battery backed caching RAID controller.\n", "msg_date": "Thu, 6 Jan 2011 14:41:32 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres performance tunning" }, { "msg_contents": "On Thu, Jan 6, 2011 at 2:41 PM, Scott Marlowe <[email protected]> wrote:\n> On Thu, Jan 6, 2011 at 2:31 PM, Robert Haas <[email protected]> wrote:\n>> On Mon, Dec 20, 2010 at 12:49 PM, Greg Smith <[email protected]> wrote:\n>>> Scott Marlowe wrote:\n>>>> I can sustain about 5,000 transactions per second on a machine with 8\n>>>> cores (2 years old) and 14 15k seagate hard drives.\n>>>\n>>> Right.  You can hit 2 to 3000/second with a relatively inexpensive system,\n>>> so long as you have a battery-backed RAID controller and a few hard drives.\n>>>  Doing 5K writes/second is going to take a giant pile of hard drive or SSDs\n>>> to pull off.  There is no possible way to meet the performance objectives\n>>> here without a lot more cores in the server and some pretty beefy storage\n>>> too.\n>>\n>> Is this with synchronous_commit on, or off?\n>\n> Off.  It doesn't seem to make a lot of difference one you're running\n> on a good battery backed caching RAID controller.\n\nSorry, that's ON not OFF. Turning it off doesn't seem to ...\n", "msg_date": "Thu, 6 Jan 2011 14:41:55 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres performance tunning" } ]
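One knob from this thread that is easy to show in SQL: synchronous_commit can be relaxed per transaction rather than server-wide, so only the traffic that can tolerate losing the last few commits after a crash skips the wait for the WAL flush; unlike fsync=off this does not risk corrupting the database. The table name below is purely illustrative.

BEGIN;
SET LOCAL synchronous_commit = off;   -- applies to this transaction only
INSERT INTO sensor_sample (device_id, reading) VALUES (42, 17.3);
COMMIT;                               -- returns without waiting for the WAL fsync

Settings such as shared_buffers and wal_buffers discussed above live in postgresql.conf and require a server restart to change; they cannot be adjusted this way.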
[ { "msg_contents": "Hello Daniel,\nWe have the same scenario for the native Java arrays, so we are storing bytea and doing conversion at the client side, but for the server side SQL, plJava comes very handy:\n\nNo sure how you want to create stored procedures to convert internally but this is how we do this:\n\nOne has to define conversion routines in Java then deploy them to plJava. Scanning though this field would be still CPU bound, around 2x slower than with native arrays and 6x slower than with blobs, but at least one has this ability. It's even possible to pass them to plR to do some statistical processing directly, so depending on the operations you do it may be still cheaper then streaming out over the wire to the regular JDBC client.\n\n1. deploy class like this within plJava (null handling left out for brevity)\n\nimport java.io.File;\nimport java.io.IOException;\nimport java.io.ObjectInputStream;\nimport java.io.ObjectOutputStream;\n\npublic class myArrayConversion \n{\n \n public myArrayConversion() {}\n\n /** Serialize double array to blob */\n public static byte[] convertDoubleArrayToBytea(double[] obj) throws IOException {\n ByteArrayOutputStream baos = new ByteArrayOutputStream();\n ObjectOutputStream oos = new ObjectOutputStream(baos);\n oos.writeObject(obj); \n return baos.toByteArray();\n }\n /** Serialize int array to blob */\n public static byte[] convertIntToBytea(int[] obj) throws IOException {\n ByteArrayOutputStream baos = new ByteArrayOutputStream();\n ObjectOutputStream oos = new ObjectOutputStream(baos);\n oos.writeObject(obj); \n return baos.toByteArray();\n }\n \n /** Deserialize blob to double array */\n public static double[] convertToDoubleArray(byte[] obj) throws IOException,\n ClassNotFoundException {\n // Deserialize from a byte array\n ObjectInputStream ios = new ObjectInputStream(new ByteArrayInputStream(obj));\n return (double[])ios.readObject();\n }\n\n /** Deserialize blob to it array */\n public static int[] convertIntToArray(byte[] obj) throws IOException,\n ClassNotFoundException {\n // Deserialize from a byte array\n ObjectInputStream ios = new ObjectInputStream(new ByteArrayInputStream(obj));\n return (int[])ios.readObject();\n }\n\n\n// other types arrays streaming...\n//...\n}\n\n2. then create a mapping functions as a db owner:\n\n<sql>\nCREATE OR REPLACE FUNCTION public.convertDoubleArrayToBytea(double precision[])\n RETURNS bytea AS\n\t'mappingPkg.convertDoubleArrayToBytea(double[])'\n LANGUAGE 'javau' IMMUTABLE\n COST 50;\n\nGRANT EXECUTE ON FUNCTION public.convertDoubleArrayToBytea(double precision[]) TO public;\n\n\n\nCREATE OR REPLACE FUNCTION public.convertToDoubleArray(bytea)\n RETURNS double precision[] AS\n\t'mappingPkg.convertToDoubleArray(byte[])'\n LANGUAGE 'javau' IMMUTABLE\n COST 50;\n\nGRANT EXECUTE ON FUNCTION public.convertToDoubleArray(bytea) TO public;\n</sql>\n\n\nthen you can have conversion either way:\n\nselect convertToDoubleArray(convertDoubleArrayToBytea(array[i::float8,1.1,100.1,i*0.1]::float8[])) from generate_series(1,100) i;\n\nso you'd be also able to create bytea objects from native SQL arrays within SQL.\n\nPLJava seems to be enjoying revival last days thanks to Johann 'Myrkraverk' Oskarsson who fixed several long-standing bugs. 
Check out the plJava list for details.\n\n\n Krzysztof\n \n\n\nOn Dec 16, 2010, at 10:22 AM, [email protected] wrote:\n\n> From: Dan Schaffer <[email protected]>\n> Date: December 15, 2010 9:15:14 PM GMT+01:00\n> To: Andy Colson <[email protected]>\n> Cc: Jim Nasby <[email protected]>, [email protected], Nick Matheson <[email protected]>\n> Subject: Re: Help with bulk read performance\n> Reply-To: [email protected]\n> \n> \n> Hi,\n> My name is Dan and I'm a co-worker of Nick Matheson who initially submitted this question (because the mail group had me blacklisted for awhile for some reason).\n> \n> \n> Thank you for all of the suggestions. We were able to improve out bulk read performance from 3 MB/s to 60 MB/s (assuming the data are NOT in cache in both cases) by doing the following:\n> \n> 1. Storing the data in a \"bytea\" column instead of an \"array\" column.\n> 2. Retrieving the data via the Postgres 9 CopyManager#copyOut(String sql, OutputStream stream) method\n> \n> The key to the dramatic improvement appears to be the reduction in packing and unpacking time on the server and client, respectively. The server packing occurs when the retrieved data are packed into a bytestream for sending across the network. Storing the data as a simple byte array reduces this time substantially. The client-side unpacking time is spent generating a ResultSet object. By unpacking the bytestream into the desired arrays of floats by hand instead, this time became close to negligible.\n> \n> The only downside of storing the data in byte arrays is the loss of transparency. That is, a simple \"select *\" of a few rows shows bytes instead of floats. We hope to mitigate this by writing a simple stored procedures that unpacks the bytes into floats.\n> \n> A couple of other results:\n> \n> If the data are stored as a byte array but retrieve into a ResultSet, the unpacking time goes up by an order of magnitude and the observed total throughput is 25 MB/s. If the data are stored in a Postgres float array and unpacked into a byte stream, the observed throughput is 20 MB/s.\n> \n> Dan (and Nick)\n\n", "msg_date": "Thu, 16 Dec 2010 14:39:11 +0100", "msg_from": "Krzysztof Nienartowicz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help with bulk read performance" } ]
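A rough SQL-side sketch of the layout and export path described above for the 60 MB/s case: samples packed into a single bytea column and streamed out with binary COPY, which is what JDBC's CopyManager#copyOut wraps. Table and column names are illustrative, and the packing/unpacking itself happens client-side (or via plJava helpers like the ones posted above).

-- One packed blob per row instead of a float8[] column:
CREATE TABLE sample_blob (
    id      bigint PRIMARY KEY,
    payload bytea           -- packed doubles
);

-- Bulk export without building a ResultSet; binary format also skips the
-- text encode/decode step on both ends:
COPY (SELECT id, payload FROM sample_blob) TO STDOUT WITH BINARY;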
[ { "msg_contents": "Hi all,\n\nI have a table that in the typical case holds two minute sample data for a few thousand sources. Often we need to report on these data for a particular source over a particular time period and we're finding this query tends to get a bit slow.\n\nThe structure of the table:\n\n Table \"public.sample\"\n Column | Type | Modifiers \n-------------------+--------------------------+-------------------------------------------------\n client | integer | not null\n aggregateid | bigint | not null\n sample | bigint | not null default nextval('samplekey'::regclass)\n customer | integer | \n period | integer | not null\n starttime | integer | not null\n duration | integer | not null\n ip | text | \n tariff | integer | \n bytessentrate | bigint | \n bytessent | bigint | \n bytesreceived | bigint | \n packets | integer | not null\n queuetype | integer | not null default 0\n collection | integer | \n bytesreceivedrate | bigint | \n greatestrate | bigint | \n invalidated | timestamp with time zone | \nIndexes:\n \"sample_pkey\" PRIMARY KEY, btree (sample)\n \"sample_collection_starttime_idx\" btree (collection, starttime)\n \"sample_customer_starttime_idx\" btree (customer, starttime)\n \"sample_sample_idx\" btree (client, sample)\nForeign-key constraints:\n \"sample_client_fkey\" FOREIGN KEY (client) REFERENCES client(client)\n\n\nfc=# explain analyse select collection, period, tariff, sum(bytesSent), sum(bytesReceived), sum(packets), max(sample), (starttime / 3600) * 3600 as startchunk from sample_20101001 where starttime between 1287493200 and 1290171599 and collection=128 and ip = '10.9.125.207' group by startchunk, tariff, collection, period; QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=34959.01..34959.03 rows=1 width=44) (actual time=67047.850..67047.850 rows=0 loops=1)\n -> Bitmap Heap Scan on sample_20101001 (cost=130.56..34958.91 rows=5 width=44) (actual time=67047.847..67047.847 rows=0 loops=1)\n Recheck Cond: ((collection = 128) AND (starttime >= 1287493200) AND (starttime <= 1290171599))\n Filter: (ip = '10.9.125.207'::text)\n -> Bitmap Index Scan on sample_20101001_collection_starttime_idx (cost=0.00..130.56 rows=9596 width=0) (actual time=9806.115..9806.115 rows=6830 loops=1)\n Index Cond: ((collection = 128) AND (starttime >= 1287493200) AND (starttime <= 1290171599))\n Total runtime: 67048.201 ms\n(7 rows)\n\n\nI figure at most there should only be ~20,000 rows to be read from disk, and I expect that the index is doing a pretty good job of making sure only the rows that need reading are read. inclusion of the ip in the query is almost redundant as most of the time an ip has its own collection.... My suspicion is that the rows that we're interested in are very sparsely distributed on disk, so we're having to read too many pages for the query...\n\nAll of the queries on this table are reporting on a single collection, so ideally a collection's data would all be stored in the same part of the disk... or at least clumped together. This can be achieved using \"cluster\", however as far as I know there's no automated, non-cronesque means of clustering and having the table become unusable during the cluster is not ideal. \n\nI've considered partitioning, but I don't think that's going to give the effect I need. 
Apparently clustering is only going to scale to a few dozen child tables, so that's only going to give one order of magnitude performance for significant complexity.\n\nAre there any other options?\n\nCheers!\n\n--Royce\n\n\nHi all,I have a table that in the typical case holds two minute sample data for a few thousand sources.  Often we need to report on these data for a particular source over a particular time period and we're finding this query tends to get a bit slow.The structure of the table:                                     Table \"public.sample\"      Column       |           Type           |                    Modifiers                    -------------------+--------------------------+------------------------------------------------- client            | integer                  | not null aggregateid       | bigint                   | not null sample            | bigint                   | not null default nextval('samplekey'::regclass) customer          | integer                  |  period            | integer                  | not null starttime         | integer                  | not null duration          | integer                  | not null ip                | text                     |  tariff            | integer                  |  bytessentrate     | bigint                   |  bytessent         | bigint                   |  bytesreceived     | bigint                   |  packets           | integer                  | not null queuetype         | integer                  | not null default 0 collection        | integer                  |  bytesreceivedrate | bigint                   |  greatestrate      | bigint                   |  invalidated       | timestamp with time zone | Indexes:    \"sample_pkey\" PRIMARY KEY, btree (sample)    \"sample_collection_starttime_idx\" btree (collection, starttime)    \"sample_customer_starttime_idx\" btree (customer, starttime)    \"sample_sample_idx\" btree (client, sample)Foreign-key constraints:    \"sample_client_fkey\" FOREIGN KEY (client) REFERENCES client(client)fc=# explain  analyse select collection, period, tariff, sum(bytesSent), sum(bytesReceived), sum(packets), max(sample), (starttime / 3600) * 3600 as startchunk from sample_20101001 where starttime between 1287493200 and 1290171599  and collection=128    and ip = '10.9.125.207' group by startchunk, tariff, collection, period;                                                                             QUERY PLAN                                                                              --------------------------------------------------------------------------------------------------------------------------------------------------------------------- HashAggregate  (cost=34959.01..34959.03 rows=1 width=44) (actual time=67047.850..67047.850 rows=0 loops=1)   ->  Bitmap Heap Scan on sample_20101001  (cost=130.56..34958.91 rows=5 width=44) (actual time=67047.847..67047.847 rows=0 loops=1)         Recheck Cond: ((collection = 128) AND (starttime >= 1287493200) AND (starttime <= 1290171599))         Filter: (ip = '10.9.125.207'::text)         ->  Bitmap Index Scan on sample_20101001_collection_starttime_idx  (cost=0.00..130.56 rows=9596 width=0) (actual time=9806.115..9806.115 rows=6830 loops=1)               Index Cond: ((collection = 128) AND (starttime >= 1287493200) AND (starttime <= 1290171599)) Total runtime: 67048.201 ms(7 rows)I figure at most there should only be ~20,000 rows to be read from disk, and I expect that the index is doing a pretty good job of making 
sure only the rows that need reading are read. inclusion of the ip in the query is almost redundant as most of the time an ip has its own collection....  My suspicion is that the rows that we're interested in are very sparsely distributed on disk, so we're having to read too many pages for the query...All of the queries on this table are reporting on a single collection, so ideally a collection's data would all be stored in the same part of the disk... or at least clumped together.  This can be achieved using \"cluster\", however as far as I know there's no automated, non-cronesque means of clustering and having the table become unusable during the cluster is not ideal.  I've considered partitioning, but I don't think that's going to give the effect I need.  Apparently clustering is only going to scale to a few dozen child tables, so that's only going to give one order of magnitude performance for significant complexity.Are there any other options?Cheers!--Royce", "msg_date": "Fri, 17 Dec 2010 10:49:02 +1100", "msg_from": "Royce Ausburn <[email protected]>", "msg_from_op": true, "msg_subject": "Auto-clustering?" }, { "msg_contents": "2010/12/17 Royce Ausburn <[email protected]>\n\n> Hi all,\n>\n> I have a table that in the typical case holds two minute sample data for a\n> few thousand sources. Often we need to report on these data for a\n> particular source over a particular time period and we're finding this query\n> tends to get a bit slow.\n>\n> The structure of the table:\n>\n> Table \"public.sample\"\n> Column | Type |\n> Modifiers\n>\n> -------------------+--------------------------+-------------------------------------------------\n> client | integer | not null\n> aggregateid | bigint | not null\n> sample | bigint | not null default\n> nextval('samplekey'::regclass)\n> customer | integer |\n> period | integer | not null\n> starttime | integer | not null\n> duration | integer | not null\n> ip | text |\n> tariff | integer |\n> bytessentrate | bigint |\n> bytessent | bigint |\n> bytesreceived | bigint |\n> packets | integer | not null\n> queuetype | integer | not null default 0\n> collection | integer |\n> bytesreceivedrate | bigint |\n> greatestrate | bigint |\n> invalidated | timestamp with time zone |\n> Indexes:\n> \"sample_pkey\" PRIMARY KEY, btree (sample)\n> \"sample_collection_starttime_idx\" btree (collection, starttime)\n> \"sample_customer_starttime_idx\" btree (customer, starttime)\n> \"sample_sample_idx\" btree (client, sample)\n> Foreign-key constraints:\n> \"sample_client_fkey\" FOREIGN KEY (client) REFERENCES client(client)\n>\n>\n> fc=# explain analyse select collection, period, tariff, sum(bytesSent),\n> sum(bytesReceived), sum(packets), max(sample), (starttime / 3600) * 3600 as\n> startchunk from sample_20101001 where starttime between 1287493200 and\n> 1290171599 and collection=128 and ip = '10.9.125.207' group by\n> startchunk, tariff, collection, period;\n> QUERY PLAN\n>\n>\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> HashAggregate (cost=34959.01..34959.03 rows=1 width=44) (actual\n> time=67047.850..67047.850 rows=0 loops=1)\n> -> Bitmap Heap Scan on sample_20101001 (cost=130.56..34958.91 rows=5\n> width=44) (actual time=67047.847..67047.847 rows=0 loops=1)\n> Recheck Cond: ((collection = 128) AND (starttime >= 1287493200)\n> AND (starttime <= 1290171599))\n> Filter: (ip = '10.9.125.207'::text)\n> -> Bitmap Index Scan on 
sample_20101001_collection_starttime_idx\n> (cost=0.00..130.56 rows=9596 width=0) (actual time=9806.115..9806.115\n> rows=6830 loops=1)\n> Index Cond: ((collection = 128) AND (starttime >=\n> 1287493200) AND (starttime <= 1290171599))\n> Total runtime: 67048.201 ms\n> (7 rows)\n>\n>\nhow about (auto)vacuuming?\n\n\n>\n> I figure at most there should only be ~20,000 rows to be read from disk,\n> and I expect that the index is doing a pretty good job of making sure only\n> the rows that need reading are read. inclusion of the ip in the query is\n> almost redundant as most of the time an ip has its own collection.... My\n> suspicion is that the rows that we're interested in are very sparsely\n> distributed on disk, so we're having to read too many pages for the query...\n>\n\n\nyou can test this suspicion in very simple way:\n- create test table (like yours including indexes including constraints, but\nwith no data)\n- insert into test select * from yours order by\n- analyze test tablee available\n- test the query on the new table\n\nIf new query is much faster, and if you have intensive random UPD/DEL/INS\nactivity, periodic CLUSTER could be a good idea...\nbut it depends on actual usage patterns (SELECT/modify ratio, types of\nupdates, and so on).\n\n\n\n>\n> All of the queries on this table are reporting on a single collection, so\n> ideally a collection's data would all be stored in the same part of the\n> disk... or at least clumped together. This can be achieved using \"cluster\",\n> however as far as I know there's no automated, non-cronesque means of\n> clustering and having the table become unusable during the cluster is not\n> ideal.\n>\n\ncron is a way of automation, isn't it :-)\n\n\n\n>\n>\n> I've considered partitioning, but I don't think that's going to give the\n> effect I need. Apparently clustering is only going to scale to a few dozen\n> child tables, so that's only going to give one order of magnitude\n> performance for significant complexity.\n>\n>\n\n\nregarding partitioning: I guess it starts to make sense around 10M rows or\n10G Bytes in one table.\n\nregarding clustering: it does not help with index bloat.\n\nand finally, you did not specify what PostgreSQL version are you using.\n\n\ncheers,\nFilip\n\n2010/12/17 Royce Ausburn <[email protected]>\nHi all,I have a table that in the typical case holds two minute sample data for a few thousand sources.  
Often we need to report on these data for a particular source over a particular time period and we're finding this query tends to get a bit slow.\nThe structure of the table:                                     Table \"public.sample\"      Column       |           Type           |                    Modifiers                    \n-------------------+--------------------------+------------------------------------------------- client            | integer                  | not null aggregateid       | bigint                   | not null sample            | bigint                   | not null default nextval('samplekey'::regclass)\n customer          | integer                  |  period            | integer                  | not null starttime         | integer                  | not null duration          | integer                  | not null\n ip                | text                     |  tariff            | integer                  |  bytessentrate     | bigint                   |  bytessent         | bigint                   |  bytesreceived     | bigint                   | \n packets           | integer                  | not null queuetype         | integer                  | not null default 0 collection        | integer                  |  bytesreceivedrate | bigint                   | \n greatestrate      | bigint                   |  invalidated       | timestamp with time zone | Indexes:    \"sample_pkey\" PRIMARY KEY, btree (sample)    \"sample_collection_starttime_idx\" btree (collection, starttime)\n    \"sample_customer_starttime_idx\" btree (customer, starttime)    \"sample_sample_idx\" btree (client, sample)Foreign-key constraints:    \"sample_client_fkey\" FOREIGN KEY (client) REFERENCES client(client)\nfc=# explain  analyse select collection, period, tariff, sum(bytesSent), sum(bytesReceived), sum(packets), max(sample), (starttime / 3600) * 3600 as startchunk from sample_20101001 where starttime between 1287493200 and 1290171599  and collection=128    and ip = '10.9.125.207' group by startchunk, tariff, collection, period;                                                                             QUERY PLAN                                                                              \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------- HashAggregate  (cost=34959.01..34959.03 rows=1 width=44) (actual time=67047.850..67047.850 rows=0 loops=1)\n   ->  Bitmap Heap Scan on sample_20101001  (cost=130.56..34958.91 rows=5 width=44) (actual time=67047.847..67047.847 rows=0 loops=1)         Recheck Cond: ((collection = 128) AND (starttime >= 1287493200) AND (starttime <= 1290171599))\n         Filter: (ip = '10.9.125.207'::text)         ->  Bitmap Index Scan on sample_20101001_collection_starttime_idx  (cost=0.00..130.56 rows=9596 width=0) (actual time=9806.115..9806.115 rows=6830 loops=1)\n               Index Cond: ((collection = 128) AND (starttime >= 1287493200) AND (starttime <= 1290171599)) Total runtime: 67048.201 ms(7 rows)\nhow about (auto)vacuuming? \nI figure at most there should only be ~20,000 rows to be read from disk, and I expect that the index is doing a pretty good job of making sure only the rows that need reading are read. inclusion of the ip in the query is almost redundant as most of the time an ip has its own collection....  
My suspicion is that the rows that we're interested in are very sparsely distributed on disk, so we're having to read too many pages for the query...\nyou can test this suspicion in very simple way:- create test table (like yours including indexes including constraints, but with no data)- insert into test select * from yours order by \n- analyze test tablee available - test the query on the new tableIf new query is much faster, and if you have intensive random UPD/DEL/INS activity, periodic CLUSTER could be a good idea... but it depends on actual usage patterns (SELECT/modify ratio, types of updates, and so on).\n \nAll of the queries on this table are reporting on a single collection, so ideally a collection's data would all be stored in the same part of the disk... or at least clumped together.  This can be achieved using \"cluster\", however as far as I know there's no automated, non-cronesque means of clustering and having the table become unusable during the cluster is not ideal.\ncron is a way of automation, isn't it :-) \n  I've considered partitioning, but I don't think that's going to give the effect I need.  Apparently clustering is only going to scale to a few dozen child tables, so that's only going to give one order of magnitude performance for significant complexity.\n regarding partitioning: I guess it starts to make sense around 10M rows or 10G Bytes in one table.regarding clustering: it does not help with index bloat. \nand finally, you did not specify what PostgreSQL version are you using.cheers,Filip", "msg_date": "Fri, 17 Dec 2010 10:27:57 +0100", "msg_from": "=?UTF-8?Q?Filip_Rembia=C5=82kowski?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Auto-clustering?" }, { "msg_contents": "2010/12/17 Filip Rembiałkowski <[email protected]>:\n> regarding clustering: it does not help with index bloat.\n\nI'm almost sure it does, CLUSTER re-creates all indexes from scratch\nafter copying the tuples.\n\nRegards,\nMarti\n", "msg_date": "Fri, 17 Dec 2010 11:41:12 +0200", "msg_from": "Marti Raudsepp <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Auto-clustering?" }, { "msg_contents": "you are right, I must have missed it...\n\n Table \"public.u\"\n Column | Type | Modifiers\n--------+-----------------------------+-----------\n id | integer |\n t | timestamp without time zone |\n d | text |\nIndexes:\n \"u_d\" btree (d)\n \"u_id\" btree (id)\n \"u_t\" btree (t)\n\nfilip@filip=# select oid, relname, pg_Relation_size(oid) from pg_class where\nrelname in('u','u_id','u_t','u_d');\n oid | relname | pg_relation_size\n-------+---------+------------------\n 64283 | u | 15187968\n 64289 | u_id | 6758400\n 64290 | u_t | 6086656\n 64291 | u_d | 16482304\n\nfilip@filip=# CLUSTER u USING u_t;\nCLUSTER\nfilip@filip=# select oid, relname, pg_Relation_size(oid) from pg_class where\nrelname in('u','u_id','u_t','u_d');\n oid | relname | pg_relation_size\n-------+---------+------------------\n 64283 | u | 12115968\n 64289 | u_id | 3391488\n 64290 | u_t | 3391488\n 64291 | u_d | 8216576\n(4 rows)\n\n\nSo CLUSTER is effectively CLUSTER + REINDEX... nice.\n\n\nW dniu 17 grudnia 2010 10:41 użytkownik Marti Raudsepp <[email protected]>napisał:\n\n> 2010/12/17 Filip Rembiałkowski <[email protected]>:\n> > regarding clustering: it does not help with index bloat.\n>\n> I'm almost sure it does, CLUSTER re-creates all indexes from scratch\n> after copying the tuples.\n>\n> Regards,\n> Marti\n>\n\nyou are right, I must have missed it...                 
Table \"public.u\" Column |            Type             | Modifiers --------+-----------------------------+----------- id     | integer                     | \n t      | timestamp without time zone |  d      | text                        | Indexes:    \"u_d\" btree (d)    \"u_id\" btree (id)    \"u_t\" btree (t)filip@filip=# select oid, relname, pg_Relation_size(oid) from pg_class where relname in('u','u_id','u_t','u_d');\n  oid  | relname | pg_relation_size-------+---------+------------------ 64283 | u       |         15187968 64289 | u_id    |          6758400 64290 | u_t     |          6086656 64291 | u_d     |         16482304\nfilip@filip=# CLUSTER u USING u_t;CLUSTERfilip@filip=# select oid, relname, pg_Relation_size(oid) from pg_class where relname in('u','u_id','u_t','u_d');  oid  | relname | pg_relation_size \n-------+---------+------------------ 64283 | u       |         12115968 64289 | u_id    |          3391488 64290 | u_t     |          3391488 64291 | u_d     |          8216576(4 rows)So CLUSTER is effectively CLUSTER + REINDEX... nice.\nW dniu 17 grudnia 2010 10:41 użytkownik Marti Raudsepp <[email protected]> napisał:\n2010/12/17 Filip Rembiałkowski <[email protected]>:\n> regarding clustering: it does not help with index bloat.\n\nI'm almost sure it does, CLUSTER re-creates all indexes from scratch\nafter copying the tuples.\n\nRegards,\nMarti", "msg_date": "Fri, 17 Dec 2010 11:19:36 +0100", "msg_from": "=?UTF-8?Q?Filip_Rembia=C5=82kowski?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Auto-clustering?" }, { "msg_contents": "\n> fc=# explain analyse select collection, period, tariff, sum(bytesSent), \n> sum(bytesReceived), sum(packets), max(sample), (starttime / 3600) * 3600 \n> as startchunk from sample_20101001 where starttime between 1287493200 \n> and 1290171599 and collection=128 and ip = '10.9.125.207' group by \n> startchunk, tariff, collection, \n> period;\n\nIf CLUSTER locks bother you, and you don't do UPDATEs, you might consider \ndoing something like this :\n\n- accumulate the rows in a \"recent\" table\n- every hour, INSERT INTO archive SELECT * FROM recent ORDER BY (your \ncluster fields)\n- DELETE FROM recent the rows you just inserted\n- VACUUM recent\n\nThe cluster in your archive table will not be perfect but at least all \nrows from 1 source in 1 hour will be stored close together. But clustering \ndoesn't need to be perfect either, if you get 100x better locality, that's \nalready good !\n\nNow, if you have a huge amount of data but never query it with a precision \nexceeding 1 hour, you might consider creating an aggregate table where, at \nthe end of every hour, you only store sum(), min(), max() of the data for \nthe last hour's data using GROUP BY the fields you want. You could also \nuse a trigger, but that would generate a huge amount of UPDATEs.\n\nFor the above query you'd do :\n\nINSERT INTO stats_by_hour (columns...) SELECT\ncollection, ip, period, tariff, sum(bytesSent),\nsum(bytesReceived), sum(packets), max(sample), (starttime / 3600) * 3600\nas startchunk from sample_20101001 WHERE starttime > some value\nGROUP BY collection, ip, period, tariff, startchunk\n\nThen you can run aggregates against this much smaller table instead.\n", "msg_date": "Fri, 17 Dec 2010 11:20:01 +0100", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Auto-clustering?" 
}, { "msg_contents": "\nRoyce Ausburn a �crit :\n> All of the queries on this table are reporting on a single collection, so ideally a collection's data would all be stored in the same part of the disk... or at least clumped together. This can be achieved using \"cluster\", however as far as I know there's no automated, non-cronesque means of clustering and having the table become unusable during the cluster is not ideal. \n>\n>\n> \n\nIf the lock level used by CLUSTER is a problem for you, you could \nconsider pg_reorg contrib. AFAIK, it does similar work as CLUSTER but \nallowing a concurrent read and write activity on the table.\n\nRegards. Philippe.\n\n", "msg_date": "Fri, 17 Dec 2010 17:19:14 +0100", "msg_from": "phb07 <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Auto-clustering?" }, { "msg_contents": "On 17/12/2010, at 8:27 PM, Filip Rembiałkowski wrote:\n\n> \n> 2010/12/17 Royce Ausburn <[email protected]>\n> Hi all,\n> \n> I have a table that in the typical case holds two minute sample data for a few thousand sources. Often we need to report on these data for a particular source over a particular time period and we're finding this query tends to get a bit slow.\n> \n> \n> how about (auto)vacuuming?\n\nA key piece of information I left out: we almost never update rows in this table.\n\n> \n> \n> I figure at most there should only be ~20,000 rows to be read from disk, and I expect that the index is doing a pretty good job of making sure only the rows that need reading are read. inclusion of the ip in the query is almost redundant as most of the time an ip has its own collection.... My suspicion is that the rows that we're interested in are very sparsely distributed on disk, so we're having to read too many pages for the query...\n> \n> \n> you can test this suspicion in very simple way:\n> - create test table (like yours including indexes including constraints, but with no data)\n> - insert into test select * from yours order by \n> - analyze test tablee available \n> - test the query on the new table\n> \n> If new query is much faster, and if you have intensive random UPD/DEL/INS activity, periodic CLUSTER could be a good idea... \n> but it depends on actual usage patterns (SELECT/modify ratio, types of updates, and so on).\n\nGood idea! This vastly improves query times.\n> \n> \n> and finally, you did not specify what PostgreSQL version are you using.\n\nIn the case I've been working with it's 8.1 =( But we have a few instances of this database... I believe the rest are a mixture of 8.4s and they all have the same problem.\n\n--Royce\nOn 17/12/2010, at 8:27 PM, Filip Rembiałkowski wrote:2010/12/17 Royce Ausburn <[email protected]>\nHi all,I have a table that in the typical case holds two minute sample data for a few thousand sources.  Often we need to report on these data for a particular source over a particular time period and we're finding this query tends to get a bit slow.\n\nhow about (auto)vacuuming?A key piece of information I left out: we almost never update rows in this table. \nI figure at most there should only be ~20,000 rows to be read from disk, and I expect that the index is doing a pretty good job of making sure only the rows that need reading are read. inclusion of the ip in the query is almost redundant as most of the time an ip has its own collection....  
My suspicion is that the rows that we're interested in are very sparsely distributed on disk, so we're having to read too many pages for the query...\nyou can test this suspicion in very simple way:- create test table (like yours including indexes including constraints, but with no data)- insert into test select * from yours order by \n- analyze test tablee available - test the query on the new tableIf new query is much faster, and if you have intensive random UPD/DEL/INS activity, periodic CLUSTER could be a good idea... but it depends on actual usage patterns (SELECT/modify ratio, types of updates, and so on).Good idea!  This vastly improves query times.\nand finally, you did not specify what PostgreSQL version are you using.In the case I've been working with it's 8.1 =(  But we have a few instances of this database... I believe the rest are a mixture of 8.4s and they all have the same problem.--Royce", "msg_date": "Sun, 19 Dec 2010 09:39:38 +1100", "msg_from": "Royce Ausburn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Auto-clustering?" }, { "msg_contents": "\nOn 17/12/2010, at 9:20 PM, Pierre C wrote:\n\n> \n>> fc=# explain analyse select collection, period, tariff, sum(bytesSent), sum(bytesReceived), sum(packets), max(sample), (starttime / 3600) * 3600 as startchunk from sample_20101001 where starttime between 1287493200 and 1290171599 and collection=128 and ip = '10.9.125.207' group by startchunk, tariff, collection, period;\n> \n> If CLUSTER locks bother you, and you don't do UPDATEs, you might consider doing something like this :\n> \n> - accumulate the rows in a \"recent\" table\n> - every hour, INSERT INTO archive SELECT * FROM recent ORDER BY (your cluster fields)\n> - DELETE FROM recent the rows you just inserted\n> - VACUUM recent\n> \n> The cluster in your archive table will not be perfect but at least all rows from 1 source in 1 hour will be stored close together. But clustering doesn't need to be perfect either, if you get 100x better locality, that's already good !\n\nThat's a really decent idea and can slot in perfectly well with how the application already works! We have existing DBAO code that handles monthly tables; it'll happily pop data in to a recent table.... In fact we can probably tolerate having a \"today\" table. Thanks!\n\n--Royce\n\n\n", "msg_date": "Sun, 19 Dec 2010 14:07:48 +1100", "msg_from": "Royce Ausburn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Auto-clustering?" } ]
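A minimal sketch of the hourly recent-to-archive rotation Pierre C suggests above. The table names (sample_recent, sample_archive) and the cutoff value are placeholders; collection and starttime are the columns from the original table, and the ORDER BY is what provides the approximate clustering for the reporting queries.

BEGIN;
-- Move everything older than the cutoff, written out in the order the
-- reports read it (per collection, then time):
INSERT INTO sample_archive
SELECT * FROM sample_recent
WHERE  starttime < 1290171600          -- placeholder cutoff (epoch seconds)
ORDER  BY collection, starttime;

DELETE FROM sample_recent
WHERE  starttime < 1290171600;
COMMIT;

-- Afterwards, outside the transaction, reclaim space in the small table:
VACUUM sample_recent;

As noted above, the clustering this produces is not perfect, but rows for one collection within an hour end up physically close together, which is usually enough.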
[ { "msg_contents": "Tom Polak wrote:\n \nHi neighbor! (We're just up I90/I39 a bit.)\n \n> What kind of performance can I expect out of Postgres compare to\n> MSSQL?\n \nI can't speak directly to MS SQL Server 2000, but Sybase and ASE have\ncommon roots; I think MS SQL Server 2000 is still using the engine\nthat they inherited from Sybase (check the server startup logging to\nsee if the Sybase copyright notice is still there), so comparisons to\nSybase might be roughly applicable.\n \nMoving from Sybase on Windows to PostgreSQL on Linux we got a major\nperformance improvement. I hesitate to include a hard number in\nthis post because the license agreement from Sybase prohibited\npublishing any benchmarks involving their product without advance\npermission in writing. I don't think it constitutes a \"benchmark\" to\nmention that we went from load balancing our largest web app against\ntwo database servers to running comfortably on one database server\nwith the switch.\n \nOver 95% of our queries ran faster on PostgreSQL without anything but\nbasic tuning of the server configuration. Most of the rest were\npretty easy to rework so they ran well. There was one which we had\nto break up into multiple smaller queries. That was on PostgreSQL\n8.1; in 8.4 the addition of semi-join and anti-join logic for EXISTS\ntests solved many of the issues within the server; I'd bet 98% to 99%\nof our queries would have run faster on PostgreSQL without any work\nhad that been present.\n \n> RAID 5 (for data redundancy/security),\n \nContrary to some admonitions, RAID 5 performs well for some\nworkloads. It does, however, put you at risk of losing everything\nshould a second drive experience a failure before you rebuild a lost\ndrive, and if (as is usually the case) all your drives are from the\nsame batch, running in the same environment, with fairly evenly\nbalanced load, that second failure about the same time as the first\nis not as rare as you might think. If you don't have good\nreplication, be sure you have good backups using hot or warm standby.\n \n> 24 GB of RAM\n \n> 10GB of data in a couple of tables\n \nIf the active portion of your database fits within RAM, you should\nset your seq_page_cost and random_page_cost to equal values, probably\nat 0.1 or less. That's in addition to all the other advice on\nconfiguration.\n \n> Any comparisons in terms of performance would be great.\n \nCheck your SQL Server license. Odds are good that it prevents you\nfrom publishing benchmarks which they don't review and approve in\nadvance. There is probably a reason they put that in. PostgreSQL,\nof course, has no such restriction. ;-)\n \nWe have a server not much bigger than what you're looking at, except\nfor a lot more drives, holding two databases over 1TB each, plus a\ncouple smaller ones. We've got over 20 web apps hitting this server,\nthe largest of which has over five million requests per day. While\nthe database involved is up to about 1.6 TB now, most of that is\ndocuments; the active part of the database, holding case management\ninformation, is about 200 GB, with some heavily hit tables holding\nhundreds of millions of rows. 
Feel free to poke around to get a\nsense of performance:\n \nhttp://wcca.wicourts.gov/\n \nWhen you do a name search, due to the inclusion of party aliases and\nsecurity and privacy rules, there are about 20 joins.\n \nI hope this helps.\n \n-Kevin\n", "msg_date": "Sat, 18 Dec 2010 09:53:11 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Compared MS SQL 2000 to Postgresql 9.0 on\n\t Windows" } ]
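Kevin's page-cost suggestion for a database whose active portion fits in RAM can be tried per session before being made permanent. A minimal sketch, using his suggested starting value of 0.1 and a hypothetical database name appdb:

-- test in one session while comparing plans
SET seq_page_cost = 0.1;
SET random_page_cost = 0.1;

-- once satisfied, make it the default for that database
ALTER DATABASE appdb SET seq_page_cost = 0.1;
ALTER DATABASE appdb SET random_page_cost = 0.1;

New connections pick up the ALTER DATABASE settings; sessions already open keep their old values until they reconnect.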
[ { "msg_contents": "Rauan Maemirov wrote:\n \n> EXPLAIN SELECT [...]\n \nPlease show us the results of EXPLAIN ANALYZE SELECT ...\n \nAlso, please show us the table layout (including indexes), and\ndetails about your hardware and PostgreSQL configuration. See this\npage for details:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \n> As you can see the query doesn't use index.\n \nThat means that either the optimizer thinks that the index isn't\nusable for this query (due to type mismatch or some such) or that it\nthinks a plan without the index costs less to run (i.e., it will\ngenerally run faster). You haven't told us enough to know whether\nthat is actually true, much less how to allow PostgreSQL to develop\nmore accurate costing estimates in your environment if it's currently\nwrong about this.\n \n-Kevin\n", "msg_date": "Sat, 18 Dec 2010 11:56:57 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problems with FTS" }, { "msg_contents": "Hi, Kevin.\n\nSorry for long delay.\n\nEXPLAIN ANALYZE SELECT \"v\".\"id\", \"v\".\"title\" FROM \"video\" AS \"v\"\nWHERE (v.active) AND (v.fts @@\n'dexter:A|season:A|seri:A|декстер:A|качество:A|сезон:A|серия:A'::tsquery and\nv.id <> 500563 )\nORDER BY COALESCE(ts_rank_cd( '{0.1, 0.2, 0.7, 1.0}', v.fts,\n'dexter:A|season:A|seri:A|декстер:A|качество:A|сезон:A|серия:A'::tsquery),\n1) DESC, v.views DESC\nLIMIT 6\n\n\"Limit (cost=103975.50..103975.52 rows=6 width=280) (actual\ntime=2893.193..2893.199 rows=6 loops=1)\"\n\" -> Sort (cost=103975.50..104206.07 rows=92228 width=280) (actual\ntime=2893.189..2893.193 rows=6 loops=1)\"\n\" Sort Key: (COALESCE(ts_rank_cd('{0.1,0.2,0.7,1}'::real[], fts, '( (\n( ( ( ''dexter'':A | ''season'':A ) | ''seri'':A ) | ''декстер'':A ) |\n''качество'':A ) | ''сезон'':A ) | ''серия'':A'::tsquery), 1::real)), views\"\n\" Sort Method: top-N heapsort Memory: 25kB\"\n\" -> Seq Scan on video v (cost=0.00..102322.34 rows=92228\nwidth=280) (actual time=0.100..2846.639 rows=54509 loops=1)\"\n\" Filter: (active AND (fts @@ '( ( ( ( ( ''dexter'':A |\n''season'':A ) | ''seri'':A ) | ''декстер'':A ) | ''качество'':A ) |\n''сезон'':A ) | ''серия'':A'::tsquery) AND (id <> 500563))\"\n\"Total runtime: 2893.264 ms\"\n\nTable scheme:\n\nCREATE TABLE video\n(\n id bigserial NOT NULL,\n hash character varying(12),\n account_id bigint NOT NULL,\n category_id smallint NOT NULL,\n converted boolean NOT NULL DEFAULT false,\n active boolean NOT NULL DEFAULT true,\n title character varying(255),\n description text,\n tags character varying(1000),\n authorized boolean NOT NULL DEFAULT false,\n adult boolean NOT NULL DEFAULT false,\n views bigint DEFAULT 0,\n rating real NOT NULL DEFAULT 0,\n screen smallint DEFAULT 2,\n duration smallint,\n \"type\" smallint DEFAULT 0,\n mp4 smallint NOT NULL DEFAULT 0,\n size bigint,\n size_high bigint DEFAULT 0,\n source character varying(255),\n storage_id smallint NOT NULL DEFAULT 1,\n rule_watching smallint,\n rule_commenting smallint,\n count_comments integer NOT NULL DEFAULT 0,\n count_likes integer NOT NULL DEFAULT 0,\n count_faves integer NOT NULL DEFAULT 0,\n fts tsvector,\n modified timestamp without time zone NOT NULL DEFAULT now(),\n created timestamp without time zone DEFAULT now(),\n CONSTRAINT video_pkey PRIMARY KEY (id),\n CONSTRAINT video_hash_key UNIQUE (hash)\n)\nWITH (\n OIDS=FALSE\n);\n\nIndexes:\n\nCREATE INDEX idx_video_account_id ON video USING btree (account_id);\nCREATE INDEX idx_video_created ON video USING btree 
(created);\nCREATE INDEX idx_video_fts ON video USING gin (fts);\nCREATE INDEX idx_video_hash ON video USING hash (hash);\n\n(here I tried both gist and gin indexes)\n\nI have 32Gb ram and 2 core quad E5520, 2.27GHz (8Mb cache).\n\nPgsql conf:\nmax_connections = 200\nshared_buffers = 7680MB\nwork_mem = 128MB\nmaintenance_work_mem = 1GB\neffective_cache_size = 22GB\ndefault_statistics_target = 100\n\nAnything else?\n\n2010/12/18 Kevin Grittner <[email protected]>\n\n> Rauan Maemirov wrote:\n>\n> > EXPLAIN SELECT [...]\n>\n> Please show us the results of EXPLAIN ANALYZE SELECT ...\n>\n> Also, please show us the table layout (including indexes), and\n> details about your hardware and PostgreSQL configuration. See this\n> page for details:\n>\n> http://wiki.postgresql.org/wiki/SlowQueryQuestions\n>\n> > As you can see the query doesn't use index.\n>\n> That means that either the optimizer thinks that the index isn't\n> usable for this query (due to type mismatch or some such) or that it\n> thinks a plan without the index costs less to run (i.e., it will\n> generally run faster). You haven't told us enough to know whether\n> that is actually true, much less how to allow PostgreSQL to develop\n> more accurate costing estimates in your environment if it's currently\n> wrong about this.\n>\n> -Kevin\n>\n\nHi, Kevin.Sorry for long delay.EXPLAIN ANALYZE SELECT \"v\".\"id\", \"v\".\"title\" FROM \"video\" AS \"v\"WHERE (v.active) AND (v.fts @@ 'dexter:A|season:A|seri:A|декстер:A|качество:A|сезон:A|серия:A'::tsquery and v.id <> 500563 ) \nORDER BY COALESCE(ts_rank_cd( '{0.1, 0.2, 0.7, 1.0}', v.fts, 'dexter:A|season:A|seri:A|декстер:A|качество:A|сезон:A|серия:A'::tsquery), 1) DESC, v.views DESC LIMIT 6\n\"Limit  (cost=103975.50..103975.52 rows=6 width=280) (actual time=2893.193..2893.199 rows=6 loops=1)\"\"  ->  Sort  (cost=103975.50..104206.07 rows=92228 width=280) (actual time=2893.189..2893.193 rows=6 loops=1)\"\n\"        Sort Key: (COALESCE(ts_rank_cd('{0.1,0.2,0.7,1}'::real[], fts, '( ( ( ( ( ''dexter'':A | ''season'':A ) | ''seri'':A ) | ''декстер'':A ) | ''качество'':A ) | ''сезон'':A ) | ''серия'':A'::tsquery), 1::real)), views\"\n\"        Sort Method:  top-N heapsort  Memory: 25kB\"\"        ->  Seq Scan on video v  (cost=0.00..102322.34 rows=92228 width=280) (actual time=0.100..2846.639 rows=54509 loops=1)\"\n\"              Filter: (active AND (fts @@ '( ( ( ( ( ''dexter'':A | ''season'':A ) | ''seri'':A ) | ''декстер'':A ) | ''качество'':A ) | ''сезон'':A ) | ''серия'':A'::tsquery) AND (id <> 500563))\"\n\"Total runtime: 2893.264 ms\"Table scheme:CREATE TABLE video(  id bigserial NOT NULL,  hash character varying(12),\n  account_id bigint NOT NULL,  category_id smallint NOT NULL,  converted boolean NOT NULL DEFAULT false,  active boolean NOT NULL DEFAULT true,  title character varying(255),\n  description text,  tags character varying(1000),  authorized boolean NOT NULL DEFAULT false,  adult boolean NOT NULL DEFAULT false,  views bigint DEFAULT 0,  rating real NOT NULL DEFAULT 0,\n  screen smallint DEFAULT 2,  duration smallint,  \"type\" smallint DEFAULT 0,  mp4 smallint NOT NULL DEFAULT 0,  size bigint,  size_high bigint DEFAULT 0,\n  source character varying(255),  storage_id smallint NOT NULL DEFAULT 1,  rule_watching smallint,  rule_commenting smallint,  count_comments integer NOT NULL DEFAULT 0,\n  count_likes integer NOT NULL DEFAULT 0,  count_faves integer NOT NULL DEFAULT 0,  fts tsvector,  modified timestamp without time zone NOT NULL DEFAULT now(),  created timestamp without 
time zone DEFAULT now(),\n  CONSTRAINT video_pkey PRIMARY KEY (id),  CONSTRAINT video_hash_key UNIQUE (hash))WITH (  OIDS=FALSE);Indexes:\nCREATE INDEX idx_video_account_id  ON video  USING btree  (account_id);CREATE INDEX idx_video_created  ON video  USING btree  (created);CREATE INDEX idx_video_fts  ON video  USING gin  (fts);\nCREATE INDEX idx_video_hash  ON video  USING hash  (hash);(here I tried both gist and gin indexes)I have 32Gb ram and 2 core quad E5520, 2.27GHz (8Mb cache).\nPgsql conf:max_connections = 200shared_buffers = 7680MBwork_mem = 128MBmaintenance_work_mem = 1GBeffective_cache_size = 22GBdefault_statistics_target = 100\nAnything else?2010/12/18 Kevin Grittner <[email protected]>\nRauan Maemirov  wrote:\n\n> EXPLAIN SELECT [...]\n\nPlease show us the results of EXPLAIN ANALYZE SELECT ...\n\nAlso, please show us the table layout (including indexes), and\ndetails about your hardware and PostgreSQL configuration.  See this\npage for details:\n\nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n\n> As you can see the query doesn't use index.\n\nThat means that either the optimizer thinks that the index isn't\nusable for this query (due to type mismatch or some such) or that it\nthinks a plan without the index costs less to run (i.e., it will\ngenerally run faster).  You haven't told us enough to know whether\nthat is actually true, much less how to allow PostgreSQL to develop\nmore accurate costing estimates in your environment if it's currently\nwrong about this.\n\n-Kevin", "msg_date": "Tue, 11 Jan 2011 14:16:10 +0600", "msg_from": "Rauan Maemirov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with FTS" }, { "msg_contents": "On Tue, Jan 11, 2011 at 3:16 AM, Rauan Maemirov <[email protected]> wrote:\n> Hi, Kevin.\n> Sorry for long delay.\n> EXPLAIN ANALYZE SELECT \"v\".\"id\", \"v\".\"title\" FROM \"video\" AS \"v\"\n> WHERE (v.active) AND (v.fts @@\n> 'dexter:A|season:A|seri:A|декстер:A|качество:A|сезон:A|серия:A'::tsquery and\n> v.id <> 500563 )\n> ORDER BY COALESCE(ts_rank_cd( '{0.1, 0.2, 0.7, 1.0}', v.fts,\n> 'dexter:A|season:A|seri:A|декстер:A|качество:A|сезон:A|серия:A'::tsquery),\n> 1) DESC, v.views DESC\n> LIMIT 6\n> \"Limit  (cost=103975.50..103975.52 rows=6 width=280) (actual\n> time=2893.193..2893.199 rows=6 loops=1)\"\n> \"  ->  Sort  (cost=103975.50..104206.07 rows=92228 width=280) (actual\n> time=2893.189..2893.193 rows=6 loops=1)\"\n> \"        Sort Key: (COALESCE(ts_rank_cd('{0.1,0.2,0.7,1}'::real[], fts, '( (\n> ( ( ( ''dexter'':A | ''season'':A ) | ''seri'':A ) | ''декстер'':A ) |\n> ''качество'':A ) | ''сезон'':A ) | ''серия'':A'::tsquery), 1::real)), views\"\n> \"        Sort Method:  top-N heapsort  Memory: 25kB\"\n> \"        ->  Seq Scan on video v  (cost=0.00..102322.34 rows=92228\n> width=280) (actual time=0.100..2846.639 rows=54509 loops=1)\"\n> \"              Filter: (active AND (fts @@ '( ( ( ( ( ''dexter'':A |\n> ''season'':A ) | ''seri'':A ) | ''декстер'':A ) | ''качество'':A ) |\n> ''сезон'':A ) | ''серия'':A'::tsquery) AND (id <> 500563))\"\n> \"Total runtime: 2893.264 ms\"\n> Table scheme:\n> CREATE TABLE video\n> (\n>   id bigserial NOT NULL,\n>   hash character varying(12),\n>   account_id bigint NOT NULL,\n>   category_id smallint NOT NULL,\n>   converted boolean NOT NULL DEFAULT false,\n>   active boolean NOT NULL DEFAULT true,\n>   title character varying(255),\n>   description text,\n>   tags character varying(1000),\n>   authorized boolean NOT NULL DEFAULT false,\n>   adult boolean NOT NULL DEFAULT 
false,\n>   views bigint DEFAULT 0,\n>   rating real NOT NULL DEFAULT 0,\n>   screen smallint DEFAULT 2,\n>   duration smallint,\n>   \"type\" smallint DEFAULT 0,\n>   mp4 smallint NOT NULL DEFAULT 0,\n>   size bigint,\n>   size_high bigint DEFAULT 0,\n>   source character varying(255),\n>   storage_id smallint NOT NULL DEFAULT 1,\n>   rule_watching smallint,\n>   rule_commenting smallint,\n>   count_comments integer NOT NULL DEFAULT 0,\n>   count_likes integer NOT NULL DEFAULT 0,\n>   count_faves integer NOT NULL DEFAULT 0,\n>   fts tsvector,\n>   modified timestamp without time zone NOT NULL DEFAULT now(),\n>   created timestamp without time zone DEFAULT now(),\n>   CONSTRAINT video_pkey PRIMARY KEY (id),\n>   CONSTRAINT video_hash_key UNIQUE (hash)\n> )\n> WITH (\n>   OIDS=FALSE\n> );\n> Indexes:\n> CREATE INDEX idx_video_account_id  ON video  USING btree  (account_id);\n> CREATE INDEX idx_video_created  ON video  USING btree  (created);\n> CREATE INDEX idx_video_fts  ON video  USING gin  (fts);\n> CREATE INDEX idx_video_hash  ON video  USING hash  (hash);\n> (here I tried both gist and gin indexes)\n> I have 32Gb ram and 2 core quad E5520, 2.27GHz (8Mb cache).\n> Pgsql conf:\n> max_connections = 200\n> shared_buffers = 7680MB\n> work_mem = 128MB\n> maintenance_work_mem = 1GB\n> effective_cache_size = 22GB\n> default_statistics_target = 100\n> Anything else?\n\nFor returning that many rows, an index scan might actually be slower.\nMaybe it's worth testing. Try:\n\nSET enable_seqscan=off;\nEXPLAIN ANALYZE ...\n\nand see what you get. If it's slower, well, then be happy it didn't\nuse the index (maybe the question is... what index should you have\ninstead?). If it's faster, post the results...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Fri, 14 Jan 2011 14:03:47 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with FTS" }, { "msg_contents": "The problem has returned back, and here's the results, as you've said it's\nfaster now:\n\nSET enable_seqscan=off;\nEXPLAIN ANALYZE SELECT \"v\".\"id\", \"v\".\"title\" FROM \"video\" AS \"v\"\nWHERE (v.active) AND (v.fts @@\n'dexter:A|season:A|seri:A|декстер:A|качество:A|сезон:A|серия:A'::tsquery\nand v.id <> 500563 )\nORDER BY COALESCE(ts_rank_cd( '{0.1, 0.2, 0.7, 1.0}', v.fts,\n'dexter:A|season:A|seri:A|декстер:A|качество:A|сезон:A|серия:A'::tsquery),\n1) DESC, v.views DESC\nLIMIT 6\n\nLimit (cost=219631.83..219631.85 rows=6 width=287) (actual\ntime=1850.567..1850.570 rows=6 loops=1)\n -> Sort (cost=219631.83..220059.05 rows=170886 width=287) (actual\ntime=1850.565..1850.566 rows=6 loops=1)\n Sort Key: (COALESCE(ts_rank_cd('{0.1,0.2,0.7,1}'::real[], fts, '( (\n( ( ( ''dexter'':A | ''season'':A ) | ''seri'':A ) | ''декстер'':A ) |\n''качество'':A ) | ''сезон'':A ) | ''серия'':A'::tsquery), 1::real)), views\n Sort Method: top-N heapsort Memory: 26kB\n -> Bitmap Heap Scan on video v (cost=41180.92..216568.73\nrows=170886 width=287) (actual time=214.842..1778.830 rows=103087 loops=1)\n Recheck Cond: (fts @@ '( ( ( ( ( ''dexter'':A | ''season'':A\n) | ''seri'':A ) | ''декстер'':A ) | ''качество'':A ) | ''сезон'':A ) |\n''серия'':A'::tsquery)\n Filter: (active AND (id <> 500563))\n -> Bitmap Index Scan on idx_video_fts (cost=0.00..41138.20\nrows=218543 width=0) (actual time=170.206..170.206 rows=171945 loops=1)\n Index Cond: (fts @@ '( ( ( ( ( ''dexter'':A |\n''season'':A ) | ''seri'':A ) | ''декстер'':A ) | ''качество'':A ) 
|\n''сезон'':A ) | ''серия'':A'::tsquery)\nTotal runtime: 1850.632 ms\n\n\nShould I use this instead?\n\n2011/1/15 Robert Haas <[email protected]>\n\n> On Tue, Jan 11, 2011 at 3:16 AM, Rauan Maemirov <[email protected]>\n> wrote:\n> > Hi, Kevin.\n> > Sorry for long delay.\n> > EXPLAIN ANALYZE SELECT \"v\".\"id\", \"v\".\"title\" FROM \"video\" AS \"v\"\n> > WHERE (v.active) AND (v.fts @@\n> > 'dexter:A|season:A|seri:A|декстер:A|качество:A|сезон:A|серия:A'::tsquery\n> and\n> > v.id <> 500563 )\n> > ORDER BY COALESCE(ts_rank_cd( '{0.1, 0.2, 0.7, 1.0}', v.fts,\n> >\n> 'dexter:A|season:A|seri:A|декстер:A|качество:A|сезон:A|серия:A'::tsquery),\n> > 1) DESC, v.views DESC\n> > LIMIT 6\n> > \"Limit (cost=103975.50..103975.52 rows=6 width=280) (actual\n> > time=2893.193..2893.199 rows=6 loops=1)\"\n> > \" -> Sort (cost=103975.50..104206.07 rows=92228 width=280) (actual\n> > time=2893.189..2893.193 rows=6 loops=1)\"\n> > \" Sort Key: (COALESCE(ts_rank_cd('{0.1,0.2,0.7,1}'::real[], fts,\n> '( (\n> > ( ( ( ''dexter'':A | ''season'':A ) | ''seri'':A ) | ''декстер'':A ) |\n> > ''качество'':A ) | ''сезон'':A ) | ''серия'':A'::tsquery), 1::real)),\n> views\"\n> > \" Sort Method: top-N heapsort Memory: 25kB\"\n> > \" -> Seq Scan on video v (cost=0.00..102322.34 rows=92228\n> > width=280) (actual time=0.100..2846.639 rows=54509 loops=1)\"\n> > \" Filter: (active AND (fts @@ '( ( ( ( ( ''dexter'':A |\n> > ''season'':A ) | ''seri'':A ) | ''декстер'':A ) | ''качество'':A ) |\n> > ''сезон'':A ) | ''серия'':A'::tsquery) AND (id <> 500563))\"\n> > \"Total runtime: 2893.264 ms\"\n> > Table scheme:\n> > CREATE TABLE video\n> > (\n> > id bigserial NOT NULL,\n> > hash character varying(12),\n> > account_id bigint NOT NULL,\n> > category_id smallint NOT NULL,\n> > converted boolean NOT NULL DEFAULT false,\n> > active boolean NOT NULL DEFAULT true,\n> > title character varying(255),\n> > description text,\n> > tags character varying(1000),\n> > authorized boolean NOT NULL DEFAULT false,\n> > adult boolean NOT NULL DEFAULT false,\n> > views bigint DEFAULT 0,\n> > rating real NOT NULL DEFAULT 0,\n> > screen smallint DEFAULT 2,\n> > duration smallint,\n> > \"type\" smallint DEFAULT 0,\n> > mp4 smallint NOT NULL DEFAULT 0,\n> > size bigint,\n> > size_high bigint DEFAULT 0,\n> > source character varying(255),\n> > storage_id smallint NOT NULL DEFAULT 1,\n> > rule_watching smallint,\n> > rule_commenting smallint,\n> > count_comments integer NOT NULL DEFAULT 0,\n> > count_likes integer NOT NULL DEFAULT 0,\n> > count_faves integer NOT NULL DEFAULT 0,\n> > fts tsvector,\n> > modified timestamp without time zone NOT NULL DEFAULT now(),\n> > created timestamp without time zone DEFAULT now(),\n> > CONSTRAINT video_pkey PRIMARY KEY (id),\n> > CONSTRAINT video_hash_key UNIQUE (hash)\n> > )\n> > WITH (\n> > OIDS=FALSE\n> > );\n> > Indexes:\n> > CREATE INDEX idx_video_account_id ON video USING btree (account_id);\n> > CREATE INDEX idx_video_created ON video USING btree (created);\n> > CREATE INDEX idx_video_fts ON video USING gin (fts);\n> > CREATE INDEX idx_video_hash ON video USING hash (hash);\n> > (here I tried both gist and gin indexes)\n> > I have 32Gb ram and 2 core quad E5520, 2.27GHz (8Mb cache).\n> > Pgsql conf:\n> > max_connections = 200\n> > shared_buffers = 7680MB\n> > work_mem = 128MB\n> > maintenance_work_mem = 1GB\n> > effective_cache_size = 22GB\n> > default_statistics_target = 100\n> > Anything else?\n>\n> For returning that many rows, an index scan might actually be slower.\n> Maybe it's worth testing. 
Try:\n>\n> SET enable_seqscan=off;\n> EXPLAIN ANALYZE ...\n>\n> and see what you get. If it's slower, well, then be happy it didn't\n> use the index (maybe the question is... what index should you have\n> instead?). If it's faster, post the results...\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\nThe problem has returned back, and here's the results, as you've said it's faster now:SET enable_seqscan=off;EXPLAIN ANALYZE SELECT \"v\".\"id\", \"v\".\"title\" FROM \"video\" AS \"v\"\nWHERE (v.active) AND (v.fts @@ 'dexter:A|season:A|seri:A|декстер:A|качество:A|сезон:A|серия:A'::tsquery and v.id <> 500563 ) ORDER BY COALESCE(ts_rank_cd( '{0.1, 0.2, 0.7, 1.0}', v.fts, 'dexter:A|season:A|seri:A|декстер:A|качество:A|сезон:A|серия:A'::tsquery), 1) DESC, v.views DESC \nLIMIT 6Limit  (cost=219631.83..219631.85 rows=6 width=287) (actual time=1850.567..1850.570 rows=6 loops=1)  ->  Sort  (cost=219631.83..220059.05 rows=170886 width=287) (actual time=1850.565..1850.566 rows=6 loops=1)\n        Sort Key: (COALESCE(ts_rank_cd('{0.1,0.2,0.7,1}'::real[], fts, '( ( ( ( ( ''dexter'':A | ''season'':A ) | ''seri'':A ) | ''декстер'':A ) | ''качество'':A ) | ''сезон'':A ) | ''серия'':A'::tsquery), 1::real)), views\n        Sort Method:  top-N heapsort  Memory: 26kB        ->  Bitmap Heap Scan on video v  (cost=41180.92..216568.73 rows=170886 width=287) (actual time=214.842..1778.830 rows=103087 loops=1)\n              Recheck Cond: (fts @@ '( ( ( ( ( ''dexter'':A | ''season'':A ) | ''seri'':A ) | ''декстер'':A ) | ''качество'':A ) | ''сезон'':A ) | ''серия'':A'::tsquery)\n              Filter: (active AND (id <> 500563))              ->  Bitmap Index Scan on idx_video_fts  (cost=0.00..41138.20 rows=218543 width=0) (actual time=170.206..170.206 rows=171945 loops=1)\n                    Index Cond: (fts @@ '( ( ( ( ( ''dexter'':A | ''season'':A ) | ''seri'':A ) | ''декстер'':A ) | ''качество'':A ) | ''сезон'':A ) | ''серия'':A'::tsquery)\nTotal runtime: 1850.632 msShould I use this instead?2011/1/15 Robert Haas <[email protected]>\nOn Tue, Jan 11, 2011 at 3:16 AM, Rauan Maemirov <[email protected]> wrote:\n\n> Hi, Kevin.\n> Sorry for long delay.\n> EXPLAIN ANALYZE SELECT \"v\".\"id\", \"v\".\"title\" FROM \"video\" AS \"v\"\n> WHERE (v.active) AND (v.fts @@\n> 'dexter:A|season:A|seri:A|декстер:A|качество:A|сезон:A|серия:A'::tsquery and\n> v.id <> 500563 )\n> ORDER BY COALESCE(ts_rank_cd( '{0.1, 0.2, 0.7, 1.0}', v.fts,\n> 'dexter:A|season:A|seri:A|декстер:A|качество:A|сезон:A|серия:A'::tsquery),\n> 1) DESC, v.views DESC\n> LIMIT 6\n> \"Limit  (cost=103975.50..103975.52 rows=6 width=280) (actual\n> time=2893.193..2893.199 rows=6 loops=1)\"\n> \"  ->  Sort  (cost=103975.50..104206.07 rows=92228 width=280) (actual\n> time=2893.189..2893.193 rows=6 loops=1)\"\n> \"        Sort Key: (COALESCE(ts_rank_cd('{0.1,0.2,0.7,1}'::real[], fts, '( (\n> ( ( ( ''dexter'':A | ''season'':A ) | ''seri'':A ) | ''декстер'':A ) |\n> ''качество'':A ) | ''сезон'':A ) | ''серия'':A'::tsquery), 1::real)), views\"\n> \"        Sort Method:  top-N heapsort  Memory: 25kB\"\n> \"        ->  Seq Scan on video v  (cost=0.00..102322.34 rows=92228\n> width=280) (actual time=0.100..2846.639 rows=54509 loops=1)\"\n> \"              Filter: (active AND (fts @@ '( ( ( ( ( ''dexter'':A |\n> ''season'':A ) | ''seri'':A ) | ''декстер'':A ) | ''качество'':A ) |\n> ''сезон'':A ) | ''серия'':A'::tsquery) AND (id <> 500563))\"\n> \"Total runtime: 2893.264 ms\"\n> Table scheme:\n> CREATE TABLE 
video\n> (\n>   id bigserial NOT NULL,\n>   hash character varying(12),\n>   account_id bigint NOT NULL,\n>   category_id smallint NOT NULL,\n>   converted boolean NOT NULL DEFAULT false,\n>   active boolean NOT NULL DEFAULT true,\n>   title character varying(255),\n>   description text,\n>   tags character varying(1000),\n>   authorized boolean NOT NULL DEFAULT false,\n>   adult boolean NOT NULL DEFAULT false,\n>   views bigint DEFAULT 0,\n>   rating real NOT NULL DEFAULT 0,\n>   screen smallint DEFAULT 2,\n>   duration smallint,\n>   \"type\" smallint DEFAULT 0,\n>   mp4 smallint NOT NULL DEFAULT 0,\n>   size bigint,\n>   size_high bigint DEFAULT 0,\n>   source character varying(255),\n>   storage_id smallint NOT NULL DEFAULT 1,\n>   rule_watching smallint,\n>   rule_commenting smallint,\n>   count_comments integer NOT NULL DEFAULT 0,\n>   count_likes integer NOT NULL DEFAULT 0,\n>   count_faves integer NOT NULL DEFAULT 0,\n>   fts tsvector,\n>   modified timestamp without time zone NOT NULL DEFAULT now(),\n>   created timestamp without time zone DEFAULT now(),\n>   CONSTRAINT video_pkey PRIMARY KEY (id),\n>   CONSTRAINT video_hash_key UNIQUE (hash)\n> )\n> WITH (\n>   OIDS=FALSE\n> );\n> Indexes:\n> CREATE INDEX idx_video_account_id  ON video  USING btree  (account_id);\n> CREATE INDEX idx_video_created  ON video  USING btree  (created);\n> CREATE INDEX idx_video_fts  ON video  USING gin  (fts);\n> CREATE INDEX idx_video_hash  ON video  USING hash  (hash);\n> (here I tried both gist and gin indexes)\n> I have 32Gb ram and 2 core quad E5520, 2.27GHz (8Mb cache).\n> Pgsql conf:\n> max_connections = 200\n> shared_buffers = 7680MB\n> work_mem = 128MB\n> maintenance_work_mem = 1GB\n> effective_cache_size = 22GB\n> default_statistics_target = 100\n> Anything else?\n\nFor returning that many rows, an index scan might actually be slower.\nMaybe it's worth testing.  Try:\n\nSET enable_seqscan=off;\nEXPLAIN ANALYZE ...\n\nand see what you get.  If it's slower, well, then be happy it didn't\nuse the index (maybe the question is... what index should you have\ninstead?).  
If it's faster, post the results...\n\n--\nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 21 Nov 2011 11:53:47 +0600", "msg_from": "Rauan Maemirov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with FTS" }, { "msg_contents": "On Mon, Nov 21, 2011 at 12:53 AM, Rauan Maemirov <[email protected]> wrote:\n> The problem has returned back, and here's the results, as you've said it's\n> faster now:\n>\n> SET enable_seqscan=off;\n> EXPLAIN ANALYZE SELECT \"v\".\"id\", \"v\".\"title\" FROM \"video\" AS \"v\"\n> WHERE (v.active) AND (v.fts @@\n> 'dexter:A|season:A|seri:A|декстер:A|качество:A|сезон:A|серия:A'::tsquery and\n> v.id <> 500563 )\n> ORDER BY COALESCE(ts_rank_cd( '{0.1, 0.2, 0.7, 1.0}', v.fts,\n> 'dexter:A|season:A|seri:A|декстер:A|качество:A|сезон:A|серия:A'::tsquery),\n> 1) DESC, v.views DESC\n> LIMIT 6\n>\n> Limit  (cost=219631.83..219631.85 rows=6 width=287) (actual\n> time=1850.567..1850.570 rows=6 loops=1)\n>   ->  Sort  (cost=219631.83..220059.05 rows=170886 width=287) (actual\n> time=1850.565..1850.566 rows=6 loops=1)\n>         Sort Key: (COALESCE(ts_rank_cd('{0.1,0.2,0.7,1}'::real[], fts, '( (\n> ( ( ( ''dexter'':A | ''season'':A ) | ''seri'':A ) | ''декстер'':A ) |\n> ''качество'':A ) | ''сезон'':A ) | ''серия'':A'::tsquery), 1::real)), views\n>         Sort Method:  top-N heapsort  Memory: 26kB\n>         ->  Bitmap Heap Scan on video v  (cost=41180.92..216568.73\n> rows=170886 width=287) (actual time=214.842..1778.830 rows=103087 loops=1)\n>               Recheck Cond: (fts @@ '( ( ( ( ( ''dexter'':A | ''season'':A )\n> | ''seri'':A ) | ''декстер'':A ) | ''качество'':A ) | ''сезон'':A ) |\n> ''серия'':A'::tsquery)\n>               Filter: (active AND (id <> 500563))\n>               ->  Bitmap Index Scan on idx_video_fts  (cost=0.00..41138.20\n> rows=218543 width=0) (actual time=170.206..170.206 rows=171945 loops=1)\n>                     Index Cond: (fts @@ '( ( ( ( ( ''dexter'':A |\n> ''season'':A ) | ''seri'':A ) | ''декстер'':A ) | ''качество'':A ) |\n> ''сезон'':A ) | ''серия'':A'::tsquery)\n> Total runtime: 1850.632 ms\n>\n>\n> Should I use this instead?\n\nCan you also provide EXPLAIN ANALYZE output for the query with\nenable_seqscan=on?\n\nThe row-count estimates look reasonably accurate, so there's some\nother problem here. What do you have random_page_cost, seq_page_cost,\nand effective_cache_size set to? You might try \"SET\nrandom_page_cost=2\" or even \"SET random_page_cost=0.5; SET\nseq_page_cost=0.3\" and see if those settings help.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Wed, 30 Nov 2011 15:58:28 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with FTS" }, { "msg_contents": "On 2011-11-30 21:58, Robert Haas wrote:\n> The row-count estimates look reasonably accurate, so there's some\n> other problem here. What do you have random_page_cost, seq_page_cost,\n> and effective_cache_size set to? You might try \"SET\n> random_page_cost=2\" or even \"SET random_page_cost=0.5; SET\n> seq_page_cost=0.3\" and see if those settings help\nI may be seing ghosts here, since I've encountered\nthe same problem. 
But the Query-planner does not\ntake toast into account, so a Sequential Scan + filter\nonly cost what it takes to scan the main table, but fts-fields\nare typically large enough to be toasted so the cost should\nbe main+toast (amount of pages) + filtering cost.\n\nI posted about it yesterday:\n\nhttp://archives.postgresql.org/pgsql-hackers/2011-11/msg01754.php\n\nIf above problem is on <9.1 a patch to proper account of gin-estimates\nhave been added to 9.1 which also may benefit the planning:\nhttp://www.postgresql.org/docs/9.1/static/release-9-1.html\n\n Improve GIN index scan cost estimation (Teodor Sigaev)\n\nJesper\n-- \nJesper\n", "msg_date": "Thu, 01 Dec 2011 07:11:40 +0100", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with FTS" } ]
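One way to run the comparison Robert suggests without leaving the cost settings changed for the rest of the session is to scope them to a transaction with SET LOCAL. A minimal sketch, with a shortened SELECT standing in for the full ts_rank_cd query from the thread:

BEGIN;
SET LOCAL random_page_cost = 2;
EXPLAIN ANALYZE SELECT id, title FROM video
  WHERE active AND fts @@ to_tsquery('dexter:A | season:A');
ROLLBACK;

BEGIN;
SET LOCAL seq_page_cost = 0.3;
SET LOCAL random_page_cost = 0.5;
EXPLAIN ANALYZE SELECT id, title FROM video
  WHERE active AND fts @@ to_tsquery('dexter:A | season:A');
ROLLBACK;

SET LOCAL reverts at COMMIT or ROLLBACK, so the two EXPLAIN ANALYZE plans can be compared side by side without touching postgresql.conf.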
[ { "msg_contents": "I have server computer install Windows 2008R2, PostgreSQL 9.0.1 64 bit, 8G\nRAM, RAID 10 - 4 disks, dedicated server\nHere is config of postgresql.conf after running pgtune\n\ndefault_statistics_target = 100 # pgtune wizard 2010-12-15\nmaintenance_work_mem = 480MB # pgtune wizard 2010-12-15\nconstraint_exclusion = on # pgtune wizard 2010-12-15\ncheckpoint_completion_target = 0.9 # pgtune wizard 2010-12-15\neffective_cache_size = 2816MB # pgtune wizard 2010-12-15\nwork_mem = 8MB # pgtune wizard 2010-12-15\nwal_buffers = 32MB # pgtune wizard 2010-12-15\ncheckpoint_segments = 64 # pgtune wizard 2010-12-15\nshared_buffers = 960MB # pgtune wizard 2010-12-15\nmax_connections = 254 # pgtune wizard 2010-12-15\n\nAfter running pgbench\npgbench -i -h 127.0.0.1 -p 5433 -U postgres -s 10 pgbench\npgbench -h 127.0.0.1 -p 5433 -U postgres -c 100 -t 10 -C -s 10 pgbench\n\nScale option ignored, using pgbench_branches table count = 10\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nquery mode: simple\nnumber of clients: 100\nnumber of threads: 1\nnumber of transactions per client: 10\nnumber of transactions actually processed: 1000/1000\ntps = 20.143494 (including connections establishing)\ntps = 256.630260 (excluding connections establishing)\n\nWhy pgbench on my server is very low or is it common value with my server ?\n\nPlease help me. Thanks in advance.\n\nTuan Hoang ANh.\n\nI have server computer install Windows 2008R2, PostgreSQL 9.0.1 64 bit, 8G RAM, RAID 10 - 4 disks, dedicated serverHere is config of postgresql.conf after running pgtune default_statistics_target = 100 # pgtune wizard 2010-12-15\nmaintenance_work_mem = 480MB # pgtune wizard 2010-12-15constraint_exclusion = on # pgtune wizard 2010-12-15checkpoint_completion_target = 0.9 # pgtune wizard 2010-12-15effective_cache_size = 2816MB # pgtune wizard 2010-12-15\nwork_mem = 8MB # pgtune wizard 2010-12-15wal_buffers = 32MB # pgtune wizard 2010-12-15checkpoint_segments = 64 # pgtune wizard 2010-12-15shared_buffers = 960MB # pgtune wizard 2010-12-15max_connections = 254 # pgtune wizard 2010-12-15\nAfter running pgbenchpgbench -i -h 127.0.0.1 -p 5433 -U postgres -s 10 pgbenchpgbench -h 127.0.0.1 -p 5433 -U postgres -c 100  -t 10 -C  -s 10 pgbenchScale option ignored, using pgbench_branches table count = 10\nstarting vacuum...end.transaction type: TPC-B (sort of)scaling factor: 10query mode: simplenumber of clients: 100number of threads: 1number of transactions per client: 10number of transactions actually processed: 1000/1000\ntps = 20.143494 (including connections establishing)tps = 256.630260 (excluding connections establishing)Why pgbench on my server is very low or is it common value with my server ?Please help me. Thanks in advance.\nTuan Hoang ANh.", "msg_date": "Sun, 19 Dec 2010 01:15:33 +0700", "msg_from": "tuanhoanganh <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL 9.0 x64 bit pgbench TPC very low question?" }, { "msg_contents": "On Sat, Dec 18, 2010 at 10:15 AM, tuanhoanganh <[email protected]> wrote:\n> I have server computer install Windows 2008R2, PostgreSQL 9.0.1 64 bit, 8G\n...\n> pgbench -h 127.0.0.1 -p 5433 -U postgres -c 100  -t 10 -C  -s 10 pgbench\n\nWhy the -C option? You are essentially benchmarking how fast you can\nmake new connections to the database. 
Is that what you want to be\nbenchmarking?\n\nIf the code you anticipate using is really going to make and break\nconnections between every query, you should use a connection pooler.\nWhich means you should be benchmarking through the connection pooler,\nor just leave off the -C.\n\nAlso, -t 10 is probably too small to get meaningful results.\n\n> tps = 20.143494 (including connections establishing)\n> tps = 256.630260 (excluding connections establishing)\n>\n> Why pgbench on my server is very low or is it common value with my server ?\n\nStarting a new connection in PG is relatively slow, especially so on\nWindows, because it involves starting and setting up a new process for\neach one.\n\nCheers,\n\nJeff\n", "msg_date": "Sat, 18 Dec 2010 10:51:05 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 9.0 x64 bit pgbench TPC very low question?" }, { "msg_contents": "My app has ~ 20 exe file, each of exe create new connect to postgesql and\nthere are 10-30 user use my application so I need -C to check PostgreSQL\nperformance.\n\nI will test without -C option. But is there any way to decrease connect time\nwhen there are 200 process, each of process will create new connect to\npostgresql.\n\n\nOn Sun, Dec 19, 2010 at 1:51 AM, Jeff Janes <[email protected]> wrote:\n\n> On Sat, Dec 18, 2010 at 10:15 AM, tuanhoanganh <[email protected]> wrote:\n> > I have server computer install Windows 2008R2, PostgreSQL 9.0.1 64 bit,\n> 8G\n> ...\n> > pgbench -h 127.0.0.1 -p 5433 -U postgres -c 100 -t 10 -C -s 10 pgbench\n>\n> Why the -C option? You are essentially benchmarking how fast you can\n> make new connections to the database. Is that what you want to be\n> benchmarking?\n>\n> If the code you anticipate using is really going to make and break\n> connections between every query, you should use a connection pooler.\n> Which means you should be benchmarking through the connection pooler,\n> or just leave off the -C.\n>\n> Also, -t 10 is probably too small to get meaningful results.\n>\n> > tps = 20.143494 (including connections establishing)\n> > tps = 256.630260 (excluding connections establishing)\n> >\n> > Why pgbench on my server is very low or is it common value with my server\n> ?\n>\n> Starting a new connection in PG is relatively slow, especially so on\n> Windows, because it involves starting and setting up a new process for\n> each one.\n>\n> Cheers,\n>\n> Jeff\n>\n\nMy app has ~ 20 exe file, each of exe create new connect to postgesql and there are 10-30 user use my application so I need -C to check PostgreSQL performance.I will test without -C option. But is there any way to decrease connect time when there are 200 process, each of process will create new connect to postgresql.\nOn Sun, Dec 19, 2010 at 1:51 AM, Jeff Janes <[email protected]> wrote:\nOn Sat, Dec 18, 2010 at 10:15 AM, tuanhoanganh <[email protected]> wrote:\n> I have server computer install Windows 2008R2, PostgreSQL 9.0.1 64 bit, 8G\n...\n> pgbench -h 127.0.0.1 -p 5433 -U postgres -c 100  -t 10 -C  -s 10 pgbench\n\nWhy the -C option?  You are essentially benchmarking how fast you can\nmake new connections to the database.  
Is that what you want to be\nbenchmarking?\n\nIf the code you anticipate using is really going to make and break\nconnections between every query, you should use a connection pooler.\nWhich means you should be benchmarking through the connection pooler,\nor just leave off the -C.\n\nAlso, -t 10 is probably too small to get meaningful results.\n\n> tps = 20.143494 (including connections establishing)\n> tps = 256.630260 (excluding connections establishing)\n>\n> Why pgbench on my server is very low or is it common value with my server ?\n\nStarting a new connection in PG is relatively slow, especially so on\nWindows, because it involves starting and setting up a new process for\neach one.\n\nCheers,\n\nJeff", "msg_date": "Sun, 19 Dec 2010 02:13:23 +0700", "msg_from": "tuanhoanganh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 9.0 x64 bit pgbench TPC very low question?" }, { "msg_contents": "Here is my result without -C\npgbench -h 127.0.0.1 -p 9999 -U postgres -c 100 -t 10 -s 10 pgbench\n\nScale option ignored, using pgbench_branches table count = 10\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nquery mode: simple\nnumber of clients: 100\nnumber of threads: 1\nnumber of transactions per client: 10\nnumber of transactions actually processed: 1000/1000\ntps = 98.353544 (including connections establishing)\ntps = 196.318788 (excluding connections establishing)\n\nOn Sun, Dec 19, 2010 at 1:51 AM, Jeff Janes <[email protected]> wrote:\n\n> On Sat, Dec 18, 2010 at 10:15 AM, tuanhoanganh <[email protected]> wrote:\n> > I have server computer install Windows 2008R2, PostgreSQL 9.0.1 64 bit,\n> 8G\n> ...\n> > pgbench -h 127.0.0.1 -p 5433 -U postgres -c 100 -t 10 -C -s 10 pgbench\n>\n> Why the -C option? You are essentially benchmarking how fast you can\n> make new connections to the database. Is that what you want to be\n> benchmarking?\n>\n> If the code you anticipate using is really going to make and break\n> connections between every query, you should use a connection pooler.\n> Which means you should be benchmarking through the connection pooler,\n> or just leave off the -C.\n>\n> Also, -t 10 is probably too small to get meaningful results.\n>\n> > tps = 20.143494 (including connections establishing)\n> > tps = 256.630260 (excluding connections establishing)\n> >\n> > Why pgbench on my server is very low or is it common value with my server\n> ?\n>\n> Starting a new connection in PG is relatively slow, especially so on\n> Windows, because it involves starting and setting up a new process for\n> each one.\n>\n> Cheers,\n>\n> Jeff\n>\n\nHere is my result without -Cpgbench -h 127.0.0.1 -p 9999 -U postgres -c 100 -t 10 -s 10 pgbenchScale option ignored, using pgbench_branches table count = 10starting vacuum...end.transaction type: TPC-B (sort of)\nscaling factor: 10query mode: simplenumber of clients: 100number of threads: 1number of transactions per client: 10number of transactions actually processed: 1000/1000tps = 98.353544 (including connections establishing)\ntps = 196.318788 (excluding connections establishing)On Sun, Dec 19, 2010 at 1:51 AM, Jeff Janes <[email protected]> wrote:\nOn Sat, Dec 18, 2010 at 10:15 AM, tuanhoanganh <[email protected]> wrote:\n\n> I have server computer install Windows 2008R2, PostgreSQL 9.0.1 64 bit, 8G\n...\n> pgbench -h 127.0.0.1 -p 5433 -U postgres -c 100  -t 10 -C  -s 10 pgbench\n\nWhy the -C option?  You are essentially benchmarking how fast you can\nmake new connections to the database.  
Is that what you want to be\nbenchmarking?\n\nIf the code you anticipate using is really going to make and break\nconnections between every query, you should use a connection pooler.\nWhich means you should be benchmarking through the connection pooler,\nor just leave off the -C.\n\nAlso, -t 10 is probably too small to get meaningful results.\n\n> tps = 20.143494 (including connections establishing)\n> tps = 256.630260 (excluding connections establishing)\n>\n> Why pgbench on my server is very low or is it common value with my server ?\n\nStarting a new connection in PG is relatively slow, especially so on\nWindows, because it involves starting and setting up a new process for\neach one.\n\nCheers,\n\nJeff", "msg_date": "Sun, 19 Dec 2010 02:42:16 +0700", "msg_from": "tuanhoanganh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 9.0 x64 bit pgbench TPC very low question?" }, { "msg_contents": "On Sat, Dec 18, 2010 at 11:13 AM, tuanhoanganh <[email protected]> wrote:\n> My app has ~ 20 exe file, each of exe create new connect to postgesql\n\nBut how often do they do that? Does each exe make a new connection,\ndo one transaction, and then exit? Or does each exe make one\nconnection, do one transaction, then close the connection and make a\nnew one? Or does each exe make one connection, then stick around for\na while using that connection over and over again?\n\nIn the first two cases, indeed -C is the correct way to benchmark it,\nbut in the third case it is not.\n\n> and\n> there are 10-30 user use my application so I need -C to check PostgreSQL\n> performance.\n>\n> I will test without -C option. But is there any way to decrease connect time\n> when there are 200 process, each of process will create new connect to\n> postgresql.\n\nI think the easiest way to decrease the connect time by a lot would be\nuse a connection pooler.\n\nThe critical question is how often does each process create a new\nconnection. 200 processes which make one connection each and keep\nthem open for 10 minutes is quite different from 200 processes which\nmake and break connections as fast as they can.\n\n\nCheers,\n\nJeff\n", "msg_date": "Sat, 18 Dec 2010 11:48:07 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 9.0 x64 bit pgbench TPC very low question?" }, { "msg_contents": "On 12/18/10 20:42, tuanhoanganh wrote:\n> Here is my result without -C\n> pgbench -h 127.0.0.1 -p 9999 -U postgres -c 100 -t 10 -s 10 pgbench\n\nYou really should replace \"-t 10\" with something like \"-T 60\" or more.\n\n", "msg_date": "Sat, 18 Dec 2010 23:27:07 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 9.0 x64 bit pgbench TPC very low question?" 
}, { "msg_contents": "Here is my new pgbench's point\n\npgbench -h 127.0.0.1 -p 9999 -U postgres -c 200 -t 100 -s 10 pgbench\n\nScale option ignored, using pgbench_branches table count = 10\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nquery mode: simple\nnumber of clients: 200\nnumber of threads: 1\nnumber of transactions per client: 100\nnumber of transactions actually processed: 20000/20000\ntps = 202.556936 (including connections establishing)\ntps = 225.498811 (excluding connections establishing)\n\nPostgreSQL config with pgtune\ndefault_statistics_target = 100 # pgtune wizard 2010-12-15\nmaintenance_work_mem = 480MB # pgtune wizard 2010-12-15\nconstraint_exclusion = on # pgtune wizard 2010-12-15\ncheckpoint_completion_target = 0.9 # pgtune wizard 2010-12-15\neffective_cache_size = 2816MB # pgtune wizard 2010-12-15\nwork_mem = 8MB # pgtune wizard 2010-12-15\nwal_buffers = 32MB # pgtune wizard 2010-12-15\ncheckpoint_segments = 64 # pgtune wizard 2010-12-15\nshared_buffers = 960MB # pgtune wizard 2010-12-15\nmax_connections = 254 # pgtune wizard 2010-12-15\n\nI have server computer install Windows 2008R2, PostgreSQL 9.0.1 64 bit, 8G\nRAM, RAID 10 - 4 disks\n\nIs it common pgbench 's point with my server ?\n\nThanks you very much.\n\nTuan Hoang Anh\n\nOn Sun, Dec 19, 2010 at 5:27 AM, Ivan Voras <[email protected]> wrote:\n\n> On 12/18/10 20:42, tuanhoanganh wrote:\n>\n>> Here is my result without -C\n>> pgbench -h 127.0.0.1 -p 9999 -U postgres -c 100 -t 10 -s 10 pgbench\n>>\n>\n> You really should replace \"-t 10\" with something like \"-T 60\" or more.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHere is my new pgbench's pointpgbench -h 127.0.0.1 -p 9999 -U postgres -c 200 -t 100 -s 10 pgbenchScale option ignored, using pgbench_branches table count = 10starting vacuum...end.transaction type: TPC-B (sort of)\nscaling factor: 10query mode: simplenumber of clients: 200number of threads: 1number of transactions per client: 100number of transactions actually processed: 20000/20000tps = 202.556936 (including connections establishing)\ntps = 225.498811 (excluding connections establishing)PostgreSQL config with pgtunedefault_statistics_target = 100 # pgtune wizard 2010-12-15\nmaintenance_work_mem = 480MB # pgtune wizard 2010-12-15constraint_exclusion = on # pgtune wizard 2010-12-15checkpoint_completion_target = 0.9 # pgtune wizard 2010-12-15effective_cache_size = 2816MB # pgtune wizard 2010-12-15\n\nwork_mem = 8MB # pgtune wizard 2010-12-15wal_buffers = 32MB # pgtune wizard 2010-12-15checkpoint_segments = 64 # pgtune wizard 2010-12-15shared_buffers = 960MB # pgtune wizard 2010-12-15max_connections = 254 # pgtune wizard 2010-12-15\nI have server computer install Windows 2008R2, PostgreSQL 9.0.1 64 bit, 8G RAM, RAID 10 - 4 disksIs it common pgbench 's point with my server ?Thanks you very much.Tuan Hoang Anh\nOn Sun, Dec 19, 2010 at 5:27 AM, Ivan Voras <[email protected]> wrote:\nOn 12/18/10 20:42, tuanhoanganh wrote:\n\nHere is my result without -C\npgbench -h 127.0.0.1 -p 9999 -U postgres -c 100 -t 10 -s 10 pgbench\n\n\nYou really should replace \"-t 10\" with something like \"-T 60\" or more.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Mon, 20 Dec 2010 21:10:58 +0700", "msg_from": "tuanhoanganh 
<[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 9.0 x64 bit pgbench TPC very low question?" }, { "msg_contents": "On Mon, Dec 20, 2010 at 7:10 AM, tuanhoanganh <[email protected]> wrote:\n> Here is my new pgbench's point\n>\n> pgbench -h 127.0.0.1 -p 9999 -U postgres -c 200 -t 100 -s 10 pgbench\n\nYour -c should always be the same or lower than -s. Anything higher\nand you're just thrashing your IO system waiting for locks. Note that\n-s is ignored on runs if you're not doing -i. Also -t 100 is too\nsmall to get a good test, try at least 1000 or 10000 and let it run a\nminute.\n\n> Scale option ignored, using pgbench_branches table count = 10\n> starting vacuum...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 10\n> query mode: simple\n> number of clients: 200\n> number of threads: 1\n> number of transactions per client: 100\n> number of transactions actually processed: 20000/20000\n> tps = 202.556936 (including connections establishing)\n> tps = 225.498811 (excluding connections establishing)\n\n> I have server computer install Windows 2008R2, PostgreSQL 9.0.1 64 bit, 8G\n> RAM, RAID 10 - 4 disks\n>\n> Is it common pgbench 's point with my server ?\n\nThat's a pretty reasonable number for that class machine. I assume\nyou do NOT have a battery backed caching RAID controller or it would\nbe WAY higher.\n", "msg_date": "Mon, 20 Dec 2010 07:21:40 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 9.0 x64 bit pgbench TPC very low question?" }, { "msg_contents": "P.s. here's one of my two slower slave machines. It has dual quad\ncore opterons (2352 2.1GHz) and 32 Gig ram. Controller is an Areca\n1680 with 512M battery backed cache and 2 disks for pg_xlog and 12 for\nthe data/base directory. Running Centos 5.4 or so.\n\npgbench -c 10 -t 10000 test\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 10\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 100000/100000\ntps = 5289.941145 (including connections establishing)\ntps = 5302.815418 (excluding connections establishing)\n", "msg_date": "Mon, 20 Dec 2010 07:24:14 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 9.0 x64 bit pgbench TPC very low question?" }, { "msg_contents": "Is there any tool work on windows can open 200 connect to postgresql and\napplication connect to this tool to decrease time connect to PostgreSQL\n(because PostgreSQL start new process when have a new connect, I want this\ntool open and keep 200 connect to postgreSQL, my application connect to this\ntool instead of postgreSQL).\n\n\nMy server is running Windows 2008R2 and only has RAID 10 - 4 Disk\nOn Mon, Dec 20, 2010 at 9:24 PM, Scott Marlowe <[email protected]>wrote:\n\n> P.s. here's one of my two slower slave machines. It has dual quad\n> core opterons (2352 2.1GHz) and 32 Gig ram. Controller is an Areca\n> 1680 with 512M battery backed cache and 2 disks for pg_xlog and 12 for\n> the data/base directory. 
Running Centos 5.4 or so.\n>\n> pgbench -c 10 -t 10000 test\n> starting vacuum...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 100\n> number of clients: 10\n> number of transactions per client: 10000\n> number of transactions actually processed: 100000/100000\n> tps = 5289.941145 (including connections establishing)\n> tps = 5302.815418 (excluding connections establishing)\n>\n\nThanks in advance\n\nTuan Hoang Anh\n\nIs there any tool work on windows can open 200 connect to postgresql  and application connect to this tool to decrease time connect to PostgreSQL (because PostgreSQL start new process when have a new connect, I want this tool open and keep 200 connect to postgreSQL, my application connect to this tool instead of postgreSQL).\nMy server is running Windows 2008R2 and only has RAID 10 - 4 DiskOn Mon, Dec 20, 2010 at 9:24 PM, Scott Marlowe <[email protected]> wrote:\nP.s. here's one of my two slower slave machines.  It has dual quad\ncore opterons (2352 2.1GHz) and 32 Gig ram.  Controller is an Areca\n1680 with 512M battery backed cache and 2 disks for pg_xlog and 12 for\nthe data/base directory.  Running Centos 5.4 or so.\n\npgbench -c 10 -t 10000 test\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 10\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 100000/100000\ntps = 5289.941145 (including connections establishing)\ntps = 5302.815418 (excluding connections establishing)Thanks in advanceTuan Hoang Anh", "msg_date": "Tue, 21 Dec 2010 10:31:59 +0700", "msg_from": "tuanhoanganh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 9.0 x64 bit pgbench TPC very low question?" }, { "msg_contents": "On Mon, Dec 20, 2010 at 8:31 PM, tuanhoanganh <[email protected]> wrote:\n> Is there any tool work on windows can open 200 connect to postgresql  and\n> application connect to this tool to decrease time connect to PostgreSQL\n> (because PostgreSQL start new process when have a new connect, I want this\n> tool open and keep 200 connect to postgreSQL, my application connect to this\n> tool instead of postgreSQL).\n\nSure, that's what any good pooler can do. Have it open and hold open\n200 connections, then have your app connect to the pooler. The pooler\nkeeps the connects open all the time. The app connects to a much\nfaster mechanism, the pooler each time. You need to make sure your\nconnections are \"clean\" when you disconnect, i.e. no idle transactions\nleft over, or you'll get weird errors about failed transactions til\nrollback etc.\n", "msg_date": "Mon, 20 Dec 2010 20:35:29 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 9.0 x64 bit pgbench TPC very low question?" }, { "msg_contents": "On Tue, Dec 21, 2010 at 04:35, Scott Marlowe <[email protected]> wrote:\n> On Mon, Dec 20, 2010 at 8:31 PM, tuanhoanganh <[email protected]> wrote:\n>> Is there any tool work on windows can open 200 connect to postgresql  and\n>> application connect to this tool to decrease time connect to PostgreSQL\n>> (because PostgreSQL start new process when have a new connect, I want this\n>> tool open and keep 200 connect to postgreSQL, my application connect to this\n>> tool instead of postgreSQL).\n>\n> Sure, that's what any good pooler can do.  Have it open and hold open\n> 200 connections, then have your app connect to the pooler.  The pooler\n> keeps the connects open all the time.  
The app connects to a much\n> faster mechanism, the pooler each time.  You need to make sure your\n> connections are \"clean\" when you disconnect, i.e. no idle transactions\n> left over, or you'll get weird errors about failed transactions til\n> rollback etc.\n\nYeah, AFAIK pgbouncer works fine on Windows, and is a very good pooler\nfor PostgreSQL. I haven't run it on Windows myself, but it should\nsupport it fine...\n\n-- \n Magnus Hagander\n Me: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/\n", "msg_date": "Tue, 21 Dec 2010 09:43:50 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 9.0 x64 bit pgbench TPC very low question?" }, { "msg_contents": "As far as i know, Pgbouncer can help to minimum connect to postgresql, I\nwant tool can open and keep 200 connect to postgresql (be cause start new\nconnect to postgresql in windows very slow, i want it open 200 connect in\nfirst time and my application connect to this tool)\n\nIs there any tool like that in windows.\n\nThanks for you help.\n\nTuan Hoang ANh.\n\nOn Tue, Dec 21, 2010 at 3:43 PM, Magnus Hagander <[email protected]>wrote:\n\n> On Tue, Dec 21, 2010 at 04:35, Scott Marlowe <[email protected]>\n> wrote:\n> > On Mon, Dec 20, 2010 at 8:31 PM, tuanhoanganh <[email protected]>\n> wrote:\n> >> Is there any tool work on windows can open 200 connect to postgresql\n> and\n> >> application connect to this tool to decrease time connect to PostgreSQL\n> >> (because PostgreSQL start new process when have a new connect, I want\n> this\n> >> tool open and keep 200 connect to postgreSQL, my application connect to\n> this\n> >> tool instead of postgreSQL).\n> >\n> > Sure, that's what any good pooler can do. Have it open and hold open\n> > 200 connections, then have your app connect to the pooler. The pooler\n> > keeps the connects open all the time. The app connects to a much\n> > faster mechanism, the pooler each time. You need to make sure your\n> > connections are \"clean\" when you disconnect, i.e. no idle transactions\n> > left over, or you'll get weird errors about failed transactions til\n> > rollback etc.\n>\n> Yeah, AFAIK pgbouncer works fine on Windows, and is a very good pooler\n> for PostgreSQL. I haven't run it on Windows myself, but it should\n> support it fine...\n>\n> --\n> Magnus Hagander\n> Me: http://www.hagander.net/\n> Work: http://www.redpill-linpro.com/\n>\n\nAs far as i know, Pgbouncer can help to minimum connect to postgresql, I want tool can open and keep 200 connect to postgresql (be cause start new connect to postgresql in windows very slow, i want it open 200 connect in first time and my application connect to this tool)\nIs there any tool like that in windows.Thanks for you help.Tuan Hoang ANh.On Tue, Dec 21, 2010 at 3:43 PM, Magnus Hagander <[email protected]> wrote:\nOn Tue, Dec 21, 2010 at 04:35, Scott Marlowe <[email protected]> wrote:\n\n> On Mon, Dec 20, 2010 at 8:31 PM, tuanhoanganh <[email protected]> wrote:\n>> Is there any tool work on windows can open 200 connect to postgresql  and\n>> application connect to this tool to decrease time connect to PostgreSQL\n>> (because PostgreSQL start new process when have a new connect, I want this\n>> tool open and keep 200 connect to postgreSQL, my application connect to this\n>> tool instead of postgreSQL).\n>\n> Sure, that's what any good pooler can do.  Have it open and hold open\n> 200 connections, then have your app connect to the pooler.  
The pooler\n> keeps the connects open all the time.  The app connects to a much\n> faster mechanism, the pooler each time.  You need to make sure your\n> connections are \"clean\" when you disconnect, i.e. no idle transactions\n> left over, or you'll get weird errors about failed transactions til\n> rollback etc.\n\nYeah, AFAIK pgbouncer works fine on Windows, and is a very good pooler\nfor PostgreSQL. I haven't run it on Windows myself, but it should\nsupport it fine...\n\n--\n Magnus Hagander\n Me: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/", "msg_date": "Wed, 22 Dec 2010 18:28:41 +0700", "msg_from": "tuanhoanganh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 9.0 x64 bit pgbench TPC very low question?" }, { "msg_contents": "On Wed, Dec 22, 2010 at 6:28 AM, tuanhoanganh <[email protected]> wrote:\n\n> As far as i know, Pgbouncer can help to minimum connect to postgresql, I\n> want tool can open and keep 200 connect to postgresql (be cause start new\n> connect to postgresql in windows very slow, i want it open 200 connect in\n> first time and my application connect to this tool)\n>\n> Is there any tool like that in windows.\n>\n> Thanks for you help.\n>\n>\nAs Magnus said, pgBouncer does what you are asking for.\n\nRegards,\n-- \ngurjeet.singh\n@ EnterpriseDB - The Enterprise Postgres Company\nhttp://www.EnterpriseDB.com\n\nsingh.gurjeet@{ gmail | yahoo }.com\nTwitter/Skype: singh_gurjeet\n\nMail sent from my BlackLaptop device\n\nOn Wed, Dec 22, 2010 at 6:28 AM, tuanhoanganh <[email protected]> wrote:\n\nAs far as i know, Pgbouncer can help to minimum connect to postgresql, I want tool can open and keep 200 connect to postgresql (be cause start new connect to postgresql in windows very slow, i want it open 200 connect in first time and my application connect to this tool)\nIs there any tool like that in windows.Thanks for you help.As Magnus said, pgBouncer does what you are asking for.Regards, -- gurjeet.singh@ EnterpriseDB - The Enterprise Postgres Company\nhttp://www.EnterpriseDB.comsingh.gurjeet@{ gmail | yahoo }.comTwitter/Skype: singh_gurjeetMail sent from my BlackLaptop device", "msg_date": "Wed, 22 Dec 2010 07:13:56 -0500", "msg_from": "Gurjeet Singh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 9.0 x64 bit pgbench TPC very low question?" }, { "msg_contents": "Could you show me what parameter of pgbouncer.ini can do that. I read\npgbouncer and can not make pgbouncer open and keep 200 connect to postgres\n(Sorry for my English)\n\nThanks you very much.\n\nTuan Hoang ANh\n\nOn Wed, Dec 22, 2010 at 7:13 PM, Gurjeet Singh <[email protected]>wrote:\n\n> On Wed, Dec 22, 2010 at 6:28 AM, tuanhoanganh <[email protected]> wrote:\n>\n>> As far as i know, Pgbouncer can help to minimum connect to postgresql, I\n>> want tool can open and keep 200 connect to postgresql (be cause start new\n>> connect to postgresql in windows very slow, i want it open 200 connect in\n>> first time and my application connect to this tool)\n>>\n>> Is there any tool like that in windows.\n>>\n>> Thanks for you help.\n>>\n>>\n> As Magnus said, pgBouncer does what you are asking for.\n>\n> Regards,\n> --\n> gurjeet.singh\n> @ EnterpriseDB - The Enterprise Postgres Company\n> http://www.EnterpriseDB.com\n>\n> singh.gurjeet@{ gmail | yahoo }.com\n> Twitter/Skype: singh_gurjeet\n>\n> Mail sent from my BlackLaptop device\n>\n\nCould you show me what parameter of pgbouncer.ini can do that. 
I read pgbouncer and can not make pgbouncer open and keep 200 connect to postgres (Sorry for my English)Thanks you very much.Tuan Hoang ANh\nOn Wed, Dec 22, 2010 at 7:13 PM, Gurjeet Singh <[email protected]> wrote:\nOn Wed, Dec 22, 2010 at 6:28 AM, tuanhoanganh <[email protected]> wrote:\n\n\nAs far as i know, Pgbouncer can help to minimum connect to postgresql, I want tool can open and keep 200 connect to postgresql (be cause start new connect to postgresql in windows very slow, i want it open 200 connect in first time and my application connect to this tool)\nIs there any tool like that in windows.Thanks for you help.As Magnus said, pgBouncer does what you are asking for.Regards, -- gurjeet.singh@ EnterpriseDB - The Enterprise Postgres Company\nhttp://www.EnterpriseDB.comsingh.gurjeet@{ gmail | yahoo }.comTwitter/Skype: singh_gurjeetMail sent from my BlackLaptop device", "msg_date": "Thu, 23 Dec 2010 21:20:59 +0700", "msg_from": "tuanhoanganh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 9.0 x64 bit pgbench TPC very low question?" }, { "msg_contents": "tuanhoanganh <[email protected]> wrote:\n \n> Could you show me what parameter of pgbouncer.ini can do that. I\n> read pgbouncer and can not make pgbouncer open and keep 200\n> connect to postgres\n \nWhat makes you think that 200 connections to PostgreSQL will be a\ngood idea? Perhaps you want a smaller number of connections from\npgbouncer to PostgreSQL and a larger number from your application to\npgbouncer?\n \nIf you search the archives you can probably find at least 100 posts\nabout how both throughput and response time degrade when you have\nmore connections active then there are resources to use. \n(Saturation is often around twice the CPU core count plus the\neffective number of spindles, with caching reducing the latter.) It\nis quite often the case that a transaction will complete sooner if\nit is queued for later execution than if it is thrown into a mix\nwhere resources are saturated.\n \n-Kevin\n", "msg_date": "Thu, 23 Dec 2010 08:37:48 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 9.0 x64 bit pgbench TPC very low\n\t question?" }, { "msg_contents": "On Thu, Dec 23, 2010 at 09:20:59PM +0700, tuanhoanganh wrote:\n> Could you show me what parameter of pgbouncer.ini can do that. I read\n> pgbouncer and can not make pgbouncer open and keep 200 connect to postgres\n> (Sorry for my English)\n> \n> Thanks you very much.\n> \n> Tuan Hoang ANh\n> \n\nYou need to use session pooling for that to work. From the man page:\n\n In order not to compromise transaction semantics for connection\n pooling, pgbouncer supports several types of pooling when\n rotating connections:\n\n Session pooling\n Most polite method. When client connects, a server connection\n will be assigned to it for the whole duration the client\n stays connected. When the client disconnects, the server\n connection will be put back into the pool. This is the\n default method.\n\n Transaction pooling\n A server connection is assigned to client only during a\n transaction. When PgBouncer notices that transaction is over,\n the server connection will be put back into the pool.\n\n Statement pooling\n Most aggressive method. The server connection will be put back\n into pool immediately after a query completes. 
Multi-statement\n transactions are disallowed in this mode as they would break.\n\n\nThe fact that pgbouncer will not keep 200 connections open to\nthe database means that you do not have enough work to actually\nkeep 200 permanent connections busy. It is much more efficient\nto use transaction pooling. You typically want the number of\npersistent database connections to be a small multiple of the\nnumber of CPUs (cores) on your system. Then set pgbouncer to\nallow as many client connections as you need. This will give\nyou the best throughput and pgbouncer can setup and tear down\nthe connections to your clients much, much faster than making\na full connection to the PostgreSQL database. \n\nRegards,\nKen\n", "msg_date": "Thu, 23 Dec 2010 08:46:28 -0600", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 9.0 x64 bit pgbench TPC very low question?" }, { "msg_contents": "On Thu, Dec 23, 2010 at 9:37 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> tuanhoanganh <[email protected]> wrote:\n>\n> > Could you show me what parameter of pgbouncer.ini can do that. I\n> > read pgbouncer and can not make pgbouncer open and keep 200\n> > connect to postgres\n>\n> What makes you think that 200 connections to PostgreSQL will be a\n> good idea? Perhaps you want a smaller number of connections from\n> pgbouncer to PostgreSQL and a larger number from your application to\n> pgbouncer?\n>\n\nIf you read this thread, My app has ~ 20 exe file, each of exe create new\nconnect to postgesql and there are 10-30 user use my application.\nMy server running Windows 2008 R2. In this thread, postgresql on windows\ncreate new connect very slow\n\n\"Starting a new connection in PG is relatively slow, especially so on\nWindows, because it involves starting and setting up a new process for\neach one.\nJeff Janes \"\n\nSo I need a tool to open and keep 200 connect to postgres, my application\nconnect to this tool. And decrease time to connect to postgres ( because no\nneed to start new postgres process on windows)\n\n\n>\n> If you search the archives you can probably find at least 100 posts\n> about how both throughput and response time degrade when you have\n> more connections active then there are resources to use.\n> (Saturation is often around twice the CPU core count plus the\n> effective number of spindles, with caching reducing the latter.) It\n> is quite often the case that a transaction will complete sooner if\n> it is queued for later execution than if it is thrown into a mix\n> where resources are saturated.\n>\n> -Kevin\n>\n\nOn Thu, Dec 23, 2010 at 9:37 PM, Kevin Grittner <[email protected]> wrote:\ntuanhoanganh <[email protected]> wrote:\n\n> Could you show me what parameter of pgbouncer.ini can do that. I\n> read pgbouncer and can not make pgbouncer open and keep 200\n> connect to postgres\n\nWhat makes you think that 200 connections to PostgreSQL will be a\ngood idea?  Perhaps you want a smaller number of connections from\npgbouncer to PostgreSQL and a larger number from your application to\npgbouncer?If you read this thread, My app has ~ 20 exe file, each of exe create new connect to postgesql and there are 10-30 user use my application.My server running Windows 2008 R2. 
In this thread, postgresql on windows create new connect very slow \n\"Starting a new connection in PG is relatively slow, especially so on\nWindows, because it involves starting and setting up a new process for\neach one.Jeff Janes \"So I need a tool to open and keep 200 connect to postgres, my application connect to this tool. And decrease time to connect to postgres ( because no need to start new postgres process on windows)\n \n\nIf you search the archives you can probably find at least 100 posts\nabout how both throughput and response time degrade when you have\nmore connections active then there are resources to use.\n(Saturation is often around twice the CPU core count plus the\neffective number of spindles, with caching reducing the latter.)  It\nis quite often the case that a transaction will complete sooner if\nit is queued for later execution than if it is thrown into a mix\nwhere resources are saturated.\n\n-Kevin", "msg_date": "Thu, 23 Dec 2010 22:20:19 +0700", "msg_from": "tuanhoanganh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 9.0 x64 bit pgbench TPC very low question?" }, { "msg_contents": "What you still haven't clarified is how long each exe/user combo keeps\nthe connection open for.\n\nIf for a day, then who cares that it takes 4 seconds each morning to\nopen them all?\n\nIf for a fraction of a second, then you do not need 200 simultaneous\nopen connections, they can probably share a much smaller number. That\nis the whole point of pooling.\n\nCheers,\n\nJeff\n", "msg_date": "Thu, 23 Dec 2010 08:37:51 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 9.0 x64 bit pgbench TPC very low question?" } ]
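A minimal pgbouncer.ini sketch of the setup Kenneth describes above. The section layout and parameter names are pgbouncer's own, but every value, the database name and the file name are illustrative placeholders, and the exact parameters available depend on the pgbouncer version in use:

[databases]
; placeholder entry -- point this at the real database
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
; must match what pg_hba.conf expects; use md5 rather than trust in production
auth_type = trust
auth_file = userlist.txt
; transaction pooling: a server connection is borrowed only for the duration of a transaction
pool_mode = transaction
; the ~200 application connections attach here, cheaply
max_client_conn = 200
; only a small multiple of the CPU core count actually talks to PostgreSQL
default_pool_size = 16
; keep pooled server connections open instead of closing them when idle
server_idle_timeout = 0

With this shape, the expensive Windows backend startup happens only for the small server pool, while the application still sees up to 200 connections on port 6432.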
[ { "msg_contents": "> tuanhoanganh wrote:\n \n> tps = 20.143494 (including connections establishing)\n> tps = 256.630260 (excluding connections establishing)\n> \n> Why pgbench on my server is very low or is it common value with my\n> server ?\n \nThose numbers look pretty low to me. I would start with looking at\nwhy it is taking so long to establish a TCP connection. Are you\nusing SSL? How are you authenticating?\n \n-Kevin\n", "msg_date": "Sat, 18 Dec 2010 12:35:24 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 9.0 x64 bit pgbench TPC very low\n\t question?" }, { "msg_contents": "No, I don't use SSL. Here is my pg_hba.conf\n# IPv4 local connections:\nhost all all 0.0.0.0/0 trust\n# IPv6 local connections:\nhost all all ::1/128 trust\n\nOn Sun, Dec 19, 2010 at 1:35 AM, Kevin Grittner <[email protected]\n> wrote:\n\n> > tuanhoanganh wrote:\n>\n> > tps = 20.143494 (including connections establishing)\n> > tps = 256.630260 (excluding connections establishing)\n> >\n> > Why pgbench on my server is very low or is it common value with my\n> > server ?\n>\n> Those numbers look pretty low to me. I would start with looking at\n> why it is taking so long to establish a TCP connection. Are you\n> using SSL? How are you authenticating?\n>\n> -Kevin\n>\n\nNo, I don't use SSL. Here is my pg_hba.conf# IPv4 local connections:host    all             all             0.0.0.0/0            trust# IPv6 local connections:host    all             all             ::1/128                 trust\nOn Sun, Dec 19, 2010 at 1:35 AM, Kevin Grittner <[email protected]> wrote:\n> tuanhoanganh  wrote:\n\n> tps = 20.143494 (including connections establishing)\n> tps = 256.630260 (excluding connections establishing)\n>\n> Why pgbench on my server is very low or is it common value with my\n> server ?\n\nThose numbers look pretty low to me.  I would start with looking at\nwhy it is taking so long to establish a TCP connection.  Are you\nusing SSL?  How are you authenticating?\n\n-Kevin", "msg_date": "Sun, 19 Dec 2010 01:57:39 +0700", "msg_from": "tuanhoanganh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 9.0 x64 bit pgbench TPC very low question?" } ]
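The gap between the two tps figures above (20 including connection establishment, 256 excluding it) can be isolated with pgbench itself: the -C switch opens a new connection for every transaction, so comparing a run with and without it shows how much of the time goes into connection setup. The client count and duration below are arbitrary, and "bench" is a placeholder database initialized with pgbench -i:

pgbench -S -c 8 -T 60 bench      # select-only; each client keeps one connection for the whole run
pgbench -S -C -c 8 -T 60 bench   # same load, but a new connection for every transaction

If the second run collapses to a small fraction of the first, the bottleneck is connection establishment rather than query execution, which is exactly the case a pooler addresses.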
[ { "msg_contents": "Hi,\n\nJust stumbled on the following post:\nhttp://yoshinorimatsunobu.blogspot.com/2010/10/using-mysql-as-nosql-story-for.html\n\nThe post claim that MySQL can do more qps then MemCahed or any other NoSQL\nwhen doing simple queries like: SELECT * FROM table WHERE id=num;\n\nAnd I wonder if:\n\n1. Currently, is it possbile to achive the same using PG 9.0.x\n2. Is it possible at all?\n\nIt seems to me that if such gain is possible, PG should benefit from that\nsignificantly when it comes to Key/Value queries.\n\n\nBest,\nMiki\n\n--------------------------------------------------\nMichael Ben-Nes - Internet Consultant and Director.\nhttp://www.epoch.co.il - weaving the Net.\nCellular: 054-4848113\n--------------------------------------------------\n\nHi,Just stumbled on the following post:http://yoshinorimatsunobu.blogspot.com/2010/10/using-mysql-as-nosql-story-for.html\nThe post claim that MySQL can do more qps then MemCahed or any other NoSQL when doing simple queries like: SELECT * FROM table WHERE id=num;And I wonder if:1. Currently, is it possbile to achive the same using PG 9.0.x\n2. Is it possible at all?It seems to me that if such gain is possible, PG should benefit from that significantly when it comes to Key/Value queries.Best,Miki--------------------------------------------------\nMichael Ben-Nes - Internet Consultant and Director.http://www.epoch.co.il - weaving the Net.Cellular: 054-4848113--------------------------------------------------", "msg_date": "Tue, 21 Dec 2010 11:09:53 +0200", "msg_from": "Michael Ben-Nes <[email protected]>", "msg_from_op": true, "msg_subject": "MySQL HandlerSocket - Is this possible in PG?" }, { "msg_contents": "Hello\n\nyou can emulate it now.\n\na) try to do a simple stored procedure, where you can wrap your query\nb) use a FAST CALL API to call this procedure\nc) use a some pool tool for pooling and persisting sessions\n\nRegards\n\nPavel Stehule\n\n2010/12/21 Michael Ben-Nes <[email protected]>:\n> Hi,\n>\n> Just stumbled on the following post:\n> http://yoshinorimatsunobu.blogspot.com/2010/10/using-mysql-as-nosql-story-for.html\n>\n> The post claim that MySQL can do more qps then MemCahed or any other NoSQL\n> when doing simple queries like: SELECT * FROM table WHERE id=num;\n>\n> And I wonder if:\n>\n> 1. Currently, is it possbile to achive the same using PG 9.0.x\n> 2. Is it possible at all?\n>\n> It seems to me that if such gain is possible, PG should benefit from that\n> significantly when it comes to Key/Value queries.\n>\n>\n> Best,\n> Miki\n>\n> --------------------------------------------------\n> Michael Ben-Nes - Internet Consultant and Director.\n> http://www.epoch.co.il - weaving the Net.\n> Cellular: 054-4848113\n> --------------------------------------------------\n>\n", "msg_date": "Tue, 21 Dec 2010 12:31:47 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL HandlerSocket - Is this possible in PG?" }, { "msg_contents": "Hi Pavel,\n\nThanks for your quick answer. Can you please elaborate a bit more about the\npoints bellow.\n\nOn Tue, Dec 21, 2010 at 1:31 PM, Pavel Stehule <[email protected]>wrote:\n\n> Hello\n>\n> you can emulate it now.\n>\n> a) try to do a simple stored procedure, where you can wrap your query\n>\n\nDo you mean I should use PREPARE?\n\nb) use a FAST CALL API to call this procedure\n>\n\nCurrently I use PHP to access the DB which use libpq. Is that cosidered a\nfast call API ? 
if not, can you please refer me to the right info.\n\n\n> c) use a some pool tool for pooling and persisting sessions\n>\nPHP pg_pconnect command open a persistent PostgreSQL connection. Is it\nenough or I better use PgPool2 or something similar?\n\nConsidering the points above, will I be able to get such high QPS from\nPostgreSQL ? If so, it will be my pleasure to dump Reddis and work solely\nwith PG :)\n\n\nThanks,\nMiki\n\n\n> Regards\n>\n> Pavel Stehule\n>\n> 2010/12/21 Michael Ben-Nes <[email protected]>:\n> > Hi,\n> >\n> > Just stumbled on the following post:\n> >\n> http://yoshinorimatsunobu.blogspot.com/2010/10/using-mysql-as-nosql-story-for.html\n> >\n> > The post claim that MySQL can do more qps then MemCahed or any other\n> NoSQL\n> > when doing simple queries like: SELECT * FROM table WHERE id=num;\n> >\n> > And I wonder if:\n> >\n> > 1. Currently, is it possbile to achive the same using PG 9.0.x\n> > 2. Is it possible at all?\n> >\n> > It seems to me that if such gain is possible, PG should benefit from that\n> > significantly when it comes to Key/Value queries.\n> >\n> >\n> > Best,\n> > Miki\n> >\n> >\n>\n--------------------------------------------------\nMichael Ben-Nes - Internet Consultant and Director.\nhttp://www.epoch.co.il - weaving the Net.\nCellular: 054-4848113\n--------------------------------------------------\n\nHi Pavel,\nThanks for your quick answer. Can you please elaborate a bit more about the points bellow.On Tue, Dec 21, 2010 at 1:31 PM, Pavel Stehule <[email protected]> wrote:\nHello\n\nyou can emulate it now.\n\na) try to do a simple stored procedure, where you can wrap your queryDo you mean I should use PREPARE?\n\nb) use a FAST CALL API to call this procedureCurrently I use PHP to access the DB which use libpq. Is that cosidered a fast call API ? if not, can you please refer me to the right info. \n\nc) use a some pool tool for pooling and persisting sessionsPHP pg_pconnect command open a persistent PostgreSQL connection. Is it enough or I better use PgPool2 or something similar?\n Considering the points above, will I be able to get such high QPS from PostgreSQL ? If so, it will be my pleasure to dump Reddis and work solely with PG :)Thanks,Miki\n\nRegards\n\nPavel Stehule\n\n2010/12/21 Michael Ben-Nes <[email protected]>:\n> Hi,\n>\n> Just stumbled on the following post:\n> http://yoshinorimatsunobu.blogspot.com/2010/10/using-mysql-as-nosql-story-for.html\n>\n> The post claim that MySQL can do more qps then MemCahed or any other NoSQL\n> when doing simple queries like: SELECT * FROM table WHERE id=num;\n>\n> And I wonder if:\n>\n> 1. Currently, is it possbile to achive the same using PG 9.0.x\n> 2. Is it possible at all?\n>\n> It seems to me that if such gain is possible, PG should benefit from that\n> significantly when it comes to Key/Value queries.\n>\n>\n> Best,\n> Miki\n>\n\n>\n--------------------------------------------------Michael Ben-Nes - Internet Consultant and Director.http://www.epoch.co.il - weaving the Net.\nCellular: 054-4848113--------------------------------------------------", "msg_date": "Tue, 21 Dec 2010 17:17:07 +0200", "msg_from": "Michael Ben-Nes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: MySQL HandlerSocket - Is this possible in PG?" }, { "msg_contents": "2010/12/21 Michael Ben-Nes <[email protected]>:\n> Hi Pavel,\n>\n> Thanks for your quick answer. 
Can you please elaborate a bit more about the\n> points bellow.\n>\n> On Tue, Dec 21, 2010 at 1:31 PM, Pavel Stehule <[email protected]>\n> wrote:\n>>\n>> Hello\n>>\n>> you can emulate it now.\n>>\n>> a) try to do a simple stored procedure, where you can wrap your query\n>\n> Do you mean I should use PREPARE?\n\nyes\n\n>\n>> b) use a FAST CALL API to call this procedure\n>\n> Currently I use PHP to access the DB which use libpq. Is that cosidered a\n> fast call API ? if not, can you please refer me to the right info.\n>\n>>\n\nsorry it is a fast-path interface\n\nhttp://www.postgresql.org/docs/8.1/static/libpq-fastpath.html\n\nbut php hasn't a adequate API :(\n\n\n>> c) use a some pool tool for pooling and persisting sessions\n>\n> PHP pg_pconnect command open a persistent PostgreSQL connection. Is it\n> enough or I better use PgPool2 or something similar?\n>\n\nprobably it's enough\n\n>\n> Considering the points above, will I be able to get such high QPS from\n> PostgreSQL ? If so, it will be my pleasure to dump Reddis and work solely\n> with PG :)\n>\n\nThere is a lot of unknown factors, but I believe so speed is limited\nby IO more than by sw. The PostgreSQL engine isn't specially optimised\nfor access with primary key (InnoDB has this optimization, PostgreSQL\nhasn't clustered index) , so probably Pg will be slower.\n\nRegards\n\nPavel Stehule\n\n>\n> Thanks,\n> Miki\n>\n>>\n>> Regards\n>>\n>> Pavel Stehule\n>>\n>> 2010/12/21 Michael Ben-Nes <[email protected]>:\n>> > Hi,\n>> >\n>> > Just stumbled on the following post:\n>> >\n>> > http://yoshinorimatsunobu.blogspot.com/2010/10/using-mysql-as-nosql-story-for.html\n>> >\n>> > The post claim that MySQL can do more qps then MemCahed or any other\n>> > NoSQL\n>> > when doing simple queries like: SELECT * FROM table WHERE id=num;\n>> >\n>> > And I wonder if:\n>> >\n>> > 1. Currently, is it possbile to achive the same using PG 9.0.x\n>> > 2. Is it possible at all?\n>> >\n>> > It seems to me that if such gain is possible, PG should benefit from\n>> > that\n>> > significantly when it comes to Key/Value queries.\n>> >\n>> >\n>> > Best,\n>> > Miki\n>> >\n>> >\n>\n> --------------------------------------------------\n> Michael Ben-Nes - Internet Consultant and Director.\n> http://www.epoch.co.il - weaving the Net.\n> Cellular: 054-4848113\n> --------------------------------------------------\n>\n", "msg_date": "Tue, 21 Dec 2010 16:50:51 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL HandlerSocket - Is this possible in PG?" }, { "msg_contents": "On Tue, Dec 21, 2010 at 10:50 AM, Pavel Stehule <[email protected]> wrote:\n> 2010/12/21 Michael Ben-Nes <[email protected]>:\n>> Hi Pavel,\n>>\n>> Thanks for your quick answer. Can you please elaborate a bit more about the\n>> points bellow.\n>>\n>> On Tue, Dec 21, 2010 at 1:31 PM, Pavel Stehule <[email protected]>\n>> wrote:\n>>>\n>>> Hello\n>>>\n>>> you can emulate it now.\n>>>\n>>> a) try to do a simple stored procedure, where you can wrap your query\n>>\n>> Do you mean I should use PREPARE?\n>\n> yes\n>\n>>\n>>> b) use a FAST CALL API to call this procedure\n>>\n>> Currently I use PHP to access the DB which use libpq. Is that cosidered a\n>> fast call API ? if not, can you please refer me to the right info.\n>>\n>>>\n>\n> sorry it is a fast-path interface\n>\n> http://www.postgresql.org/docs/8.1/static/libpq-fastpath.html\n>\n> but php hasn't a adequate API :(\n\n\nI don't think fastpath interface is going to get you there. 
What they\nare doing with mysql is bypassing both the parser and the protocol.\nAs soon as you use libpq, you've lost the battle...you can't see\nanywhere close to to that performance before you become network\nbottlenecked.\n\nIf you want to see postgres doing this in action, you could fire up\nthe database in single user mode and run raw queries against the\nbackend. Another way to do it is to hack tcop/postgres.c and inject\nprotocol messages manually. Right now, the only way to get that close\nto the metal using standard techniques is via SPI (plpgsql, etc). A\nproper transaction free stored procedure implementation would open a\nlot of doors for fast query execution.\n\nmerlin\n", "msg_date": "Tue, 21 Dec 2010 16:07:13 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL HandlerSocket - Is this possible in PG?" }, { "msg_contents": "I think this might be a game changing feature.\nFor the first time after 10 years I have reason to consider MySQL, as the\ncost per performance in such scenario is amazing. Morever I wont have to run\nit in single mod or loose other functionality by using this feautre. as I\ncan access the ordinary interface on port 3306 and the fast interface on\nother port.\n\nI wonder if PostgreSQL should replicate this functionality somehow. How can\nI represent this idea to the developers? They will probably know if this\nfeature worth something.\n\nThanks,\nMiki\n\n--------------------------------------------------\nMichael Ben-Nes - Internet Consultant and Director.\nhttp://www.epoch.co.il - weaving the Net.\nCellular: 054-4848113\n--------------------------------------------------\n\n\nOn Tue, Dec 21, 2010 at 11:07 PM, Merlin Moncure <[email protected]> wrote:\n\n> On Tue, Dec 21, 2010 at 10:50 AM, Pavel Stehule <[email protected]>\n> wrote:\n> > 2010/12/21 Michael Ben-Nes <[email protected]>:\n> >> Hi Pavel,\n> >>\n> >> Thanks for your quick answer. Can you please elaborate a bit more about\n> the\n> >> points bellow.\n> >>\n> >> On Tue, Dec 21, 2010 at 1:31 PM, Pavel Stehule <[email protected]\n> >\n> >> wrote:\n> >>>\n> >>> Hello\n> >>>\n> >>> you can emulate it now.\n> >>>\n> >>> a) try to do a simple stored procedure, where you can wrap your query\n> >>\n> >> Do you mean I should use PREPARE?\n> >\n> > yes\n> >\n> >>\n> >>> b) use a FAST CALL API to call this procedure\n> >>\n> >> Currently I use PHP to access the DB which use libpq. Is that cosidered\n> a\n> >> fast call API ? if not, can you please refer me to the right info.\n> >>\n> >>>\n> >\n> > sorry it is a fast-path interface\n> >\n> > http://www.postgresql.org/docs/8.1/static/libpq-fastpath.html\n> >\n> > but php hasn't a adequate API :(\n>\n>\n> I don't think fastpath interface is going to get you there. What they\n> are doing with mysql is bypassing both the parser and the protocol.\n> As soon as you use libpq, you've lost the battle...you can't see\n> anywhere close to to that performance before you become network\n> bottlenecked.\n>\n> If you want to see postgres doing this in action, you could fire up\n> the database in single user mode and run raw queries against the\n> backend. Another way to do it is to hack tcop/postgres.c and inject\n> protocol messages manually. Right now, the only way to get that close\n> to the metal using standard techniques is via SPI (plpgsql, etc). 
A\n> proper transaction free stored procedure implementation would open a\n> lot of doors for fast query execution.\n>\n> merlin\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nI think this might be a game changing feature.For the first time after 10 years I have reason to consider MySQL, as the cost per performance in such scenario is amazing. Morever I wont have to run it in single mod or loose other functionality by using this feautre. as I can access the ordinary interface on port 3306 and the fast interface on other port.\nI wonder if PostgreSQL should replicate this functionality somehow. How can I represent this idea to the developers? They will probably know if this feature worth something.Thanks,Miki--------------------------------------------------\nMichael Ben-Nes - Internet Consultant and Director.http://www.epoch.co.il - weaving the Net.Cellular: 054-4848113--------------------------------------------------\nOn Tue, Dec 21, 2010 at 11:07 PM, Merlin Moncure <[email protected]> wrote:\nOn Tue, Dec 21, 2010 at 10:50 AM, Pavel Stehule <[email protected]> wrote:\n> 2010/12/21 Michael Ben-Nes <[email protected]>:\n>> Hi Pavel,\n>>\n>> Thanks for your quick answer. Can you please elaborate a bit more about the\n>> points bellow.\n>>\n>> On Tue, Dec 21, 2010 at 1:31 PM, Pavel Stehule <[email protected]>\n>> wrote:\n>>>\n>>> Hello\n>>>\n>>> you can emulate it now.\n>>>\n>>> a) try to do a simple stored procedure, where you can wrap your query\n>>\n>> Do you mean I should use PREPARE?\n>\n> yes\n>\n>>\n>>> b) use a FAST CALL API to call this procedure\n>>\n>> Currently I use PHP to access the DB which use libpq. Is that cosidered a\n>> fast call API ? if not, can you please refer me to the right info.\n>>\n>>>\n>\n> sorry it is a fast-path interface\n>\n> http://www.postgresql.org/docs/8.1/static/libpq-fastpath.html\n>\n> but php hasn't a adequate API :(\n\n\nI don't think fastpath interface is going to get you there.  What they\nare doing with mysql is bypassing both the parser and the protocol.\nAs soon as you use libpq, you've lost the battle...you can't see\nanywhere close to to that performance before you become network\nbottlenecked.\n\nIf you want to see postgres doing this in action, you could fire up\nthe database in single user mode and run raw queries against the\nbackend.   Another way to do it is to hack tcop/postgres.c and inject\nprotocol messages manually.  Right now, the only way to get that close\nto the metal using standard techniques is via SPI (plpgsql, etc).  A\nproper transaction free stored procedure implementation would open a\nlot of doors for fast query execution.\n\nmerlin\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 22 Dec 2010 09:41:51 +0200", "msg_from": "Michael Ben-Nes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: MySQL HandlerSocket - Is this possible in PG?" }, { "msg_contents": "\n\n> Do you mean I should use PREPARE?\n>\n> Currently I use PHP to access the DB which use libpq. Is that cosidered a\n> fast call API ? if not, can you please refer me to the right info.\n>\n> PHP pg_pconnect command open a persistent PostgreSQL connection. 
Is it\n> enough or I better use PgPool2 or something similar?\n>\n> Considering the points above, will I be able to get such high QPS from\n> PostgreSQL ? If so, it will be my pleasure to dump Reddis and work solely\n> with PG :)\n\nI suppose you already have a web server like lighttpd, zeus, or nginx, \nusing php as fastcgi, or apache behind a proxy ? In that case, since the \nnumber of php processes is limited (usually to something like 2x your \nnumber of cores), the number of postgres connections a web server \ngenerates is limited, and you can do without pgpool and use pg_pconnect. \nBe wary of the pg_pconnect bugs though (like if you restart pg, you also \nhave to restart php, I suppose you know that).\n\nHere are some timings (Core 2 Q6600) for a simple SELECT on PK query :\n\nusing tcp (localhost)\n 218 µs / query : pg_query\n 226 µs / query : pg_query_params\n 143 µs / query : pg_execute\n\nusing unix sockets\n 107 µs / query : pg_query\n 122 µs / query : pg_query_params\n 63 µs / query : pg_execute\n\nquery inside plpgsql function\n 17 µs / query\n\nDon't use PDO, it is 2x-3x slower.\n\nTCP overhead is quite large...\n\nIf you have a named prepared statement (created with PREPARE) use \npg_execute(), which is much faster than pg_query (for very simple queries).\n\nOf course you need to prepare the statements... you can do that with \npg_pool which can execute a script upon connection initialization.\n", "msg_date": "Wed, 22 Dec 2010 09:22:17 +0100", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL HandlerSocket - Is this possible in PG?" }, { "msg_contents": "On Tue, Dec 21, 2010 at 2:09 AM, Michael Ben-Nes <[email protected]> wrote:\n> Hi,\n>\n> Just stumbled on the following post:\n> http://yoshinorimatsunobu.blogspot.com/2010/10/using-mysql-as-nosql-story-for.html\n>\n> The post claim that MySQL can do more qps then MemCahed or any other NoSQL\n> when doing simple queries like: SELECT * FROM table WHERE id=num;\n\nNo it does not. They use an interface that bypasses SQL and is much\nmore primitive.\n", "msg_date": "Wed, 22 Dec 2010 01:52:28 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL HandlerSocket - Is this possible in PG?" }, { "msg_contents": "On Tue, Dec 21, 2010 at 11:09, Michael Ben-Nes <[email protected]> wrote:\n> Just stumbled on the following post:\n> http://yoshinorimatsunobu.blogspot.com/2010/10/using-mysql-as-nosql-story-for.html\n>\n> The post claim that MySQL can do more qps then MemCahed or any other NoSQL\n> when doing simple queries like: SELECT * FROM table WHERE id=num;\n>\n> And I wonder if:\n>\n> 1. Currently, is it possbile to achive the same using PG 9.0.x\n> 2. 
Is it possible at all?\n\nI was curious what could be done currently, without any modifications\nto PostgreSQL itself, so I ran a simple benchmark.\n\nTable:\ncreate table usr (user_id int primary key not null, user_name text not\nnull, user_email text not null, created timestamp not null);\ninsert into usr select generate_series(1, 1000000), 'Yukari Takeba',\n'[email protected]', '2010-02-03 11:22:33';\n\n<?php\n$db = pg_connect('');\n$res = pg_prepare($db, 'get_user', 'select user_name, user_email,\ncreated from usr where user_id=$1');\n$res = pg_query($db, 'begin');\n\n$args = array();\nfor($i = 0; $i < 250000; $i++)\n{\n $args[0] = rand(1, 1000000);\n $res = pg_execute($db, 'get_user', $args);\n $row = pg_fetch_row($res);\n}\n?>\n\nEach process does 250k queries, so when I run 4 in parallel it's 1M\nqueries total.\n\nI'm running PostgreSQL 9.1alpha2, PHP 5.3.4, kernel 2.6.36.2 on Arch\nLinux; AMD Phenom II X4 955.\nThe only tuning I did was setting shared_buffers=256M\n\nResults:\n% time php pg.php & time php pg.php &time php pg.php &time php pg.php & sleep 11\n[1] 29792\n[2] 29793\n[3] 29795\n[4] 29797\nphp pg.php 1,99s user 0,97s system 30% cpu 9,678 total\n[2] done time php pg.php\nphp pg.php 1,94s user 1,06s system 30% cpu 9,731 total\n[3] - done time php pg.php\nphp pg.php 1,92s user 1,07s system 30% cpu 9,746 total\n[1] - done time php pg.php\nphp pg.php 2,00s user 1,04s system 31% cpu 9,777 total\n[4] + done time php pg.php\n\nSo around 10 seconds to run the test in total.\nThese numbers aren't directly comparable to their test -- I tested\nover a local UNIX socket, with PHP client on the same machine -- but\nit's a datapoint nevertheless.\n\nBottom line, you can expect up to 100 000 QPS using pg_execute() on a\ncheap quad-core gamer CPU. You won't be beating memcached with current\nPostgreSQL, but I think it's a respectable result.\n\nRegards,\nMarti\n", "msg_date": "Wed, 22 Dec 2010 12:58:33 +0200", "msg_from": "Marti Raudsepp <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL HandlerSocket - Is this possible in PG?" 
}, { "msg_contents": "On Wed, 22 Dec 2010 14:17:21 +0100, Michael Ben-Nes <[email protected]>\nwrote:\n\n> Thanks, it is most interesting\n>\n> --------------------------------------------------\n> Michael Ben-Nes - Internet Consultant and Director.\n> http://www.epoch.co.il - weaving the Net.\n> Cellular: 054-4848113\n> --------------------------------------------------\n>\n\nIn fact, it would be possible to implement something like MySQL\nHandlerSocket, using the following Very Ugly Hack :\n\nThis would only work for ultra simple \"SELECT 1 row WHERE primary key =\nconstant\" queries.\n\n- a pooler (separate process) waits for connections\n- clients connect to the pooler and send queries\n- pooler accumulates enough queries to justify the overhead of what's\ngoing to come next\n- pooler takes a bunch of queries and encodes them in some custom ad-hoc\nformat (not SQL)\n- pooler says to postgres \"SELECT do_my_queries( serialized data )\"\n- do_my_queries() is a user function (in C) which uses postgres access\nmethods directly (like index access method on primary key), processes\nqueries, and sends results back as binary data\n- repeat for next batch\n\nNested Loop Index Scan processes about 400.000 rows/s which is 2.5\nus/query, maybe you could get into that ballpark (per core).\n\nOf course it's a rather extremely ugly hack.\n\n-------------------\n\nNote that you could very possibly have almost the same benefits with\n\"almost\" none of the ugliness by doing the following :\n\nsame as above :\n- a pooler (separate process) waits for connections\n- clients connect to the pooler and send queries in the format query +\nparameters (which libpq uses if you ask)\n- pooler accumulates enough queries to justify the overhead of what's\ngoing to come next\n\ndifferent :\n- pooler looks at each query, and if it has not seen it yet on this\nparticular pg connection, issues a \"PREPARE\" on the query\n- pooler sends, in one TCP block, a begin, then a bunch of \"execute named\nprepared statement with parameters\" commands, then a rollback\n- postgres executes all of those and returns all replies in one TCP block\n(this would need a modification)\n- pooler distributes results back to clients\n\nThis would need a very minor change to postgres (coalescing output\nblocks). It would make the pooler pay TCP overhead instead of postgres,\nand greatly improve cache locality in postgres.\n\nSince error handling would be \"problematic\" (to say the least...) and\nexpensive it would only work on simple selects.\n", "msg_date": "Wed, 22 Dec 2010 22:50:16 +0100", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL HandlerSocket - Is this possible in PG?" }, { "msg_contents": "Michael Ben-Nes <[email protected]> writes:\n> I wonder if PostgreSQL should replicate this functionality somehow. How can\n> I represent this idea to the developers? They will probably know if this\n> feature worth something.\n\nAs I didn't have enough time to follow this thread in detail I'm not\nsure how closely it is related, but have you tried preprepare?\n\n https://github.com/dimitri/preprepare\n\nRegards,\n-- \nDimitri Fontaine\nhttp://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Sun, 09 Jan 2011 14:58:14 +0100", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL HandlerSocket - Is this possible in PG?" } ]
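For reference, the "wrap the lookup in a prepared statement or a stored procedure" advice given earlier in this thread looks like the following in plain SQL. The usr table is the one from Marti's benchmark above; the function name is invented for the example. This removes repeated parse/plan work, but unlike HandlerSocket it still goes through the normal PostgreSQL protocol:

-- Named prepared statement: parsed and planned once per session,
-- then run via EXECUTE (or pg_execute() from PHP).
PREPARE get_usr(int) AS
    SELECT user_name, user_email, created
      FROM usr
     WHERE user_id = $1;

EXECUTE get_usr(42);

-- The same lookup behind a PL/pgSQL function, whose plan is cached inside the backend;
-- clients then just run: SELECT * FROM get_usr_fn(42);
CREATE FUNCTION get_usr_fn(p_id int,
                           OUT user_name text,
                           OUT user_email text,
                           OUT created timestamp)
LANGUAGE plpgsql STABLE AS $$
BEGIN
    SELECT u.user_name, u.user_email, u.created
      INTO user_name, user_email, created
      FROM usr u
     WHERE u.user_id = p_id;
END;
$$;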
[ { "msg_contents": "hello.\n\nI ve the table NumeryA with 3 indices. Query below uses incorrect index.\n\n\nSELECT\n A.\"NKA\",\n A.\"NTA\",\n Min(\"PołączeniaMin\") || ',' || Max(\"PołączeniaMax\") AS \"Biling\",\n Sum(\"Ile\")::text AS \"Ilość CDR\",\n R.\"LP\"::text AS \"Sprawa\",\n R.\"Osoba weryfikująca\" AS \"Osoba\",\n to_char(min(\"Wartość\"),'FM9999990D00') AS \"Wartość po kontroli\",\n max(R.\"Kontrola po naprawie w Serat - CDR\")::text AS \"CDR po kontroli\",\n min(A.\"KodBłędu\")::text AS KodBłędu,\n Max(to_char(R.\"Data kontroli\",'YYYY-MM-DD')) AS \"Ostatnia Kontrola\"\n, max(\"Skutek wprowadzenia błednej ewidencji w Serat\") as \"Skutek\"\nFROM\n ONLY \"NumeryA\" A\nLEFT JOIN\n (select * from \"Rejestr stacji do naprawy\" where \"Data weryfikacji\"\n>= current_date-3*30) R\nON\n A.\"NKA\" = R.\"Numer kierunkowy\"\n and substr(A.\"NTA\",1,5) = substr(R.\"Numer stacji\",1,5)\n and A.\"NTA\" like R.\"Numer stacji\"\n and A.\"KodBłędu\" = R.\"Kod Błędu\"\nWHERE\n A.\"DataPliku\" >= current_date-3*30\n and A.\"KodBłędu\" similar to '74'\nGROUP\n BY R.\"Osoba weryfikująca\",R.\"LP\",A.\"NKA\", A.\"NTA\"\nORDER\n BY Sum(\"Ile\") DESC\nLIMIT 4000\n\nThis query has plan:\n\n----------------------------------------------------------------\nLimit (cost=9656.43..9666.43 rows=4000 width=96) (actual\ntime=2149.383..2174.363 rows=4000 loops=1)\n -> Sort (cost=9656.43..9716.86 rows=24175 width=96) (actual\ntime=2149.373..2158.355 rows=4000 loops=1)\n Sort Key: (sum(a.Ile\"))\"\n Sort Method: top-N heapsort Memory: 1028kB\n -> HashAggregate (cost=6711.21..8089.19 rows=24175 width=96)\n(actual time=2040.721..2110.075 rows=9080 loops=1)\n -> Merge Left Join (cost=5338.65..5925.53 rows=24175\nwidth=96) (actual time=1180.490..1717.727 rows=33597 loops=1)\n Merge Cond: (((a.NKA\")::text = (\"Rejestr stacji do\nnaprawy\".\"Numer kierunkowy\")::text) AND ((substr((a.\"NTA\")::text, 1,\n5)) = (substr((\"Rejestr stacji do naprawy\".\"Numer stacji\")::text, 1,\n5))) AND ((a.\"KodBłędu\")::text = (\"Rejestr stacji do naprawy\".\"Kod\nBłędu\")::text))\"\n Join Filter: ((a.NTA\")::text ~~ (\"Rejestr stacji\ndo naprawy\".\"Numer stacji\")::text)\"\n -> Sort (cost=3565.16..3625.60 rows=24175\nwidth=42) (actual time=819.034..900.141 rows=33597 loops=1)\n Sort Key: a.NKA\", (substr((a.\"NTA\")::text,\n1, 5)), a.\"KodBłędu\"\"\n Sort Method: quicksort Memory: 5487kB\n -> Index Scan using dp_kb on NumeryA\" a\n(cost=0.01..1805.07 rows=24175 width=42) (actual time=0.295..197.627\nrows=33597 loops=1)\"\n Index Cond: (DataPliku\" >=\n(('now'::text)::date - 90))\"\n Filter: ((KodBłędu\")::text ~\n'***:^(?:74)$'::text)\"\n -> Sort (cost=1773.49..1811.23 rows=15096\nwidth=67) (actual time=361.430..434.675 rows=32948 loops=1)\n Sort Key: Rejestr stacji do naprawy\".\"Numer\nkierunkowy\", (substr((\"Rejestr stacji do naprawy\".\"Numer\nstacji\")::text, 1, 5)), \"Rejestr stacji do naprawy\".\"Kod Błędu\"\"\n Sort Method: quicksort Memory: 2234kB\n -> Bitmap Heap Scan on Rejestr stacji do\nnaprawy\" (cost=141.75..725.68 rows=15096 width=67) (actual\ntime=2.604..51.567 rows=14893 loops=1)\"\n Recheck Cond: (Data weryfikacji\" >=\n(('now'::text)::date - 90))\"\n -> Bitmap Index Scan on Data\nweryfikacji_Kod Błędu\" (cost=0.00..137.98 rows=15096 width=0) (actual\ntime=2.463..2.463 rows=15462 loops=1)\"\n Index Cond: (Data weryfikacji\"\n>= (('now'::text)::date - 90))\"\nTotal runtime: 2186.011 ms\n\n\nWhen i delete index dp_kb, query runs 
faster:\n\n-------------------------------------------------------------------------\nLimit (cost=15221.69..15231.69 rows=4000 width=96) (actual\ntime=1296.896..1322.144 rows=4000 loops=1)\n -> Sort (cost=15221.69..15282.13 rows=24175 width=96) (actual\ntime=1296.887..1305.993 rows=4000 loops=1)\n Sort Key: (sum(a.Ile\"))\"\n Sort Method: top-N heapsort Memory: 1028kB\n -> HashAggregate (cost=12276.48..13654.45 rows=24175\nwidth=96) (actual time=1188.706..1257.669 rows=9080 loops=1)\n -> Merge Left Join (cost=0.01..11490.79 rows=24175\nwidth=96) (actual time=0.220..840.102 rows=33597 loops=1)\n Merge Cond: (((a.NKA\")::text = (\"Rejestr stacji do\nnaprawy\".\"Numer kierunkowy\")::text) AND (substr((a.\"NTA\")::text, 1, 5)\n= substr((\"Rejestr stacji do naprawy\".\"Numer stacji\")::text, 1, 5))\nAND ((a.\"KodBłędu\")::text = (\"Rejestr stacji do naprawy\".\"Kod\nBłędu\")::text))\"\n Join Filter: ((a.NTA\")::text ~~ (\"Rejestr stacji\ndo naprawy\".\"Numer stacji\")::text)\"\n -> Index Scan using NTA_5\" on \"NumeryA\" a\n(cost=0.01..10016.75 rows=24175 width=42) (actual time=0.132..308.018\nrows=33597 loops=1)\"\n Index Cond: (((KodBłędu\")::text =\n'74'::text) AND (\"DataPliku\" >= (('now'::text)::date - 90)))\"\n Filter: ((KodBłędu\")::text ~ '***:^(?:74)$'::text)\"\n -> Index Scan using 3\" on \"Rejestr stacji do\nnaprawy\" (cost=0.01..1002.73 rows=15096 width=67) (actual\ntime=0.047..129.840 rows=32948 loops=1)\"\n Index Cond: (Rejestr stacji do\nnaprawy\".\"Data weryfikacji\" >= (('now'::text)::date - 90))\"\nTotal runtime: 1333.347 ms\n\n\nHow to tune settings to use good index ?\nInclude definitions of indexes:\n\nCREATE TABLE \"NumeryA\"\n(\n \"Plik\" character varying(254) NOT NULL,\n \"DataPliku\" date,\n \"KodBłędu\" character varying(254) NOT NULL,\n \"NKA\" character varying(254) NOT NULL,\n \"NTA\" character varying(254) NOT NULL,\n \"Ile\" integer,\n \"PołączeniaMin\" character varying,\n \"PołączeniaMax\" character varying,\n \"Wycofane\" \"char\",\n \"Data\" character varying[],\n \"ID Kobat\" character varying[],\n \"NRB\" character varying[],\n \"LP\" integer,\n CONSTRAINT \"NumeryA_1_pkey\" PRIMARY KEY (\"NTA\", \"NKA\", \"KodBłędu\", \"Plik\")\n)\nWITH (\n OIDS=FALSE\n);\n\n\nCREATE INDEX \"NTA_5\"\n ON \"NumeryA\"\n USING btree\n (\"NKA\", substr(\"NTA\"::text, 1, 5), \"KodBłędu\", \"DataPliku\");\n\nCREATE INDEX dp_kb\n ON \"NumeryA\"\n USING btree\n (\"DataPliku\");\n\nCREATE INDEX nka_nta\n ON \"NumeryA\"\n USING btree\n (\"NKA\", \"NTA\");\n\n\nHere my planner settings:\n\n-----------------------------------------------------------\n# - Planner Method Configuration -\n\n#enable_bitmapscan = on\n#enable_hashagg = on\n#enable_hashjoin = on\n#enable_indexscan = on\n#enable_mergejoin = on\n#enable_nestloop = on\n#enable_seqscan = on\n#enable_sort = on\n#enable_tidscan = on\n\n# - Planner Cost Constants -\n\nseq_page_cost = 0.3\t\t\t# measured on an arbitrary scale\nrandom_page_cost = 0.5\t\t\t# same scale as above\ncpu_tuple_cost = 0.007\t\t\t# same scale as above\n#cpu_index_tuple_cost = 0.005\t\t# same scale as above\n#cpu_operator_cost = 0.0025\t\t# same scale as above\n#effective_cache_size = 128MB\n\n# - Genetic Query Optimizer -\n\n#geqo = on\n#geqo_threshold = 12\n#geqo_effort = 5\t\t\t# range 1-10\n#geqo_pool_size = 0\t\t\t# selects default based on effort\n#geqo_generations = 0\t\t\t# selects default based on effort\n#geqo_selection_bias = 2.0\t\t# range 1.5-2.0\n\n# - Other Planner Options -\n\ndefault_statistics_target = 25\t\t# range 
1-10000\nconstraint_exclusion = partition\t# on, off, or partition\ncursor_tuple_fraction = 0.05\t\t# range 0.0-1.0\nfrom_collapse_limit = 8\njoin_collapse_limit = 8\t\t\t# 1 disables collapsing of explicit\n\t\t\t\t\t# JOIN clauses\n\n\n\n------------\npasman\n", "msg_date": "Tue, 21 Dec 2010 15:33:21 +0100", "msg_from": "=?ISO-8859-2?Q?pasman_pasma=F1ski?= <[email protected]>", "msg_from_op": true, "msg_subject": "Query uses incorrect index" }, { "msg_contents": "pasman pasma*ski<[email protected]> wrote:\n \n> -> Index Scan using NTA_5\" on \"NumeryA\" a\n> (cost=0.01..10016.75 rows=24175 width=42) (actual\n> time=0.132..308.018 rows=33597 loops=1)\"\n \n> seq_page_cost = 0.3\n> random_page_cost = 0.5\n \nYour data is heavily cached (to be able to read 33597 rows randomly\nthrough an index in 308 ms), yet you're telling the optimizer that a\nrandom access is significantly more expensive than a sequential one.\nTry this in your session before running the query (with all indexes\npresent):\n \nset seq_page_cost = 0.1;\nset random_page_cost = 0.1;\n \nI don't know if the data for all your queries is so heavily cached\n-- if so, you might want to change these settings in your\npostgresql.conf file.\n \n-Kevin\n", "msg_date": "Tue, 21 Dec 2010 09:44:15 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query uses incorrect index" } ]
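Kevin's suggestion above can be tried without touching postgresql.conf, because SET only changes the current session and RESET undoes it. The query below is only a simplified stand-in for the full SELECT at the top of the thread, reusing its table and date predicate:

SET seq_page_cost = 0.1;
SET random_page_cost = 0.1;

EXPLAIN ANALYZE
SELECT count(*) FROM "NumeryA" WHERE "DataPliku" >= current_date - 90;

RESET seq_page_cost;
RESET random_page_cost;

If the plan for the real query flips to the intended index and stays faster, the remaining question, taken up in the follow-up messages in this archive, is how widely those values should be applied.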
[ { "msg_contents": "I was asked about performance of PostgreSQL on NetApp, the protocol \nshould be NFSv3. Has anybody tried it? The database in question is a DW \ntype, a bunch of documents indexed by Sphinx. Does anyone have any \ninformation?\n-- \n\n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n", "msg_date": "Tue, 21 Dec 2010 14:28:46 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": true, "msg_subject": "Performance of PostgreSQL over NFS" }, { "msg_contents": "I am wondering why anyone would do that? Too much overhead and no reliable\nenough.\n\nOn Tue, Dec 21, 2010 at 2:28 PM, Mladen Gogala <[email protected]>wrote:\n\n> I was asked about performance of PostgreSQL on NetApp, the protocol should\n> be NFSv3. Has anybody tried it? The database in question is a DW type, a\n> bunch of documents indexed by Sphinx. Does anyone have any information?\n> --\n>\n>\n> Mladen Gogala Sr. Oracle DBA\n> 1500 Broadway\n> New York, NY 10036\n> (212) 329-5251\n> http://www.vmsinfo.com The Leader in Integrated Media Intelligence\n> Solutions\n>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nI am wondering why anyone would do that?  Too much overhead and no reliable enough.On Tue, Dec 21, 2010 at 2:28 PM, Mladen Gogala <[email protected]> wrote:\nI was asked about performance of PostgreSQL on NetApp, the protocol should be NFSv3.  Has anybody tried it? The database in question is a DW type, a bunch of documents indexed by Sphinx. Does anyone have any information?\n\n-- \n\n\nMladen Gogala Sr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com The Leader in Integrated Media Intelligence Solutions\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 21 Dec 2010 15:31:37 -0500", "msg_from": "Rich <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of PostgreSQL over NFS" }, { "msg_contents": "Rich wrote:\n> I am wondering why anyone would do that? Too much overhead and no \n> reliable enough.\n\nApparently, NetApp thinks that it is reliable. They're selling that \nstuff for years. I know that Oracle works with NetApp, they even have \ntheir own user mode NFS client driver, I am not sure about PostgreSQL. \nDid anybody try that?\n\n-- \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n", "msg_date": "Tue, 21 Dec 2010 17:46:54 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance of PostgreSQL over NFS" }, { "msg_contents": "Mladen Gogala wrote:\n> Rich wrote:\n>> I am wondering why anyone would do that? Too much overhead and no \n>> reliable enough.\n>\n> Apparently, NetApp thinks that it is reliable. They're selling that \n> stuff for years. I know that Oracle works with NetApp, they even \n> have their own user mode NFS client driver, I am not sure about \n> PostgreSQL. Did anybody try that?\n>\n\nYou have hit upon the crucial distinction here. In order for NFS to \nwork well, you need a rock solid NFS server. NetApp does a good job \nthere. 
You also need a rock solid NFS client, configured perfectly in \norder to eliminate the risk of corruption you get if the NFS \nimplementation makes any mistake in handling sync operations or error \nhandling. The issue really isn't \"will PostgreSQL performance well over \nNFS?\". The real concern is \"will my data get corrupted if my NFS client \nmisbehaves, and how likely is that to happen?\" That problem is scary \nenough that whether or not the performance is good is secondary. And \nunlike Oracle, there hasn't been much full end to end integration to \ncertify the reliability of PostgreSQL in this context, the way \nNetApp+Oracle has worked out those issues. It's hard to most of us to \neven justify that investigation, given that NFS and NetApp's offerings \nthat use it feel like legacy technologies, ones that are less relevant \nevery year.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Mon, 03 Jan 2011 21:28:31 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of PostgreSQL over NFS" }, { "msg_contents": "On Mon, Jan 3, 2011 at 9:28 PM, Greg Smith <[email protected]> wrote:\n> Mladen Gogala wrote:\n>>\n>> Rich wrote:\n>>>\n>>> I am wondering why anyone would do that?  Too much overhead and no\n>>> reliable enough.\n>>\n>> Apparently, NetApp  thinks that  it is reliable. They're selling that\n>> stuff  for years.  I know that Oracle works with NetApp, they even have\n>> their own user mode NFS client driver, I am not sure about PostgreSQL. Did\n>> anybody try that?\n>>\n>\n> You have hit upon the crucial distinction here.  In order for NFS to work\n> well, you need a rock solid NFS server.  NetApp does a good job there.  You\n> also need a rock solid NFS client, configured perfectly in order to\n> eliminate the risk of corruption you get if the NFS implementation makes any\n> mistake in handling sync operations or error handling.  The issue really\n> isn't \"will PostgreSQL performance well over NFS?\".  The real concern is\n> \"will my data get corrupted if my NFS client misbehaves, and how likely is\n> that to happen?\"  That problem is scary enough that whether or not the\n> performance is good is secondary.  And unlike Oracle, there hasn't been much\n> full end to end integration to certify the reliability of PostgreSQL in this\n> context, the way NetApp+Oracle has worked out those issues.  It's hard to\n> most of us to even justify that investigation, given that NFS and NetApp's\n> offerings that use it feel like legacy technologies, ones that are less\n> relevant every year.\n>\n> --\n> Greg Smith   2ndQuadrant US    [email protected]   Baltimore, MD\n\nWell there is a reason why Oracle and folks spend millions of dollars\nin NFS and this was before they bought Sun and got NFS hardware\nproducts in the acquisition.\n\nThe answer is economics. The faster fiber optics is currently 8Gbps\nwhile ethernet is 10Gbps. Fiber is trying to leapfrog with faster ones\nbut in the meanwhile 10Gbps switches are now affordable. That said for\npeople who want \"Multi-host\" storage or \"Shared-Storage\" however you\nlook at it, NFS seems to be very economical compared to expensive\nfiber SAN solutions. So while onboard RAID cards give you great\nperformance it still doesnt give you the option of the single point of\nfailure.. your server. 
NFS solves that problem for many small\nenterprises inexpensively. Yes I agree that your NFS client has to be\nrock solid. If not select the right OS but yes NFS has a fanfare out\nthere.\n\nThat said I have tested with NFS with Solaris as well as VMware's ESX\nserver. NFS on Solaris had to be tweaked with right rw window sizes\nfor PostgreSQL write sizes and jumbo frames to get the performance on\n10GB networks out and it was pretty good with multiple connections\n(single connections had limits on how much it can pull/push) on\nSolaris. (Of course this was couple of years ago.)\n\nOn vSphere ESX mostly it was transparent to PostgreSQL since it was\nall hidden by ESX to the guest VMs.\n\nMy 2 cents.\n\nRegards,\nJignesh\n", "msg_date": "Tue, 11 Jan 2011 20:43:32 -0500", "msg_from": "Jignesh Shah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of PostgreSQL over NFS" } ]
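For completeness, most of the "rock solid NFS client" concerns above come down to how the data directory is mounted on the client. A hypothetical Linux fstab line might look like the following; the filer path, mount point and transfer sizes are placeholders, and this is meant as an illustration of which knobs matter (hard mounts, TCP, rsize/wsize) rather than a certified recipe:

filer:/vol/pgdata  /var/lib/pgsql/data  nfs  rw,hard,nointr,tcp,vers=3,rsize=32768,wsize=32768,noatime  0 0

The part that matters most for data safety is hard (never soft), so that a filer or network hiccup blocks and retries instead of returning I/O errors that the backend may not be able to handle cleanly.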
[ { "msg_contents": "Thanks for reply.\nI tested random changes and query runs fastest after:\n\nset seq_page_cost = 0.1;\nset random_page_cost = 0.1;\ncpu_operator_cost = 0.01\n\n\n\n------------\npasman\n", "msg_date": "Wed, 22 Dec 2010 10:22:00 +0100", "msg_from": "=?ISO-8859-2?Q?pasman_pasma=F1ski?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query uses incorrect index" }, { "msg_contents": "pasman pasmański <pasman.p 'at' gmail.com> writes:\n\n> Thanks for reply.\n> I tested random changes and query runs fastest after:\n>\n> set seq_page_cost = 0.1;\n> set random_page_cost = 0.1;\n> cpu_operator_cost = 0.01\n\nIf I'm correct, you're basically telling postgresql that your\ndisk is unusually fast compared to your CPU. Even if some queries\nwill run faster from a side-effect of these settings, you're\nlikely to create other random problems...\n\n-- \nGuillaume Cottenceau\n", "msg_date": "Wed, 22 Dec 2010 10:38:15 +0100", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query uses incorrect index" }, { "msg_contents": "=?ISO-8859-2?Q?pasman_pasma=F1ski?= <[email protected]> writes:\n> Thanks for reply.\n> I tested random changes and query runs fastest after:\n\n> set seq_page_cost = 0.1;\n> set random_page_cost = 0.1;\n> cpu_operator_cost = 0.01\n\nAs a general rule, \"optimizing\" those settings on the basis of testing a\nsingle query is a great way to send your overall performance into the\ntank --- especially since repeating a single query will be heavily\nbiased by cache effects. You need to look at a representative sample of\nall your queries across all your data.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 Dec 2010 10:19:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query uses incorrect index " } ]
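When a cost change really does help a whole class of queries, it does not have to be made global in postgresql.conf; it can be scoped to a single transaction, or attached to the role or database that runs that workload. The role and database names below are placeholders and the values are only examples:

-- Only for one transaction:
BEGIN;
SET LOCAL random_page_cost = 0.5;
-- run the query being tuned here and inspect its plan
COMMIT;

-- Persistently, but only for one role or one database:
ALTER ROLE reporting_user SET random_page_cost = 1.0;
ALTER DATABASE reports    SET cpu_operator_cost = 0.005;

That keeps the experiment away from the rest of the workload, which is the risk Guillaume and Tom are pointing at.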
[ { "msg_contents": "Guillaume Cottenceau wrote:\n \n> If I'm correct, you're basically telling postgresql that your\n> disk is unusually fast compared to your CPU. Even if some queries\n> will run faster from a side-effect of these settings, you're\n> likely to create other random problems...\n \nIf this is set globally and the active portion of the database is not\nhighly cached, yes. If the example query is typical of the level of\ncaching for frequently-run queries, it might provide an overall\nperformance boost to set these in postgresql.conf.\n \nThe original problem was that the optimizer was grossly\nover-estimating the cost of returning a tuple through the index,\nwhich was taking about 0.01 ms per tuple. \n \n-Kevin\n", "msg_date": "Wed, 22 Dec 2010 06:31:28 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query uses incorrect index" } ]
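One way to check how cached the active data actually is before deciding whether such low page costs are justified: the BUFFERS option of EXPLAIN, available from 9.0, reports buffer hits versus blocks that had to be read. The query is again only a stand-in for the real workload, reusing the table from the earlier thread:

EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM "NumeryA" WHERE "DataPliku" >= current_date - 90;

-- In the output, "Buffers: shared hit=..." were served from shared_buffers,
-- while "read=..." had to come from the OS cache or disk.

If nearly everything is a hit across a representative set of queries, low seq_page_cost/random_page_cost values are defensible; if not, they will mislead the planner elsewhere.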
[ { "msg_contents": "Hi.\n\nI install auto_explain module for monitoring queries.\nBy the way, is any tool to tune planner automatically ?\n\n\n------------\npasman\n", "msg_date": "Wed, 22 Dec 2010 17:01:08 +0100", "msg_from": "=?ISO-8859-2?Q?pasman_pasma=F1ski?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query uses incorrect index" } ]
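A minimal way to try the auto_explain module mentioned above from a single session; for permanent monitoring it would instead be preloaded from postgresql.conf. The thresholds are illustrative:

LOAD 'auto_explain';
SET auto_explain.log_min_duration = '250ms';   -- log the plan of anything slower than this
SET auto_explain.log_analyze = on;             -- include actual rows and timings (adds overhead)
SET auto_explain.log_nested_statements = on;   -- also catch statements run inside functions

The plans show up in the server log, which at least automates the data collection even though the tuning itself stays manual.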
[ { "msg_contents": "Hello,\n\nIs is possible to create an index on a field on a function that returns a\ndata type that contains subfields?\n\nIt is possible to do this:\ncreate index indx_test\non address\n(sp_address_text_to_template(address_text))\nwhere (sp_address_text_to_template(address_text)).city_name =\n'some_city_on_some_planet';\n\n,but I would like to create the index without the partial clause to index\nresults only from sp_address_text_to_template(address_text)).city_name.\n\nIs this possible? How would I write the statement?\n\nThanks,\n\nDesmond.\n\nHello,Is is possible to create an index on a field on a function that returns a data type that contains subfields?It is possible to do this:create index indx_teston address(sp_address_text_to_template(address_text))\nwhere (sp_address_text_to_template(address_text)).city_name = 'some_city_on_some_planet';,but I would like to create the index without the partial clause to index results only from sp_address_text_to_template(address_text)).city_name.\nIs this possible? How would I write the statement?Thanks,Desmond.", "msg_date": "Thu, 23 Dec 2010 18:34:31 +0200", "msg_from": "Desmond Coertzen <[email protected]>", "msg_from_op": true, "msg_subject": "Index on function that returns type with sub fields" } ]
[ { "msg_contents": "Hello,\n\nIs is possible to create an index on a field on a function that returns a\ndata type that contains subfields?\n\nIt is possible to do this:\ncreate index indx_test\non address\n(sp_address_text_to_template(address_text))\nwhere (sp_address_text_to_template(address_text)).city_name =\n'some_city_on_some_planet';\n\n,but, I would like to create the index without the partial clause to index\nresults only from sp_address_text_to_template(address_text)).city_name.\n\nIs this possible? How would I write the statement?\n\nThanks,\n\nDesmond.\n\nHello,Is is possible to create an index on a field on a function that returns a data type that contains subfields?It is possible to do this:create index indx_teston address(sp_address_text_to_template(address_text))\nwhere (sp_address_text_to_template(address_text)).city_name = 'some_city_on_some_planet';,but, I would like to create the index without the partial clause to index results only from sp_address_text_to_template(address_text)).city_name.\nIs this possible? How would I write the statement?Thanks,Desmond.", "msg_date": "Thu, 23 Dec 2010 18:53:24 +0200", "msg_from": "Desmond Coertzen <[email protected]>", "msg_from_op": true, "msg_subject": "Create index on subfield returned by function that returns base type\n\twith sub fields" }, { "msg_contents": "Hi,\n\nOn Thursday 23 December 2010 17:53:24 Desmond Coertzen wrote:\n> Is is possible to create an index on a field on a function that returns a\n> data type that contains subfields?\n> Is this possible? How would I write the statement?\nI am not sure I understood you correctly. Maybe you mean something like that:\n\ntest=# CREATE FUNCTION blub(IN int, OUT a int, OUT b int) RETURNS record \nIMMUTABLE LANGUAGE sql AS $$SELECT $1+1, $1+2$$;\nCREATE FUNCTION\nTime: 1.665 ms\n\ntest=# CREATE INDEX foo__blub ON foo (((blub(data)).a));\nCREATE INDEX\nTime: 86.393 ms\n\nBtw, this is the wrong list for this sort of question. The right place would \nbe -general.\n\nAndres\n", "msg_date": "Thu, 23 Dec 2010 18:03:33 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Create index on subfield returned by function that returns base\n\ttype with sub fields" } ]
[ { "msg_contents": "When testing the IO performance of ioSAN storage device from FusionIO\n(650GB MLC version) one of the things I tried is a set of IO intensive\noperations in Postgres: bulk data loads, updates, and queries calling\nfor random IO. So far I cannot make Postgres take advantage of this\ntremendous IO capacity. I can squeeze a factor of a few here and there\nwhen caching cannot be utilized, but this hardware can do a lot more.\n\nLow level testing with fio shows on average x10 speedups over disk for\nsequential IO and x500-800 for random IO. With enough threads I can get\nIOPS in the 100-200K range and 1-1.5GB/s bandwidth, basically what's\nadvertised. But not with Postgres.\n\nIs this because the Postgres backend is essentially single threaded and\nin general does not perform asynchronous IO, or I'm missing something?\nI found out that the effective_io_concurrency parameter only takes\neffect for bitmap index scans.\n\nAlso, is there any work going on to allow concurrent IO in the backend\nand adapt Postgres to the capabilities of Flash?\n\nWill appreciate any comments, experiences, etc.\n\nPrzemek Wozniak\n\n\n\n", "msg_date": "Thu, 23 Dec 2010 10:37:53 -0700", "msg_from": "Przemek Wozniak <[email protected]>", "msg_from_op": true, "msg_subject": "concurrent IO in postgres?" }, { "msg_contents": "On Thu, Dec 23, 2010 at 10:37 AM, Przemek Wozniak <[email protected]> wrote:\n> When testing the IO performance of ioSAN storage device from FusionIO\n> (650GB MLC version) one of the things I tried is a set of IO intensive\n> operations in Postgres: bulk data loads, updates, and queries calling\n> for random IO. So far I cannot make Postgres take advantage of this\n\nSo, were you running a lot of these at once? Or just single threaded?\n\nI get very good io concurrency with lots of parallel postgresql\nconnections on a 34 disk SAS setup with a battery backed controller.\n", "msg_date": "Thu, 23 Dec 2010 11:24:43 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: concurrent IO in postgres?" }, { "msg_contents": "Typically my problem is that the large queries are simply CPU bound.. do you have a sar/top output that you see. I'm currently setting up two FusionIO DUO @640GB in a lvm stripe to do some testing with, I will publish the results after I'm done.\r\n\r\nIf anyone has some tests/suggestions they would like to see done please let me know.\r\n\r\n- John\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Przemek Wozniak\r\nSent: Thursday, December 23, 2010 11:38 AM\r\nTo: [email protected]\r\nSubject: [PERFORM] concurrent IO in postgres?\r\n\r\nWhen testing the IO performance of ioSAN storage device from FusionIO\r\n(650GB MLC version) one of the things I tried is a set of IO intensive\r\noperations in Postgres: bulk data loads, updates, and queries calling\r\nfor random IO. So far I cannot make Postgres take advantage of this\r\ntremendous IO capacity. I can squeeze a factor of a few here and there\r\nwhen caching cannot be utilized, but this hardware can do a lot more.\r\n\r\nLow level testing with fio shows on average x10 speedups over disk for\r\nsequential IO and x500-800 for random IO. With enough threads I can get\r\nIOPS in the 100-200K range and 1-1.5GB/s bandwidth, basically what's\r\nadvertised. 
But not with Postgres.\r\n\r\nIs this because the Postgres backend is essentially single threaded and\r\nin general does not perform asynchronous IO, or I'm missing something?\r\nI found out that the effective_io_concurrency parameter only takes\r\neffect for bitmap index scans.\r\n\r\nAlso, is there any work going on to allow concurrent IO in the backend\r\nand adapt Postgres to the capabilities of Flash?\r\n\r\nWill appreciate any comments, experiences, etc.\r\n\r\nPrzemek Wozniak\r\n", "msg_date": "Thu, 23 Dec 2010 13:47:58 -0500", "msg_from": "John W Strange <[email protected]>", "msg_from_op": false, "msg_subject": "Re: concurrent IO in postgres?" }, { "msg_contents": "John W Strange <[email protected]> wrote:\n \n> Typically my problem is that the large queries are simply CPU\n> bound.\n \nWell, if your bottleneck is CPU, then you're obviously not going to\nbe driving another resource (like disk) to its limit. First,\nthough, I want to confirm that your \"CPU bound\" case isn't in the\n\"I/O Wait\" category of CPU time. What does `vmstat 1` show while\nyou're CPU bound?\n \nIf it's not I/O Wait time, then you need to try to look at the\nqueries involved. If you're not hitting the disk because most of\nthe active data is cached, that would normally be a good thing. \nWhat kind of throughput are you seeing? Do you need better?\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \n-Kevin\n", "msg_date": "Thu, 23 Dec 2010 13:02:59 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: concurrent IO in postgres?" 
}, { "msg_contents": "On Thu, 2010-12-23 at 11:24 -0700, Scott Marlowe wrote:\n> On Thu, Dec 23, 2010 at 10:37 AM, Przemek Wozniak <[email protected]> wrote:\n> > When testing the IO performance of ioSAN storage device from FusionIO\n> > (650GB MLC version) one of the things I tried is a set of IO intensive\n> > operations in Postgres: bulk data loads, updates, and queries calling\n> > for random IO. So far I cannot make Postgres take advantage of this\n> \n> So, were you running a lot of these at once? Or just single threaded?\n\n> I get very good io concurrency with lots of parallel postgresql\n> connections on a 34 disk SAS setup with a battery backed controller.\n\nIn one test I was running between 1 and 32 clients simultaneously\nwriting lots of data using copy binary. The problem is that with a large\nRAM buffer it all goes there, and then the background writer, a single\npostgres process, will issue write requests one at a time I suspect.\nSo the actual IO is effectively serialized by the backend.\n\n\n", "msg_date": "Thu, 23 Dec 2010 12:46:57 -0700", "msg_from": "Przemek Wozniak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: concurrent IO in postgres?" }, { "msg_contents": "\n--- On Thu, 12/23/10, John W Strange <[email protected]> wrote:\n\n> Typically my problem is that the\n> large queries are simply CPU bound..  do you have a\n> sar/top output that you see. I'm currently setting up two\n> FusionIO DUO @640GB in a lvm stripe to do some testing with,\n> I will publish the results after I'm done.\n> \n> If anyone has some tests/suggestions they would like to see\n> done please let me know.\n> \n> - John\n\nSomewhat tangential to the current topics, I've heard that FusionIO uses internal cache and hence is not crash-safe, and if the cache is turned off performance will take a big hit. Is that your experience?\n\n\n \n", "msg_date": "Thu, 23 Dec 2010 11:58:18 -0800 (PST)", "msg_from": "Andy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: concurrent IO in postgres?" }, { "msg_contents": "On Dec 23, 2010, at 11:58 AM, Andy wrote:\n\n> \n> Somewhat tangential to the current topics, I've heard that FusionIO uses internal cache and hence is not crash-safe, and if the cache is turned off performance will take a big hit. Is that your experience?\n\nIt does use an internal cache, but it also has onboard battery power. The driver needs to put its house in order when restarting after an unclean shutdown, however, and that can take up to 30 minutes per card.", "msg_date": "Thu, 23 Dec 2010 13:22:32 -0800", "msg_from": "Ben Chobot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: concurrent IO in postgres?" }, { "msg_contents": "On Thu, Dec 23, 2010 at 11:46 AM, Przemek Wozniak <[email protected]> wrote:\n\n> In one test I was running between 1 and 32 clients simultaneously\n> writing lots of data using copy binary.\n\nAre you by-passing WAL? 
If not, you are likely serializing on that.\nNot so much the writing, but the lock.\n\n> The problem is that with a large\n> RAM buffer it all goes there, and then the background writer, a single\n> postgres process, will issue write requests one at a time I suspect.\n\nBut those \"writes\" are probably just copies of 8K into kernel's RAM,\nand so very fast.\n\n> So the actual IO is effectively serialized by the backend.\n\nIf the background writer cannot keep up, then the individual backends\nstart doing writes as well, so it isn't really serialized..\n\nCheers,\n\nJeff\n", "msg_date": "Sat, 25 Dec 2010 17:48:16 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: concurrent IO in postgres?" }, { "msg_contents": "Jeff Janes wrote:\n> If the background writer cannot keep up, then the individual backends\n> start doing writes as well, so it isn't really serialized..\n>\n> \nIs there any parameter governing that behavior? Can you tell me where in \nthe code (version 9.0.2) can I find that? Thanks.\n\n-- \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com \n\n", "msg_date": "Sat, 25 Dec 2010 23:30:32 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: concurrent IO in postgres?" }, { "msg_contents": "On 12/25/10, Mladen Gogala <[email protected]> wrote:\n> Jeff Janes wrote:\n>> If the background writer cannot keep up, then the individual backends\n>> start doing writes as well, so it isn't really serialized..\n>>\n>>\n> Is there any parameter governing that behavior?\n\nNo, it is automatic.\n\nThere are parameters governing how likely it is that bgwriter falls\nbehind in the first place, though.\n\nhttp://www.postgresql.org/docs/9.0/static/runtime-config-resource.html\n\nIn particular bgwriter_lru_maxpages could be made bigger and/or\nbgwriter_delay smaller.\n\nBut bulk copy binary might use a nondefault allocation strategy, and I\ndon't know enough about that part of the code to assess the\ninteraction of that with bgwriter.\n\n> Can you tell me where in\n> the code (version 9.0.2) can I find\nthat? Thanks.\n\nBufmgr.c, specifically BufferAlloc.\n\nCheers,\n\nJeff\n", "msg_date": "Sun, 26 Dec 2010 08:11:07 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: concurrent IO in postgres?" }, { "msg_contents": "Jeff Janes wrote:\n> There are parameters governing how likely it is that bgwriter falls\n> behind in the first place, though.\n>\n> http://www.postgresql.org/docs/9.0/static/runtime-config-resource.html\n>\n> In particular bgwriter_lru_maxpages could be made bigger and/or\n> bgwriter_delay smaller.\n> \n\nAlso, one of the structures used for caching the list of fsync requests \nthe background writer is handling, the thing that results in backend \nwrites when it can't keep up, is proportional to the size of \nshared_buffers on the server. Setting that tunable to a reasonable size \nand lowering bgwriter_delay are two things that help most for the \nbackground writer to keep up with overall load rather than having \nbackends write their own buffers. And the way checkpoints in PostgreSQL \nwork, having more backend writes is generally not a performance \nimproving change, even though it does have the property that it gets \nmore processes writing at once.\n\nThe thread opening post here really didn't discuss if any PostgreSQL \nserver tuning or OS tuning was done to try and optimize performance. 
\nThe usual list at \nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server is \nnormally a help.\n\nAt the kernel level, the #1 thing I find necessary to get decent bulk \nperformance in a lot of situations is proper read-ahead. On Linux for \nexample, you must get the OS doing readahead to compensate for the fact \nthat PostgreSQL is issuing requests in a serial sequence. It's going to \nask for block #1, then block #2, then block #3, etc. If the OS doesn't \nstart picking up on that pattern and reading blocks 4, 5, 6, etc. before \nthe server asks for them, to keep the disk fully occupied and return the \ndatabase data fast from the kernel buffers, you'll never reach the full \npotential even of a regular hard drive. And the default readahead on \nLinux is far too low for modern hardware.\n\n> But bulk copy binary might use a nondefault allocation strategy, and I\n> don't know enough about that part of the code to assess the\n> interaction of that with bgwriter.\n> \n\nIt's documented pretty well in src/backend/storage/buffer/README , \nspecifically the \"Buffer Ring Replacement Strategy\" section. Sequential \nscan reads, VACUUM, COPY IN, and CREATE TABLE AS SELECT are the \noperations that get one of the more specialized buffer replacement \nstrategies. These all use the same basic approach, which is to re-use a \nring of data rather than running rampant over the whole buffer cache. \nThe main thing different between them is the size of the ring. Inside \nfreelist.c the GetAccessStrategy code lets you see the size you get in \neach of these modes.\n\nSince PostgreSQL reads and writes through the OS buffer cache in \naddition to its own shared_buffers pool, this whole ring buffer thing \ndoesn't protect the OS cache from being trashed by a big bulk \noperation. Your only real defense there is to make shared_buffers large \nenough that it retains a decent chunk of data even in the wake of that.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Tue, 04 Jan 2011 08:53:01 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: concurrent IO in postgres?" } ]
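A postgresql.conf sketch of the background-writer side of Greg's advice (values are illustrative starting points, not recommendations, and need a configuration reload to take effect):

    shared_buffers = 2GB          # a reasonable size also grows the fsync-request structure Greg describes
    bgwriter_delay = 50ms         # default 200ms; wake the background writer more often
    bgwriter_lru_maxpages = 500   # default 100; let each round clean more dirty buffers

The OS readahead he mentions is a separate, kernel-level knob; on Linux it is typically raised with blockdev --setra on the device holding the data directory.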
[ { "msg_contents": "hello --\n\ni have a schema similar to the following\n\ncreate table foo (\n id integer not null,\n val integer not null,\n s integer not null,\n e integer not null\n);\n\ncreate index foo_s_idx on foo using btree (s);\ncreate index foo_e_idx on foo using btree (e);\n\ni want to do queries like\n\nselect * from foo where 150 between s and e;\n\nthis usually gives me index or bitmap scans on one of the indices, plus a filter for the other condition. this is not terribly efficient as the table is large (billions of rows), but there should only be a few thousand rows with s < k < e for any k. the data is id, value, interval (s, e), with s < e, and e - s is \"small\".\n\ni am experimenting and would like to see the effect of using a bitmap index \"AND\" scan using both indices. as far as i can tell, there are no easy ways to force or encourage this -- there are no switches like enable_seqscan and such which force the use of bitmap AND, and i don't know how to tell the query planner about the structure of the data (i don't think this is adequately captured in any of the statistics it generates, i would need multi-column statistics.)\n\nany clues?\n\nbest regards, ben", "msg_date": "Thu, 23 Dec 2010 12:06:40 -0800", "msg_from": "Ben <[email protected]>", "msg_from_op": true, "msg_subject": "encourging bitmap AND" }, { "msg_contents": "Ben <[email protected]> writes:\n> i have a schema similar to the following\n\n> create index foo_s_idx on foo using btree (s);\n> create index foo_e_idx on foo using btree (e);\n\n> i want to do queries like\n\n> select * from foo where 150 between s and e;\n\nThat index structure is really entirely unsuited to what you want to do,\nso it's not surprising that the planner isn't impressed with the idea of\na bitmap AND.\n\nI'd suggest setting up something involving a gist index over an\ninterval-ish datatype. The PERIOD datatype that Jeff Davis is fooling\nwith would do what you want --- see\n http://pgfoundry.org/projects/temporal\n http://thoughts.j-davis.com/2009/11/08/temporal-keys-part-2/\nIf you don't want any add-on parts involved, you could fake it by using\na box or possibly lseg.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 Dec 2010 15:52:12 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: encourging bitmap AND " }, { "msg_contents": "\nOn Dec 23, 2010, at 12:52 PM, Tom Lane wrote:\n\n> Ben <[email protected]> writes:\n>> i have a schema similar to the following\n> \n>> create index foo_s_idx on foo using btree (s);\n>> create index foo_e_idx on foo using btree (e);\n> \n>> i want to do queries like\n> \n>> select * from foo where 150 between s and e;\n> \n> That index structure is really entirely unsuited to what you want to do,\n> so it's not surprising that the planner isn't impressed with the idea of\n> a bitmap AND.\n> \n> I'd suggest setting up something involving a gist index over an\n> interval-ish datatype. The PERIOD datatype that Jeff Davis is fooling\n> with would do what you want --- see\n> http://pgfoundry.org/projects/temporal\n> http://thoughts.j-davis.com/2009/11/08/temporal-keys-part-2/\n> If you don't want any add-on parts involved, you could fake it by using\n> a box or possibly lseg.\n\nThanks for the quick response. I've already played a lot with the PERIOD datatype and GIST, it works pretty good, but I found that the lack of statistics and real selectivity functions hurt me. 
I was experimenting with the two column setup as an alternative, but if you think this is a dead end I'll look elsewhere.\n\nBest regards, Ben", "msg_date": "Thu, 23 Dec 2010 13:01:48 -0800", "msg_from": "Ben <[email protected]>", "msg_from_op": true, "msg_subject": "Re: encourging bitmap AND" }, { "msg_contents": "On Thu, Dec 23, 2010 at 22:52, Tom Lane <[email protected]> wrote:\n> Ben <[email protected]> writes:\n>> i have a schema similar to the following\n>\n>> create index foo_s_idx on foo using btree (s);\n>> create index foo_e_idx on foo using btree (e);\n>\n>> i want to do queries like\n>\n>> select * from foo where 150 between s and e;\n>\n> That index structure is really entirely unsuited to what you want to do,\n> so it's not surprising that the planner isn't impressed with the idea of\n> a bitmap AND.\n\nWhy is it unsuited for this query? It expands to (150 < s AND 150 > e)\n which should work nicely with bitmap AND as far as I can tell.\n\nRegards,\nMarti\n", "msg_date": "Sun, 26 Dec 2010 08:50:49 +0200", "msg_from": "Marti Raudsepp <[email protected]>", "msg_from_op": false, "msg_subject": "Re: encourging bitmap AND" }, { "msg_contents": "Marti Raudsepp <[email protected]> writes:\n> On Thu, Dec 23, 2010 at 22:52, Tom Lane <[email protected]> wrote:\n>> That index structure is really entirely unsuited to what you want to do,\n>> so it's not surprising that the planner isn't impressed with the idea of\n>> a bitmap AND.\n\n> Why is it unsuited for this query? It expands to (150 < s AND 150 > e)\n> which should work nicely with bitmap AND as far as I can tell.\n\nWell, maybe for small values of \"nicely\". If you do it like that, then\non average each indexscan will scan about half of its index and return a\nbitmap representing about half the rows in the table. That's an\nexpensive indexscan, and an expensive bitmap-AND operation, even if the\nfinal number of rows out of the AND is small. Plus you're at serious\nrisk that the bitmaps will become lossy, which degrades the performance\nof the final bitmap heapscan.\n\nIf you're doing interval queries enough to worry about having an index\nfor them, you really want an indexing structure that is designed to do\ninterval queries efficiently.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 26 Dec 2010 12:24:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: encourging bitmap AND " }, { "msg_contents": "On Dec 26, 2010, at 11:24 AM, Tom Lane wrote:\n> If you're doing interval queries enough to worry about having an index\n> for them, you really want an indexing structure that is designed to do\n> interval queries efficiently.\n\nBTW, one way to accomplish that is to transform your data into geometric shapes and then index them accordingly. Prior to the work Jeff Davis has done on time intervals it was common to treat time as points and ranges as lines or boxes. While we no longer need to play those games for time, I don't think there's an equivalent for non-time datatypes.\n--\nJim C. Nasby, Database Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n\n\n", "msg_date": "Sun, 2 Jan 2011 15:00:57 -0600", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: encourging bitmap AND " } ]
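For anyone who wants to try Tom's "fake it with a box" suggestion using only built-in types: each [s, e] interval can be indexed (via an expression index) as the box whose diagonal runs from (s,s) to (e,e), and "k is inside [s, e]" becomes a box-containment test a GiST index can answer. An untested sketch, assuming s <= e on every row:

    CREATE INDEX foo_interval_gist ON foo
        USING gist (box(point(s, s), point(e, e)));

    -- equivalent to: 150 BETWEEN s AND e
    SELECT * FROM foo
     WHERE box(point(s, s), point(e, e)) @> box(point(150, 150), point(150, 150));

The WHERE clause must repeat the exact indexed expression for the planner to match it against the index.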
[ { "msg_contents": "On Dec 23, 2010, at 13:22:32, Ben Chobot wrote:\n> \n> On Dec 23, 2010, at 11:58 AM, Andy wrote:\n> >\n> > Somewhat tangential to the current topics, I've heard that FusionIO\n>uses\n> > internal cache and hence is not crash-safe, and if the cache is turned\n> > off performance will take a big hit. Is that your experience?\n> \n> It does use an internal cache, but it also has onboard battery power. The\n> driver needs to put its house in order when restarting after an unclean\n> shutdown, however, and that can take up to 30 minutes per card.\n\nSorry to intrude here, but I'd like to clarify the behavior of the\nFusion-io\ndevices. Unlike SSDs, we do not use an internal cache nor do we use\nbatteries.\n\n(We *do* have a small internal FIFO (with capacitive hold-up) that is\n100% guaranteed to be written to our persistent storage in the event of\nunexpected power failure.)\n\nWhen a write() to a Fusion-io device has been acknowledged, the data is\nguaranteed to be stored safely. This is a strict requirement for any\nenterprise-ready storage device.\n\nThanks,\nJohn Cagle\nFusion-io, Inc.\n\n\nConfidentiality Notice: This e-mail message, its contents and any attachments to it are confidential to the intended recipient, and may contain information that is privileged and/or exempt from disclosure under applicable law. If you are not the intended recipient, please immediately notify the sender and destroy the original e-mail message and any attachments (and any copies that may have been made) from your system or otherwise. Any unauthorized use, copying, disclosure or distribution of this information is strictly prohibited.\n", "msg_date": "Thu, 23 Dec 2010 17:49:16 -0700", "msg_from": "John Cagle <[email protected]>", "msg_from_op": true, "msg_subject": "Re: concurrent IO in postgres?" }, { "msg_contents": "John,\n\n> When a write() to a Fusion-io device has been acknowledged, the data is\n> guaranteed to be stored safely. This is a strict requirement for any\n> enterprise-ready storage device.\n\nThanks for the clarification!\n\nWhile you're here, any general advice on configuring fusionIO devices\nfor database access, or vice-versa?\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Thu, 23 Dec 2010 17:48:51 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: concurrent IO in postgres?" }, { "msg_contents": "\nI wonder how the OP configured effective_io_concurrency ; even on a single \ndrive with command queuing the fadvise() calls that result do make a \ndifference...\n", "msg_date": "Fri, 24 Dec 2010 10:55:49 +0100", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: concurrent IO in postgres?" } ]
[ { "msg_contents": "Hi ALL:\n\n \n\nMy Database is for logs only, I almost have 30G data growth in my database.\n\nI Use partition table to store those data, those tables are partitioned by\ntime column daily.\n\nMy data only keep for three days.\n\nI will dump those data into dump file and drop the partition table after\nthree days.\n\n \n\nMy question is :\n\nThe partition table that I have to backup and drop is running a long\nPrevent-Wraparound-Autovaccuum,\n\nIs any way to let the vacuum faster?\n\n \n\nThe Prevent-Wraparound-Autovaccuum run very slow, almost 36 hours.\n\nMy Disk IO is low.\n\n \n\nMy Server config for vacuum list below:\n\n \n\n \n\n#---------------------------------------------------------------------------\n---\n\n# RESOURCE USAGE (except WAL)\n\n#---------------------------------------------------------------------------\n---\n\n \n\nshared_buffers = 4096MB # min 128kB\n\ntemp_buffers = 384MB # min 800kB\n\nmax_prepared_transactions = 100 # zero disables the feature\n\nwork_mem = 100MB # min 64kB\n\nmaintenance_work_mem = 192MB # min 1MB\n\nmax_stack_depth = 4MB # min 100kB\n\n \n\nvacuum_cost_delay = 50ms # 0-100 milliseconds\n\nvacuum_cost_page_hit = 6 # 0-10000 credits\n\nvacuum_cost_limit = 1000 # 1-10000 credits\n\n \n\n#---------------------------------------------------------------------------\n---\n\n# AUTOVACUUM PARAMETERS\n\n#---------------------------------------------------------------------------\n---\n\n \n\nautovacuum = on # Enable autovacuum subprocess? 'on' \n\nautovacuum_max_workers = 3 # max number of autovacuum subprocesses\n\nautovacuum_naptime = 1 # time between autovacuum runs\n\nautovacuum_vacuum_scale_factor = 0.01 # fraction of table size before vacuum\n\nautovacuum_vacuum_cost_delay = 10ms # default vacuum cost delay for\n\n \n\n \n\n \n\n \n\n\n\n\nRegards.\n Marc Hsiao\n\n\n\n\nHi ALL: My Database is for logs only, I almost have 30G data growth in my database.I Use partition table to store those data, those tables are partitioned by time column daily.My data only keep for three days.I will dump those data into dump file and drop the partition table after three days. My question is :The partition table that I have to backup and drop is running a long Prevent-Wraparound-Autovaccuum,Is any way to let the vacuum faster? The Prevent-Wraparound-Autovaccuum run very slow, almost 36 hours.My Disk IO is low. My Server config for vacuum list below:  #------------------------------------------------------------------------------# RESOURCE USAGE (except WAL)#------------------------------------------------------------------------------ shared_buffers = 4096MB     # min 128kBtemp_buffers = 384MB      # min 800kBmax_prepared_transactions = 100   # zero disables the featurework_mem = 100MB        # min 64kBmaintenance_work_mem = 192MB    # min 1MBmax_stack_depth = 4MB     # min 100kB vacuum_cost_delay = 50ms    # 0-100 millisecondsvacuum_cost_page_hit = 6    # 0-10000 creditsvacuum_cost_limit = 1000    # 1-10000 credits #------------------------------------------------------------------------------# AUTOVACUUM PARAMETERS#------------------------------------------------------------------------------ autovacuum = on     # Enable autovacuum subprocess?  'on' autovacuum_max_workers = 3    # max number of autovacuum subprocessesautovacuum_naptime = 1    # time between autovacuum runsautovacuum_vacuum_scale_factor = 0.01 # fraction of table size before vacuumautovacuum_vacuum_cost_delay = 10ms # default vacuum cost delay for    Regards.  
Marc Hsiao", "msg_date": "Tue, 28 Dec 2010 13:53:48 +0800", "msg_from": "\"marc.hsiao\" <[email protected]>", "msg_from_op": true, "msg_subject": "How to turn autovacuum prevent wrap around run faster?" }, { "msg_contents": "\nHi All:\n\nThe autovacuum (prevent wraparound) still run more than 36 hours, I can not\ndrop the partition table after adjust the autovacuum parameters.\n\nIf a table is running autovacuum (prevent wraparound), can I purge this\ntable?\nIf not, what else I can do for clean this partition table?\n\nIf the table is running autovacuum (prevent wraparound), can I use pg_dump\nto backup it?\nWill the Transaction ID Wraparound Failures happen while table has been\nrestored into new DB?\n\nRegards\nMarc\n-- \nView this message in context: http://postgresql.1045698.n5.nabble.com/How-to-turn-autovacuum-prevent-wrap-around-run-faster-tp3319984p3331421.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n", "msg_date": "Thu, 6 Jan 2011 19:57:29 -0800 (PST)", "msg_from": "marc47marc47 <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to turn autovacuum prevent wrap around run faster?" } ]
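One way to keep such short-lived partitions from ever needing a 36-hour anti-wraparound pass is to freeze them manually as soon as they stop receiving writes: an explicit VACUUM FREEZE advances the table's relfrozenxid, so autovacuum has no wraparound work left to do on it before the drop. A sketch using a hypothetical partition name and 8.4 syntax:

    -- run once the partition is no longer being written to
    VACUUM FREEZE VERBOSE log_20101228;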
[ { "msg_contents": "On Mon, Dec 27, 2010 at 10:53 PM, marc.hsiao <[email protected]> wrote:\n> Hi ALL:\n>\n> My Database is for logs only, I almost have 30G data growth in my database.\n>\n> I Use partition table to store those data, those tables are partitioned by\n> time column daily.\n>\n> My data only keep for three days.\n>\n> I will dump those data into dump file and drop the partition table after\n> three days.\n>\n> My question is :\n>\n> The partition table that I have to backup and drop is running a long\n> Prevent-Wraparound-Autovaccuum,\n>\n> Is any way to let the vacuum faster?\n>\n> The Prevent-Wraparound-Autovaccuum run very slow, almost 36 hours.\n> My Disk IO is low.\n> My Server config for vacuum list below:\n> maintenance_work_mem = 192MB    # min 1MB\n>\n> max_stack_depth = 4MB     # min 100kB\n>\n>\n>\n> vacuum_cost_delay = 50ms    # 0-100 milliseconds\n\nThat's a pretty high regular vacuum cost delay. Just sayin, autovac\ndoesn't use it.\n\n> vacuum_cost_page_hit = 6    # 0-10000 credits\n>\n> vacuum_cost_limit = 1000    # 1-10000 credits\n\nAnd that's pretty low.\n\n> # AUTOVACUUM PARAMETERS\n>\n> #------------------------------------------------------------------------------\n> autovacuum = on     # Enable autovacuum subprocess?  'on'\n> autovacuum_max_workers = 3    # max number of autovacuum subprocesses\n> autovacuum_naptime = 1    # time between autovacuum runs\n> autovacuum_vacuum_scale_factor = 0.01 # fraction of table size before vacuum\n> autovacuum_vacuum_cost_delay = 10ms # default vacuum cost delay for\n\nSet an autovacuum max cost much higher than the vacuum max cost\n(100000 or so) and drop autovac cost delay to 0, then restart\nautovacuum.\n", "msg_date": "Mon, 27 Dec 2010 23:11:43 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to turn autovacuum prevent wrap around run faster?" }, { "msg_contents": "Hi Scot Marlowe:\n\n> Set an autovacuum max cost much higher than the vacuum max cost\n> (100000 or so) and drop autovac cost delay to 0, then restart\n> autovacuum.\n\nI had set \" autovacuum_vacuum_cost_delay=0\".\nBTW, how to set \" autovacuum max cost\" ?\n\n\nRegards\nMarc\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Scott Marlowe\nSent: Tuesday, December 28, 2010 2:12 PM\nTo: marc.hsiao\nCc: [email protected]\nSubject: Re: [PERFORM] How to turn autovacuum prevent wrap around run\nfaster?\n\nOn Mon, Dec 27, 2010 at 10:53 PM, marc.hsiao <[email protected]>\nwrote:\n> Hi ALL:\n>\n> My Database is for logs only, I almost have 30G data growth in my\ndatabase.\n>\n> I Use partition table to store those data, those tables are partitioned by\n> time column daily.\n>\n> My data only keep for three days.\n>\n> I will dump those data into dump file and drop the partition table after\n> three days.\n>\n> My question is :\n>\n> The partition table that I have to backup and drop is running a long\n> Prevent-Wraparound-Autovaccuum,\n>\n> Is any way to let the vacuum faster?\n>\n> The Prevent-Wraparound-Autovaccuum run very slow, almost 36 hours.\n> My Disk IO is low.\n> My Server config for vacuum list below:\n> maintenance_work_mem = 192MB    # min 1MB\n>\n> max_stack_depth = 4MB     # min 100kB\n>\n>\n>\n> vacuum_cost_delay = 50ms    # 0-100 milliseconds\n\nThat's a pretty high regular vacuum cost delay. 
Just sayin, autovac\ndoesn't use it.\n\n> vacuum_cost_page_hit = 6    # 0-10000 credits\n>\n> vacuum_cost_limit = 1000    # 1-10000 credits\n\nAnd that's pretty low.\n\n> # AUTOVACUUM PARAMETERS\n>\n>\n#---------------------------------------------------------------------------\n---\n> autovacuum = on     # Enable autovacuum subprocess?  'on'\n> autovacuum_max_workers = 3    # max number of autovacuum subprocesses\n> autovacuum_naptime = 1    # time between autovacuum runs\n> autovacuum_vacuum_scale_factor = 0.01 # fraction of table size before\nvacuum\n> autovacuum_vacuum_cost_delay = 10ms # default vacuum cost delay for\n\nSet an autovacuum max cost much higher than the vacuum max cost\n(100000 or so) and drop autovac cost delay to 0, then restart\nautovacuum.\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Mon, 3 Jan 2011 10:01:38 +0800", "msg_from": "\"marc.hsiao\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to turn autovacuum prevent wrap around run faster?" }, { "msg_contents": "\nHi All:\n\nThe autovacuum (prevent wraparound) still run more than 36 hours, I can not\ndrop the partition table after adjust the autovacuum parameters.\n\nIf a table is running autovacuum (prevent wraparound), can I purge this\ntable?\nIf not, what else I can do for clean this partition table?\n\nIf the table is running autovacuum (prevent wraparound), can I use pg_dump\nto backup it?\nWill the Transaction ID Wraparound Failures happen while table has been\nrestored into new DB?\n\nRegards\nMarc\n\n\n", "msg_date": "Fri, 7 Jan 2011 16:14:41 +0800", "msg_from": "\"marc.hsiao\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to turn autovacuum prevent wrap around run faster?" 
}, { "msg_contents": "Hi All:\n\nMy Server list below\n\npostgres=# select version();\n version\n\n----------------------------------------------------------------------------\n---------------------------------------\n PostgreSQL 8.4.2 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC)\n4.1.2 20080704 (Red Hat 4.1.2-44), 64-bit\n(1 row)\n\nI use another way to solve this issue.\nThe table is partition table, I will create this partition table before\ninsert data.\nThe freezemaxid will be smaller than others, so that the partition table\nwill be last one that been vacuum.\n\nUse this sql to check\nSELECT relname, age(relfrozenxid) FROM pg_class WHERE relkind = 'r' order by\n2;\n\nI also adjust the postgresql.conf three parameters.\nThis will cause that my partition table will not reach the max_age in a\nshort time.\n\n autovacuum_freeze_max_age = 2000000000\n vacuum_freeze_min_age = 10000000\n vacuum_freeze_table_age = 150000000\n\nAs far as now that my partition table drop run normal, without autovacuum\nprevent wraparound interrupt.\n\nRegards\nMarc\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of marc.hsiao\nSent: Friday, January 07, 2011 4:15 PM\nTo: [email protected]\nSubject: Re: [PERFORM] How to turn autovacuum prevent wrap around run\nfaster?\n\n\nHi All:\n\nThe autovacuum (prevent wraparound) still run more than 36 hours, I can not\ndrop the partition table after adjust the autovacuum parameters.\n\nIf a table is running autovacuum (prevent wraparound), can I purge this\ntable?\nIf not, what else I can do for clean this partition table?\n\nIf the table is running autovacuum (prevent wraparound), can I use pg_dump\nto backup it?\nWill the Transaction ID Wraparound Failures happen while table has been\nrestored into new DB?\n\nRegards\nMarc\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Wed, 12 Jan 2011 16:21:12 +0800", "msg_from": "\"marc.hsiao\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to turn autovacuum prevent wrap around run faster?" } ]
[ { "msg_contents": "I have a table \"aaa\" which is not very big. It has less than 10'000\nrows. However read operations on this table is very frequent.\n\nWhenever I try to create a new table \"bbb\" with foreign key pointing\nto \"aaa\". The operation locks, and reading \"aaa\" is not possible. The\nquery also never seems to finish.\n\nALTER TABLE \"bbb\" ADD CONSTRAINT \"topic_id_refs_id_3942a46c6ab2c0b4\"\nFOREIGN KEY (\"topic_id\") REFERENCES \"aaa\" (\"id\") DEFERRABLE INITIALLY\nDEFERRED;\n\nThe current workaround is to create any new table at off-peak hours,\ne.g. midnight after restarting the db.\n\nI would like to know if there's any proper solution of this. Is this\nan issue affecting all relational databases? My db is PostgreSQL 8.3.\n", "msg_date": "Mon, 27 Dec 2010 23:08:45 -0800 (PST)", "msg_from": "kakarukeys <[email protected]>", "msg_from_op": true, "msg_subject": "adding foreign key constraint locks up table" }, { "msg_contents": "On 12/28/2010 02:08 AM, kakarukeys wrote:\n> I have a table \"aaa\" which is not very big. It has less than 10'000\n> rows. However read operations on this table is very frequent.\n>\n> Whenever I try to create a new table \"bbb\" with foreign key pointing\n> to \"aaa\". The operation locks, and reading \"aaa\" is not possible. The\n> query also never seems to finish.\n\nHow long did you wait?\n\n> ALTER TABLE \"bbb\" ADD CONSTRAINT \"topic_id_refs_id_3942a46c6ab2c0b4\"\n> FOREIGN KEY (\"topic_id\") REFERENCES \"aaa\" (\"id\") DEFERRABLE INITIALLY\n> DEFERRED;\n>\n> The current workaround is to create any new table at off-peak hours,\n> e.g. [sic] midnight after restarting the db.\n>\n> I would like to know if there's any proper solution of this. Is this\n> an issue affecting all relational databases? My db is PostgreSQL 8.3.\n\nNaturally the system has to lock the table to alter it. It also has to check \nthat all records already in \"bbb\" satisfy the new constraint.\n\nWhat's the longest you've waited for ALTER TABLE to release its lock?\n\n-- \nLew\nCeci n'est pas une pipe.\n", "msg_date": "Tue, 28 Dec 2010 08:34:30 -0500", "msg_from": "Lew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: adding foreign key constraint locks up table" }, { "msg_contents": "On Tue, Dec 28, 2010 at 2:08 AM, kakarukeys <[email protected]> wrote:\n\n> I have a table \"aaa\" which is not very big. It has less than 10'000\n> rows. However read operations on this table is very frequent.\n>\n> Whenever I try to create a new table \"bbb\" with foreign key pointing\n> to \"aaa\". The operation locks, and reading \"aaa\" is not possible. The\n> query also never seems to finish.\n>\n> ALTER TABLE \"bbb\" ADD CONSTRAINT \"topic_id_refs_id_3942a46c6ab2c0b4\"\n> FOREIGN KEY (\"topic_id\") REFERENCES \"aaa\" (\"id\") DEFERRABLE INITIALLY\n> DEFERRED;\n>\n> The current workaround is to create any new table at off-peak hours,\n> e.g. midnight after restarting the db.\n>\n> I would like to know if there's any proper solution of this. Is this\n> an issue affecting all relational databases? My db is PostgreSQL 8.3.\n>\n>\nhow many rows does \"bbb\" have? And what are the data types of column\naaa.idand bbb.topic_id?\n\nCreating a foreign key should not lock out aaa against reads. 
Can you\nprovide the output of the following:\n\nselect relname, oid from pg_class where relname in ( 'aaa', 'bbb' );\n\nselect * from pg_locks; -- run this from a new session when you think \"aaa\"\nis locked by foreign key creation.\n\nRegards,\n-- \ngurjeet.singh\n@ EnterpriseDB - The Enterprise Postgres Company\nhttp://www.EnterpriseDB.com\n\nsingh.gurjeet@{ gmail | yahoo }.com\nTwitter/Skype: singh_gurjeet\n\nMail sent from my BlackLaptop device\n\nOn Tue, Dec 28, 2010 at 2:08 AM, kakarukeys <[email protected]> wrote:\n\nI have a table \"aaa\" which is not very big. It has less than 10'000\nrows. However read operations on this table is very frequent.\n\nWhenever I try to create a new table \"bbb\" with foreign key pointing\nto \"aaa\". The operation locks, and reading \"aaa\" is not possible. The\nquery also never seems to finish.\n\nALTER TABLE \"bbb\" ADD CONSTRAINT \"topic_id_refs_id_3942a46c6ab2c0b4\"\nFOREIGN KEY (\"topic_id\") REFERENCES \"aaa\" (\"id\") DEFERRABLE INITIALLY\nDEFERRED;\n\nThe current workaround is to create any new table at off-peak hours,\ne.g. midnight after restarting the db.\n\nI would like to know if there's any proper solution of this. Is this\nan issue affecting all relational databases? My db is PostgreSQL 8.3.\nhow many rows does \"bbb\" have? And what are the data types of column aaa.id and bbb.topic_id?Creating a foreign key should not lock out aaa against reads. Can you provide the output of the following:\nselect relname, oid from pg_class where relname in ( 'aaa', 'bbb' );select * from pg_locks; -- run this from a new session when you think \"aaa\" is locked by foreign key creation.\n\nRegards,-- gurjeet.singh@ EnterpriseDB - The Enterprise Postgres Companyhttp://www.EnterpriseDB.comsingh.gurjeet@{ gmail | yahoo }.comTwitter/Skype: singh_gurjeet\nMail sent from my BlackLaptop device", "msg_date": "Tue, 28 Dec 2010 08:37:33 -0500", "msg_from": "Gurjeet Singh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: adding foreign key constraint locks up table" }, { "msg_contents": "On Dec 28, 9:37 pm, [email protected] (Gurjeet Singh) wrote:\n> On Tue, Dec 28, 2010 at 2:08 AM, kakarukeys <[email protected]> wrote:\n> > I have a table \"aaa\" which is not very big. It has less than 10'000\n> > rows. However read operations on this table is very frequent.\n>\n> > Whenever I try to create a new table \"bbb\" with foreign key pointing\n> > to \"aaa\". The operation locks, and reading \"aaa\" is not possible. The\n> > query also never seems to finish.\n>\n> > ALTER TABLE \"bbb\" ADD CONSTRAINT \"topic_id_refs_id_3942a46c6ab2c0b4\"\n> > FOREIGN KEY (\"topic_id\") REFERENCES \"aaa\" (\"id\") DEFERRABLE INITIALLY\n> > DEFERRED;\n>\n> > The current workaround is to create any new table at off-peak hours,\n> > e.g. midnight after restarting the db.\n>\n> > I would like to know if there's any proper solution of this. Is this\n> > an issue affecting all relational databases? My db is PostgreSQL 8.3.\n>\n> how many rows does \"bbb\" have? And what are the data types of column\n> aaa.idand bbb.topic_id?\n>\n> Creating a foreign key should not lock out aaa against reads. 
Can you\n> provide the output of the following:\n>\n> select relname, oid from pg_class where relname in ( 'aaa', 'bbb' );\n>\n> select * from pg_locks; -- run this from a new session when you think \"aaa\"\n> is locked by foreign key creation.\n>\n> Regards,\n> --\n> gurjeet.singh\n> @ EnterpriseDB - The Enterprise Postgres Companyhttp://www.EnterpriseDB.com\n>\n> singh.gurjeet@{ gmail | yahoo }.com\n> Twitter/Skype: singh_gurjeet\n>\n> Mail sent from my BlackLaptop device\n\n> How long did you wait?\nhours in the past.\nFor recent happenings, I aborted after 10 mins.\n\nSince it's a new table's creation, 'bbb' is empty.\nThe 'alter table' never finished, so the lock was not released.\naaa.id, bbb.topic_id are integers (id is auto-increament key)\n\nThank you for the investigative queries, I shall run it on next\nsighting of the problem.\n\nI also saw this:\nhttp://postgresql.1045698.n5.nabble.com/Update-INSERT-RULE-while-running-for-Partitioning-td2057708.html\n\n\"Note that using ALTER TABLE to add a constraint as well as\nusing DROP TABLE or TRUNCATE to remove/recycle partitions are\nDDL commands that require exclusive locks. This will block\nboth readers and writers to the table(s) and can also cause readers\nand writers to now interfere with each other. \"\n", "msg_date": "Tue, 28 Dec 2010 05:55:59 -0800 (PST)", "msg_from": "kakarukeys <[email protected]>", "msg_from_op": true, "msg_subject": "Re: adding foreign key constraint locks up table" }, { "msg_contents": "Gurjeet Singh <[email protected]> wrote:\n \n> how many rows does \"bbb\" have? And what are the data types of\n> column aaa.idand bbb.topic_id?\n \nFor that matter, is there a unique index (directly or as the result\nof a constraint) on the aaa.id column (by itself)?\n \n-Kevin\n", "msg_date": "Tue, 28 Dec 2010 08:43:49 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: adding foreign key constraint locks up table" }, { "msg_contents": "On Tue, Dec 28, 2010 at 9:43 AM, Kevin Grittner <[email protected]\n> wrote:\n\n> Gurjeet Singh <[email protected]> wrote:\n>\n> > how many rows does \"bbb\" have? And what are the data types of\n> > column aaa.idand bbb.topic_id?\n>\n> For that matter, is there a unique index (directly or as the result\n> of a constraint) on the aaa.id column (by itself)?\n>\n>\nIsn't it a requirement that the FKey referenced columns be UNIQUE or PRIMARY\nKEY'd already?\n\nRegards,\n-- \ngurjeet.singh\n@ EnterpriseDB - The Enterprise Postgres Company\nhttp://www.EnterpriseDB.com\n\nsingh.gurjeet@{ gmail | yahoo }.com\nTwitter/Skype: singh_gurjeet\n\nMail sent from my BlackLaptop device\n\nOn Tue, Dec 28, 2010 at 9:43 AM, Kevin Grittner <[email protected]> wrote:\nGurjeet Singh <[email protected]> wrote:\n\n> how many rows does \"bbb\" have? 
And what are the data types of\n> column aaa.idand bbb.topic_id?\n\nFor that matter, is there a unique index (directly or as the result\nof a constraint) on the aaa.id column (by itself)?\nIsn't it a requirement that the FKey referenced columns be UNIQUE or PRIMARY KEY'd already?Regards,-- gurjeet.singh@ EnterpriseDB - The Enterprise Postgres Company\nhttp://www.EnterpriseDB.comsingh.gurjeet@{ gmail | yahoo }.comTwitter/Skype: singh_gurjeetMail sent from my BlackLaptop device", "msg_date": "Tue, 28 Dec 2010 09:56:19 -0500", "msg_from": "Gurjeet Singh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: adding foreign key constraint locks up table" }, { "msg_contents": "> Whenever I try to create a new table \"bbb\" with foreign key pointing\n> to \"aaa\". The operation locks, and reading \"aaa\" is not possible. The\n> query also never seems to finish.\n\nDo you mean that the ALTER query and subsequent queries are shown as\n\"waiting\" in pg_stat_activity? In this case, I'm also wondering why\nthis is inecessary.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n", "msg_date": "Tue, 28 Dec 2010 14:57:14 +0000", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: adding foreign key constraint locks up table" }, { "msg_contents": "On Tue, Dec 28, 2010 at 8:55 AM, kakarukeys <[email protected]> wrote:\n\n>\n> > How long did you wait?\n> hours in the past.\n> For recent happenings, I aborted after 10 mins.\n>\n> Since it's a new table's creation, 'bbb' is empty.\n> The 'alter table' never finished, so the lock was not released.\n> aaa.id, bbb.topic_id are integers (id is auto-increament key)\n>\n\nThat surely is a _long_ time for an empty table's ALTER.\n\n\n>\n>\n> I also saw this:\n>\n> http://postgresql.1045698.n5.nabble.com/Update-INSERT-RULE-while-running-for-Partitioning-td2057708.html\n>\n> \"Note that using ALTER TABLE to add a constraint as well as\n> using DROP TABLE or TRUNCATE to remove/recycle partitions are\n> DDL commands that require exclusive locks. This will block\n> both readers and writers to the table(s) and can also cause readers\n> and writers to now interfere with each other. \"\n>\n>\nIn your case ALTER TABLE would lock bbb, but not aaa; other sessions should\nstill be able to read aaa.\n\nRegards,\n-- \ngurjeet.singh\n@ EnterpriseDB - The Enterprise Postgres Company\nhttp://www.EnterpriseDB.com\n\nsingh.gurjeet@{ gmail | yahoo }.com\nTwitter/Skype: singh_gurjeet\n\nMail sent from my BlackLaptop device\n\nOn Tue, Dec 28, 2010 at 8:55 AM, kakarukeys <[email protected]> wrote:\n\n> How long did you wait?\nhours in the past.\nFor recent happenings, I aborted after 10 mins.\n\nSince it's a new table's creation, 'bbb' is empty.\nThe 'alter table' never finished, so the lock was not released.\naaa.id, bbb.topic_id are integers (id is auto-increament key)That surely is a _long_ time for an empty table's ALTER. \n\nI also saw this:\nhttp://postgresql.1045698.n5.nabble.com/Update-INSERT-RULE-while-running-for-Partitioning-td2057708.html\n\n\"Note that using ALTER TABLE to add a constraint as well as\nusing DROP TABLE or TRUNCATE to remove/recycle partitions are\nDDL commands that require exclusive locks.  This will block\nboth readers and writers to the table(s) and can also cause readers\nand writers to now interfere with each other. 
\"\nIn your case ALTER TABLE would lock bbb, but not aaa; other sessions should still be able to read aaa.Regards,-- gurjeet.singh\n\n@ EnterpriseDB - The Enterprise Postgres Companyhttp://www.EnterpriseDB.comsingh.gurjeet@{ gmail | yahoo }.comTwitter/Skype: singh_gurjeetMail sent from my BlackLaptop device", "msg_date": "Tue, 28 Dec 2010 10:01:24 -0500", "msg_from": "Gurjeet Singh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: adding foreign key constraint locks up table" }, { "msg_contents": "Gurjeet Singh <[email protected]> wrote:\n \n> Isn't it a requirement that the FKey referenced columns be UNIQUE\n> or PRIMARY KEY'd already?\n \nAh, so it is. Never mind.\n \n-Kevin\n", "msg_date": "Tue, 28 Dec 2010 09:07:33 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: adding foreign key constraint locks up table" }, { "msg_contents": "Florian Weimer <[email protected]> writes:\n>> Whenever I try to create a new table \"bbb\" with foreign key pointing\n>> to \"aaa\". The operation locks, and reading \"aaa\" is not possible. The\n>> query also never seems to finish.\n\nWhat that sounds like to me is there's some long-running (probably idle)\nopen transaction that's holding AccessShare lock on aaa. The ALTER is\nblocked waiting for that xact to finish and release its lock.\nEverything else queues up behind the ALTER. A bit of looking in\npg_locks would find the culprit, if this theory is right.\n\n> Do you mean that the ALTER query and subsequent queries are shown as\n> \"waiting\" in pg_stat_activity? In this case, I'm also wondering why\n> this is inecessary.\n\nALTER ADD FOREIGN KEY must lock both tables to add triggers to them.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 Dec 2010 10:08:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: adding foreign key constraint locks up table " }, { "msg_contents": "* Tom Lane:\n\n>> Do you mean that the ALTER query and subsequent queries are shown as\n>> \"waiting\" in pg_stat_activity? In this case, I'm also wondering why\n>> this is inecessary.\n>\n> ALTER ADD FOREIGN KEY must lock both tables to add triggers to them.\n\nBut why is such a broad lock needed? If the table was created in the\ncurrent transaction and is empty, the contents of the foreign key\ntable should not matter.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n", "msg_date": "Tue, 04 Jan 2011 13:52:24 +0000", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: adding foreign key constraint locks up table" }, { "msg_contents": "Florian Weimer <[email protected]> writes:\n> * Tom Lane:\n>> ALTER ADD FOREIGN KEY must lock both tables to add triggers to them.\n\n> But why is such a broad lock needed? If the table was created in the\n> current transaction and is empty, the contents of the foreign key\n> table should not matter.\n\nIt's not about content, it's about having reproducible results. We\ncannot commit an ADD TRIGGER operation when there are table-modifying\nqueries already in progress, because they might (will) fail to notice\nthe trigger. If you don't believe this is a problem, consider the\nfollowing sequence of events:\n\n1. Session 1 issues \"DELETE FROM pk WHERE true\". It fetches the table\ndefinition, sees there are no triggers, and begins to execute the\nDELETE. 
Now it goes to sleep for awhile.\n\n2. Session 2 issues ALTER TABLE fk ADD FOREIGN KEY pk. If it doesn't\ntake a lock on pk that would exclude the concurrent DELETE, it can fall\nthrough and commit before session 1 makes any more progress.\n\n3. Session 2 inserts some rows in fk. They are valid since the matching\nrows in pk are valid (and not yet even marked for deletion).\n\n4. Session 1 wakes up and finishes its DELETE. Not knowing there is any\ncommitted trigger on pk, it performs no FK checking.\n\nNow you have rows in fk that violate the foreign key constraint.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Jan 2011 10:21:29 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: adding foreign key constraint locks up table " }, { "msg_contents": "On Dec 28 2010, 9:55 pm, kakarukeys <[email protected]> wrote:\n> On Dec 28, 9:37 pm, [email protected] (Gurjeet Singh) wrote:\n>\n>\n>\n> > On Tue, Dec 28, 2010 at 2:08 AM, kakarukeys <[email protected]> wrote:\n> > > I have a table \"aaa\" which is not very big. It has less than 10'000\n> > > rows. However read operations on this table is very frequent.\n>\n> > > Whenever I try to create a new table \"bbb\" with foreign key pointing\n> > > to \"aaa\". The operation locks, and reading \"aaa\" is not possible. The\n> > > query also never seems to finish.\n>\n> > > ALTER TABLE \"bbb\" ADD CONSTRAINT \"topic_id_refs_id_3942a46c6ab2c0b4\"\n> > > FOREIGN KEY (\"topic_id\") REFERENCES \"aaa\" (\"id\") DEFERRABLE INITIALLY\n> > > DEFERRED;\n>\n> > > The current workaround is to create any new table at off-peak hours,\n> > > e.g. midnight after restarting the db.\n>\n> > > I would like to know if there's any proper solution of this. Is this\n> > > an issue affecting all relational databases? My db is PostgreSQL 8.3.\n>\n> > how many rows does \"bbb\" have? And what are the data types of column\n> > aaa.idand bbb.topic_id?\n>\n> > Creating a foreign key should not lock out aaa against reads. Can you\n> > provide the output of the following:\n>\n> > select relname, oid from pg_class where relname in ( 'aaa', 'bbb' );\n>\n> > select * from pg_locks; -- run this from a new session when you think \"aaa\"\n> > is locked by foreign key creation.\n>\n> > Regards,\n> > --\n> > gurjeet.singh\n> > @ EnterpriseDB - The Enterprise Postgres Companyhttp://www.EnterpriseDB.com\n>\n> > singh.gurjeet@{ gmail | yahoo }.com\n> > Twitter/Skype: singh_gurjeet\n>\n> > Mail sent from my BlackLaptop device\n> > How long did you wait?\n>\n> hours in the past.\n> For recent happenings, I aborted after 10 mins.\n>\n> Since it's a new table's creation, 'bbb' is empty.\n> The 'alter table' never finished, so the lock was not released.\n> aaa.id, bbb.topic_id are integers (id is auto-increament key)\n>\n> Thank you for the investigative queries, I shall run it on next\n> sighting of the problem.\n>\n> I also saw this:http://postgresql.1045698.n5.nabble.com/Update-INSERT-RULE-while-runn...\n>\n> \"Note that using ALTER TABLE to add a constraint as well as\n> using DROP TABLE or TRUNCATE to remove/recycle partitions are\n> DDL commands that require exclusive locks.  This will block\n> both readers and writers to the table(s) and can also cause readers\n> and writers to now interfere with each other. \"\n\nAs requested, here are some output of the investigative queries, run\nwhen the problem occurred. 
I could see some locks there, but I don't\nknow why the alter table add constraint takes so long of time.\n\nlibero=# select relname, oid from pg_class where relname in\n( 'monitor_monitortopic', 'domain_banning' );\n relname | oid\n----------------------+-------\n monitor_monitortopic | 43879\n(1 row)\n\nlibero=# select * from pg_stat_activity where current_query ~ '^ALTER\nTABLE';\n datid | datname | procpid | usesysid | usename\n|\ncurrent_query\n| waiting | xact_start |\nquery_start | backend_start | client_addr |\nclient_port\n-------+---------+---------+----------+---------\n+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n+---------+-------------------------------\n+-------------------------------+-------------------------------\n+-------------+-------------\n 41788 | libero | 4544 | 16384 | jamiq | ALTER TABLE\n\"domain_banning\" ADD CONSTRAINT \"topic_id_refs_id_32761795e066407b\"\nFOREIGN KEY (\"topic_id\") REFERENCES \"monitor_monitortopic\" (\"id\")\nDEFERRABLE INITIALLY DEFERRED; | t | 2011-01-05\n06:31:58.726905+00 | 2011-01-05 06:32:01.507688+00 | 2011-01-05\n06:31:44.966489+00 | 127.0.0.1 | 60833\n(1 row)\n\nlibero=# select * from pg_locks where pid=4544;\n locktype | database | relation | page | tuple | virtualxid |\ntransactionid | classid | objid | objsubid | virtualtransaction | pid\n| mode | granted\n---------------+----------+----------+------+-------+------------\n+---------------+---------+-------+----------+--------------------\n+------+---------------------+---------\n virtualxid | | | | | 40/1295227\n| | | | | 40/1295227 |\n4544 | ExclusiveLock | t\n relation | 41788 | 5815059 | | |\n| | | | | 40/1295227 |\n4544 | AccessExclusiveLock | t\n object | 0 | | | |\n| | 1260 | 16384 | 0 | 40/1295227 |\n4544 | AccessShareLock | t\n relation | 41788 | 43879 | | |\n| | | | | 40/1295227 |\n4544 | AccessExclusiveLock | f\n relation | 41788 | 5815063 | | |\n| | | | | 40/1295227 |\n4544 | AccessExclusiveLock | t\n relation | 41788 | 5815055 | | |\n| | | | | 40/1295227 |\n4544 | AccessShareLock | t\n relation | 41788 | 5815055 | | |\n| | | | | 40/1295227 |\n4544 | ShareLock | t\n relation | 41788 | 5815055 | | |\n| | | | | 40/1295227 |\n4544 | AccessExclusiveLock | t\n relation | 41788 | 5815053 | | |\n| | | | | 40/1295227 |\n4544 | AccessShareLock | t\n relation | 41788 | 5815053 | | |\n| | | | | 40/1295227 |\n4544 | AccessExclusiveLock | t\n transactionid | | | | | |\n1340234445 | | | | 40/1295227 | 4544 |\nExclusiveLock | t\n(11 rows)\n", "msg_date": "Tue, 4 Jan 2011 23:09:58 -0800 (PST)", "msg_from": "kakarukeys <[email protected]>", "msg_from_op": true, "msg_subject": "Re: adding foreign key constraint locks up table" }, { "msg_contents": "On Wed, Jan 5, 2011 at 2:09 AM, kakarukeys <[email protected]> wrote:\n> As requested, here are some output of the investigative queries, run\n> when the problem occurred. I could see some locks there, but I don't\n> know why the alter table add constraint takes so long of time.\n\nIt's pretty clear from the output you posted that it's waiting for a\nlock, but you didn't include the full contents of pg_stat_activity and\npg_locks, so we can't see who has the lock it's waiting for. 
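\n\nIf you catch it in the act again, a query along these lines (just a sketch against the stock pg_locks/pg_stat_activity columns in 8.3, so adjust as needed) will usually show which session is sitting on the referenced table:\n\nSELECT holder.pid AS blocking_pid,\n       holder.mode AS held_mode,\n       act.usename,\n       act.current_query,\n       act.xact_start\n  FROM pg_locks waiter\n  JOIN pg_locks holder\n    ON holder.locktype = 'relation'\n   AND holder.database = waiter.database\n   AND holder.relation = waiter.relation\n   AND holder.granted\n   AND holder.pid <> waiter.pid\n  JOIN pg_stat_activity act ON act.procpid = holder.pid\n WHERE NOT waiter.granted\n   AND waiter.locktype = 'relation'\n   AND waiter.pid = 4544;  -- pid of the stuck ALTER TABLE\n\nAn old \"idle in transaction\" session still holding AccessShareLock on the referenced table is the classic culprit for this symptom.\n\n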
Tom's\nguess upthread is a good bet, though.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Sat, 8 Jan 2011 22:34:11 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: adding foreign key constraint locks up table" }, { "msg_contents": "On Jan 9, 11:34 am, [email protected] (Robert Haas) wrote:\n> On Wed, Jan 5, 2011 at 2:09 AM, kakarukeys <[email protected]> wrote:\n> > As requested, here are some output of the investigative queries, run\n> > when the problem occurred. I could see some locks there, but I don't\n> > know why the alter table addconstrainttakes so long of time.\n>\n> It's pretty clear from the output you posted that it's waiting for a\n> lock, but you didn't include the full contents of pg_stat_activity and\n> pg_locks, so we can't see who has the lock it's waiting for.  Tom's\n> guess upthread is a good bet, though.\n>\n> --\n> Robert Haas\n> EnterpriseDB:http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance\n\nYes. Lately, I have learned quite abit of pgsql process to interpret\nthe log. There was always an AccessShareLock granted on\nmonitor_monitortopic by some process idle in transaction. This blocks\nAccessExclusiveLock that the alter table statement tried to acquire.\n\nThe correct solution will be to have that transaction rolled back and\nthe lock released (or simply kill the process) before running alter\ntable.\n\nThank you all for the help.\n", "msg_date": "Thu, 13 Jan 2011 07:33:46 -0800 (PST)", "msg_from": "kakarukeys <[email protected]>", "msg_from_op": true, "msg_subject": "Re: adding foreign key constraint locks up table" } ]
[ { "msg_contents": "Hi\n\nI have the problem that on our servers it happens regularly under a\ncertain workload (several times per minute) that all backend processes\nget a SIGUSR1 and spend several seconds in ProcessCatchupEvent(). At\n100-200 connections (most of them idle) this causes the system load to\nskyrocket. I am not really familiar with the code but my wild guess is\nthat the processes spend most of their time waiting for spinlocks.\n\nWe have reduced the number of connections as much as possible for now\nbut it still makes up for roughly 50% of the total CPU time. Has\nanyone experienced a similar problem?\n\nI can reproduce the issue on a test system with production data but it\nis not so easy to pinpoint what exactly causes the problem. The queries\nare basically tsearch2 full text searches over moderately big tables\n(~35GB). The queries are performed by functions which aggregate data\nfrom partitions in temporary tables, cache some data, and perform\ncalculations before returning it to the user.\n\nThe PostgreSQL version is 8.3.12, the test server has 8 amd64 cores\nand 16GB of ram. I experimented with shared_buffers between 1GB and\n4GB but it doesn't make much of a difference. Disk IO doesn't seem to\nbe an issue here.\n\nRegards,\nJulian v. Bock\n\n-- \nJulian v. Bock Projektleitung Software-Entwicklung\nOpenIT GmbH Tel +49 211 239 577-0\nIn der Steele 33a-41 Fax +49 211 239 577-10\nD-40599 Düsseldorf http://www.openit.de\n________________________________________________________________\nHRB 38815 Amtsgericht Düsseldorf USt-Id DE 812951861\nGeschäftsführer: Oliver Haakert, Maurice Kemmann\n", "msg_date": "Wed, 29 Dec 2010 15:28:34 +0100", "msg_from": "[email protected] (Julian v. Bock)", "msg_from_op": true, "msg_subject": "long wait times in ProcessCatchupEvent()" }, { "msg_contents": "[email protected] (Julian v. Bock) writes:\n> I have the problem that on our servers it happens regularly under a\n> certain workload (several times per minute) that all backend processes\n> get a SIGUSR1 and spend several seconds in ProcessCatchupEvent().\n\nThis is fixed in 8.4 and up.\nhttp://archives.postgresql.org/pgsql-committers/2008-06/msg00227.php\n\nIf you aren't willing to move off 8.3 you might be able to ameliorate\nthe problem by reducing the volume of catalog changes, but that can be\npretty hard if you're dependent on temp tables.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 Dec 2010 10:18:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: long wait times in ProcessCatchupEvent() " }, { "msg_contents": "On 12/29/10 6:28 AM, Julian v. Bock wrote:\n> I have the problem that on our servers it happens regularly under a\n> certain workload (several times per minute) that all backend processes\n> get a SIGUSR1 and spend several seconds in ProcessCatchupEvent(). At\n> 100-200 connections (most of them idle) this causes the system load to\n> skyrocket. I am not really familiar with the code but my wild guess is\n> that the processes spend most of their time waiting for spinlocks.\n>\n> We have reduced the number of connections as much as possible for now\n> but it still makes up for roughly 50% of the total CPU time. Has\n> anyone experienced a similar problem?\n>\n> I can reproduce the issue on a test system with production data but it\n> is not so easy to pinpoint what exactly causes the problem. The queries\n> are basically tsearch2 full text searches over moderately big tables\n> (~35GB). 
The queries are performed by functions which aggregate data\n> from partitions in temporary tables, cache some data, and perform\n> calculations before returning it to the user.\n>\n> The PostgreSQL version is 8.3.12, the test server has 8 amd64 cores\n> and 16GB of ram. I experimented with shared_buffers between 1GB and\n> 4GB but it doesn't make much of a difference. Disk IO doesn't seem to\n> be an issue here.\n\nThis sounds like the exact same problem I had on Postgres 8.3 and 8.4:\n\nhttp://archives.postgresql.org/pgsql-performance/2010-04/msg00071.php\n\nUpdating to Postgres version 9 fixed it. Here is what appeared to be the best analysis of what was happening, but we never confirmed it.\n\nhttp://archives.postgresql.org/pgsql-performance/2010-06/msg00464.php\n\nCraig\n\n", "msg_date": "Wed, 29 Dec 2010 11:28:25 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: long wait times in ProcessCatchupEvent()" }, { "msg_contents": "Craig James <[email protected]> writes:\n> On 12/29/10 6:28 AM, Julian v. Bock wrote:\n>> I have the problem that on our servers it happens regularly under a\n>> certain workload (several times per minute) that all backend processes\n>> get a SIGUSR1 and spend several seconds in ProcessCatchupEvent().\n\n> This sounds like the exact same problem I had on Postgres 8.3 and 8.4:\n\n> http://archives.postgresql.org/pgsql-performance/2010-04/msg00071.php\n\n> Updating to Postgres version 9 fixed it. Here is what appeared to be the best analysis of what was happening, but we never confirmed it.\n\n> http://archives.postgresql.org/pgsql-performance/2010-06/msg00464.php\n\nIt happened for you on 8.4 too? In that previous thread you were still\non 8.3. If you did see it on 8.4 then it wasn't sinval ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 Dec 2010 14:58:59 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: long wait times in ProcessCatchupEvent() " }, { "msg_contents": "On 12/29/10 11:58 AM, Tom Lane wrote:\n> Craig James<[email protected]> writes:\n>> On 12/29/10 6:28 AM, Julian v. Bock wrote:\n>>> I have the problem that on our servers it happens regularly under a\n>>> certain workload (several times per minute) that all backend processes\n>>> get a SIGUSR1 and spend several seconds in ProcessCatchupEvent().\n>\n>> This sounds like the exact same problem I had on Postgres 8.3 and 8.4:\n>\n>> http://archives.postgresql.org/pgsql-performance/2010-04/msg00071.php\n>\n>> Updating to Postgres version 9 fixed it. Here is what appeared to be the best analysis of what was happening, but we never confirmed it.\n>\n>> http://archives.postgresql.org/pgsql-performance/2010-06/msg00464.php\n>\n> It happened for you on 8.4 too? In that previous thread you were still\n> on 8.3. If you did see it on 8.4 then it wasn't sinval ...\n\nMy mistake - it was only 8.3.\n\nCraig\n\n", "msg_date": "Wed, 29 Dec 2010 15:53:29 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: long wait times in ProcessCatchupEvent()" }, { "msg_contents": "On 12/29/2010 2:58 PM, Tom Lane wrote:\n> It happened for you on 8.4 too? In that previous thread you were still\n> on 8.3. If you did see it on 8.4 then it wasn't sinval ...\n>\n> \t\t\tregards, tom lane\n>\nMay I ask what exactly is \"sinval\"? I took a look at Craig's problem and \nyour description but I wasn't able to figure out what is sinval lock and \nwhat does it lock? 
I apologize if the question is stupid.\n\n-- \nMladen Gogala\nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com\n\n", "msg_date": "Wed, 29 Dec 2010 19:54:36 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: long wait times in ProcessCatchupEvent()" }, { "msg_contents": "Mladen Gogala <[email protected]> wrote:\n \n> May I ask what exactly is \"sinval\"? I took a look at Craig's\n> problem and your description but I wasn't able to figure out what\n> is sinval lock and what does it lock? I apologize if the question\n> is stupid.\n \nThis area could probably use a README file, but you can get a good\nidea from the comment starting on line 30 of this file:\n \nhttp://git.postgresql.org/gitweb?p=postgresql.git;a=blob;f=src/backend/storage/ipc/sinvaladt.c;h=7910346dd55512be13712ea2342586d705bb0b35\n \nIt has to do with communication between processes regarding\ninvalidation of the shared cache.\n \n-Kevin\n", "msg_date": "Thu, 30 Dec 2010 08:50:00 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: long wait times in ProcessCatchupEvent()" }, { "msg_contents": "Hi\n\n>>>>> \"TL\" == Tom Lane <[email protected]> writes:\n\nTL> [email protected] (Julian v. Bock) writes:\n>> I have the problem that on our servers it happens regularly under a\n>> certain workload (several times per minute) that all backend\n>> processes get a SIGUSR1 and spend several seconds in\n>> ProcessCatchupEvent().\n\nTL> This is fixed in 8.4 and up.\nTL> http://archives.postgresql.org/pgsql-committers/2008-06/msg00227.php\n\nThanks for the quick reply.\n\nTL> If you aren't willing to move off 8.3 you might be able to\nTL> ameliorate the problem by reducing the volume of catalog changes,\nTL> but that can be pretty hard if you're dependent on temp tables.\n\nUpgrading to 8.4 or 9.0 is not possible at the moment but if it is only\ncatalog changes I can probably work around that.\n\nRegards,\nJulian v. Bock\n\n-- \nJulian v. Bock Projektleitung Software-Entwicklung\nOpenIT GmbH Tel +49 211 239 577-0\nIn der Steele 33a-41 Fax +49 211 239 577-10\nD-40599 Düsseldorf http://www.openit.de\n________________________________________________________________\nHRB 38815 Amtsgericht Düsseldorf USt-Id DE 812951861\nGeschäftsführer: Oliver Haakert, Maurice Kemmann\n", "msg_date": "Thu, 30 Dec 2010 16:03:53 +0100", "msg_from": "[email protected] (Julian v. Bock)", "msg_from_op": true, "msg_subject": "Re: long wait times in ProcessCatchupEvent()" } ]
[ { "msg_contents": "How good are certmagic.com practice exams for PostgreSQL\n exam? My friends told me they are pretty good and he has passed many\nexams with their material. Let me know guys!\n", "msg_date": "Wed, 29 Dec 2010 23:54:46 -0800 (PST)", "msg_from": "nextage Tech <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL" }, { "msg_contents": "nextage Tech wrote:\n> How good are c***c.com practice exams for PostgreSQL\n> exam? My friends told me they are pretty good and he has passed many\n> exams with their material. Let me know guys!\n\n\"... friends ... he ...\"?\n\nIs this an ad for c***c.com? What is your connection to them?\n\n-- \nLew\nCeci n'est pas une pipe.\n", "msg_date": "Thu, 30 Dec 2010 09:23:38 -0500", "msg_from": "Lew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL" }, { "msg_contents": "nextage Tech wrote:\n> How good are certmagic.com practice exams for PostgreSQL\n> exam? My friends told me they are pretty good and he has passed many\n> exams with their material. Let me know guys!\n> \n\nWell, since the PostgreSQL CE 8 Silver exam itself isn't very well \ndefined as an industry certification goes, whether or not Certmagic's \nprep helps you pass or not isn't too exciting to talk about. Unlike \nmost of the commercial databases, there is no official certification \navailable for PostgreSQL from the vendor, as there is no real vendor \nhere. SRA does their CE certification, EnterpriseDB has some \ncertification exams related to their training, and we're happy to print \ncertificates for students who pass our training classes too. But \nwithout any standardized testing guidelines that go beyond tests \ndeveloped by individual training companies, PostgreSQL certification \nreally doesn't result in the same sort of credibility that, say, Oracle \nor Cisco certification does.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Tue, 04 Jan 2011 09:07:22 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL" } ]
[ { "msg_contents": "Hi there, this is my first post.\n\nI have a PostgreSQL 8.3.6, compiled by Visual C++ build 1400 running on\nWindows with virtual machine, 2 GB ram, and configured the postgresql.conf\nfile to log statements duration >= 500 ms.\n\nAnd I have this query/log entry:\n2011-01-03 23:06:29 BRT LOG: duration: 2843.000 ms statement: SELECT\nDESCRICAO FROM CURSO WHERE CODCURSO = 2\n\nMy question is, this same query executes many times a day and many times\nfast/normal, but why in some cases its run slowly? Especialy because the\n\"CODCURSO\" column is PK and this table has only 3 registers (tiny table).\n\nThank you in advance!\nFernando\n\nHi there, this is my first post.I have a PostgreSQL 8.3.6, compiled by Visual C++ build 1400 running on Windows with virtual machine, 2 GB ram, and configured the postgresql.conf file to log statements duration >= 500 ms.\nAnd I have this query/log entry:2011-01-03 23:06:29 BRT LOG:  duration: 2843.000 ms  statement: SELECT DESCRICAO FROM CURSO WHERE CODCURSO = 2My question is, this same query executes many times a day and many times fast/normal, but why in some cases its run slowly? Especialy because the \"CODCURSO\" column is PK and this table has only 3 registers (tiny table).\nThank you in advance!Fernando", "msg_date": "Tue, 4 Jan 2011 16:03:03 -0200", "msg_from": "Fernando Mertins <[email protected]>", "msg_from_op": true, "msg_subject": "Same stament sometime fast, something slow" }, { "msg_contents": "On 01/05/2011 05:03 AM, Fernando Mertins wrote:\n> Hi there, this is my first post.\n>\n> I have a PostgreSQL 8.3.6, compiled by Visual C++ build 1400 running on\n> Windows with virtual machine\n ^^^^^^^^^^^^^^^^^^^^\n\nWhat kind of VM host? where? what else is on the same host?\n\nYour most likely culprit is I/O contention from other guests on the same \nhost, possibly combined with I/O queuing policies on the host that \nfavour throughput over request latency.\n\nCheckpoints might also be a factor.\n\n--\nCraig Ringer\n", "msg_date": "Wed, 05 Jan 2011 12:24:19 +1100", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Same stament sometime fast, something slow" } ]
[ { "msg_contents": "Fernando Mertins wrote:\n \n> I have a PostgreSQL 8.3.6\n \nYou should consider upgrading to the latest minor release:\n \nhttp://www.postgresql.org/support/versioning\n \nhttp://www.postgresql.org/docs/8.3/static/release.html\n \n> My question is, this same query executes many times a day and many\n> times fast/normal, but why in some cases its run slowly? Especialy\n> because the \"CODCURSO\" column is PK and this table has only 3\n> registers (tiny table).\n \nTwo common causes for this are blocking and overloading the I/O\nsystem at checkpoint. You might want to turn on logging of\ncheckpoints to see if this happens only during checkpoints. See this\npage for techniques to look at blocking:\n \nhttp://wiki.postgresql.org/wiki/Lock_Monitoring\n \nIf neither of these helps, please review this page and post again\nwith more details:\n \nhttp://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n \n-Kevin\n", "msg_date": "Tue, 04 Jan 2011 12:12:45 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Same stament sometime fast, something slow" }, { "msg_contents": "Kevin Grittner wrote:\n> Two common causes for this are blocking and overloading the I/O\n> system at checkpoint. You might want to turn on logging of\n> checkpoints to see if this happens only during checkpoints. See this\n> page for techniques to look at blocking:\n> \n> http://wiki.postgresql.org/wiki/Lock_Monitoring\n> \n\nI just updated this to mention use of log_lock_waits to help here. \nLooking for patterns in log_min_duration_statement, log_checkpoints, and \nlog_lock_waits entries, seeing which tend to happen at the same time, is \nthe usual helpful trio to investigate when having intermittent slow \nqueries. Of course, with Windows running on a VM, there's a hundred \nother things that could be causing this completely unrelated to the \ndatabase.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Tue, 04 Jan 2011 20:51:49 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Same stament sometime fast, something slow" } ]
[ { "msg_contents": "All,\n\nOne of my coworkers just pointed this out:\n\n\"The amount of memory used in shared memory for WAL data. The default is\n64 kilobytes (64kB). The setting need only be large enough to hold the\namount of WAL data generated by one typical transaction, since the data\nis written out to disk at every transaction commit. This parameter can\nonly be set at server start.\"\nhttp://www.postgresql.org/docs/9.0/static/runtime-config-wal.html\n\nThat's quite incorrect. The wal_buffers are shared by all concurrent\ntransactions, so it needs to be sized appropriately for all\n*simultaneous* uncommitted transactions, otherwise you'll get\nunnecessary flushing.\n\nCertainly performance testing data posted on this list and -hackers.\nbears that out. My suggestion instead:\n\n\"The amount of shared memory dedicated to buffering writes to the WAL.\nThe default is 64 kilobytes (64kB), which is low for a busy production\nserver. Users who have high write concurrency, or transactions which\ncommit individual large data writes, will want to increase it to between\n1MB and 16MB. This parameter can only be set at server start.\"\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Wed, 05 Jan 2011 12:43:10 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Wrong docs on wal_buffers?" }, { "msg_contents": "On Wed, Jan 5, 2011 at 12:43 PM, Josh Berkus <[email protected]> wrote:\n> All,\n>\n> One of my coworkers just pointed this out:\n>\n> \"The amount of memory used in shared memory for WAL data. The default is\n> 64 kilobytes (64kB). The setting need only be large enough to hold the\n> amount of WAL data generated by one typical transaction, since the data\n> is written out to disk at every transaction commit. This parameter can\n> only be set at server start.\"\n> http://www.postgresql.org/docs/9.0/static/runtime-config-wal.html\n>\n> That's quite incorrect.  The wal_buffers are shared by all concurrent\n> transactions, so it needs to be sized appropriately for all\n> *simultaneous* uncommitted transactions, otherwise you'll get\n> unnecessary flushing.\n\nI'd thought the same thing in the past. But on further thinking about\nit, I had decided otherwise.\n\nOn a highly concurrent system, transaction commits are constantly and\nunavoidably writing and flushing other transactions' WAL.\n\nIf the transactions are well spread out, each of N concurrent\nhomogeneous transactions only has 1/N of its total WAL in shared\nbuffers at any one time, so the total does come out to about 1/N * N =\n1 typical transaction size. Throw in stochastic departures from\nuniform distribution, and it would be somewhat higher, but not N.\n\nOnly if all the transactions move through the system in lock-step,\nwould need N times the typical size for one transaction. pgbench can\ncreate this condition, but I don't know how likely it is for\nreal-world work flows to do so. Maybe it is common there as well?\n\nBut my bigger objection to the original wording is that it is very\nhard to know how much WAL a typical transaction generates, especially\nunder full_page_writes.\n\n\nAnd the risks are rather asymmetric. 
I don't know of any problem from\ntoo large a buffer until it starts crowding out shared_buffers, while\nunder-sizing leads to the rather drastic performance consequences of\nAdvanceXLInsertBuffer having to wait on the WALWriteLock while holding\nthe WALInsertLock,\n\n\n>\n> Certainly performance testing data posted on this list and -hackers.\n> bears that out.  My suggestion instead:\n>\n> \"The amount of shared memory dedicated to buffering writes to the WAL.\n> The default is 64 kilobytes (64kB), which is low for a busy production\n> server.  Users who have high write concurrency, or transactions which\n> commit individual large data writes, will want to increase it to between\n> 1MB and 16MB. This parameter can only be set at server start.\"\n\nI like this wording.\n\nBut I wonder if initdb.c, when selecting the default shared_buffers,\nshouldn't test with wal_buffers = shared_buffers/64 or\nshared_buffers/128, with a lower limit of 8 blocks, and set that as\nthe default.\n\nCheers,\n\nJeff\n", "msg_date": "Wed, 5 Jan 2011 13:45:21 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong docs on wal_buffers?" }, { "msg_contents": "\n> And the risks are rather asymmetric. I don't know of any problem from\n> too large a buffer until it starts crowding out shared_buffers, while\n> under-sizing leads to the rather drastic performance consequences of\n> AdvanceXLInsertBuffer having to wait on the WALWriteLock while holding\n> the WALInsertLock,\n\nSuppose you have a large update which generates lots of WAL, some WAL \nsegment switching will take place, and therefore some fsync()s. If \nwal_buffers is small enough that it fills up during the time it takes to \nfsync() the previous WAL segment, isn't there a risk that all WAL writes \nare stopped, waiting for the end of this fsync() ?\n", "msg_date": "Wed, 05 Jan 2011 23:58:32 +0100", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong docs on wal_buffers?" }, { "msg_contents": "\n> And the risks are rather asymmetric. I don't know of any problem from\n> too large a buffer until it starts crowding out shared_buffers, while\n> under-sizing leads to the rather drastic performance consequences of\n> AdvanceXLInsertBuffer having to wait on the WALWriteLock while holding\n> the WALInsertLock,\n\nYes, performance testing has bourne that out. Increasing wal_buffers to\nbetween 1MB and 16MB has benfitted most test cases (DBT2, pgBench, user\ndatabases) substantially, while an increase has never been shown to be a\npenalty. Increases above 16MB didn't seem to help, which is\nunsurprising given the size of a WAL segment.\n\n> But I wonder if initdb.c, when selecting the default shared_buffers,\n> shouldn't test with wal_buffers = shared_buffers/64 or\n> shared_buffers/128, with a lower limit of 8 blocks, and set that as\n> the default.\n\nWe talked about bumping it to 512kB or 1MB for 9.1. Did that get in?\nDo I need to write that patch?\n\nIt would be nice to have it default to 16MB out of the gate, but there\nwe're up against the Linux/FreeBSD SysV memory limits again. When are\nthose OSes going to modernize?\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Thu, 06 Jan 2011 10:58:59 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wrong docs on wal_buffers?" 
}, { "msg_contents": "\nOn Jan 6, 2011, at 10:58 AM, Josh Berkus wrote:\n\n> \n>> But I wonder if initdb.c, when selecting the default shared_buffers,\n>> shouldn't test with wal_buffers = shared_buffers/64 or\n>> shared_buffers/128, with a lower limit of 8 blocks, and set that as\n>> the default.\n> \n> We talked about bumping it to 512kB or 1MB for 9.1. Did that get in?\n> Do I need to write that patch?\n> \n> It would be nice to have it default to 16MB out of the gate, but there\n> we're up against the Linux/FreeBSD SysV memory limits again. When are\n> those OSes going to modernize?\n> \n\nWhy wait? Just set it to 1MB, and individual distributions can set it lower if need be (for example Mac OSX with its 4MB default shared memory limit). Bowing to lowest common denominator OS settings causes more problems than it solves IMO.\n\n> -- \n> -- Josh Berkus\n> PostgreSQL Experts Inc.\n> http://www.pgexperts.com\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Thu, 6 Jan 2011 13:50:29 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong docs on wal_buffers?" }, { "msg_contents": "On Thu, Jan 6, 2011 at 10:58 AM, Josh Berkus <[email protected]> wrote:\n>\n>> And the risks are rather asymmetric.  I don't know of any problem from\n>> too large a buffer until it starts crowding out shared_buffers, while\n>> under-sizing leads to the rather drastic performance consequences of\n>> AdvanceXLInsertBuffer having to wait on the WALWriteLock while holding\n>> the WALInsertLock,\n>\n> Yes, performance testing has bourne that out.  Increasing wal_buffers to\n> between 1MB and 16MB has benfitted most test cases (DBT2, pgBench, user\n> databases) substantially, while an increase has never been shown to be a\n> penalty.  Increases above 16MB didn't seem to help, which is\n> unsurprising given the size of a WAL segment.\n>\n>> But I wonder if initdb.c, when selecting the default shared_buffers,\n>> shouldn't test with wal_buffers = shared_buffers/64 or\n>> shared_buffers/128, with a lower limit of 8 blocks, and set that as\n>> the default.\n>\n> We talked about bumping it to 512kB or 1MB for 9.1.  Did that get in?\n\nDoesn't look like it, not yet anyway.\n\n> Do I need to write that patch?\n>\n> It would be nice to have it default to 16MB out of the gate,\n\nWould that be a good default even when the shared_buffer is only 32MB\n(the maximum that initdb will ever pick as the default)?\n\n> but there\n> we're up against the Linux/FreeBSD SysV memory limits again.  When are\n> those OSes going to modernize?\n\nI don't think that we should let that limit us.\n\nFor one thing, some Linux distributions already do have large defaults\nfor SHMMAX. SUSE, for, example, defaults to 4GB on 32-bit and much\nmuch larger on 64-bit, and I think they have for years.\n\nFor another thing, initdb already does a climb-down on shared_buffers\nuntil it finds something that works. All we would have to do is make\nwal_buffers participate in that climb-down.\n\nIf I manually set SHMMAX to 32MB, then initdb currently climbs down to\n28MB for the shared_buffers on my 32 bit machine. 
At that point, I\ncan increase wal_buffers to 896kB before shmget fails, so I think\n512kb would be a good default in that situation.\n\nMaybe initdb should test larger values for shared_buffers as well,\nrather than starting at only 32MB.\n\nCheers,\n\nJeff\n", "msg_date": "Thu, 6 Jan 2011 15:02:36 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong docs on wal_buffers?" }, { "msg_contents": "Josh Berkus wrote:\n> We talked about bumping it to 512kB or 1MB for 9.1. Did that get in?\n> Do I need to write that patch?\n> \n\nIf it defaulted to 3% of shared_buffers, min 64K & max 16MB for the auto \nsetting, it would for the most part become an autotuned parameter. That \nwould make it 0.75 to 1MB at the standard anemic Linux default kernel \nparameters. Maybe more than some would like, but dropping \nshared_buffers from 24MB to 23MB to keep this from being ridiculously \nundersized is probably a win. That percentage would reach 16MB by the \ntime shared_buffers was increased to 533MB, which also seems about right \nto me. On a really bad setup (brief pause to flip off Apple) with only \n4MB to work with total, you'd end up with wal_buffers between 64 and \n128K, so very close to the status quo.\n\nCode that up, and we could probably even remove the parameter as a \ntunable altogether. Very few would see a downside relative to any \nsensible configuration under the current situation, and many people \nwould notice better automagic performance with one less parameter to \ntweak. Given the recent investigations about the serious downsides of \ntiny wal_buffers values on new Linux kernels when using open_datasync, a \ntouch more aggression about this setting seems particularly appropriate \nto consider now. That's been swapped out as the default, but it's still \npossible people will switch to it.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Thu, 06 Jan 2011 23:37:45 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong docs on wal_buffers?" }, { "msg_contents": "On Thu, Jan 6, 2011 at 8:37 PM, Greg Smith <[email protected]> wrote:\n\n> Josh Berkus wrote:\n>\n>> We talked about bumping it to 512kB or 1MB for 9.1. Did that get in?\n>> Do I need to write that patch?\n>>\n>>\n>\n> If it defaulted to 3% of shared_buffers, min 64K & max 16MB for the auto\n> setting, it would for the most part become an autotuned parameter. That\n> would make it 0.75 to 1MB at the standard anemic Linux default kernel\n> parameters. Maybe more than some would like, but dropping shared_buffers\n> from 24MB to 23MB to keep this from being ridiculously undersized is\n> probably a win. That percentage would reach 16MB by the time shared_buffers\n> was increased to 533MB, which also seems about right to me. On a really bad\n> setup (brief pause to flip off Apple) with only 4MB to work with total,\n> you'd end up with wal_buffers between 64 and 128K, so very close to the\n> status quo.\n>\n> Code that up, and we could probably even remove the parameter as a tunable\n> altogether. Very few would see a downside relative to any sensible\n> configuration under the current situation, and many people would notice\n> better automagic performance with one less parameter to tweak. 
Given the\n> recent investigations about the serious downsides of tiny wal_buffers values\n> on new Linux kernels when using open_datasync, a touch more aggression about\n> this setting seems particularly appropriate to consider now. That's been\n> swapped out as the default, but it's still possible people will switch to\n> it.\n>\n>\nDoes it not seem that this insistence on shipping a default config that\nworks out of the box on every system incurs a dramatic penalty when it comes\nto getting a useful postgres config for a production system? It seems like\npostgres is forcing users to learn all of the fairly specialized and\nintricate details of how shared memory is utilized by the write ahead log,\nrather than asking them to modify the shared memory settings as part of the\ninstallation procedure on a handful of systems - changes which are\nrelatively common and easily documented on affected systems. Most sysadmins\nwill not be unfamiliar with modifying shared memory settings while none\nwithout postgres expertise will have a clue about configuring postgres WAL\nlogs, shared buffers, and checkpoint segments. If we're trying to provide\nan easy first-use experience for inexperienced users, doesn't it actually\nmake more sense to require a reasonable amount of shared memory rather than\nconstraining the install to function with only a tiny amount of shared\nmemory in a time when it is common even for laptops to have 4 or more\ngigabytes of RAM?\n\nI'm sure this argument has probably been done to death on this list (I'm a\nrelatively recent subscriber), but issues with configuration options with\nnearly useless values as a result of shared memory constraints in the\ndefault config sure seem to crop up a lot. Wouldn't so many issues be\nresolved if postgres shipped with useful defaults for a modern hardware\nconfig along with instructions for how to adjust shared memory constraints\nso that the config will function on each system?\n\n--sam\n\nOn Thu, Jan 6, 2011 at 8:37 PM, Greg Smith <[email protected]> wrote:\nJosh Berkus wrote:\n\nWe talked about bumping it to 512kB or 1MB for 9.1.  Did that get in?\nDo I need to write that patch?\n  \n\n\nIf it defaulted to 3% of shared_buffers, min 64K & max 16MB for the auto setting, it would for the most part become an autotuned parameter.  That would make it 0.75 to 1MB at the standard anemic Linux default kernel parameters.  Maybe more than some would like, but dropping shared_buffers from 24MB to 23MB to keep this from being ridiculously undersized is probably a win.  That percentage would reach 16MB by the time shared_buffers was increased to 533MB, which also seems about right to me.  On a really bad setup (brief pause to flip off Apple) with only 4MB to work with total, you'd end up with wal_buffers between 64 and 128K, so very close to the status quo.\n\nCode that up, and we could probably even remove the parameter as a tunable altogether.  Very few would see a downside relative to any sensible configuration under the current situation, and many people would notice better automagic performance with one less parameter to tweak.  Given the recent investigations about the serious downsides of tiny wal_buffers values on new Linux kernels when using open_datasync, a touch more aggression about this setting seems particularly appropriate to consider now.  
That's been swapped out as the default, but it's still possible people will switch to it.\n\nDoes it not seem that this insistence on shipping a default config that works out of the box on every system incurs a dramatic penalty when it comes to getting a useful postgres config for a production system?  It seems like postgres is forcing users to learn all of the fairly specialized and intricate details of how shared memory is utilized by the write ahead log, rather than asking them to modify the shared memory settings as part of the installation procedure on a handful of systems - changes which are relatively common and easily documented on affected systems. Most sysadmins will not be unfamiliar with modifying shared memory settings while none without postgres expertise will have a clue about configuring postgres WAL logs, shared buffers, and checkpoint segments.  If we're trying to provide an easy first-use experience for inexperienced users, doesn't it actually make more sense to require a reasonable amount of shared memory rather than constraining the install to function with only a tiny amount of shared memory in a time when it is common even for laptops to have 4 or more gigabytes of RAM? \nI'm sure this argument has probably been done to death on this list (I'm a relatively recent subscriber), but issues with configuration options with nearly useless values as a result of shared memory constraints in the default config sure seem to crop up a lot. Wouldn't so many issues be resolved if postgres shipped with useful defaults for a modern hardware config along with instructions for how to adjust shared memory constraints so that the config will function on each system? \n--sam", "msg_date": "Fri, 7 Jan 2011 06:09:38 -0800", "msg_from": "Samuel Gendler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong docs on wal_buffers?" }, { "msg_contents": "Samuel Gendler <[email protected]> writes:\n> Does it not seem that this insistence on shipping a default config that\n> works out of the box on every system incurs a dramatic penalty when it comes\n> to getting a useful postgres config for a production system?\n\n> I'm sure this argument has probably been done to death on this list (I'm a\n> relatively recent subscriber),\n\nNo kidding. Please review the archives.\n\nThe short answer is that even though modern machines tend to have plenty\nof RAM, they don't tend to have correspondingly large default settings\nof SHMMAX etc. If we crank up the default shared-memory-usage settings\nto the point where PG won't start in a couple of MB, we won't accomplish\na thing in terms of \"making it work out of the box\"; we'll just put\nanother roadblock in front of newbies getting to try it at all.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Jan 2011 10:07:01 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong docs on wal_buffers? " }, { "msg_contents": "On Fri, Jan 7, 2011 at 7:07 AM, Tom Lane <[email protected]> wrote:\n\n> Samuel Gendler <[email protected]> writes:\n> > Does it not seem that this insistence on shipping a default config that\n> > works out of the box on every system incurs a dramatic penalty when it\n> comes\n> > to getting a useful postgres config for a production system?\n>\n> > I'm sure this argument has probably been done to death on this list (I'm\n> a\n> > relatively recent subscriber),\n>\n> No kidding. 
Please review the archives.\n>\n> The short answer is that even though modern machines tend to have plenty\n> of RAM, they don't tend to have correspondingly large default settings\n> of SHMMAX etc. If we crank up the default shared-memory-usage settings\n> to the point where PG won't start in a couple of MB, we won't accomplish\n> a thing in terms of \"making it work out of the box\"; we'll just put\n> another roadblock in front of newbies getting to try it at all.\n>\n>\nYes, I understand that. I was trying to make the point that, in an attempt\nto make things very easy for novice users, we are actually making them quite\na bit more complex for novice users who want to do anything besides start\nthe server. But no need to have the debate again.\n\n--sam\n\nOn Fri, Jan 7, 2011 at 7:07 AM, Tom Lane <[email protected]> wrote:\nSamuel Gendler <[email protected]> writes:\n> Does it not seem that this insistence on shipping a default config that\n> works out of the box on every system incurs a dramatic penalty when it comes\n> to getting a useful postgres config for a production system?\n\n> I'm sure this argument has probably been done to death on this list (I'm a\n> relatively recent subscriber),\n\nNo kidding.  Please review the archives.\n\nThe short answer is that even though modern machines tend to have plenty\nof RAM, they don't tend to have correspondingly large default settings\nof SHMMAX etc.  If we crank up the default shared-memory-usage settings\nto the point where PG won't start in a couple of MB, we won't accomplish\na thing in terms of \"making it work out of the box\"; we'll just put\nanother roadblock in front of newbies getting to try it at all.\nYes, I understand that.  I was trying to make the point that, in an attempt to make things very easy for novice users, we are actually making them quite a bit more complex for novice users who want to do anything besides start the server.  But no need to have the debate again.\n--sam", "msg_date": "Fri, 7 Jan 2011 09:25:15 -0800", "msg_from": "Samuel Gendler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong docs on wal_buffers?" }, { "msg_contents": "Samuel Gendler wrote:\n> I was trying to make the point that, in an attempt to make things very \n> easy for novice users, we are actually making them quite a bit more \n> complex for novice users who want to do anything besides start the server.\n\nPeople who can't start the server often abandon PostgreSQL, never come \nback again. And in the worst case, they popularize their frustration \nvia blogs etc. That contributes to the perception the program is hard \nto use far more than people who run into problems only with \nperformance--people expect database tuning to be non-trivial, but they \nhave minimal tolerance for trouble when first trying to use a program. \n From an advocacy perspective, there is no higher priority than making \nsure things work as smoothly as feasible for people who have never run \nPostgreSQL before. Changing the software so it doesn't work out of the \nbox on a system with minimal shared memory defaults, as seen on Linux \nand other operating systems, would be one of the worst possible changes \nto the database anyone could make.\n\nAbout once a month now I came across someone who used my pgtune \nprogram: https://github.com/gregs1104/pgtune to get a reasonable \nstarting configuration. That program is still rough and has \nlimitations, but the general idea turns out to work just fine. 
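\n\nTo put rough numbers on the 3% autotuning idea from my earlier message (treat these purely as illustrations of the formula, not as tuning advice for any particular machine), the two ends of the range work out to roughly:\n\n    shared_buffers = 32MB     =>  wal_buffers around 1MB\n    shared_buffers = 533MB+   =>  wal_buffers pinned at 16MB (one WAL segment)\n\nwith everything in between scaling linearly.\n\n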
I'm \nbored with discussions about making this easier for users unless they \ninvolve people volunteering to help with the coding needed to turn that, \nor something better than it, into a release quality tool. The path to \nsort this out is mapped out in excruciating detail from my perspective. \nThe only thing missing are code contributors with time to walk down it. \nSo far it's me, occasional code refinement from Matt Harrison, some \npackager help, and periodic review from people like Josh Berkus. And \nthat's just not good enough to make progress on this particular front \nquickly enough.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services and Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Fri, 07 Jan 2011 13:46:12 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong docs on wal_buffers?" } ]