threads (listlengths 1–275) |
---|
[
{
"msg_contents": "> From: Bill Moran [mailto:[email protected]]\n> In response to \"Medora Schauer\" <[email protected]>:\n> \n> > I've recently moved to 8.1 and find that autovacuum doesn't seem to\nbe\n> > working, at least not the way I expected it to. I need the tuple\ncount\n> > for a table to be updated so indexes will be used when appropriate.\nI\n> > was expecting the tuples count for a table to be updated after\n> > autovacuum ran. This doesn't seem to be the case. I added 511\nrecords\n> > to a previously empty table and waited over an hour. Tuples for the\n> > table (as per pgaccess) was 0. After I did a manual vacuum analyze\nit\n> > went to 511.\n> \n> From your attached config file:\n> \n> #autovacuum_vacuum_threshold = 1000\t# min # of tuple updates before\n> \t\t\t\t\t# vacuum\n>\n\nYup, that was it.\n\nThanks.\n\nMedora\n\n",
"msg_date": "Mon, 9 Oct 2006 09:27:30 -0500",
"msg_from": "\"Medora Schauer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: autovacuum not working?"
},
{
"msg_contents": "On Mon, Oct 09, 2006 at 09:27:30AM -0500, Medora Schauer wrote:\n> > From your attached config file:\n> > \n> > #autovacuum_vacuum_threshold = 1000\t# min # of tuple updates before\n> > \t\t\t\t\t# vacuum\n> >\n> \n> Yup, that was it.\n\nActually, not quite.\n\nVacuum will update relpages and reltuples, but it won't update other\nstats. That's what analyze does (autovacuum_analyze_threshold). By\ndefault, that's set to 500; I'll typically drop it to 200 or so (keep in\nmind that analyze is much cheaper than vacuum).\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Mon, 9 Oct 2006 16:26:15 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: autovacuum not working?"
}
] |
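The thread above turns on two settings and one manual step. Below is a minimal sketch of that manual step, using a hypothetical table name (`my_table`); `reltuples`/`relpages` in `pg_class` are the estimates the planner consults, and they are refreshed by ANALYZE (or by autovacuum once `autovacuum_analyze_threshold` is crossed).

```sql
-- After a bulk load, refresh planner statistics by hand:
ANALYZE my_table;

-- reltuples and relpages are the row/page estimates the planner sees.
SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname = 'my_table';
```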
[
{
"msg_contents": "I have two systems running 8.2beta1 getting strange difference of\nresults in count(*). Query that illistrates the difference is\ncount(*). this is a synthetic test i use to measure a sytems's cpu\nperformance.\n\nSystem A:\n2.2 ghz p4 northwood, HT\nwin xp\nvanilla sata (1 disk)\n\nSystem B:\namd 64 3700+\nlinux cent/os 4.4 32 bit\n4 raptors, raid 5, 3ware\n\nexplain analyze select 5000!;\nA: 2.4 seconds\nB: 1.8 seconds\n\nexplain analyze select count(*) from generate_series(1,500000);\nA: 0.85 seconds\nB: 4.94 seconds\n\nBoth systems have a freshly minted database. By all resepcts I would\nexpect B to outperform A on most cpu bound tests with a faster\nprocessor and linux kernel. memory is not an issue here, varying the\nsize of the count(*) does not effect the results, A is always 5x\nfaster than B. the only two variables i see are cpu and o/s.\n\nAlso tested on pg 8.1, results are same except pg 8.2 is about 10%\nfaster on both systems for count(*). (yay!) :-)\n\nanybody think of anything obvious? should i profile? (windows mingw\nprofiling sucks)\n\nmerlin\n",
"msg_date": "Mon, 9 Oct 2006 14:17:24 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "odd variances in count(*) times"
},
{
"msg_contents": "* Merlin Moncure ([email protected]) wrote:\n> explain analyze select 5000!;\n> A: 2.4 seconds\n> B: 1.8 seconds\n> \n> explain analyze select count(*) from generate_series(1,500000);\n> A: 0.85 seconds\n> B: 4.94 seconds\n\nTry w/o the explain analyze. It adds quite a bit of overhead and that\nmight be inconsistant between the systems (mainly it may have to do with\nthe gettimeofday() calls being implemented differently between Windows\nand Linux..).\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Mon, 9 Oct 2006 14:30:26 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: odd variances in count(*) times"
},
{
"msg_contents": "On 10/9/06, Stephen Frost <[email protected]> wrote:\n> * Merlin Moncure ([email protected]) wrote:\n> > explain analyze select 5000!;\n> > A: 2.4 seconds\n> > B: 1.8 seconds\n> >\n> > explain analyze select count(*) from generate_series(1,500000);\n> > A: 0.85 seconds\n> > B: 4.94 seconds\n>\n> Try w/o the explain analyze. It adds quite a bit of overhead and that\n> might be inconsistant between the systems (mainly it may have to do with\n> the gettimeofday() calls being implemented differently between Windows\n> and Linux..).\n\nthat was it. amd system now drop to .3 seconds, windows .6. (doing\ntime foo > psql -c bar > file). thanks...\n\nmerlin\n",
"msg_date": "Mon, 9 Oct 2006 14:41:07 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: odd variances in count(*) times"
},
{
"msg_contents": "On Mon, Oct 09, 2006 at 02:41:07PM -0400, Merlin Moncure wrote:\n> that was it. amd system now drop to .3 seconds, windows .6. (doing\n> time foo > psql -c bar > file). thanks...\n\nWhat you want is probably \\timing in psql, by the way. :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Mon, 9 Oct 2006 20:45:04 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: odd variances in count(*) times"
},
{
"msg_contents": "On Mon, Oct 09, 2006 at 02:41:07PM -0400, Merlin Moncure wrote:\n> On 10/9/06, Stephen Frost <[email protected]> wrote:\n> >* Merlin Moncure ([email protected]) wrote:\n> >> explain analyze select 5000!;\n> >> A: 2.4 seconds\n> >> B: 1.8 seconds\n> >>\n> >> explain analyze select count(*) from generate_series(1,500000);\n> >> A: 0.85 seconds\n> >> B: 4.94 seconds\n> >\n> >Try w/o the explain analyze. It adds quite a bit of overhead and that\n> >might be inconsistant between the systems (mainly it may have to do with\n> >the gettimeofday() calls being implemented differently between Windows\n> >and Linux..).\n> \n> that was it. amd system now drop to .3 seconds, windows .6. (doing\n> time foo > psql -c bar > file). thanks...\n\nYou can also turn timing on in psql.\n\nAnd FWIW, RAID5 generally isn't a good idea for databases.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Mon, 9 Oct 2006 16:23:56 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: odd variances in count(*) times"
},
{
"msg_contents": "On 10/10/06, Jim C. Nasby <[email protected]> wrote:\n> > >Try w/o the explain analyze. It adds quite a bit of overhead and that\n> > >might be inconsistant between the systems (mainly it may have to do with\n> > >the gettimeofday() calls being implemented differently between Windows\n> > >and Linux..).\n> >\n> > that was it. amd system now drop to .3 seconds, windows .6. (doing\n> > time foo > psql -c bar > file). thanks...\n>\n> You can also turn timing on in psql.\n>\n> And FWIW, RAID5 generally isn't a good idea for databases.\n\nthats just our development box here at the office. production system\nruns on something much more extravagent :).\n\nmelrin\n",
"msg_date": "Tue, 10 Oct 2006 12:00:07 +0530",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: odd variances in count(*) times"
}
] |
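A sketch of the timing approach suggested in the thread: `\timing` in psql reports elapsed time on the client without the per-row `gettimeofday()` instrumentation that EXPLAIN ANALYZE adds, which is what skewed the comparison between the two machines.

```sql
-- In an interactive psql session:
\timing
SELECT count(*) FROM generate_series(1, 500000);

-- For comparison, the instrumented measurement used earlier in the thread:
EXPLAIN ANALYZE SELECT count(*) FROM generate_series(1, 500000);
```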
[
{
"msg_contents": "[Jim C. Nasby - Mon at 04:18:27PM -0500]\n> I can agree to that, but we'll never get any progress so long as every\n> time hints are brought up the response is that they're evil and should\n> never be in the database. I'll also say that a very simple hinting\n> language (ie: allowing you to specify access method for a table, and\n> join methods) would go a huge way towards enabling app developers to get\n> stuff done now while waiting for all these magical optimizer\n> improvements that have been talked about for years.\n\nJust a comment from the side line; can't the rough \"set\nenable_seqscan=off\" be considered as sort of a hint anyway? There have\nbeen situations where we've actually had to resort to such crud.\n\nBeeing able to i.e. force a particular index is something I really\nwouldn't put into the application except for as a very last resort,\n_but_ beeing able to force i.e. the use of a particular index in an\ninteractive 'explain analyze'-query would often be ... if not outright\nuseful, then at least very interessting.\n\n",
"msg_date": "Mon, 9 Oct 2006 23:33:03 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Simple join optimized badly?"
},
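A sketch of how that rough "hint" is usually scoped when used interactively; the query and table are hypothetical, and `SET LOCAL` confines the toggle to one transaction so it never leaks into application sessions.

```sql
BEGIN;
SET LOCAL enable_seqscan = off;              -- discourage sequential scans in this transaction only
EXPLAIN ANALYZE
SELECT * FROM orders WHERE customer_id = 42; -- hypothetical query being investigated
ROLLBACK;                                    -- the setting reverts with the transaction
```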
{
"msg_contents": "On Mon, Oct 09, 2006 at 11:33:03PM +0200, Tobias Brox wrote:\n> [Jim C. Nasby - Mon at 04:18:27PM -0500]\n> > I can agree to that, but we'll never get any progress so long as every\n> > time hints are brought up the response is that they're evil and should\n> > never be in the database. I'll also say that a very simple hinting\n> > language (ie: allowing you to specify access method for a table, and\n> > join methods) would go a huge way towards enabling app developers to get\n> > stuff done now while waiting for all these magical optimizer\n> > improvements that have been talked about for years.\n> \n> Just a comment from the side line; can't the rough \"set\n> enable_seqscan=off\" be considered as sort of a hint anyway? There have\n> been situations where we've actually had to resort to such crud.\n> \n> Beeing able to i.e. force a particular index is something I really\n> wouldn't put into the application except for as a very last resort,\n> _but_ beeing able to force i.e. the use of a particular index in an\n> interactive 'explain analyze'-query would often be ... if not outright\n> useful, then at least very interessting.\n\nOne of the big problems with doing set enable_...=off is that there's no\nway to embed that into something like a view, so you're almost forced\ninto putting into the application code itself, which makes matters even\nworse. If you could hint this within a query (maybe even on a per-table\nlevel), you could at least encapsulate that into a view.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Mon, 9 Oct 2006 17:30:31 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly?"
},
{
"msg_contents": "> \n> One of the big problems with doing set enable_...=off is that there's no\n> way to embed that into something like a view, so you're almost forced\n> into putting into the application code itself, which makes matters even\n> worse. If you could hint this within a query (maybe even on a per-table\n> level), you could at least encapsulate that into a view.\n\nYou can easily pass multiple statements within a single exec() or push\nit into an SPF.\n\nJoshua D. Drake\n\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Mon, 09 Oct 2006 15:41:09 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly?"
},
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> One of the big problems with doing set enable_...=off is that there's no\n> way to embed that into something like a view, so you're almost forced\n> into putting into the application code itself, which makes matters even\n> worse. If you could hint this within a query (maybe even on a per-table\n> level), you could at least encapsulate that into a view.\n\nYou've almost reinvented one of the points that was made in the last\ngo-round on the subject of hints, which is that keeping them out of the\napplication code is an important factor in making them manageable by a\nDBA. Hints stored in a system catalog (and probably having the form of\n\"make this statistical assumption\" rather than specifically \"use that\nplan\") would avoid many of the negatives.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 09 Oct 2006 18:45:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly? "
},
{
"msg_contents": "On Mon, Oct 09, 2006 at 06:45:16PM -0400, Tom Lane wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n> > One of the big problems with doing set enable_...=off is that there's no\n> > way to embed that into something like a view, so you're almost forced\n> > into putting into the application code itself, which makes matters even\n> > worse. If you could hint this within a query (maybe even on a per-table\n> > level), you could at least encapsulate that into a view.\n> \n> You've almost reinvented one of the points that was made in the last\n> go-round on the subject of hints, which is that keeping them out of the\n> application code is an important factor in making them manageable by a\n> DBA. Hints stored in a system catalog (and probably having the form of\n> \"make this statistical assumption\" rather than specifically \"use that\n> plan\") would avoid many of the negatives.\n\nSure, but IIRC no one's figured out what that would actually look like,\nwhile it's not hard to come up with a syntax that allows you to tell the\noptimizer \"scan index XYZ to access this table\". (And if there's real\ninterest in adding that I'll come up with a proposal.)\n\nI'd rather have the ugly solution sooner rather than the elegant one\nlater (if ever).\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Tue, 10 Oct 2006 09:00:50 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly?"
},
{
"msg_contents": "On Mon, Oct 09, 2006 at 03:41:09PM -0700, Joshua D. Drake wrote:\n> > \n> > One of the big problems with doing set enable_...=off is that there's no\n> > way to embed that into something like a view, so you're almost forced\n> > into putting into the application code itself, which makes matters even\n> > worse. If you could hint this within a query (maybe even on a per-table\n> > level), you could at least encapsulate that into a view.\n> \n> You can easily pass multiple statements within a single exec() or push\n> it into an SPF.\n\nUnless I'm missing something, putting multiple statements in a single\nexec means you're messing with the application code. And you can't\nupdate a SRF (also means messing with the application code). Though, I\nsuppose you could update a view that pulled from an SRF...\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Tue, 10 Oct 2006 09:02:22 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly?"
},
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> I'd rather have the ugly solution sooner rather than the elegant one\n> later (if ever).\n\nThe trouble with that is that we couldn't ever get rid of it, and we'd\nbe stuck with backward-compatibility concerns with the first (over\nsimplified) design. It's important to get it right the first time,\nat least for stuff that you know perfectly well is going to end up\nembedded in application code.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 Oct 2006 10:14:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly? "
},
{
"msg_contents": "On Tue, Oct 10, 2006 at 10:14:48AM -0400, Tom Lane wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n> > I'd rather have the ugly solution sooner rather than the elegant one\n> > later (if ever).\n> \n> The trouble with that is that we couldn't ever get rid of it, and we'd\n> be stuck with backward-compatibility concerns with the first (over\n> simplified) design. It's important to get it right the first time,\n> at least for stuff that you know perfectly well is going to end up\n> embedded in application code.\n\nWe've depricated things before, I'm sure we'll do it again. Yes, it's a\npain, but it's better than not having anything release after release.\nAnd having a formal hint language would at least allow us to eventually\nclean up some of these oddball cases, like the OFFSET 0 hack.\n\nI'm also not convinced that even supplimental statistics will be enough\nto ensure the planner always does the right thing, so query-level hints\nmay have to stay (though it'd be great if that wasn't the case).\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Tue, 10 Oct 2006 09:21:02 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly?"
},
{
"msg_contents": "Jim C. Nasby wrote:\n> On Mon, Oct 09, 2006 at 03:41:09PM -0700, Joshua D. Drake wrote:\n>>> One of the big problems with doing set enable_...=off is that there's no\n>>> way to embed that into something like a view, so you're almost forced\n>>> into putting into the application code itself, which makes matters even\n>>> worse. If you could hint this within a query (maybe even on a per-table\n>>> level), you could at least encapsulate that into a view.\n>> You can easily pass multiple statements within a single exec() or push\n>> it into an SPF.\n> \n> Unless I'm missing something, putting multiple statements in a single\n> exec means you're messing with the application code. And you can't\n> update a SRF (also means messing with the application code). Though, I\n> suppose you could update a view that pulled from an SRF...\n\nI always think of application code as outside the db. I was thinking\nmore in layers.\n\nJoshua D. Drake\n\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Tue, 10 Oct 2006 07:23:30 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly?"
},
{
"msg_contents": "On Mon, Oct 09, 2006 at 23:33:03 +0200,\n Tobias Brox <[email protected]> wrote:\n> \n> Just a comment from the side line; can't the rough \"set\n> enable_seqscan=off\" be considered as sort of a hint anyway? There have\n> been situations where we've actually had to resort to such crud.\n\nThat only works for simple queries. To be generally useful, you want to\nbe able to hint how to handle each join being done in the query. The\ncurrent controlls affect all joins.\n",
"msg_date": "Tue, 10 Oct 2006 11:45:34 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly?"
},
{
"msg_contents": "Jim,\n\n> We've depricated things before, I'm sure we'll do it again. Yes, it's a\n> pain, but it's better than not having anything release after release.\n> And having a formal hint language would at least allow us to eventually\n> clean up some of these oddball cases, like the OFFSET 0 hack.\n>\n> I'm also not convinced that even supplimental statistics will be enough\n> to ensure the planner always does the right thing, so query-level hints\n> may have to stay (though it'd be great if that wasn't the case).\n\n\"stay\"? I don't think that the general developers of PostgreSQL are going \nto *accept* anything that stands a significant chance of breaking in one \nrelease. You have you challange for the EDB development team: come up \nwith a hinting language which is flexible enough not to do more harm than \ngood (hint: it's not Oracle's hints).\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n",
"msg_date": "Tue, 10 Oct 2006 10:28:29 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly?"
},
{
"msg_contents": "On Tue, Oct 10, 2006 at 10:28:29AM -0700, Josh Berkus wrote:\n> Jim,\n> \n> > We've depricated things before, I'm sure we'll do it again. Yes, it's a\n> > pain, but it's better than not having anything release after release.\n> > And having a formal hint language would at least allow us to eventually\n> > clean up some of these oddball cases, like the OFFSET 0 hack.\n> >\n> > I'm also not convinced that even supplimental statistics will be enough\n> > to ensure the planner always does the right thing, so query-level hints\n> > may have to stay (though it'd be great if that wasn't the case).\n> \n> \"stay\"? I don't think that the general developers of PostgreSQL are going \n> to *accept* anything that stands a significant chance of breaking in one \n> release. You have you challange for the EDB development team: come up \n> with a hinting language which is flexible enough not to do more harm than \n> good (hint: it's not Oracle's hints).\n\nMy point was that I think we'll always have a need for fine-grained (ie:\ntable and join level) hints, even if we do get the ability for users to\nover-ride the statistics system. It's just not possible to come up with\nautomation that will handle every possible query that can be thrown at a\nsystem. I don't see how that means breaking anything in a given release.\nWorst-case, the optimizer might be able to do a better job of something\nthan hints written for an older version of the database, but that's\ngoing to be true of any planner override we come up with.\n\nBTW, I'm not speaking for EnterpriseDB or it's developers here... query\nhints are something I feel we've needed for a long time.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Tue, 10 Oct 2006 14:26:23 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly?"
}
] |
[
{
"msg_contents": "PG does support hints actually.. and I used them to solve the last performance\nproblem I had, rather than waiting n years for the query planner to be\nimproved. The problem in question (from an automated query planning point of\nview) is the lack of multi-column statistics, leading to the wrong index being\nused.\n\nThe only thing is, the hints are expressed in an obscure, ad-hoc and\nimplementation dependant language.\n\nFor example, the \"Don't use index X\" hint (the one I used) can be accessed by\nreplacing your index with an index on values derived from the actual index,\ninstead of the values themselves. Then that index is not available during\nnormal query planning.\n\nAnother example is the \"Maybe use index on X and also sort by X\" hint, which\nyou access by adding \"ORDER BY X\" to your query. That would have solved my\nproblem for a simple select, but it didn't help for an update.\n\nThen there's the \"Don't use seq scan\" hint, which is expressed as \"set\nenable_seqscan=off\". That can help when it mistakenly chooses seq scan.\n\nAnd there are many more such hints, which are regularly used by PG users to\nwork around erroneous query plans.\n\nWhile writing this email, I had an idea for a FAQ, which would tell PG users\nhow to access this informal hint language:\n\nQ: The query planner keeps choosing the wrong index. How do I force it to use\nthe correct index?\n\nA: Have you analyzed your tables, increased statistics, etc etc etc? If that\ndoesn't help, you can change the index to use a value derived from the actual\nrow values. Then the index will not be available unless you explicitly use the\nderived values in your conditions.\n\nWith such a FAQ, us people who use PG in the real world can have our queries\nrunning reliably and efficiently, while work to improve the query planner continues.\n",
"msg_date": "Tue, 10 Oct 2006 11:10:42 +1000 (EST)",
"msg_from": "Brian Herlihy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Simple join optimized badly?"
},
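A sketch of the "don't use index X" workaround described above, on a hypothetical table `t` with text columns `b` and `c`: the index is declared on a derived expression instead of the raw columns, so the planner cannot match it against plain conditions on `b` and `c` during normal planning.

```sql
-- Replace the ordinary index on (b, c) with one on a derived value:
CREATE INDEX t_bc_derived_idx ON t ((b || ''), c);

-- Only queries that spell out the derived expression can use it:
SELECT * FROM t WHERE (b || '') = 'some value' AND c = 'other value';
```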
{
"msg_contents": "Brian Herlihy wrote:\n> PG does support hints actually.. \n> The only thing is, the hints are expressed in an obscure, ad-hoc and\n> implementation dependant language.\n> \n> For example, the \"Don't use index X\" hint (the one I used) can be accessed by\n> replacing your index with an index on values derived from the actual index...\n\nAnd then there's \n\n select ... from (select ... offset 0)\n\nwhere the \"offset 0\" prevents any rewriting between the two levels of query. This replaces joins and AND clauses where the planner makes the wrong choice of join order or filtering. I grepped my code and found four of these (all workarounds for the same underlying problem).\n\nImagine I got run over by a train, and someone was reading my code. Which would be easier for them to maintain: Code with weird SQL, or code with sensible, well-written SQL and explicit hints? Luckily for my (hypothetical, I hope) successor, I put massive comments in my code explaining the strange SQL.\n\nThe bad applications are ALREADY HERE. And they're WORSE to maintain than if we had a formal hint language. The argument that hints lead to poor application is true. But lack of hints leads to worse applications.\n\nCraig\n\n",
"msg_date": "Mon, 09 Oct 2006 20:16:28 -0700",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly?"
},
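A sketch of the `OFFSET 0` fence mentioned above, with hypothetical tables: the dummy offset keeps the subquery as its own plan level, so its conditions and join order are not pulled up and re-planned together with the outer query.

```sql
SELECT o.*, c.name
FROM (
    SELECT * FROM orders WHERE status = 'open' OFFSET 0  -- fence: subquery is not flattened
) AS o
JOIN customers AS c USING (customer_id);
```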
{
"msg_contents": "\n> Imagine I got run over by a train, and someone was reading my code. \n> Which would be easier for them to maintain: Code with weird SQL, or code\n> with sensible, well-written SQL and explicit hints?\n\nYou forgot the most important option:\n\nCode with appropriate documentation about your weird SQL.\n\nIf you document your code, your argument is moot.\n\nJoshua D. Drake\n\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Mon, 09 Oct 2006 20:22:39 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly?"
},
{
"msg_contents": "On Mon, Oct 09, 2006 at 08:22:39PM -0700, Joshua D. Drake wrote:\n> \n> > Imagine I got run over by a train, and someone was reading my code. \n> > Which would be easier for them to maintain: Code with weird SQL, or code\n> > with sensible, well-written SQL and explicit hints?\n> \n> You forgot the most important option:\n> \n> Code with appropriate documentation about your weird SQL.\n> \n> If you document your code, your argument is moot.\n\nYou apparently didn't read the whole email. He said he did document his\ncode. But his point is still valid: obscure code is bad even with\ndocumentation. Would you put something from the obfuscated C contest\ninto production with comments describing what it does, or would you just\nwrite the code cleanly to begin with?\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Tue, 10 Oct 2006 09:07:03 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly?"
},
{
"msg_contents": "On Tue, Oct 10, 2006 at 09:07:03AM -0500, Jim C. Nasby wrote:\n> Would you put something from the obfuscated C contest\n> into production with comments describing what it does,\n\nIf nothing else, it would be a nice practical joke =)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Tue, 10 Oct 2006 16:16:00 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly?"
},
{
"msg_contents": "Jim C. Nasby wrote:\n> On Mon, Oct 09, 2006 at 08:22:39PM -0700, Joshua D. Drake wrote:\n>>> Imagine I got run over by a train, and someone was reading my code. \n>>> Which would be easier for them to maintain: Code with weird SQL, or code\n>>> with sensible, well-written SQL and explicit hints?\n>> You forgot the most important option:\n>>\n>> Code with appropriate documentation about your weird SQL.\n>>\n>> If you document your code, your argument is moot.\n> \n> You apparently didn't read the whole email. He said he did document his\n> code. But his point is still valid: obscure code is bad even with\n> documentation. Would you put something from the obfuscated C contest\n> into production with comments describing what it does, or would you just\n> write the code cleanly to begin with?\n\nYou are comparing apples to oranges. We aren't talking about an\nobfuscated piece of code. We are talking about an SQL statement that\nsolves a particular problem.\n\nThat can easily be documented, and documented with enough verbosity that\nit is never a question, except to test and see if the problem exists in\ncurrent versions.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Tue, 10 Oct 2006 07:24:49 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly?"
},
{
"msg_contents": "Steinar H. Gunderson wrote:\n> On Tue, Oct 10, 2006 at 09:07:03AM -0500, Jim C. Nasby wrote:\n>> Would you put something from the obfuscated C contest\n>> into production with comments describing what it does,\n> \n> If nothing else, it would be a nice practical joke =)\n\nnice isn't the word I would use ;)\n\nJoshua D. Drake\n\n> \n> /* Steinar */\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Tue, 10 Oct 2006 07:25:28 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly?"
},
{
"msg_contents": "-- tom lane wrote ---------------------------------------------------------\n\"Jim C. Nasby\" <[email protected]> writes:\n> I'd rather have the ugly solution sooner rather than the elegant one\n> later (if ever).\n\nThe trouble with that is that we couldn't ever get rid of it, and we'd\nbe stuck with backward-compatibility concerns with the first (over\nsimplified) design. It's important to get it right the first time,\nat least for stuff that you know perfectly well is going to end up\nembedded in application code.\n\n\t\t\tregards, tom lane\n---------------------------------------------------------------------------\n\nI agree that it's important to get it right the first time. It's also\nimportant that my queries use the right index NOW. It's no use to me if my\nqueries run efficiently in the next release when I am running those queries\nright now.\n\nHints would allow me to do that.\n\nWhat would it take for hints to be added to postgres? If someone designed a\nhint system that was powerful and flexible, and offered to implement it\nthemselves, would this be sufficient? This would address the concerns of\nhaving a \"bad\" hint system, and also the concern of time being better spent on\nother things.\n\nI want to know if the other objections to hints, such as hints being left\nbehind after an improvement to the optimizer, would also be an issue. I don't\nsee this objection as significant, as people are already using ad hoc hacks\nwhere they would otherwise use hints. The other reason I don't accept this\nobjection is that people who care about performance will review their code\nafter every DBMS upgrade, and they will read the release notes :)\n",
"msg_date": "Wed, 11 Oct 2006 11:22:02 +1000 (EST)",
"msg_from": "Brian Herlihy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Simple join optimized badly?"
},
{
"msg_contents": "Brian Herlihy <[email protected]> writes:\n> What would it take for hints to be added to postgres?\n\nA *whole lot* more thought and effort than has been expended on the\nsubject to date.\n\nPersonally I have no use for the idea of \"force the planner to do\nexactly X given a query of exactly Y\". You don't have exactly Y\ntoday, tomorrow, and the day after (if you do, you don't need a\nhint mechanism at all, you need a mysql-style query cache).\nIMHO most of the planner mistakes we see that could be fixed via\nhinting are really statistical estimation errors, and so the right\nlevel to be fixing them at is hints about how to estimate the number\nof rows produced for given conditions. Mind you that's still a plenty\nhard problem, but you could at least hope that a hint of that form\nwould be useful for more than one query.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 Oct 2006 22:38:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly? "
},
{
"msg_contents": "--- Tom Lane <[email protected]> wrote:\n> Personally I have no use for the idea of \"force the planner to do\n> exactly X given a query of exactly Y\". You don't have exactly Y\n> today, tomorrow, and the day after (if you do, you don't need a\n> hint mechanism at all, you need a mysql-style query cache).\n\nI don't agree here. I have \"exactly Y\" running millions of times daily. \nThere's enough data that the statistics on specific values don't help all that\nmuch, even at the maximum statistics collection level. By \"exactly Y\" I mean\nthe form of the query is identical, and the query plan is identical, since only\nthe general statistics are being used for most executions of the query. The\nspecific values vary, so caching is no help.\n\nIn summary, I have a need to run \"exactly Y\" with query plan \"exactly X\".\n(detail in postscript)\n\n> IMHO most of the planner mistakes we see that could be fixed via\n> hinting are really statistical estimation errors, and so the right\n> level to be fixing them at is hints about how to estimate the number\n> of rows produced for given conditions.\n\nDo you mean something like \"The selectivity of these two columns together is\nreally X\"? That would solve my specific problem. And the academic part of me\nlikes the elegance of that solution.\n\nOn the negative side, it means people must learn how the optimizer uses\nstatistics (which I would never have done if I could have said \"Use index X\").\n\n> Mind you that's still a plenty\n> hard problem, but you could at least hope that a hint of that form\n> would be useful for more than one query.\n\nYes it would be useful for more than one query. I agree that it's the \"right\"\nlevel to hint at, in that it is at a higher level. Maybe the right level is\nnot the best level though? In a business environment, you just want things to\nwork, you don't want to analyze a problem all the way through and find the\nbest, most general solution. As a former academic I understand the two points\nof view, and I don't think either is correct or wrong. Each view has its\nplace.\n\nSince I work for a business now, my focus is on making quick fixes that keep\nthe system running smoothly. Solving problems in the \"right\" way is not\nimportant. If the query slows down again later, we will examine the query plan\nand do whatever we have to do to fix it. It's not elegant, but it gives fast\nresponse times to the customers, and that's what matters.\n\n\nPS The case in question is a table with a 3-column primary key on (A, B, C). \nIt also has an index on (B, C). Re-ordering the primary key doesn't help as I\ndo lookups on A only as well. When I specify A, B and C (the primary key), the\noptimizer chooses the (B, C) index, on the assumption that specifying these two\nvalues will return only 1 row. But high correlation between B and C leads to\n100s of rows being returned, and the query gets very slow. The quick fix is to\nsay \"Use index (A, B, C)\". The statistics level fix would be to say \"B and C\nreally have high correlation\".\n",
"msg_date": "Wed, 11 Oct 2006 13:57:58 +1000 (EST)",
"msg_from": "Brian Herlihy <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Simple join optimized badly? "
},
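For reference, the per-column knob referred to earlier as "increased statistics" looks like the sketch below on the hypothetical (A, B, C) table; it raises the sample detail for each column individually but still cannot express the cross-column correlation between B and C that causes the misestimate.

```sql
ALTER TABLE t ALTER COLUMN b SET STATISTICS 1000;  -- raise the per-column statistics target
ALTER TABLE t ALTER COLUMN c SET STATISTICS 1000;
ANALYZE t;                                         -- re-sample with the new targets
```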
{
"msg_contents": "> Brian Herlihy <[email protected]> writes:\n> > What would it take for hints to be added to postgres?\n> \n> A *whole lot* more thought and effort than has been expended on the\n> subject to date.\n> \n> Personally I have no use for the idea of \"force the planner to do\n> exactly X given a query of exactly Y\". You don't have exactly Y\n> today, tomorrow, and the day after (if you do, you don't need a\n> hint mechanism at all, you need a mysql-style query cache).\n> IMHO most of the planner mistakes we see that could be fixed via\n> hinting are really statistical estimation errors, and so the right\n> level to be fixing them at is hints about how to estimate the number\n> of rows produced for given conditions. Mind you that's still a plenty\n> hard problem, but you could at least hope that a hint of that form\n> would be useful for more than one query.\n> \n\nDo I understand correctly that you're suggesting it might not be a bad\nidea to allow users to provide statistics?\n\nIs this along the lines of \"I'm loading a big table and touching every\nrow of data, so I may as well collect some stats along the way\" and \"I\nknow my data contains these statistical properties, but the analyzer\nwasn't able to figure that out (or maybe can't figure it out efficiently\nenough)\"?\n\nWhile it seems like this would require more knowledge from the user\n(e.g. more about their data, how the planner works, and how it uses\nstatistics) this would actually be helpful/required for those who really\ncare about performance. I guess it's the difference between a tool\nadvanced users can get long term benefit from, or a quick fix that will\nprobably come back to bite you. I've been pleased with Postgres'\nthoughtful design; recently I've been doing some work with MySQL, and\ncan't say I feel the same way.\n\nAlso, I'm guessing this has already come up at some point, but what\nabout allowing PG to do some stat collection during queries? If you're\ntouching a lot of data (such as an import process) wouldn't it be more\nefficient (and perhaps more accurate) to collect stats then, rather than\nhaving to re-scan? It would be nice to be able to turn this on/off on a\nper query basis, seeing as it could have pretty negative impacts on OLTP\nperformance...\n\n- Bucky\n",
"msg_date": "Wed, 11 Oct 2006 10:27:26 -0400",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly? "
},
{
"msg_contents": "Bucky Jordan wrote:\n> \n> Is this along the lines of \"I'm loading a big table and touching every\n> row of data, so I may as well collect some stats along the way\" and \"I\n> know my data contains these statistical properties, but the analyzer\n> wasn't able to figure that out (or maybe can't figure it out efficiently\n> enough)\"?\n> \n> While it seems like this would require more knowledge from the user\n> (e.g. more about their data, how the planner works, and how it uses\n> statistics) this would actually be helpful/required for those who really\n> care about performance. ...\n\nThe user would have to know his data, but he wouldn't need to know how \nthe planner works. While with hints like \"use index X\", he *does* need \nto know how the planner works.\n\nBeing able to give hints about statistical properties of relations and \ntheir relationships seems like a good idea to me. And we can later \nfigure out ways to calculate them automatically.\n\nBTW, in DB2 you can declare a table as volatile, which means that the \ncardinality of the table varies greatly. The planner favors index scans \non volatile tables.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Wed, 11 Oct 2006 15:51:11 +0100",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly?"
},
{
"msg_contents": "Heikki Linnakangas wrote:\n> BTW, in DB2 you can declare a table as volatile, which means that the \n> cardinality of the table varies greatly. The planner favors index scans \n> on volatile tables.\n\nNow that seems like a valuable idea.\n\n-- \n Bruce Momjian [email protected]\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Wed, 11 Oct 2006 10:53:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly?"
},
{
"msg_contents": "Tom,\n\nI'm interested in the problem of cross-column statistics from a\ntheoretical perspective. It would be interesting to sit down and try to\nreason out a useful solution, or at very least to understand the problem\nbetter so I can anticipate when it might come and eat me.\n\n>From my understanding, the main problem is that if PG knows the\nselectivity of n conditions C1,C2,...,Cn then it doesn't know whether\nthe combined selectivity will be C1*C2*...*Cn (conditions are\nindependent) or max(C1,C2,...,Cn) (conditions are strictly dependent),\nor somewhere in the middle. Therefore, row estimates could be orders of\nmagnitude off.\n\nI suppose a common example would be a table with a serial primary key\ncolumn and a timestamp value which is always inserted as\nCURRENT_TIMESTAMP, so the two columns are strongly correlated. If the\nplanner guesses that 1% of the rows of the table will match pk>1000000,\nand 1% of the rows of the table will match timestamp > X, then it would\nbe nice for it to know that if you specify both \"pk>1000000 AND\ntimestamp>X\" that the combined selectivity is still only 1% and not 1% *\n1% = 0.01%.\n\nAs long as I'm sitting down and reasoning about the problem anyway, are\nthere any other types of cases you're aware of where some form of cross-\ncolumn statistics would be useful? In the unlikely event that I\nactually come up with a brilliant and simple solution, I'd at least like\nto make sure that I'm solving the right problem :)\n\nThanks,\nMark Lewis\n\n\n\nOn Tue, 2006-10-10 at 22:38 -0400, Tom Lane wrote:\n> Brian Herlihy <[email protected]> writes:\n> > What would it take for hints to be added to postgres?\n> \n> A *whole lot* more thought and effort than has been expended on the\n> subject to date.\n> \n> Personally I have no use for the idea of \"force the planner to do\n> exactly X given a query of exactly Y\". You don't have exactly Y\n> today, tomorrow, and the day after (if you do, you don't need a\n> hint mechanism at all, you need a mysql-style query cache).\n> IMHO most of the planner mistakes we see that could be fixed via\n> hinting are really statistical estimation errors, and so the right\n> level to be fixing them at is hints about how to estimate the number\n> of rows produced for given conditions. Mind you that's still a plenty\n> hard problem, but you could at least hope that a hint of that form\n> would be useful for more than one query.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n",
"msg_date": "Wed, 11 Oct 2006 08:07:40 -0700",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Simple join optimized badly?"
},
{
"msg_contents": "On Wed, Oct 11, 2006 at 10:27:26AM -0400, Bucky Jordan wrote:\n> Also, I'm guessing this has already come up at some point, but what\n> about allowing PG to do some stat collection during queries? If you're\n> touching a lot of data (such as an import process) wouldn't it be more\n> efficient (and perhaps more accurate) to collect stats then, rather than\n> having to re-scan? It would be nice to be able to turn this on/off on a\n> per query basis, seeing as it could have pretty negative impacts on OLTP\n> performance...\n\nI suspect that could be highly useful in data warehouse environments\nwhere you're more likely to have to sequential scan a table. It would be\ninteresting to have it so that a sequential scan that will run to\ncompletion also collects stats along the way.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 11 Oct 2006 15:27:58 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Collect stats during seqscan (was: Simple join optimized badly?)"
}
] |
[
{
"msg_contents": "All,\n\nWe are facing few issues while we install Postgres 8.0 in Windows 2000\nJapanese OS. Installer kit name : postgresql-8.0-ja\n\nScenario 1: While installing PostGRE 8.0, we got an logon failure at the end\nof installing the component telling that it failed to produce the process\nfor initdb and also that the user name was not able to be recognized or the\npassword is wrong. After the OK button was clicked the whole process rolled\nback automatically and the PostGRE got uninstalled.\n\nScenario 2: In one of the computers we managed to install the PostGRE 8.0\nbut the database initialization could not be performed. While creating the\ndatabase using the Credb patch we got an error telling that the tables were\nmissing and the connection with the local host failed.\n\t\nScenario 3: For one of the machines the database has also been created but\nonce the system is restarted the PostGRE does not work and we get the same\nerror as in the Scenario2.\n\nPlease shed some light on this. If this question is not relevant to this\ngroup, please redirect us... \n\nThanks and regards,\nRavi\nDISCLAIMER \nThe contents of this e-mail and any attachment(s) are confidential and intended for the \n\nnamed recipient(s) only. It shall not attach any liability on the originator or HCL or its \n\naffiliates. Any views or opinions presented in this email are solely those of the author and \n\nmay not necessarily reflect the opinions of HCL or its affiliates. Any form of reproduction, \n\ndissemination, copying, disclosure, modification, distribution and / or publication of this \n\nmessage without the prior written consent of the author of this e-mail is strictly \n\nprohibited. If you have received this email in error please delete it and notify the sender \n\nimmediately. Before opening any mail and attachments please check them for viruses and \n\ndefect.\n",
"msg_date": "Tue, 10 Oct 2006 16:17:06 +0530",
"msg_from": "\"Ravindran G - TLS, Chennai.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgre 8.0 Installation - Issues"
},
{
"msg_contents": "Moving to -general.\n\nOn Tue, Oct 10, 2006 at 04:17:06PM +0530, Ravindran G - TLS, Chennai. wrote:\n> All,\n> \n> We are facing few issues while we install Postgres 8.0 in Windows 2000\n> Japanese OS. Installer kit name : postgresql-8.0-ja\n \nIs there a reason you're not using 8.1.4? 8.0 was the first windows\nrelease, and as such there's a number of issues that were improved in\n8.1. You should at least be using the latest 8.0 version (8.0.8).\n\n> Scenario 1: While installing PostGRE 8.0, we got an logon failure at the end\n\nBTW, it's PostgreSQL or Postgres. PostGRE doesn't exist...\n\n> of installing the component telling that it failed to produce the process\n> for initdb and also that the user name was not able to be recognized or the\n> password is wrong. After the OK button was clicked the whole process rolled\n> back automatically and the PostGRE got uninstalled.\n \nMake sure that you have the right password for the account that\nPostgreSQL will be running under. I often find it's easiest to just\ndelete that account and let the installer create it for me.\n\n> Scenario 2: In one of the computers we managed to install the PostGRE 8.0\n> but the database initialization could not be performed. While creating the\n> database using the Credb patch we got an error telling that the tables were\n> missing and the connection with the local host failed.\n> \t\n> Scenario 3: For one of the machines the database has also been created but\n> once the system is restarted the PostGRE does not work and we get the same\n> error as in the Scenario2.\n \nThese could be issues surrounding administrator rights. PostgreSQL will\nrefuse to start if the account it's running under has Administrator\nrights.\n\n> Please shed some light on this. If this question is not relevant to this\n> group, please redirect us... \n> \n> Thanks and regards,\n> Ravi\n> DISCLAIMER \n> The contents of this e-mail and any attachment(s) are confidential and intended for the \n> \n> named recipient(s) only. It shall not attach any liability on the originator or HCL or its \n> \n> affiliates. Any views or opinions presented in this email are solely those of the author and \n> \n> may not necessarily reflect the opinions of HCL or its affiliates. Any form of reproduction, \n> \n> dissemination, copying, disclosure, modification, distribution and / or publication of this \n> \n> message without the prior written consent of the author of this e-mail is strictly \n> \n> prohibited. If you have received this email in error please delete it and notify the sender \n> \n> immediately. Before opening any mail and attachments please check them for viruses and \n> \n> defect.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Tue, 10 Oct 2006 09:15:52 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Postgre 8.0 Installation - Issues"
},
{
"msg_contents": "\n>> Scenario 1: While installing PostGRE 8.0, we got an logon failure at the end\n> \n> BTW, it's PostgreSQL or Postgres. PostGRE doesn't exist...\n\nYou know, every time someone brings this up it reminds me of:\n\nAre you Josh or Joshua...\n\nIt doesn't matter people.\n\nJoshua D. Drake\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Tue, 10 Oct 2006 07:28:23 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Postgre 8.0 Installation - Issues"
},
{
"msg_contents": "In response to \"Joshua D. Drake\" <[email protected]>:\n> \n> >> Scenario 1: While installing PostGRE 8.0, we got an logon failure at the end\n> > \n> > BTW, it's PostgreSQL or Postgres. PostGRE doesn't exist...\n> \n> You know, every time someone brings this up it reminds me of:\n> \n> Are you Josh or Joshua...\n> \n> It doesn't matter people.\n\nTo some it does. I've had a number of people ask me whether I want\nBill, William, or Will. The first two are fine, I prefer that the\nthird not be used.\n\nI had an almost-gf once who was introduced to me as Patricia. I asked\nif she went by Pat or Patty. She responded, \"Not if you want to live.\"\nI called her Tricia.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n",
"msg_date": "Tue, 10 Oct 2006 10:34:14 -0400",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Postgre 8.0 Installation - Issues"
},
{
"msg_contents": "\nOn Oct 10, 2006, at 10:34 , Bill Moran wrote:\n\n> I had an almost-gf once...\n\nMe too!\n\n-M\n",
"msg_date": "Tue, 10 Oct 2006 10:39:10 -0400",
"msg_from": "AgentM <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Postgre 8.0 Installation - Issues"
},
{
"msg_contents": "Bill Moran wrote:\n> In response to \"Joshua D. Drake\" <[email protected]>:\n>>>> Scenario 1: While installing PostGRE 8.0, we got an logon failure at the end\n>>> BTW, it's PostgreSQL or Postgres. PostGRE doesn't exist...\n>> You know, every time someone brings this up it reminds me of:\n>>\n>> Are you Josh or Joshua...\n>>\n>> It doesn't matter people.\n> \n> To some it does. I've had a number of people ask me whether I want\n> Bill, William, or Will. The first two are fine, I prefer that the\n> third not be used.\n> \n> I had an almost-gf once who was introduced to me as Patricia. I asked\n> if she went by Pat or Patty. She responded, \"Not if you want to live.\"\n> I called her Tricia.\n\nYou can not compare the intricacies of the woman psyche to that of\nsoftware naming ;).\n\nI get your point but when someone is asking for help, if the first thing\nyou do is correct them on something so minimal that has nothing to do\nwith their problem.... It sends a negative vibe.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Tue, 10 Oct 2006 08:20:26 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Postgre 8.0 Installation - Issues"
},
{
"msg_contents": " \n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Joshua D. Drake\n> Sent: 10 October 2006 15:28\n> To: Jim C. Nasby\n> Cc: Ravindran G - TLS, Chennai.; \n> [email protected]; Hari Krishna D - TLS , Chennai; \n> Sasikala V - TLS , Chennai\n> Subject: Re: [GENERAL] [PERFORM] Postgre 8.0 Installation - Issues\n> \n> \n> >> Scenario 1: While installing PostGRE 8.0, we got an logon \n> failure at the end\n> > \n> > BTW, it's PostgreSQL or Postgres. PostGRE doesn't exist...\n> \n> You know, every time someone brings this up it reminds me of:\n> \n> Are you Josh or Joshua...\n> \n> It doesn't matter people.\n\nThat reminds me Bob - did you see my email about Stefan's pmt account?\n\n:-p\n\n/D\n",
"msg_date": "Tue, 10 Oct 2006 16:26:23 +0100",
"msg_from": "\"Dave Page\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Postgre 8.0 Installation - Issues"
},
{
"msg_contents": "In response to \"Joshua D. Drake\" <[email protected]>:\n\n> Bill Moran wrote:\n> > In response to \"Joshua D. Drake\" <[email protected]>:\n> >>>> Scenario 1: While installing PostGRE 8.0, we got an logon failure at the end\n> >>> BTW, it's PostgreSQL or Postgres. PostGRE doesn't exist...\n> >> You know, every time someone brings this up it reminds me of:\n> >>\n> >> Are you Josh or Joshua...\n> >>\n> >> It doesn't matter people.\n> > \n> > To some it does. I've had a number of people ask me whether I want\n> > Bill, William, or Will. The first two are fine, I prefer that the\n> > third not be used.\n> > \n> > I had an almost-gf once who was introduced to me as Patricia. I asked\n> > if she went by Pat or Patty. She responded, \"Not if you want to live.\"\n> > I called her Tricia.\n> \n> You can not compare the intricacies of the woman psyche to that of\n> software naming ;).\n\n:)\n\n> I get your point but when someone is asking for help, if the first thing\n> you do is correct them on something so minimal that has nothing to do\n> with their problem.... It sends a negative vibe.\n\nI suppose. On many mailing lists that I frequent, the first response a\nnew poster gets is something along the lines of, \"please don't top-post\"\nor \"please fix your email formatting.\"\n\nThese could be taken as \"negative vibe\" and have often been complained\nabout by newbies. I claim that they're an indication that we have some\nactual culture, and that it's a manifestation of the desire to maintain\nthat culture. I find the complaints to be a manifestation of inconsiderate\npeople who don't respect the culture of others.\n\nIt's also possible that I just think about this stuff too much.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n",
"msg_date": "Tue, 10 Oct 2006 11:26:39 -0400",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Postgre 8.0 Installation - Issues"
},
{
"msg_contents": "Dave Page wrote:\n> \n> \n>> -----Original Message-----\n>> From: [email protected] \n>> [mailto:[email protected]] On Behalf Of \n>> Joshua D. Drake\n>> Sent: 10 October 2006 15:28\n>> To: Jim C. Nasby\n>> Cc: Ravindran G - TLS, Chennai.; \n>> [email protected]; Hari Krishna D - TLS , Chennai; \n>> Sasikala V - TLS , Chennai\n>> Subject: Re: [GENERAL] [PERFORM] Postgre 8.0 Installation - Issues\n>>\n>>\n>>>> Scenario 1: While installing PostGRE 8.0, we got an logon \n>> failure at the end\n>>> BTW, it's PostgreSQL or Postgres. PostGRE doesn't exist...\n>> You know, every time someone brings this up it reminds me of:\n>>\n>> Are you Josh or Joshua...\n>>\n>> It doesn't matter people.\n> \n> That reminds me Bob - did you see my email about Stefan's pmt account?\n\n*sigh* how many times must I remind you... it is Bobby not Bob...\n\nAnd, I appear to have missed it.. I will get it setup.\n\nJoshua D. Drake\n\n\n> \n> :-p\n> \n> /D\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Tue, 10 Oct 2006 09:10:41 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Postgre 8.0 Installation - Issues"
},
{
"msg_contents": "\n> It's also possible that I just think about this stuff too much.\n> \nGive this guy a cookie. :)\n\nJoshua D. Drake\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Tue, 10 Oct 2006 09:11:26 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Postgre 8.0 Installation - Issues"
},
{
"msg_contents": " \n\n> -----Original Message-----\n> From: Joshua D. Drake [mailto:[email protected]] \n> Sent: 10 October 2006 17:11\n> To: Dave Page\n> Cc: Jim C. Nasby; [email protected]\n> Subject: Re: [GENERAL] [PERFORM] Postgre 8.0 Installation - Issues\n> \n> > That reminds me Bob - did you see my email about Stefan's \n> pmt account?\n> \n> *sigh* how many times must I remind you... it is Bobby not Bob...\n> \n> And, I appear to have missed it.. I will get it setup.\n\nThanks Rob.\n\n/D\n",
"msg_date": "Tue, 10 Oct 2006 17:23:24 +0100",
"msg_from": "\"Dave Page\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Postgre 8.0 Installation - Issues"
},
{
"msg_contents": "On Oct 10, 2006, at 9:28 AM, Joshua D. Drake wrote:\n>>> Scenario 1: While installing PostGRE 8.0, we got an logon failure \n>>> at the end\n>>\n>> BTW, it's PostgreSQL or Postgres. PostGRE doesn't exist...\n>\n> You know, every time someone brings this up it reminds me of:\n>\n> Are you Josh or Joshua...\n>\n> It doesn't matter people.\n\nOn the other hand Fred, I see about a dozen emails about how our name \ndoesn't matter and not one actually answering Ravindran's question...\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n",
"msg_date": "Tue, 10 Oct 2006 21:00:18 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Postgre 8.0 Installation - Issues"
}
] |
[
{
"msg_contents": "While doing a verbose vacuum, I'm constantly hitting things like:\n\nDETAIL: 3606 dead row versions cannot be removed yet.\n\nI believe this is a problem, because I still do have some empty tables\nrequireing up to 3-400 ms just to check if the table is empty (see\nthread \"slow queue-like empty table\").\n\nIf pg_stat_activity.query_start actually is the start time of the\ntransaction, then we've gotten rid of all the real long-running\ntransactions. Then again, if pg_stat_activity.query_start actually was\nthe start time of the transaction, the attribute would have been called\npg_stat_activity.transaction_start, right?\n\nIs there any way to find the longest running transaction?\n\n",
"msg_date": "Tue, 10 Oct 2006 18:10:27 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "long running transactions"
},
{
"msg_contents": "Tobias Brox <[email protected]> writes:\n> If pg_stat_activity.query_start actually is the start time of the\n> transaction,\n\n... but it isn't.\n\n> Is there any way to find the longest running transaction?\n\nLook in pg_locks to see the lowest-numbered transaction ID --- each\ntransaction will be holding exclusive lock on its own XID. You can\ncorrelate that back to pg_stat_activity via the PID.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 Oct 2006 12:23:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long running transactions "
},
{
"msg_contents": "[Tom Lane - Tue at 12:23:40PM -0400]\n> Look in pg_locks to see the lowest-numbered transaction ID --- each\n> transaction will be holding exclusive lock on its own XID. You can\n> correlate that back to pg_stat_activity via the PID.\n\nThanks a lot for the quick reply - I've already identified one\nlong-running transaction.\n\n(I'm not allowed to order by xid, and not allowed to cast it to\nanything, how come?)\n\n",
"msg_date": "Tue, 10 Oct 2006 18:39:13 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: long running transactions"
},
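A sketch of the lookup Tom describes above, for anyone wanting to try it. It uses age(xid) to work around the lack of ordering operators on the xid type that Tobias mentions; the column names (transaction, procpid, current_query) assume the 8.1-era pg_locks and pg_stat_activity layout and differ in later releases.

```sql
-- Oldest transactions still holding the exclusive lock on their own XID,
-- correlated back to pg_stat_activity through the backend PID.
-- LEFT JOIN because background processes (e.g. autovacuum, as seen later in
-- this thread) may not show up in pg_stat_activity at all.
SELECT l.transaction,
       age(l.transaction) AS xid_age,
       l.pid,
       a.usename,
       a.query_start,
       a.current_query
FROM pg_locks l
LEFT JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE l.locktype = 'transactionid'
  AND l.granted
ORDER BY age(l.transaction) DESC
LIMIT 5;
```

The row at the top is the backend whose transaction has been open longest; its query_start only tells you when the current statement began, which is why the xid-based ordering is needed in the first place.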
{
"msg_contents": "[Tobias Brox - Tue at 06:39:13PM +0200]\n> Thanks a lot for the quick reply - I've already identified one\n> long-running transaction.\n\nbelonging to autovacuum ... how come?\n\n",
"msg_date": "Tue, 10 Oct 2006 18:41:42 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: long running transactions"
},
{
"msg_contents": "Tobias Brox <[email protected]> writes:\n> [Tobias Brox - Tue at 06:39:13PM +0200]\n>> Thanks a lot for the quick reply - I've already identified one\n>> long-running transaction.\n\n> belonging to autovacuum ... how come?\n\nBlocked on someone else's lock, maybe?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 Oct 2006 12:42:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long running transactions "
},
{
"msg_contents": "[Tom Lane - Tue at 12:42:52PM -0400]\n> > belonging to autovacuum ... how come?\n> \n> Blocked on someone else's lock, maybe?\n\nhardly, the autovacuum is the only one having such a low transaction id,\nand also the only one hanging around when waiting a bit and rechecking\nthe pg_locks table.\n",
"msg_date": "Tue, 10 Oct 2006 19:06:39 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: long running transactions"
},
{
"msg_contents": "Tobias Brox <[email protected]> writes:\n>> Blocked on someone else's lock, maybe?\n\n> hardly, the autovacuum is the only one having such a low transaction id,\n> and also the only one hanging around when waiting a bit and rechecking\n> the pg_locks table.\n\nHmph. Is the autovac process actually doing anything (strace would be\nrevealing)? If not, can you attach to the autovac process with gdb and\nget a stack trace to see where it's blocked?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 Oct 2006 13:09:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long running transactions "
},
{
"msg_contents": "[Tom Lane - Tue at 01:09:52PM -0400]\n> Hmph. Is the autovac process actually doing anything (strace would be\n> revealing)? If not, can you attach to the autovac process with gdb and\n> get a stack trace to see where it's blocked?\n\nSorry ... I SIGINT'ed it, and now it's gone :-( I thought reloading the\nconfig would restart autovacuum. Well, whatever, we still have the\nnightly vacuum crontab.\n",
"msg_date": "Tue, 10 Oct 2006 19:13:04 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: long running transactions"
},
{
"msg_contents": "Tobias Brox <[email protected]> writes:\n> [Tom Lane - Tue at 01:09:52PM -0400]\n>> Hmph. Is the autovac process actually doing anything (strace would be\n>> revealing)? If not, can you attach to the autovac process with gdb and\n>> get a stack trace to see where it's blocked?\n\n> Sorry ... I SIGINT'ed it, and now it's gone :-( I thought reloading the\n> config would restart autovacuum.\n\nIt'll come back after the autovacuum naptime. If it gets stuck again,\nplease investigate.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 Oct 2006 13:18:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long running transactions "
},
{
"msg_contents": "[Tom Lane - Tue at 01:18:27PM -0400]\n> >> Hmph. Is the autovac process actually doing anything (strace would be\n> >> revealing)?\n\nIt's definitively doing something; mostly reading, but also some few\nwrites, semops and opens.\n\n> If not, can you attach to the autovac process with gdb and\n> >> get a stack trace to see where it's blocked?\n\n(gdb) bt\n#0 0xb7c599f8 in select () from /lib/tls/libc.so.6\n#1 0x08253c53 in pg_usleep ()\n#2 0x0812ee93 in vacuum_delay_point ()\n#3 0x0812f2a5 in lazy_vacuum_rel ()\n#4 0x0812ef7b in lazy_vacuum_rel ()\n#5 0x0812b4b6 in vac_update_relstats ()\n#6 0x0812a995 in vacuum ()\n#7 0x0818d2ca in autovac_stopped ()\n#8 0x0818ceae in autovac_stopped ()\n#9 0x0818c848 in autovac_stopped ()\n#10 0x0818c4e2 in autovac_start ()\n#11 0x08192c11 in PostmasterMain ()\n#12 0x08191dcf in PostmasterMain ()\n#13 0x081541b1 in main ()\n\n> It'll come back after the autovacuum naptime. If it gets stuck again,\n> please investigate.\n\nIt seems stuck, has had the same transid for a long while, and the\nnumber of undeletable dead rows in our tables are increasing.\n\n",
"msg_date": "Tue, 10 Oct 2006 19:49:55 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: long running transactions"
},
{
"msg_contents": "Tobias Brox <[email protected]> writes:\n> (gdb) bt\n> #0 0xb7c599f8 in select () from /lib/tls/libc.so.6\n> #1 0x08253c53 in pg_usleep ()\n> #2 0x0812ee93 in vacuum_delay_point ()\n> #3 0x0812f2a5 in lazy_vacuum_rel ()\n> #4 0x0812ef7b in lazy_vacuum_rel ()\n> #5 0x0812b4b6 in vac_update_relstats ()\n\nThat doesn't look particularly blocked, and if you are seeing\nreads/writes too, then it's doing something.\n\n> It seems stuck, has had the same transid for a long while, and the\n> number of undeletable dead rows in our tables are increasing.\n\nPerhaps you have overly aggressive vacuum cost delay settings?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 Oct 2006 14:04:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long running transactions "
},
{
"msg_contents": "[Tom Lane - Tue at 02:04:55PM -0400]\n> > It seems stuck, has had the same transid for a long while, and the\n> > number of undeletable dead rows in our tables are increasing.\n> \n> Perhaps you have overly aggressive vacuum cost delay settings?\n\nPerhaps, though I wouldn't expect it to sleep in the middle of a\ntransaction - and also, it really did seem to me that it's doing work\nrather than only sleeping. \n\nThe transaction id for the vacuum process is the same now as when I\nwrote the previous email, and the number of dead unremovable rows have\nincreased steadily.\n\nThe settings in effect are:\n\nautovacuum_vacuum_cost_delay = 500\nautovacuum_vacuum_cost_limit = 200\n\n",
"msg_date": "Tue, 10 Oct 2006 20:19:53 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: long running transactions"
},
{
"msg_contents": "Tobias Brox <[email protected]> writes:\n>> Perhaps you have overly aggressive vacuum cost delay settings?\n\n> autovacuum_vacuum_cost_delay = 500\n> autovacuum_vacuum_cost_limit = 200\n\nWell, that's going to cause it to sleep half a second after every dozen\nor so page I/Os. I think you'd be well advised to reduce the delay.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 Oct 2006 14:26:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long running transactions "
},
{
"msg_contents": "[Tom Lane - Tue at 02:26:53PM -0400]\n> > autovacuum_vacuum_cost_delay = 500\n> > autovacuum_vacuum_cost_limit = 200\n> \n> Well, that's going to cause it to sleep half a second after every dozen\n> or so page I/Os. I think you'd be well advised to reduce the delay.\n\nModified it to 20/250, and it definitively helped. Sorry for the\nlist verbosity; I should have been able to resolve this myself already\nsome 2-3 emails ago :-) I wanted a \"soft\" introduction of autovac in\nproduction, and assumed that it was better to begin with too much sleep\nthan too little! Well, well.\n",
"msg_date": "Tue, 10 Oct 2006 20:43:54 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: long running transactions"
}
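For reference, the settings Tobias ends up with, written as a postgresql.conf fragment. The 20/250 values come straight from this thread and are not a general recommendation; how far the delay can be pushed down depends on how much I/O the system can spare for vacuuming.

```
autovacuum_vacuum_cost_delay = 20     # ms to sleep each time the cost limit is reached
autovacuum_vacuum_cost_limit = 250    # page-cost budget accumulated between sleeps
```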
] |
[
{
"msg_contents": "I currently have a db supporting what is for the most part an OLAP data \nwarehousing application.\n\nOne table (good data) has roughly 120 million rows, divided into roughly \n40 different relational groups (logically by foreign key). Every time I \nadd data to this table, I need to afterwards scrub that group against \nknown \"bad data\" which is housed in a second table that has roughly 21 \nmillion rows.\n\nThe 120 million row good data table is called \"email_record\"\nThe 21 million row bad data table is called \"suppress\"\n\nThere are separate btree indexes on 'email_record_id', 'email_list_id' \nand 'email' on both tables.\n\nEach time I scrub data I pull out anywhere from 1 to 5 million rows from \nthe good table (depending on the size of the group i'm scrubbing) and \ncompare them against the 21 million rows in the 'suppress' table.\n\nSo far I've done this using a temporary staging table that stores only \nthe email_record_id for each row from the relevant group of the good \ntable. I use a plsql function that does roughly the following (i've \nincluded only sql syntax and inserted the constant '9' where i would \nnormally use a variable):\n\nThe characters: email_record_id int8, email varchar(255), email_list_id int8\n-------------------------------------------------------------\n\nCREATE TEMP TABLE temp_list_suppress(email_record_id int8);\n\nINSERT INTO temp_list_suppress\n\tSELECT email_record_id from ONLY email_record er\n\tWHERE email_list_id = 9 AND email IN\n\t(select email from suppress);\n\nCREATE INDEX unique_id_index on temp_list_suppress ( email_record_id );\n\nINSERT INTO er_banned\nSELECT * from ONLY email_record er WHERE EXISTS\n(SELECT 1 from temp_list_suppress ts where er.email_record_id = \nts.email_record_id)';\n\nDELETE FROM ONLY email_record WHERE email_list_id = 9 AND email_record_id IN\n\t(SELECT email_record_id from temp_list_suppress);\n\nTRUNCATE TABLE temp_list_suppress;\nDROP TABLE temp_list_suppress;\n--------------------------------------------------------------\n\nThe performance is dreadful, is there a more efficient way to do this? \nWould I be better off just grabbing * initially from the good table \ninstead of just the id to avoid more sequential searches later? Here are \nmy configs:\n\nDebian\nPostgres 8.1.4\ndual zeon\nram: 4 gigs\nraid 5\n\n# - Memory -\nshared_buffers = 3000\nwork_mem = 92768\nmaintenance_work_mem = 128384\n\nautovacuum is turned off, and the db is annalyzed and vacuumed regularly.\n\n\nRegards,\nBrendan\n",
"msg_date": "Tue, 10 Oct 2006 15:44:23 -0600",
"msg_from": "Brendan Curran <[email protected]>",
"msg_from_op": true,
"msg_subject": "Scrub one large table against another"
},
{
"msg_contents": "Brendan Curran <[email protected]> writes:\n> CREATE TEMP TABLE temp_list_suppress(email_record_id int8);\n\n> INSERT INTO temp_list_suppress\n> \tSELECT email_record_id from ONLY email_record er\n> \tWHERE email_list_id = 9 AND email IN\n> \t(select email from suppress);\n\n> CREATE INDEX unique_id_index on temp_list_suppress ( email_record_id );\n\n> INSERT INTO er_banned\n> SELECT * from ONLY email_record er WHERE EXISTS\n> (SELECT 1 from temp_list_suppress ts where er.email_record_id = \n> ts.email_record_id)';\n\n> DELETE FROM ONLY email_record WHERE email_list_id = 9 AND email_record_id IN\n> \t(SELECT email_record_id from temp_list_suppress);\n\n> TRUNCATE TABLE temp_list_suppress;\n> DROP TABLE temp_list_suppress;\n\n> The performance is dreadful, is there a more efficient way to do this? \n\nHave you tried doing EXPLAIN ANALYZE of each of the INSERT/DELETE steps?\nIf you don't even know which part is slow, it's hard to improve.\n\nIt would probably help to do an \"ANALYZE temp_list_suppress\" right after\npopulating the temp table. As you have it, the second insert and delete\nare being planned with nothing more than a row count (obtained during\nCREATE INDEX) and no stats about distribution of the table contents.\n\nAlso, I'd be inclined to try replacing the EXISTS with an IN test;\nin recent PG versions the planner is generally smarter about IN.\n(Is there a reason why you are doing the INSERT one way and the\nDELETE the other?)\n\nBTW, that TRUNCATE right before the DROP seems quite useless,\nalthough it's not the main source of your problem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 Oct 2006 18:14:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scrub one large table against another "
},
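Putting Tom's suggestions together (ANALYZE the temp table once it is populated, use IN rather than EXISTS for the second insert, and drop the pointless TRUNCATE), the staging sequence from the first post would look roughly like this. Table and column names are the ones from the original message, with the constant 9 standing in for the group id as before:

```sql
CREATE TEMP TABLE temp_list_suppress(email_record_id int8);

INSERT INTO temp_list_suppress
    SELECT email_record_id FROM ONLY email_record
    WHERE email_list_id = 9
      AND email IN (SELECT email FROM suppress);

CREATE INDEX unique_id_index ON temp_list_suppress (email_record_id);
ANALYZE temp_list_suppress;   -- give the planner real stats, not just a row count

INSERT INTO er_banned
    SELECT * FROM ONLY email_record er
    WHERE er.email_record_id IN (SELECT email_record_id FROM temp_list_suppress);

DELETE FROM ONLY email_record
    WHERE email_list_id = 9
      AND email_record_id IN (SELECT email_record_id FROM temp_list_suppress);

DROP TABLE temp_list_suppress;
```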
{
"msg_contents": "\n\nTom Lane wrote:\n> Brendan Curran <[email protected]> writes:\n>> CREATE TEMP TABLE temp_list_suppress(email_record_id int8);\n> \n>> INSERT INTO temp_list_suppress\n>> \tSELECT email_record_id from ONLY email_record er\n>> \tWHERE email_list_id = 9 AND email IN\n>> \t(select email from suppress);\n> \n>> CREATE INDEX unique_id_index on temp_list_suppress ( email_record_id );\n> \n>> INSERT INTO er_banned\n>> SELECT * from ONLY email_record er WHERE EXISTS\n>> (SELECT 1 from temp_list_suppress ts where er.email_record_id = \n>> ts.email_record_id)';\n> \n>> DELETE FROM ONLY email_record WHERE email_list_id = 9 AND email_record_id IN\n>> \t(SELECT email_record_id from temp_list_suppress);\n> \n>> TRUNCATE TABLE temp_list_suppress;\n>> DROP TABLE temp_list_suppress;\n> \n>> The performance is dreadful, is there a more efficient way to do this? \n> \n> Have you tried doing EXPLAIN ANALYZE of each of the INSERT/DELETE steps?\n> If you don't even know which part is slow, it's hard to improve.\n\nFIRST INSERT (Just the select is explained):\nHash Join (cost=8359220.68..9129843.00 rows=800912 width=32)\n Hash Cond: ((\"outer\".email)::text = (\"inner\".email)::text)\n -> Unique (cost=4414093.19..4522324.49 rows=21646260 width=25)\n -> Sort (cost=4414093.19..4468208.84 rows=21646260 width=25)\n Sort Key: suppress.email\n -> Seq Scan on suppress (cost=0.00..393024.60 \nrows=21646260 width=25)\n -> Hash (cost=3899868.47..3899868.47 rows=4606808 width=32)\n -> Bitmap Heap Scan on email_record er \n(cost=38464.83..3899868.47 rows=4606808 width=32)\n Recheck Cond: (email_list_id = 13)\n -> Bitmap Index Scan on list (cost=0.00..38464.83 \nrows=4606808 width=0)\n Index Cond: (email_list_id = 13)\n\nSECOND INSERT (Using EXISTS):\nSeq Scan on email_record er (cost=0.00..381554175.29 rows=62254164 \nwidth=1863)\n Filter: (subplan)\n SubPlan\n -> Index Scan using er_primeq_pk on er_primeq eq (cost=0.00..3.03 \nrows=1 width=0)\n Index Cond: ($0 = email_record_id)\n\nSECOND INSERT (Using IN):\nNested Loop (cost=26545.94..2627497.28 rows=27134 width=1863)\n -> HashAggregate (cost=26545.94..33879.49 rows=733355 width=8)\n -> Seq Scan on er_primeq (cost=0.00..24712.55 rows=733355 \nwidth=8)\n -> Index Scan using email_record_pkey on email_record er \n(cost=0.00..3.52 rows=1 width=1863)\n Index Cond: (er.email_record_id = \"outer\".email_record_id)\n Filter: (email_list_id = 13)\n\nDELETE\nNested Loop (cost=26545.94..2627497.28 rows=50846 width=6)\n -> HashAggregate (cost=26545.94..33879.49 rows=733355 width=8)\n -> Seq Scan on er_primeq (cost=0.00..24712.55 rows=733355 \nwidth=8)\n -> Index Scan using email_record_pkey on email_record \n(cost=0.00..3.52 rows=1 width=14)\n Index Cond: (email_record.email_record_id = \n\"outer\".email_record_id)\n Filter: (email_list_id = 9)\n\n\nTo get this explain data I used a sample \"temp_suppress\" table that \ncontained about 700k rows and was indexed but not analyzed...\n\n\n> \n> It would probably help to do an \"ANALYZE temp_list_suppress\" right after\n> populating the temp table. 
As you have it, the second insert and delete\n> are being planned with nothing more than a row count (obtained during\n> CREATE INDEX) and no stats about distribution of the table contents.\n> \n> Also, I'd be inclined to try replacing the EXISTS with an IN test;\n> in recent PG versions the planner is generally smarter about IN.\n> (Is there a reason why you are doing the INSERT one way and the\n> DELETE the other?)\n> \n> BTW, that TRUNCATE right before the DROP seems quite useless,\n> although it's not the main source of your problem.\n> \n> \t\t\tregards, tom lane\n> \n",
"msg_date": "Tue, 10 Oct 2006 16:37:00 -0600",
"msg_from": "Brendan Curran <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Scrub one large table against another"
},
{
"msg_contents": "Brendan Curran <[email protected]> writes:\n> Tom Lane wrote:\n>> Have you tried doing EXPLAIN ANALYZE of each of the INSERT/DELETE steps?\n\n> FIRST INSERT (Just the select is explained):\n\nEXPLAIN ANALYZE, please, not just EXPLAIN.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 Oct 2006 18:47:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scrub one large table against another "
},
{
"msg_contents": "\n\nTom Lane wrote:\n> Brendan Curran <[email protected]> writes:\n>> Tom Lane wrote:\n>>> Have you tried doing EXPLAIN ANALYZE of each of the INSERT/DELETE steps?\n> \n>> FIRST INSERT (Just the select is explained):\n> \n> EXPLAIN ANALYZE, please, not just EXPLAIN.\n> \n> \t\t\tregards, tom lane\n> \n\nSorry, here is the EXPLAIN ANALYZE output of that first SELECT\n\nEXPLAIN ANALYZE SELECT email_record_id from ONLY email_record er\n\tWHERE email_list_id = 13 AND email IN\n\t(select email from suppress);\n\nHash Join (cost=8359220.68..9129843.00 rows=800912 width=8) (actual \ntime=2121601.603..2121601.603 rows=0 loops=1)\n Hash Cond: ((\"outer\".email)::text = (\"inner\".email)::text)\n -> Unique (cost=4414093.19..4522324.49 rows=21646260 width=25) \n(actual time=1165955.907..1434439.731 rows=21646261 loops=1)\n -> Sort (cost=4414093.19..4468208.84 rows=21646260 width=25) \n(actual time=1165955.903..1384667.715 rows=21646261 loops=1)\n Sort Key: suppress.email\n -> Seq Scan on suppress (cost=0.00..393024.60 \nrows=21646260 width=25) (actual time=37.784..609848.551 rows=21646261 \nloops=1)\n -> Hash (cost=3899868.47..3899868.47 rows=4606808 width=32) (actual \ntime=554522.983..554522.983 rows=3245336 loops=1)\n -> Bitmap Heap Scan on email_record er \n(cost=38464.83..3899868.47 rows=4606808 width=32) (actual \ntime=275640.435..541342.727 rows=3245336 loops=1)\n Recheck Cond: (email_list_id = 13)\n -> Bitmap Index Scan on list (cost=0.00..38464.83 \nrows=4606808 width=0) (actual time=275102.037..275102.037 rows=5172979 \nloops=1)\n Index Cond: (email_list_id = 13)\nTotal runtime: 2122693.864 ms\n\n\nSo much time is being spent in the Unique and Sort leaves... I would \nthink that it wouldn't need to do the unique portion, since there is no \nDISTINCT clause...\n",
"msg_date": "Tue, 10 Oct 2006 17:46:18 -0600",
"msg_from": "Brendan Curran <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Scrub one large table against another"
},
{
"msg_contents": "On Tue, Oct 10, 2006 at 05:46:18PM -0600, Brendan Curran wrote:\n> \n> \n> Tom Lane wrote:\n> >Brendan Curran <[email protected]> writes:\n> >>Tom Lane wrote:\n> >>>Have you tried doing EXPLAIN ANALYZE of each of the INSERT/DELETE steps?\n> >\n> >>FIRST INSERT (Just the select is explained):\n> >\n> >EXPLAIN ANALYZE, please, not just EXPLAIN.\n> >\n> >\t\t\tregards, tom lane\n> >\n> \n> Sorry, here is the EXPLAIN ANALYZE output of that first SELECT\n> \n> EXPLAIN ANALYZE SELECT email_record_id from ONLY email_record er\n> \tWHERE email_list_id = 13 AND email IN\n> \t(select email from suppress);\n> \n> Hash Join (cost=8359220.68..9129843.00 rows=800912 width=8) (actual \n> time=2121601.603..2121601.603 rows=0 loops=1)\n> Hash Cond: ((\"outer\".email)::text = (\"inner\".email)::text)\n> -> Unique (cost=4414093.19..4522324.49 rows=21646260 width=25) \n> (actual time=1165955.907..1434439.731 rows=21646261 loops=1)\n> -> Sort (cost=4414093.19..4468208.84 rows=21646260 width=25) \n> (actual time=1165955.903..1384667.715 rows=21646261 loops=1)\n> Sort Key: suppress.email\n> -> Seq Scan on suppress (cost=0.00..393024.60 \n> rows=21646260 width=25) (actual time=37.784..609848.551 rows=21646261 \n> loops=1)\n> -> Hash (cost=3899868.47..3899868.47 rows=4606808 width=32) (actual \n> time=554522.983..554522.983 rows=3245336 loops=1)\n> -> Bitmap Heap Scan on email_record er \n> (cost=38464.83..3899868.47 rows=4606808 width=32) (actual \n> time=275640.435..541342.727 rows=3245336 loops=1)\n> Recheck Cond: (email_list_id = 13)\n> -> Bitmap Index Scan on list (cost=0.00..38464.83 \n> rows=4606808 width=0) (actual time=275102.037..275102.037 rows=5172979 \n> loops=1)\n> Index Cond: (email_list_id = 13)\n> Total runtime: 2122693.864 ms\n> \n> \n> So much time is being spent in the Unique and Sort leaves... I would \n> think that it wouldn't need to do the unique portion, since there is no \n> DISTINCT clause...\n\nI think that's coming about because of the IN. Try a simple join\ninstead...\n\nSELECT email_record_id FROM ONLY email_record er JOIN suppress s USING\n(email) WHERE er.email_list_id = 13;\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Tue, 10 Oct 2006 20:03:02 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scrub one large table against another"
},
{
"msg_contents": "Brendan Curran <[email protected]> writes:\n> So much time is being spent in the Unique and Sort leaves... I would \n> think that it wouldn't need to do the unique portion, since there is no \n> DISTINCT clause...\n\nThere's nothing in that query suggesting that suppress.email is unique.\nIf you know that it is, try using a plain join instead of an IN.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 Oct 2006 22:50:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scrub one large table against another "
},
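For the archives, the same scrub written with plain joins in the spirit of the last two replies. Tom's caveat applies: this is only equivalent to the IN form if suppress.email is unique, since duplicate suppress rows would make the INSERT step copy the same email_record row more than once.

```sql
-- Ban, then delete, the matching rows for one group (13 here) using explicit joins.
INSERT INTO er_banned
SELECT er.*
FROM ONLY email_record er
JOIN suppress s USING (email)
WHERE er.email_list_id = 13;

-- DELETE ... USING is the join form of the delete (available since 8.1).
DELETE FROM ONLY email_record
USING suppress s
WHERE email_record.email = s.email
  AND email_record.email_list_id = 13;
```

If suppress.email cannot be guaranteed unique, keep the IN form (or add DISTINCT) for the INSERT step; the DELETE is unaffected by duplicates.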
{
"msg_contents": "Tom Lane wrote:\n> Brendan Curran <[email protected]> writes:\n>> So much time is being spent in the Unique and Sort leaves... I would \n>> think that it wouldn't need to do the unique portion, since there is no \n>> DISTINCT clause...\n> \n> There's nothing in that query suggesting that suppress.email is unique.\n> If you know that it is, try using a plain join instead of an IN.\n> \n> \t\t\tregards, tom lane\n> \n\n\nInterestingly, and thank you to Tom and Jim, the explicit JOIN improved \nperformance tremendously (RESULTS BELOW). I converted the entire query \nto use explicit joins instead of IN and EXISTS and discovered acceptable \nperformance. I think the next place to go from here is RAID1/RAID10 and \npossibly partitioning my large table (Welcome to DDL insanity, right?).\n\nI have to add that I'm a little surprised the documentation is so \ngenerous to IN and EXISTS. Is there something amiss in my configuration \nthat prevents them from performing correctly? If not, I can't imagine a \ntime when IN or EXISTS would be more performant than an explicit JOIN...\n\nAdditionally, I manually scrub for duplicates at the group level in the \nemail_record table to keep my records unique. I would like to use a \nunique constraint, but have found that batching in JDBC is impossible \ndue to irrecoverable errors even when using BEFORE INSERT triggers to \njust return NULL if a record exists already. Has anyone got an elegant \nsolution for the 'add only if not exists already' problem similar to \nMSSQL's MERGE command?\n\nJust one more thing... I have found that maintaining a btree index on a \nvarchar(255) value is extremely expensive on insert/update/delete. It is \nunfortunately necessary for me to maintain this index for queries and \nreports so I am transitioning to using an unindexed staging table to \nimport data into before merging it with the larger table. All the docs \nand posts recommend is to drop the index, import your data, and then \ncreate the index again. This is untenable on a daily / bi-weekly basis. \nIs there a more elegant solution to this indexing problem?\n\nThank you for all of your help!\n\nEXPLAIN ANALYZE result comparison...\n\n1. EXPLAIN ANALYZE SELECT email_record_id from ONLY email_record er\n WHERE email_list_id = 13 AND email IN\n (select email from suppress);\n\nHash Join (cost=8359220.68..9129843.00 rows=800912 width=8) (actual \ntime=2121601.603..2121601.603 rows=0 loops=1)\n Hash Cond: ((\"outer\".email)::text = (\"inner\".email)::text)\n -> Unique (cost=4414093.19..4522324.49 rows=21646260 width=25) \n(actual time=1165955.907..1434439.731 rows=21646261 loops=1)\n -> Sort (cost=4414093.19..4468208.84 rows=21646260 width=25) \n(actual time=1165955.903..1384667.715 rows=21646261 loops=1)\n Sort Key: suppress.email\n -> Seq Scan on suppress (cost=0.00..393024.60 \nrows=21646260 width=25) (actual time=37.784..609848.551 rows=21646261 \nloops=1)\n -> Hash (cost=3899868.47..3899868.47 rows=4606808 width=32) (actual \ntime=554522.983..554522.983 rows=3245336 loops=1)\n -> Bitmap Heap Scan on email_record er \n(cost=38464.83..3899868.47 rows=4606808 width=32) (actual \ntime=275640.435..541342.727 rows=3245336 loops=1)\n Recheck Cond: (email_list_id = 13)\n -> Bitmap Index Scan on list (cost=0.00..38464.83 \nrows=4606808 width=0) (actual time=275102.037..275102.037 rows=5172979 \nloops=1)\n Index Cond: (email_list_id = 13)\nTotal runtime: 2,122,693.864 ms\n--------------------------------------------------------\n\n2. 
EXPLAIN ANALYZE SELECT email_record_id FROM ONLY email_record er JOIN \nsuppress s USING\n(email) WHERE er.email_list_id = 13;\n\nHash Join (cost=3945127.49..5000543.11 rows=800912 width=8) (actual \ntime=808874.088..808874.088 rows=0 loops=1)\n Hash Cond: ((\"outer\".email)::text = (\"inner\".email)::text)\n -> Seq Scan on suppress s (cost=0.00..393024.60 rows=21646260 \nwidth=25) (actual time=661.518..216933.399 rows=21646261 loops=1)\n -> Hash (cost=3899868.47..3899868.47 rows=4606808 width=32) (actual \ntime=494294.932..494294.932 rows=3245336 loops=1)\n -> Bitmap Heap Scan on email_record er \n(cost=38464.83..3899868.47 rows=4606808 width=32) (actual \ntime=242198.226..485942.542 rows=3245336 loops=1)\n Recheck Cond: (email_list_id = 13)\n -> Bitmap Index Scan on list (cost=0.00..38464.83 \nrows=4606808 width=0) (actual time=241769.786..241769.786 rows=5172979 \nloops=1)\n Index Cond: (email_list_id = 13)\nTotal runtime: 808,884.387 ms\n\n\n",
"msg_date": "Wed, 11 Oct 2006 10:53:41 -0600",
"msg_from": "Brendan Curran <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Scrub one large table against another"
},
{
"msg_contents": "> \n> What prevents you from using an aggregate function?\n> \n\nI guess I could actually obtain the results in an aggregate function and use those to maintain a \nsummary table. There is a web view that requires 'as accurate as possible' numbers to be queried per \ngroup (all 40 groups are displayed on the same page) and so constant aggregates over the entire \ntable would be a nightmare.\n\n> Probably not 2x, but better performance than now. You probably don't \n> want RAID 1, depending on your setup, many list member swear by RAID 10. \n> Of course, your setup will depend on how much money you have to burn. \n> That said, RAID 1 testing will allow you to determine the upper bounds \n> of your hardware. Some folks say they get better performance with WAL \n> off the main RAID, some keep it on. Only testing will allow you to\n> determine what is optimal. \n\nI will have to try moving WAL off those raid spindles, I have seen the posts regarding this.\n\n> In the meantime, you need to identify the \n> bottleneck of your operation. You should collect vmstat and iostat \n> statistics for your present setup. Good luck!\n> \n\nI have to confess that I am a bit of a novice with vmstat. Below is a sample of my vmstat output \nwhile running two scrubbing queries simultaneously:\n\nmachine:/dir# vmstat -S M 2\nprocs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----\n r b swpd free buff cache si so bi bo in cs us sy id wa\n 0 1 4 117 15 2962 0 0 100 25 96 107 2 0 86 11\n 0 3 4 117 15 2962 0 0 4884 1860 415 841 18 1 52 29\n 1 1 4 115 15 2964 0 0 2246 1222 462 394 8 0 51 41\n 0 2 4 114 14 2967 0 0 3932 2238 485 613 12 0 62 25\n 1 1 4 115 13 2966 0 0 3004 1684 507 609 8 0 60 31\n 0 3 4 116 13 2965 0 0 4688 4000 531 613 15 1 52 33\n 1 1 4 117 13 2964 0 0 2890 268 433 441 9 1 58 32\n 0 1 4 114 13 2968 0 0 2802 4708 650 501 8 1 64 28\n 0 2 4 114 13 2968 0 0 4850 1696 490 574 15 1 57 27\n 0 2 4 116 13 2966 0 0 4300 3062 540 520 13 1 61 26\n 0 2 4 115 13 2966 0 0 3292 3608 549 455 10 1 65 24\n 0 3 4 115 13 2966 0 0 4856 2098 505 564 15 1 59 26\n 0 3 4 115 13 2966 0 0 1608 2314 447 413 4 0 63 33\n 0 3 4 116 13 2966 0 0 6206 1664 442 649 18 1 52 29\n 1 1 4 115 13 2966 0 0 1886 1262 464 412 5 0 60 35\n 0 3 4 118 13 2964 0 0 2510 4138 571 493 7 1 64 28\n 1 1 4 117 13 2964 0 0 1632 56 325 373 5 0 53 42\n 0 3 4 116 13 2965 0 0 5358 3510 504 649 14 1 59 26\n 1 1 4 118 13 2964 0 0 2814 920 447 403 8 0 63 29\n\nI know that wa is the time spent waiting on IO, but I lack a benchmark to determine just what I \nshould expect from my hardware (three 146GB U320 SCSI 10k drives in raid 5 on a Dell PERC4ei PE2850 \ncontroller). Those drives are dedicated completely to a /data mount that contains only \n/data/postgresql/8.1/main. I have another two drives in raid 1 for everything else (OS, apps, etc.). \nCan you give me any pointers based on that vmstat output?\n\nRegards and Thanks,\nBrendan\n",
"msg_date": "Wed, 11 Oct 2006 11:25:02 -0600",
"msg_from": "Brendan Curran <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Scrub one large table against another (vmstat output)"
},
{
"msg_contents": "On Wed, Oct 11, 2006 at 10:53:41AM -0600, Brendan Curran wrote:\n> Interestingly, and thank you to Tom and Jim, the explicit JOIN improved \n> performance tremendously (RESULTS BELOW). I converted the entire query \n> to use explicit joins instead of IN and EXISTS and discovered acceptable \n> performance. I think the next place to go from here is RAID1/RAID10 and \n> possibly partitioning my large table (Welcome to DDL insanity, right?).\n \nRemember that partitioning is not a magic bullet: it only helps in cases\nwhere you need to keep a lot of data, but normally only access a small\nportion of it.\n\nWAL on RAID5 without a really good controller will probably kill you.\nData being there isn't too much better. You'll probably be better with\neither 1 raid 10 or 2 raid 1s.\n\n> I have to add that I'm a little surprised the documentation is so \n> generous to IN and EXISTS. Is there something amiss in my configuration \n> that prevents them from performing correctly? If not, I can't imagine a \n> time when IN or EXISTS would be more performant than an explicit JOIN...\n \nWell, IN != EXISTS != JOIN. Exists just stops as soon as it finds a\nrecord. For some cases, it's equivalent to IN, but not all. IN has to\nde-duplicate it's list in some fashion. For small IN lists, you can do\nthis with an OR, but at some point you need to switch to an actual\nunique (actually, I suspect the difference in PostgreSQL just depends on\nif you passed values into IN or a subquery). A join on the other hand\ndoesn't worry about duplicates at all. There may be some brains in the\nplanner that realize if a subquery will return a unique set (ie: you're\nquerying on a primary key).\n\n> Additionally, I manually scrub for duplicates at the group level in the \n> email_record table to keep my records unique. I would like to use a \n> unique constraint, but have found that batching in JDBC is impossible \n> due to irrecoverable errors even when using BEFORE INSERT triggers to \n> just return NULL if a record exists already. Has anyone got an elegant \n> solution for the 'add only if not exists already' problem similar to \n> MSSQL's MERGE command?\n \nYour best bet (until we have something akin to MERGE, hopefully in 8.3)\nis to load the data into a TEMP table and de-dupe it from there.\nDepending on what you're doing you might want to delete it, or update an\nID column in the temp table. Note that assumes that only one process is\nloading data at any time, if that's not the case you have to get\ntrickier.\n\n> Just one more thing... I have found that maintaining a btree index on a \n> varchar(255) value is extremely expensive on insert/update/delete. It is \n> unfortunately necessary for me to maintain this index for queries and \n> reports so I am transitioning to using an unindexed staging table to \n> import data into before merging it with the larger table. All the docs \n> and posts recommend is to drop the index, import your data, and then \n> create the index again. This is untenable on a daily / bi-weekly basis. \n> Is there a more elegant solution to this indexing problem?\n\nYou might be happier with tsearch than a regular index.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 11 Oct 2006 15:52:52 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scrub one large table against another"
},
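A rough sketch of the staging-table approach described above, until something MERGE-like exists. It reuses the table names from this thread, but the COPY path and the (email, email_list_id) "already exists" rule are invented placeholders; substitute whatever actually defines a duplicate. As noted, it also assumes only one process loads data at a time.

```sql
BEGIN;

-- Stage the raw batch; the column layout is copied from the target table.
CREATE TEMP TABLE staging (LIKE email_record);
COPY staging FROM '/path/to/batch.csv' WITH CSV;   -- hypothetical input file

-- Add only the rows that are not in the target yet.  If the feed itself can
-- contain duplicates, de-dupe the staging table first.
INSERT INTO email_record
SELECT s.*
FROM staging s
WHERE NOT EXISTS (SELECT 1 FROM email_record er
                  WHERE er.email = s.email
                    AND er.email_list_id = s.email_list_id);

DROP TABLE staging;
COMMIT;
```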
{
"msg_contents": "Hi, Brendan,\n\nBrendan Curran wrote:\n>> What prevents you from using an aggregate function?\n> \n> I guess I could actually obtain the results in an aggregate function and\n> use those to maintain a summary table. There is a web view that requires\n> 'as accurate as possible' numbers to be queried per group (all 40 groups\n> are displayed on the same page) and so constant aggregates over the\n> entire table would be a nightmare.\n\nThat sounds just like a case for GROUP BY and a materialized view.\n\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in Europe! www.ffii.org\nwww.nosoftwarepatents.org\n",
"msg_date": "Thu, 12 Oct 2006 08:55:35 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scrub one large table against another (vmstat output)"
},
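There is no built-in materialized view in 8.1, but a hand-maintained summary table in the spirit of Markus' suggestion can be quite small. The table and column names below are invented for illustration; the idea is to refresh it once per import/scrub run instead of aggregating the 120-million-row table for every page view.

```sql
CREATE TABLE email_record_counts (
    email_list_id int8 PRIMARY KEY,
    n_records     bigint NOT NULL
);

-- Refresh after each import or scrub run:
BEGIN;
DELETE FROM email_record_counts;
INSERT INTO email_record_counts (email_list_id, n_records)
SELECT email_list_id, count(*)
FROM ONLY email_record
GROUP BY email_list_id;
COMMIT;
```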
{
"msg_contents": "\n> Well, IN != EXISTS != JOIN. Exists just stops as soon as it finds a\n> record. For some cases, it's equivalent to IN, but not all. IN has to\n> de-duplicate it's list in some fashion. For small IN lists, you can do\n> this with an OR, but at some point you need to switch to an actual\n> unique (actually, I suspect the difference in PostgreSQL just depends on\n> if you passed values into IN or a subquery). A join on the other hand\n> doesn't worry about duplicates at all. There may be some brains in the\n> planner that realize if a subquery will return a unique set (ie: you're\n> querying on a primary key).\n> \n\nI agree, and it makes sense now that I consider it that IN would force the planner to implement some \nform of unique check - possibly leveraging a PK or unique index if one is already available. Maybe \nI'll tack up a note to the online documentation letting people know so that it's a little more \nexplicitly clear that when you choose IN on data that isn't explicitly unique (to the planner i.e. \npost-analyze) you get the baggage of a forced unique whether you need it or not. Or perhaps someone \nthat knows the internals of the planner a little better than me should put some info up regarding that?\n\n> \n>> Just one more thing... I have found that maintaining a btree index on a \n>> varchar(255) value is extremely expensive on insert/update/delete. It is \n>> unfortunately necessary for me to maintain this index for queries and \n>> reports so I am transitioning to using an unindexed staging table to \n>> import data into before merging it with the larger table. All the docs \n>> and posts recommend is to drop the index, import your data, and then \n>> create the index again. This is untenable on a daily / bi-weekly basis. \n>> Is there a more elegant solution to this indexing problem?\n> \n> You might be happier with tsearch than a regular index.\n\nThanks, I'll look into using tsearch2 as a possibility. From what I've seen so far it would add \nquite a bit of complexity (necessary updates after inserts, proprietary query syntax that might \nrequire a large amount of specialization from client apps) but in the end the overhead may be less \nthan that of maintaining the btree.\n\nThanks and Regards,\nB\n",
"msg_date": "Thu, 12 Oct 2006 12:05:04 -0600",
"msg_from": "Brendan Curran <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Scrub one large table against another"
},
{
"msg_contents": "Brendan Curran <[email protected]> writes:\n> I'll tack up a note to the online documentation letting people know so\n> that it's a little more explicitly clear that when you choose IN on\n> data that isn't explicitly unique (to the planner i.e. post-analyze)\n> you get the baggage of a forced unique whether you need it or not. Or\n> perhaps someone that knows the internals of the planner a little\n> better than me should put some info up regarding that?\n\nYou get a forced unique step, period --- the planner doesn't try to\nshortcut on the basis of noticing a relevant unique constraint.\nWe have some plan techniques that might look like they are not checking\nuniqueness (eg, an \"IN Join\") but they really are.\n\nThis is an example of what I was talking about just a minute ago, about\nnot wanting to rely on constraints that could go away while the plan is\nstill potentially usable. It's certainly something that we should look\nat adding as soon as the plan-invalidation infrastructure is there to\nmake it safe to do.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Oct 2006 14:25:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scrub one large table against another "
}
] |
[
{
"msg_contents": "\n\n\n Hello.\n \n Simply jumping on the bandwagon, just my 2 cents:\n \n why not just like in some other (commercial) databases:\n \n a statement to say: use index ............\n \n I know this is against all though but if even the big ones can not resist\n the pressure of their users, why not?\n \n Henk Sanders\n \n> > -----Oorspronkelijk bericht-----\n> > Van: [email protected]\n> > [mailto:[email protected]]Namens Bucky Jordan\n> > Verzonden: woensdag 11 oktober 2006 16:27\n> > Aan: Tom Lane; Brian Herlihy\n> > CC: Postgresql Performance\n> > Onderwerp: Re: [PERFORM] Simple join optimized badly? \n> > \n> > \n> > > Brian Herlihy <[email protected]> writes:\n> > > > What would it take for hints to be added to postgres?\n> > > \n> > > A *whole lot* more thought and effort than has been expended on the\n> > > subject to date.\n> > > \n> > > Personally I have no use for the idea of \"force the planner to do\n> > > exactly X given a query of exactly Y\". You don't have exactly Y\n> > > today, tomorrow, and the day after (if you do, you don't need a\n> > > hint mechanism at all, you need a mysql-style query cache).\n> > > IMHO most of the planner mistakes we see that could be fixed via\n> > > hinting are really statistical estimation errors, and so the right\n> > > level to be fixing them at is hints about how to estimate the number\n> > > of rows produced for given conditions. Mind you that's still a plenty\n> > > hard problem, but you could at least hope that a hint of that form\n> > > would be useful for more than one query.\n> > > \n> > \n> > Do I understand correctly that you're suggesting it might not be a bad\n> > idea to allow users to provide statistics?\n> > \n> > Is this along the lines of \"I'm loading a big table and touching every\n> > row of data, so I may as well collect some stats along the way\" and \"I\n> > know my data contains these statistical properties, but the analyzer\n> > wasn't able to figure that out (or maybe can't figure it out efficiently\n> > enough)\"?\n> > \n> > While it seems like this would require more knowledge from the user\n> > (e.g. more about their data, how the planner works, and how it uses\n> > statistics) this would actually be helpful/required for those who really\n> > care about performance. I guess it's the difference between a tool\n> > advanced users can get long term benefit from, or a quick fix that will\n> > probably come back to bite you. I've been pleased with Postgres'\n> > thoughtful design; recently I've been doing some work with MySQL, and\n> > can't say I feel the same way.\n> > \n> > Also, I'm guessing this has already come up at some point, but what\n> > about allowing PG to do some stat collection during queries? If you're\n> > touching a lot of data (such as an import process) wouldn't it be more\n> > efficient (and perhaps more accurate) to collect stats then, rather than\n> > having to re-scan? It would be nice to be able to turn this on/off on a\n> > per query basis, seeing as it could have pretty negative impacts on OLTP\n> > performance...\n> > \n> > - Bucky\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Have you searched our list archives?\n> > \n> > http://archives.postgresql.org\n> > \n",
"msg_date": "Thu, 12 Oct 2006 09:32:51 +0200",
"msg_from": "\"H.J. Sanders\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "FW: Simple join optimized badly? "
},
{
"msg_contents": "H.J. Sanders wrote:\n\n> why not just like in some other (commercial) databases:\n> \n> a statement to say: use index ............\n> \n> I know this is against all though but if even the big ones can not resist\n> the pressure of their users, why not?\n> \n\nYeah - some could not (e.g. Oracle), but some did (e.g. DB2), and it \nseemed (to me anyway) significant DB2's optimizer worked much better \nthan Oracle's last time I used both of them (Oracle 8/9 and DB2 7/8).\n\ncheers\n\nMark\n",
"msg_date": "Thu, 12 Oct 2006 22:59:23 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FW: Simple join optimized badly?"
},
{
"msg_contents": "On Thu, Oct 12, 2006 at 10:59:23PM +1300, Mark Kirkwood wrote:\n> H.J. Sanders wrote:\n> \n> > why not just like in some other (commercial) databases:\n> > \n> > a statement to say: use index ............\n> > \n> > I know this is against all though but if even the big ones can not resist\n> > the pressure of their users, why not?\n> > \n> \n> Yeah - some could not (e.g. Oracle), but some did (e.g. DB2), and it \n> seemed (to me anyway) significant DB2's optimizer worked much better \n> than Oracle's last time I used both of them (Oracle 8/9 and DB2 7/8).\n\nIf someone's going to commit to putting effort into improving the\nplanner then that's wonderful. But I can't recall any significant\nplanner improvements since min/max (which I'd argue was more of a bug\nfix than an improvement). In fact, IIRC it took at least 2 major\nversions to get min/max fixed, and that was a case where it was very\nclear-cut what had to be done.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Thu, 12 Oct 2006 08:52:49 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FW: Simple join optimized badly?"
},
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> If someone's going to commit to putting effort into improving the\n> planner then that's wonderful. But I can't recall any significant\n> planner improvements since min/max (which I'd argue was more of a bug\n> fix than an improvement).\n\nHmph. Apparently I've wasted most of the last five years.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Oct 2006 10:44:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FW: Simple join optimized badly? "
},
{
"msg_contents": "On Thu, Oct 12, 2006 at 10:44:20AM -0400, Tom Lane wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n> > If someone's going to commit to putting effort into improving the\n> > planner then that's wonderful. But I can't recall any significant\n> > planner improvements since min/max (which I'd argue was more of a bug\n> > fix than an improvement).\n> \n> Hmph. Apparently I've wasted most of the last five years.\n\nOk, now that I've actually looked at the release notes, I take that back\nand apologize. But while there's a lot of improvements that have been\nmade, there's still some seriously tough problems that have been talked\nabout for a long time and there's still no \"light at the end of the\ntunnel\", like how to handle multi-column statistics.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Thu, 12 Oct 2006 11:37:47 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FW: Simple join optimized badly?"
},
{
"msg_contents": "On Thu, 2006-10-12 at 09:44, Tom Lane wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n> > If someone's going to commit to putting effort into improving the\n> > planner then that's wonderful. But I can't recall any significant\n> > planner improvements since min/max (which I'd argue was more of a bug\n> > fix than an improvement).\n> \n> Hmph. Apparently I've wasted most of the last five years.\n\nI appreciate the work, and trust me, I've noticed the changes in the\nquery planner over time. \n\nThanks for the hard work, and I'm sure there are plenty of other\nthankful people too.\n",
"msg_date": "Thu, 12 Oct 2006 12:59:42 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FW: Simple join optimized badly?"
},
{
"msg_contents": "Tom Lane wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n>> If someone's going to commit to putting effort into improving the\n>> planner then that's wonderful. But I can't recall any significant\n>> planner improvements since min/max (which I'd argue was more of a bug\n>> fix than an improvement).\n> \n> Hmph. Apparently I've wasted most of the last five years.\n> \n\nIn my opinion your on-going well thought out planner improvements are \n*exactly* the approach we need to keep doing...\n\nCheers\n\nMark\n",
"msg_date": "Fri, 13 Oct 2006 11:07:32 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FW: Simple join optimized badly?"
},
{
"msg_contents": "Jim C. Nasby wrote:\n\n> \n> Ok, now that I've actually looked at the release notes, I take that back\n> and apologize. But while there's a lot of improvements that have been\n> made, there's still some seriously tough problems that have been talked\n> about for a long time and there's still no \"light at the end of the\n> tunnel\", like how to handle multi-column statistics.\n\nYeah - multi-column stats and cost/stats for functions look the the next \n feature additions we need to get going on....\n\nCheers\n\nMark\n",
"msg_date": "Fri, 13 Oct 2006 11:26:34 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FW: Simple join optimized badly?"
}
] |
[
{
"msg_contents": "Posting here instead of hackers since this is where the thread got\nstarted...\n\nThe argument has been made that producing a hints system will be as hard\nas actually fixing the optimizer. There's also been clamoring for an\nactual proposal, so here's one that (I hope) wouldn't be very difficult\nto implemen.\n\nMy goal with this is to keep the coding aspect as simple as possible, so\nthat implementation and maintenance of this isn't a big burden. Towards\nthat end, these hints either tell the planner specifically how to handle\nsome aspect of a query, or they tell it to modify specific cost\nestimates. My hope is that this information could be added to the\ninternal representation of a query without much pain, and that the\nplanner can then use that information when generating plans.\n\nThe syntax these hints is something arbitrary. I'm borrowing Oracle's\nidea of embedding hints in comments, but we can use some other method if\ndesired. Right now I'm more concerned with getting the general idea\nacross.\n\nSince this is such a controversial topic, I've left this at a 'rough\ndraft' stage - it's meant more as a framework for discussion than a\nfinal proposal for implementation.\n\nForcing a Plan\n--------------\nThese hints would outright force the planner to do things a certain way.\n\n... FROM table /* ACCESS {SEQSCAN | [[NO] BITMAP] INDEX index_name} */\n\nThis would force the planner to access table via a seqscan or\nindex_name. For the index case, you can also specify if the access must\nor must not be via a bitmap scan. If neither is specified, the planner\nis free to choose either one.\n\nTheoretically, we could also allow \"ACCESS INDEX\" without an index name,\nwhich would simply enforce that a seqscan not be used, but I'm not sure\nhow useful that would be.\n\n... FROM a JOIN b /* {HASH|NESTED LOOP|MERGE} JOIN */ ON (...)\n... FROM a JOIN b ON (...) /* [HASH|NESTED LOOP|MERGE] JOIN */\n\nForce the specified join mechanism on the join. The first form would not\nenforce a join order, it would only force table b to be joined to the\nrest of the relations using the specified join type. The second form\nwould specify that a joins to b in that order, and optionally specify\nwhat type of join to use.\n\n... GROUP BY ... /* {HASH|SORT} AGGREGATE */\n\nSpecify how aggregation should be handled.\n\nCost Tweaking\n-------------\nIt would also be useful to allow tweaking of planner cost estimates.\nThis would take the general form of\n\nnode operator value\n\nwhere node would be a planner node/hint (ie: ACCESS INDEX), operator\nwould be +, -, *, /, and value would be the amount to change the\nestimate by. So \"ACCESS INDEX my_index / 2\" would tell the planner to\ncut the estimated cost of any index scan on a given table in half.\n\n(I realize the syntax will probably need to change to avoid pain in the\ngrammar code.)\n\nUnlike the hints above that are ment to force a certain behavior on an\noperation, you could potentially have multiple cost hints in a single\nlocation, ie:\n\nFROM a /* HASH JOIN * 1.1 NESTED LOOP JOIN * 2 MERGE JOIN + 5000 */\n JOIN b ON (...) /* NESTED LOOP JOIN - 5000 */\n\nThe first comment block would apply to any joins against a, while the\nsecond one would apply only to joins between a and b. The effects would\nbe cumulative, so this example means that any merge join against a gets\nan added cost of 5000, unless it's a join with b (because +5000 + -5000\n= 0). 
I think you could end up with odd cases if the second form just\nover-rode the first, which is why it should be cummulative.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Thu, 12 Oct 2006 10:14:39 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hints proposal"
},
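To make the proposal concrete, here is what a complete statement might look like under the syntax sketched above. The hint keywords are the proposed ones, not anything PostgreSQL actually parses, and the table, column and index names are invented for illustration. Because the hints live inside comments, the statement is still valid SQL today; the hints would simply be ignored until something implements them.

```sql
SELECT o.order_id, c.name
FROM orders o /* ACCESS INDEX orders_customer_idx  HASH JOIN * 1.2 */
JOIN customers c ON (o.customer_id = c.customer_id) /* NESTED LOOP JOIN */
WHERE c.region = 'EU'
GROUP BY o.order_id, c.name /* HASH AGGREGATE */;
```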
{
"msg_contents": "\nBecause DB2 doesn't like hints, and the fact that they have gotten to a\npoint where they feel they do not need them, I feel we too can get to a\npoint where we don't need them either. The question is whether we can\nget there quickly enough for our userbase.\n\nI perfer attacking the problem at the table definition level, like\nsomething like \"volatile\", or adding to the existing table statistics.\n\n---------------------------------------------------------------------------\n\nJim C. Nasby wrote:\n> Posting here instead of hackers since this is where the thread got\n> started...\n> \n> The argument has been made that producing a hints system will be as hard\n> as actually fixing the optimizer. There's also been clamoring for an\n> actual proposal, so here's one that (I hope) wouldn't be very difficult\n> to implemen.\n> \n> My goal with this is to keep the coding aspect as simple as possible, so\n> that implementation and maintenance of this isn't a big burden. Towards\n> that end, these hints either tell the planner specifically how to handle\n> some aspect of a query, or they tell it to modify specific cost\n> estimates. My hope is that this information could be added to the\n> internal representation of a query without much pain, and that the\n> planner can then use that information when generating plans.\n> \n> The syntax these hints is something arbitrary. I'm borrowing Oracle's\n> idea of embedding hints in comments, but we can use some other method if\n> desired. Right now I'm more concerned with getting the general idea\n> across.\n> \n> Since this is such a controversial topic, I've left this at a 'rough\n> draft' stage - it's meant more as a framework for discussion than a\n> final proposal for implementation.\n> \n> Forcing a Plan\n> --------------\n> These hints would outright force the planner to do things a certain way.\n> \n> ... FROM table /* ACCESS {SEQSCAN | [[NO] BITMAP] INDEX index_name} */\n> \n> This would force the planner to access table via a seqscan or\n> index_name. For the index case, you can also specify if the access must\n> or must not be via a bitmap scan. If neither is specified, the planner\n> is free to choose either one.\n> \n> Theoretically, we could also allow \"ACCESS INDEX\" without an index name,\n> which would simply enforce that a seqscan not be used, but I'm not sure\n> how useful that would be.\n> \n> ... FROM a JOIN b /* {HASH|NESTED LOOP|MERGE} JOIN */ ON (...)\n> ... FROM a JOIN b ON (...) /* [HASH|NESTED LOOP|MERGE] JOIN */\n> \n> Force the specified join mechanism on the join. The first form would not\n> enforce a join order, it would only force table b to be joined to the\n> rest of the relations using the specified join type. The second form\n> would specify that a joins to b in that order, and optionally specify\n> what type of join to use.\n> \n> ... GROUP BY ... /* {HASH|SORT} AGGREGATE */\n> \n> Specify how aggregation should be handled.\n> \n> Cost Tweaking\n> -------------\n> It would also be useful to allow tweaking of planner cost estimates.\n> This would take the general form of\n> \n> node operator value\n> \n> where node would be a planner node/hint (ie: ACCESS INDEX), operator\n> would be +, -, *, /, and value would be the amount to change the\n> estimate by. 
So \"ACCESS INDEX my_index / 2\" would tell the planner to\n> cut the estimated cost of any index scan on a given table in half.\n> \n> (I realize the syntax will probably need to change to avoid pain in the\n> grammar code.)\n> \n> Unlike the hints above that are ment to force a certain behavior on an\n> operation, you could potentially have multiple cost hints in a single\n> location, ie:\n> \n> FROM a /* HASH JOIN * 1.1 NESTED LOOP JOIN * 2 MERGE JOIN + 5000 */\n> JOIN b ON (...) /* NESTED LOOP JOIN - 5000 */\n> \n> The first comment block would apply to any joins against a, while the\n> second one would apply only to joins between a and b. The effects would\n> be cumulative, so this example means that any merge join against a gets\n> an added cost of 5000, unless it's a join with b (because +5000 + -5000\n> = 0). I think you could end up with odd cases if the second form just\n> over-rode the first, which is why it should be cummulative.\n> -- \n> Jim Nasby [email protected]\n> EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n Bruce Momjian [email protected]\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Thu, 12 Oct 2006 11:19:10 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hints proposal"
},
{
"msg_contents": "On 10/12/06, Jim C. Nasby <[email protected]> wrote:\n>\n> Posting here instead of hackers since this is where the thread got\n> started...\n>\n> The argument has been made that producing a hints system will be as hard\n> as actually fixing the optimizer. There's also been clamoring for an\n> actual proposal, so here's one that (I hope) wouldn't be very difficult\n> to implemen.\n>\n> My goal with this is to keep the coding aspect as simple as possible, so\n> that implementation and maintenance of this isn't a big burden. Towards\n> that end, these hints either tell the planner specifically how to handle\n> some aspect of a query, or they tell it to modify specific cost\n> estimates. My hope is that this information could be added to the\n> internal representation of a query without much pain, and that the\n> planner can then use that information when generating plans.\n\n\nI've been following the last thread with a bit of interest. I like the\nproposal. It seems simple and easy to use. What is it about hinting that\nmakes it so easily breakable with new versions? I don't have any experience\nwith Oracle, so I'm not sure how they screwed logic like this up. Hinting\nto use a specific merge or scan seems fairly straight forward; if the query\nrequests to use an index on a join, I don't see how hard it is to go with\nthe suggestion. It will become painfully obvious to the developer if his\nhinting is broken.\n\nOn 10/12/06, Jim C. Nasby <[email protected]> wrote:\nPosting here instead of hackers since this is where the thread gotstarted...The argument has been made that producing a hints system will be as hard\nas actually fixing the optimizer. There's also been clamoring for anactual proposal, so here's one that (I hope) wouldn't be very difficultto implemen.My goal with this is to keep the coding aspect as simple as possible, so\nthat implementation and maintenance of this isn't a big burden. Towardsthat end, these hints either tell the planner specifically how to handlesome aspect of a query, or they tell it to modify specific cost\nestimates. My hope is that this information could be added to theinternal representation of a query without much pain, and that theplanner can then use that information when generating plans.\n \nI've been following the last thread with a bit of interest. I like the proposal. It seems simple and easy to use. What is it about hinting that makes it so easily breakable with new versions? I don't have any experience with Oracle, so I'm not sure how they screwed logic like this up. Hinting to use a specific merge or scan seems fairly straight forward; if the query requests to use an index on a join, I don't see how hard it is to go with the suggestion. It will become painfully obvious to the developer if his hinting is broken.",
"msg_date": "Thu, 12 Oct 2006 09:26:24 -0600",
"msg_from": "\"Joshua Marsh\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hints proposal"
},
{
"msg_contents": "[ This is off-topic for -performance, please continue the thread in\n-hackers ]\n\n\"Jim C. Nasby\" <[email protected]> writes:\n> These hints would outright force the planner to do things a certain way.\n> ... FROM table /* ACCESS {SEQSCAN | [[NO] BITMAP] INDEX index_name} */\n\nThis proposal seems to deliberately ignore every point that has been\nmade *against* doing things that way. It doesn't separate the hints\nfrom the queries, it doesn't focus on fixing the statistical or cost\nmisestimates that are at the heart of the issue, and it takes no account\nof the problem of hints being obsoleted by system improvements.\n\n> It would also be useful to allow tweaking of planner cost estimates.\n> This would take the general form of\n> node operator value\n\nThis is at least focusing on the right sort of thing, although I still\nfind it completely misguided to be attaching hints like this to\nindividual queries.\n\nWhat I would like to see is information *stored in a system catalog*\nthat affects the planner's cost estimates. As an example, the DBA might\nknow that a particular table is touched sufficiently often that it's\nlikely to remain RAM-resident, in which case reducing the page fetch\ncost estimates for just that table would make sense. (BTW, this is\nsomething the planner could in principle know, but we're unlikely to\ndo it anytime soon, for a number of reasons including a desire for plan\nstability.) The other general category of thing I think we need is a\nway to override selectivity estimates for particular forms of WHERE\nclauses.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Oct 2006 11:42:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hints proposal "
},
{
"msg_contents": "Bruce Momjian wrote:\n> Because DB2 doesn't like hints, and the fact that they have gotten to a\n> point where they feel they do not need them, I feel we too can get to a\n> point where we don't need them either. The question is whether we can\n> get there quickly enough for our userbase.\n\nIn all fairness, when I used to work with DB2 we often had to rewrite \nqueries to persuade the planner to choose a different plan. Often it was \nmore of an issue of plan stability; a query would suddenly become \nhorribly slow in production because a table had grown slowly to the \npoint that it chose a different plan than before. Then we had to modify \nthe query again, or manually set the statistics. In extreme cases we had \nto split a query to multiple parts and use temporary tables and move \nlogic to the application to get a query to perform consistently and fast \nenough. I really really missed hints.\n\nBecause DB2 doesn't have MVCC, an accidental table scan is very serious, \nbecause with stricter isolation levels that keeps the whole table locked.\n\nThat said, I really don't like the idea of hints like \"use index X\" \nembedded in a query. I do like the idea of hints that give the planner \nmore information about the data. I don't have a concrete proposal, but \nhere's some examples of hints I'd like to see:\n\n\"table X sometimes has millions of records and sometimes it's empty\"\n\"Expression (table.foo = table2.bar * 2) has selectivity 0.99\"\n\"if foo.bar = 5 then foo.field2 IS NULL\"\n\"Column X is unique\"\n\"function foobar() always returns either 1 or 2, and it returns 2 90% of \nthe time.\"\n\"if it's Monday, then table NEW_ORDERS has a cardinality of 100000, \notherwise 10.\"\n\nBTW: Do we make use of CHECK constraints in the planner? In DB2, that \nwas one nice and clean way of hinting the planner about things. If I \nremember correctly, you could even define CHECK constraints that weren't \nactually checked at run-time, but were used by the planner.\n\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Thu, 12 Oct 2006 16:55:17 +0100",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hints proposal"
},
{
"msg_contents": "On 10/12/06, Tom Lane <[email protected]> wrote:\n> [ This is off-topic for -performance, please continue the thread in\n> -hackers ]\n\n> This proposal seems to deliberately ignore every point that has been\n> made *against* doing things that way. It doesn't separate the hints\n> from the queries, it doesn't focus on fixing the statistical or cost\n> misestimates that are at the heart of the issue, and it takes no account\n> of the problem of hints being obsoleted by system improvements.\n\nwhat about extending the domain system so that we can put in ranges\nthat override the statistics or (imo much more importantly) provide\ninformation when the planner would have to restort to a guess. my case\nfor this is prepared statements with a parameterized limit clause.\n\nprepare foo(l int) as select * from bar limit $1;\n\nmaybe:\ncreate domain foo_lmt as int hint 1; -- probably needs to be fleshed out\nprepare foo(l foolmt) as select * from bar limit $1;\n\nthis says: \"if you have to guess me, please use this\"\n\nwhat I like about this over previous attempts to persuade you is the\ngrammar changes are localized and also imo future proofed. planner can\nignore the hints if they are not appropriate for the oparation.\n\nmerlin\n",
"msg_date": "Thu, 12 Oct 2006 12:22:10 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hints proposal"
},
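The planning gap Merlin describes can be seen today without any new syntax: a prepared statement's plan is built before the LIMIT parameter has a value. The following is only a minimal reproduction, reusing his table name "bar" with made-up contents:

    -- The plan for foo is chosen while $1 is still unknown, so the LIMIT's effect
    -- on row counts and costs rests on a default guess rather than the value 10.
    CREATE TABLE bar (id int PRIMARY KEY, payload text);
    INSERT INTO bar SELECT i, 'x' FROM generate_series(1, 100000) AS s(i);
    ANALYZE bar;
    PREPARE foo(int) AS SELECT * FROM bar ORDER BY id LIMIT $1;
    EXPLAIN EXECUTE foo(10);
    -- For contrast, the same query planned with the literal in view:
    EXPLAIN SELECT * FROM bar ORDER BY id LIMIT 10;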
{
"msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> BTW: Do we make use of CHECK constraints in the planner?\n\nOnly for \"constraint exclusion\", and at the moment that's off by default.\n\nThe gating problem here is that if the planner relies on a CHECK\nconstraint, and then you drop the constraint, the previously generated\nplan might start to silently deliver wrong answers. So I'd like to see\na plan invalidation mechanism in place before we go very far down the\npath of relying on constraints for planning. That's something I'm going\nto try to make happen for 8.3, though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Oct 2006 12:24:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hints proposal "
},
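The planner use of CHECK constraints that Tom refers to is the constraint_exclusion feature, shipped off by default in 8.1. A minimal sketch, with made-up table names:

    -- With constraint_exclusion on, CHECK constraints on child tables let the
    -- planner skip children that cannot possibly satisfy the WHERE clause.
    CREATE TABLE measurement (logdate date, reading int);
    CREATE TABLE measurement_2005
        (CHECK (logdate >= '2005-01-01' AND logdate < '2006-01-01')) INHERITS (measurement);
    CREATE TABLE measurement_2006
        (CHECK (logdate >= '2006-01-01' AND logdate < '2007-01-01')) INHERITS (measurement);
    SET constraint_exclusion = on;
    EXPLAIN SELECT * FROM measurement WHERE logdate = '2006-06-01';
    -- the resulting plan should touch only the parent and measurement_2006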
{
"msg_contents": "On Thu, Oct 12, 2006 at 11:42:32AM -0400, Tom Lane wrote:\n> [ This is off-topic for -performance, please continue the thread in\n> -hackers ]\n> \n> \"Jim C. Nasby\" <[email protected]> writes:\n> > These hints would outright force the planner to do things a certain way.\n> > ... FROM table /* ACCESS {SEQSCAN | [[NO] BITMAP] INDEX index_name} */\n> \n> This proposal seems to deliberately ignore every point that has been\n> made *against* doing things that way. It doesn't separate the hints\n> from the queries, it doesn't focus on fixing the statistical or cost\n> misestimates that are at the heart of the issue, and it takes no account\n> of the problem of hints being obsoleted by system improvements.\n \nYes, but it does one key thing: allows DBAs to fix problems *NOW*. See\nalso my comment below.\n\n> > It would also be useful to allow tweaking of planner cost estimates.\n> > This would take the general form of\n> > node operator value\n> \n> This is at least focusing on the right sort of thing, although I still\n> find it completely misguided to be attaching hints like this to\n> individual queries.\n \nYes, but as I mentioned the idea here was to come up with something that\nis (hopefully) easy to define and implement. In other words, something\nthat should be doable for 8.3. Because this proposal essentially amounts\nto limiting plans the planner will consider and tweaking it's cost\nestimates, I'm hoping that it should be (relatively) easy to implement.\n\n> What I would like to see is information *stored in a system catalog*\n> that affects the planner's cost estimates. As an example, the DBA might\n> know that a particular table is touched sufficiently often that it's\n> likely to remain RAM-resident, in which case reducing the page fetch\n> cost estimates for just that table would make sense. (BTW, this is\n> something the planner could in principle know, but we're unlikely to\n> do it anytime soon, for a number of reasons including a desire for plan\n> stability.)\n\nAll this stuff is great and I would love to see it! But this is all so\nabstract that I'm doubtful this could make it into 8.4, let alone 8.3.\nEspecially if we want a comprehensive system that will handle most/all\ncases. I don't know if we even have a list of all the cases we need to\nhandle.\n\n> The other general category of thing I think we need is a\n> way to override selectivity estimates for particular forms of WHERE\n> clauses.\n\nI hadn't thought about that for hints, but it would be a good addition.\nI think the stats-tweaking model would work, but we'd probably want to\nallow \"=\" as well (which could go into the other stats tweaking hints as\nwell).\n\n... WHERE a = b /* SELECTIVITY {+|-|*|/|=} value */\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Thu, 12 Oct 2006 11:25:25 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Hints proposal"
},
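Worth keeping in mind when weighing the "fix it now" argument: a blunt, session-scoped form of plan forcing already exists in the enable_* settings, which discourage (rather than strictly forbid) a node type. A sketch, with a placeholder query:

    BEGIN;
    SET LOCAL enable_seqscan = off;   -- make sequential scans look prohibitively expensive
    SET LOCAL enable_nestloop = off;  -- likewise for nested-loop joins
    EXPLAIN ANALYZE
        SELECT * FROM orders WHERE customer_id = 42;  -- placeholder for the problem query
    COMMIT;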
{
"msg_contents": "OK, I just have to comment...\n\n\"Jim C. Nasby\" <[email protected]> writes:\n> > These hints would outright force the planner to do things a certain way.\n> > ... FROM table /* ACCESS {SEQSCAN | [[NO] BITMAP] INDEX index_name} */\n> \n> This proposal seems to deliberately ignore every point that has been\n> made *against* doing things that way. It doesn't separate the hints\n> from the queries, it doesn't focus on fixing the statistical or cost\n> misestimates that are at the heart of the issue, and it takes no account\n> of the problem of hints being obsoleted by system improvements.\n\nBut whatever arguments you made about planner improvements and the like,\nit will NEVER be possible to correctly estimate in all cases the\nstatistics for a query, even if you perfectly know WHAT statistics you\nneed, which is also not the case all the time. \n\nTom, you're the one who knows best how the planner works... can you bet\nanything you care about on the fact that one day the planner will never\never generate a catastrophic plan without DBA tweaking ? And how far in\ntime we'll get to that point ?\n\nUntil that point is achieved, the above proposal is one of the simplest\nto understand for the tweaking DBA, and the fastest to deploy when faced\nwith catastrophic plans. And I would guess it is one of the simplest to\nbe implemented and probably not very high maintenance either, although\nthis is just a guess.\n\nIf I could hint some of my queries, I would enable anonymous prepared\nstatements to take into account the parameter values, but I can't\nbecause that results in runaway queries every now and then, so I had to\nforce postgres generate generic queries without knowing anything about\nparameter values... so the effect for me is an overall slower postgres\nsystem because I couldn't fix the particular problems I had and had to\ntweak general settings. And when I have a problem I can't wait until the\nplanner is fixed, I have to solve it immediately... the current means to\ndo that are suboptimal. \n\nThe argument that planner hints would hide problems from being solved is\na fallacy. To put a hint in place almost the same amount of analysis is\nneeded from the DBA as solving the problem now, so users who ask now for\nhelp will further do it even in the presence of hints. The ones who\nwouldn't are not coming for help now either, they know their way out of\nthe problems... and the ones who still report a shortcoming of the\nplanner will do it with hints too.\n\nI would even say it would be an added benefit, cause then you could\nreally see how well a specific plan will do without having the planner\ncapable to generate alone that plan... so knowledgeable users could come\nto you further down the road when they know where the planner is wrong,\nsaving you time.\n\nI must say it again, this kind of query-level hinting would be the\neasiest to understand for the developers... there are many\ntrial-end-error type of programmers out there, if you got a hint wrong,\nyou fix it and move on, doesn't need to be perfect, it just have to be\ngood enough. I heavily doubt that postgres will get bad publicity\nbecause user Joe sot himself in the foot by using bad hints... 
the\nprobability for that is low, you must actively put those hints there,\nand if you take the time to do that then you're not the average Joe, and\nprobably not so lazy either, and if you're putting random hints, then\nyou would probably mess it up some other way anyway.\n\nAnd the thing about missing new features is also not very well founded. If I\nwould want to exclude a full table scan on a specific table for a\nspecific query, then that's about for sure that I want to do that\nregardless of what new features postgres will offer in the future. Picking\none specific access method is more prone to missing new access methods,\nbut even then, when I upgrade the DB server to a new version, I usually\nhave enough other compatibility problems (till now I always had some on\nevery upgrade I had) that making a round of upgrading hints is not an\noutstanding problem. And if the application works good enough with\nsuboptimal plans, why would I even take that extra effort ?\n\nI guess the angle is: I, as a practicing DBA, would like to be able to\nexperiment and get the most out of the imperfect tool I have, and you, the\ndevelopers, want to make the tool perfect... I don't care about perfect\ntools, it just has to do the job... hints or anything else, if I can\nmake it work GOOD ENOUGH, it's all fine. And hints are something I would\nunderstand and be able to use.\n\nThanks for your patience if you're still reading this...\n\nCheers,\nCsaba.\n\n\n",
"msg_date": "Thu, 12 Oct 2006 18:34:25 +0200",
"msg_from": "Csaba Nagy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hints proposal"
},
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> Yes, but as I mentioned the idea here was to come up with something that\n> is (hopefully) easy to define and implement. In other words, something\n> that should be doable for 8.3.\n\nSorry, but that is not anywhere on my list of criteria for an important\nfeature. Having to live with a quick-and-dirty design for the\nforeseeable future is an ugly prospect --- and anything that puts hints\ninto application code is going to lock us down to supporting it forever.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Oct 2006 12:37:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Hints proposal "
},
{
"msg_contents": ">What is it about hinting that makes it so easily breakable with new versions? I >don't have any experience with Oracle, so I'm not sure how they screwed logic like >this up. \n\nI don't have a ton of experience with oracle either, mostly DB2, MSSQL and PG. So, I thought I'd do some googling, and maybe others might find this useful info. \n\nhttp://asktom.oracle.com/pls/ask/f?p=4950:8:2177642270773127589::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:7038986332061\n\nInteresting quote: \"In Oracle Applications development (11i apps - HR, CRM, etc) Hints are strictly forbidden. We find the underlying cause and fix it.\" and\n\"Hints -- only useful if you are in RBO and you want to make use of an access \npath.\"\n\nMaybe because I haven't had access to hints before, I've never been tempted to use them. However, I can't remember having to re-write SQL due to a PG upgrade either.\n\nOh, and if you want to see everything that gets broken/depreciated with new versions, just take a look at oracle's release notes for 9i and 10g. I particularly dislike how they rename stuff for no apparent reason (e.g. NOPARALLEL is now NO_PARALLEL - http://www.oracle-base.com/articles/10g/PerformanceTuningEnhancements10g.php)\n\nAt the very least, I agree it is important to separate the query (what data do I want) from performance options (config, indexes, hints, etc). The data I want doesn't change unless I have a functionality/requirements change. So I'd prefer not to have to go back and change that code just to tweak performance. In addition, this creates an even bigger mess for dynamic queries. I would be much more likely to consider hints if they could be applied separately.\n\n- Bucky\n\n",
"msg_date": "Thu, 12 Oct 2006 12:40:13 -0400",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hints proposal"
},
{
"msg_contents": "Jim,\n\n>>> These hints would outright force the planner to do things a certain way.\n>>> ... FROM table /* ACCESS {SEQSCAN | [[NO] BITMAP] INDEX index_name} */\n>> This proposal seems to deliberately ignore every point that has been\n>> made *against* doing things that way. It doesn't separate the hints\n>> from the queries, it doesn't focus on fixing the statistical or cost\n>> misestimates that are at the heart of the issue, and it takes no account\n>> of the problem of hints being obsoleted by system improvements.\n> \n> Yes, but it does one key thing: allows DBAs to fix problems *NOW*. See\n> also my comment below.\n\nI don't see how adding extra tags to queries is easier to implement than \nan ability to modify the system catalogs. Quite the opposite, really.\n\nAnd, as I said, if you're going to push for a feature that will be \nobsolesced in one version, then you're going to have a really rocky row \nto hoe.\n\n> Yes, but as I mentioned the idea here was to come up with something that\n> is (hopefully) easy to define and implement. In other words, something\n> that should be doable for 8.3. Because this proposal essentially amounts\n> to limiting plans the planner will consider and tweaking it's cost\n> estimates, I'm hoping that it should be (relatively) easy to implement.\n\nEven I, the chief marketing geek, am more concerned with getting a \nfeature that we will still be proud of in 5 years than getting one in \nthe next nine months. Keep your pants on!\n\nI actually think the way to attack this issue is to discuss the kinds of \nerrors the planner makes, and what tweaks we could do to correct them. \nHere's the ones I'm aware of:\n\n-- Incorrect selectivity of WHERE clause\n-- Incorrect selectivity of JOIN\n-- Wrong estimate of rows returned from SRF\n-- Incorrect cost estimate for index use\n\nCan you think of any others?\n\nI also feel that a tenet of the design of the \"planner tweaks\" system \nought to be that the tweaks are collectible and analyzable in some form. \n This would allow DBAs to mail in their tweaks to -performance or \n-hackers, and then allow us to continue improving the planner.\n\n--Josh Berkus\n\n\n\n\n",
"msg_date": "Thu, 12 Oct 2006 09:40:30 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hints proposal"
},
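Josh's first category (incorrect WHERE-clause selectivity) can be reproduced in a few lines, because the planner multiplies per-column selectivities as if the columns were independent. The table below is fabricated purely for the demonstration:

    -- Two perfectly correlated columns: the planner estimates roughly
    -- 1% * 1% of the table for the combined predicate, while the true
    -- fraction is 1%, so the row estimate is off by about a factor of 100.
    CREATE TABLE corr_demo AS
        SELECT i AS a, i AS b FROM generate_series(1, 1000000) AS s(i);
    ANALYZE corr_demo;
    EXPLAIN ANALYZE SELECT * FROM corr_demo WHERE a < 10000 AND b < 10000;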
{
"msg_contents": "On Thu, 2006-10-12 at 10:14 -0500, Jim C. Nasby wrote:\n> The syntax these hints is something arbitrary. I'm borrowing Oracle's\n> idea of embedding hints in comments, but we can use some other method if\n> desired. Right now I'm more concerned with getting the general idea\n> across.\n> \n\nIs there any advantage to having the hints in the queries? To me that's\nasking for trouble with no benefit at all. It would seem to me to be\nbetter to have a system catalog that defined hints as something like:\n\n\"If user A executes a query matching regex R, then coerce (or force) the\nplanner in this way.\"\n\nI'm not suggesting that we do that, but it seems better then embedding\nthe hints in the queries themselves.\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Thu, 12 Oct 2006 09:42:55 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hints proposal"
},
{
"msg_contents": "Csaba,\n\n> I guess the angle is: I, as a practicing DBA would like to be able to\n> experiment and get most out of the imperfect tool I have, and you, the\n> developers, want to make the tool perfect... I don't care about perfect\n> tools, it just have to do the job... hints or anything else, if I can\n> make it work GOOD ENOUGH, it's all fine. And hints is something I would\n> understand and be able to use.\n\nHmmm, if you already understand Visual Basic syntax, should we support \nthat too? Or maybe we should support MySQL's use of '0000-00-00' as the \n\"zero\" date because people \"understand\" that?\n\nWe're just not going to adopt a bad design because Oracle DBAs are used \nto it. If we wanted to do that, we could shut down the project and \njoin a proprietary DB staff.\n\nThe current discussion is:\n\na) Planner tweaking is sometimes necessary;\nb) Oracle HINTS are a bad design for planner tweaking;\nc) Can we come up with a good design for planner tweaking?\n\nSo, how about suggestions for a good design?\n\n--Josh Berkus\n\n",
"msg_date": "Thu, 12 Oct 2006 09:45:23 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hints proposal"
},
{
"msg_contents": "On Thu, Oct 12, 2006 at 09:26:24AM -0600, Joshua Marsh wrote:\n> On 10/12/06, Jim C. Nasby <[email protected]> wrote:\n> >\n> >Posting here instead of hackers since this is where the thread got\n> >started...\n> >\n> >The argument has been made that producing a hints system will be as hard\n> >as actually fixing the optimizer. There's also been clamoring for an\n> >actual proposal, so here's one that (I hope) wouldn't be very difficult\n> >to implemen.\n> >\n> >My goal with this is to keep the coding aspect as simple as possible, so\n> >that implementation and maintenance of this isn't a big burden. Towards\n> >that end, these hints either tell the planner specifically how to handle\n> >some aspect of a query, or they tell it to modify specific cost\n> >estimates. My hope is that this information could be added to the\n> >internal representation of a query without much pain, and that the\n> >planner can then use that information when generating plans.\n> \n> \n> I've been following the last thread with a bit of interest. I like the\n> proposal. It seems simple and easy to use. What is it about hinting that\n> makes it so easily breakable with new versions? I don't have any experience\n> with Oracle, so I'm not sure how they screwed logic like this up. Hinting\n> to use a specific merge or scan seems fairly straight forward; if the query\n> requests to use an index on a join, I don't see how hard it is to go with\n> the suggestion. It will become painfully obvious to the developer if his\n> hinting is broken.\n\nThe problem is that when you 'hint' (which is actually not a great name\nfor the first part of my proposal, since it's really forcing the planner\nto do something), you're tying the planner's hands. As the planner\nimproves in newer versions, it's very possible to end up with forced\nquery plans that are much less optimal than what the newer planner could\ncome up with. This is especially true as new query execution nodes are\ncreated, such as hashaggregate.\n\nThe other downside is that it's per-query. It would certainly be useful\nto be able to nudge the planner in the right direction on a per-table\nlevel, but it's just not clear how to accomplish that. Like I said, the\nidea behind my proposal is to have something that can be done soon, like\nfor 8.3.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Thu, 12 Oct 2006 11:46:07 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hints proposal"
},
{
"msg_contents": "On Thu, Oct 12, 2006 at 04:55:17PM +0100, Heikki Linnakangas wrote:\n> Bruce Momjian wrote:\n> >Because DB2 doesn't like hints, and the fact that they have gotten to a\n> >point where they feel they do not need them, I feel we too can get to a\n> >point where we don't need them either. The question is whether we can\n> >get there quickly enough for our userbase.\n> \n> In all fairness, when I used to work with DB2 we often had to rewrite \n> queries to persuade the planner to choose a different plan. Often it was \n> more of an issue of plan stability; a query would suddenly become \n> horribly slow in production because a table had grown slowly to the \n> point that it chose a different plan than before. Then we had to modify \n> the query again, or manually set the statistics. In extreme cases we had \n> to split a query to multiple parts and use temporary tables and move \n> logic to the application to get a query to perform consistently and fast \n> enough. I really really missed hints.\n \nOracle has an interesting way to deal with this, in that you can store a\nplan that the optimizer generates and tell it to always use it for that\nquery. There's some other management tools built on top of that. I don't\nknow how commonly it's used, though...\n\nAlso, on the DB2 argument... I'm wondering what happens when people end\nup with a query that they can't get to execute the way it should? Is the\nplanner *that* good that it never happens? Do you have to wait for a\nfixpack when it does happen? I'm all for having a super-smart planner,\nbut I'm highly doubtful it will always know exactly what to do.\n\n> That said, I really don't like the idea of hints like \"use index X\" \n> embedded in a query. I do like the idea of hints that give the planner \n> more information about the data. I don't have a concrete proposal, but \n\nWhich is part of the problem... there's nothing to indicate we'll have\nsupport for these improved hints anytime soon, especially if a number of\nthem depend on plan invalidation.\n\n> here's some examples of hints I'd like to see:\n> \n> \"table X sometimes has millions of records and sometimes it's empty\"\n> \"Expression (table.foo = table2.bar * 2) has selectivity 0.99\"\n> \"if foo.bar = 5 then foo.field2 IS NULL\"\n> \"Column X is unique\"\n> \"function foobar() always returns either 1 or 2, and it returns 2 90% of \n> the time.\"\n> \"if it's Monday, then table NEW_ORDERS has a cardinality of 100000, \n> otherwise 10.\"\n> \n> BTW: Do we make use of CHECK constraints in the planner? In DB2, that \n> was one nice and clean way of hinting the planner about things. If I \n> remember correctly, you could even define CHECK constraints that weren't \n> actually checked at run-time, but were used by the planner.\n\nI think you're right... and it is an elegant way to hint the planner.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Thu, 12 Oct 2006 11:53:27 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hints proposal"
},
{
"msg_contents": "> Hmmm, if you already understand Visual Basic syntax, should we support \n> that too? Or maybe we should support MySQL's use of '0000-00-00' as the \n> \"zero\" date because people \"understand\" that?\n\nYou completely misunderstood me... I have no idea about oracle hints,\nnever used Oracle in fact. My company uses oracle, but I have only very\nvery limited contact with oracle issues, and never touched a hint.\n\nI'm only talking about ease of use, learning curves, and complexity in\ngeneral. While I do like the idea of an all automatic system optimizer\nwhich takes your query portofolio and analyzes the data based on those\nqueries and creates you all the indexes you need and all that, that's\nnot gonna happen soon, because it's a very complex thing to implement.\n\nThe alternative is that you take your query portofolio, analyze it\nyourself, figure out what statistics you need, create indexes, tweak\nqueries, hint the planner for correlations and stuff... which is a\ncomplex task, and if you have to tell the server about some correlations\nwith the phase of the moon, you're screwed cause there will never be any\nDB engine which will understand that. \n\nBut you always can put the corresponding hint in the query when you know\nthe correlation is there...\n\nThe problem is that the application sometimes really knows better than\nthe server, when the correlations are not standard.\n\n> We're just not going to adopt a bad design because Oracle DBAs are used \n> to it. If we wanted to do that, we could shut down the project and \n> join a proprietary DB staff.\n\nI have really nothing to do with Oracle. I think you guys are simply too\nblinded by Oracle hate... I don't care about Oracle.\n\n> The current discussion is:\n> \n> a) Planner tweaking is sometimes necessary;\n> b) Oracle HINTS are a bad design for planner tweaking;\n\nWhile there are plenty of arguments you made against query level hints\n(can we not call them Oracle-hints ?), there are plenty of users of\npostgres who expressed they would like them. I guess they were tweaking\npostgres installations when they needed it, and not Oracle\ninstallations. I expressed it clearly that for me query level hinting\nwould give more control and better understanding of what I have to do\nfor the desired result. Perfect planning -> forget it, I only care about\ngood enough with reasonable tuning effort. If I have to tweak statistics\nI will NEVER be sure postgres will not backfire on me again. On the\nother hand if I say never do a seq scan on this table for this query, I\ncould be sure it won't...\n\n> c) Can we come up with a good design for planner tweaking?\n\nAngles again: good enough now is better for end users, but programmers\nalways go for perfect tomorrow... pity.\n\nCheers,\nCsaba.\n\n\n",
"msg_date": "Thu, 12 Oct 2006 19:04:46 +0200",
"msg_from": "Csaba Nagy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hints proposal"
},
{
"msg_contents": "> I'm not suggesting that we do that, but it seems better then embedding\n> the hints in the queries themselves.\n\nOK, what about this: if I execute the same query from a web client, I\nwant the not-so-optimal-but-safe plan, if I execute it asynchronously, I\nlet the planner choose the\nbest-overall-performance-but-sometimes-may-be-slow plan ?\n\nWhat kind of statistics/table level hinting will get you this ?\n\nI would say only query level hinting will buy you query level control.\nAnd that's perfectly good in some situations.\n\nI really can't see why a query-level hinting mechanism is so evil, why\nit couldn't be kept forever, and augmented with the possibility of\ncorrelation hinting, or table level hinting. \n\nThese are really solving different problems, with some overlapping...\n\nCheers,\nCsaba.\n\n\n",
"msg_date": "Thu, 12 Oct 2006 19:15:40 +0200",
"msg_from": "Csaba Nagy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hints proposal"
},
{
"msg_contents": "Csaba Nagy <[email protected]> writes:\n> Until that point is achieved, the above proposal is one of the simplest\n> to understand for the tweaking DBA, and the fastest to deploy when faced\n> with catastrophic plans. And I would guess it is one of the simplest to\n> be implemented and probably not very high maintenance either, although\n> this is just a guess.\n\nThat guess is wrong ... but more to the point, if you think that \"simple\nand easy to implement\" should be the overriding concern for designing a\nnew feature, see mysql. They've used that design approach for years and\nlook what a mess they've got. This project has traditionally done\nthings differently and I feel no need to change that mindset now.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Oct 2006 13:18:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hints proposal "
},
{
"msg_contents": "On Thu, Oct 12, 2006 at 11:25:25AM -0500, Jim C. Nasby wrote:\n> Yes, but it does one key thing: allows DBAs to fix problems *NOW*. See\n> also my comment below.\n\nIf I may argue in the other direction, speaking as one whose career\n(if we may be generous enough to call it that) has been pretty much\nexclusively on the operations end of things, I think that's an awful\nidea.\n\nThere are two ways that quick-fix solve-the-problem-now hints are\ngoing to be used. One is in the sort of one-off query that a DBA has\nto run from time to time, that takes a long time, but that isn't\nreally a part of regular application load. The thing is, if you\nalready know your data well enough to provide a useful hint, you also\nknow your data well enough to work around the problem in the short\nrun (with some temp table tricks and the like). \n\nThe _other_ way it's going to be used is as a stealthy alteration to\nregular behaviour, to solve a particular nasty performance problem\nthat happens to result on a given day. And every single time I've\nseen anything like that done, the long term effect is always\nmonstrous. Two releases later, all your testing and careful\ninspection and planning goes to naught one Saturday night at 3 am\n(because we all know computers know what time it is _where you are_)\nwhen the one-off trick that you pulled last quarter to solve the\nmanager's promise (which was made while out golfing, so nobody wrote\nanything down) turns out to have a nasty effect now that the data\ndistribution is different. Or you think so. But now you're not\nsure, because the code was tweaked a little to take some advantage of\nsomething you now have because of the query plans that you ended up\ngetting because of the hint that was there because of the golf game,\nso now if you start fiddling with the hints, maybe you break\nsomething else. And you're tired, but the client is on the phone\nfrom Hong King _right now_.\n\nThe second case is, from my experience, exactly the sort of thing you\nwant really a lot when the golf game is just over, and the sort of\nthing you end up kicking yourself for in run-on sentences in the\nmiddle of the night six months after the golf game is long since\nforgotten.\n\nThe idea for knobs on the planner that allows the DBA to give\ndirected feedback, from which new planner enhancements can also come,\nseems to me a really good idea. But any sort of quick and dirty hint\nfor right now gives me the willies.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\n\"The year's penultimate month\" is not in truth a good way of saying\nNovember.\n\t\t--H.W. Fowler\n",
"msg_date": "Thu, 12 Oct 2006 13:45:03 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Hints proposal"
},
{
"msg_contents": "On 10/12/06, Andrew Sullivan <[email protected]> wrote:\n> On Thu, Oct 12, 2006 at 11:25:25AM -0500, Jim C. Nasby wrote:\n> > Yes, but it does one key thing: allows DBAs to fix problems *NOW*. See\n> > also my comment below.\n>\n> If I may argue in the other direction, speaking as one whose career\n> (if we may be generous enough to call it that) has been pretty much\n> exclusively on the operations end of things, I think that's an awful\n> idea.\n>\n> There are two ways that quick-fix solve-the-problem-now hints are\n> going to be used. One is in the sort of one-off query that a DBA has\n\nthird way: to solve the problem of data (especially constants) not\nbeing available to the planner at the time the plan was generated.\nthis happens most often with prepared statements and sql udfs. note\nthat changes to the plan generation mechanism (i think proposed by\npeter e a few weeks back) might also solve this.\n\nIn a previous large project I had to keep bitmap scan and seqscan off\nall the time because of this problem (the project used a lot of\nprepared statements).\n\nor am i way off base here?\n\nmerlin\n",
"msg_date": "Thu, 12 Oct 2006 14:21:55 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Hints proposal"
},
{
"msg_contents": "On Thu, Oct 12, 2006 at 02:21:55PM -0400, Merlin Moncure wrote:\n> third way: to solve the problem of data (especially constants) not\n> being available to the planner at the time the plan was generated.\n> this happens most often with prepared statements and sql udfs. note\n> that changes to the plan generation mechanism (i think proposed by\n> peter e a few weeks back) might also solve this.\n\nYou're right about this, but you also deliver the reason why we don't\nneed hints for that: the plan generation mechanism is a better\nsolution to that problem. It's this latter thing that I keep coming\nback to. As a user of PostgreSQL, the thing that I really like about\nit is its pragmatic emphasis on correctness. In my experience, it's\na system that feels very UNIX-y: there's a willingness to accept\n\"80/20\" answers to a problem in the event you at least have a way to\nget the last 20, but the developers are opposed to anything that\nseems really kludgey.\n\nIn the case you're talking about, it seems to me that addressing the\nproblems where they come from is a better solution that trying to\nfind some way to work around them. And most of the use-cases I hear\nfor a statement-level hints system fall into this latter category.\n\nA\n-- \nAndrew Sullivan | [email protected]\nUnfortunately reformatting the Internet is a little more painful \nthan reformatting your hard drive when it gets out of whack.\n\t\t--Scott Morris\n",
"msg_date": "Thu, 12 Oct 2006 15:03:47 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hints proposal"
},
{
"msg_contents": "On Thu, 2006-10-12 at 19:15 +0200, Csaba Nagy wrote:\n> > I'm not suggesting that we do that, but it seems better then embedding\n> > the hints in the queries themselves.\n> \n> OK, what about this: if I execute the same query from a web client, I\n> want the not-so-optimal-but-safe plan, if I execute it asynchronously, I\n> let the planner choose the\n> best-overall-performance-but-sometimes-may-be-slow plan ?\n> \n\nConnect as a different user to control whether the hint matches or not.\nIf this doesn't work for you, read below.\n\n> What kind of statistics/table level hinting will get you this ?\n> \n\nIt's based not just on the table, but on environment as well, such as\nthe user/role.\n\n> I would say only query level hinting will buy you query level control.\n> And that's perfectly good in some situations.\n\nMy particular proposal allows arbitrary regexes on the raw query. You\ncould add a comment with a \"query id\" in it. \n\nMy proposal has these advantages over query comments:\n(1) Most people's needs would be solved by just matching the query\nform. \n(2) If the DBA really wanted to separate out queries individually (not\nbased on the query form), he could do it, but it would have an extra\nstep that might encourage him to reconsider the necessity\n(3) If someone went to all that work to shoot themselves in the foot\nwith unmanagable hints that are way too specific, the postgres\ndevelopers are unlikely to be blamed\n(4) No backwards compatibility issues that I can see, aside from people\nmaking their own hints unmanagable. If someone started getting bad\nplans, they could just remove all the hints from the system catalogs and\nit would be just as if they had never used hints. If they added ugly\ncomments to their queries it wouldn't really have a bad effect.\n\nTo formalize the proposal a litte, you could have syntax like:\n\nCREATE HINT [FOR USER username] MATCHES regex APPLY HINT some_hint;\n\nWhere \"some_hint\" would be a hinting language perhaps like Jim's, except\nnot guaranteed to be compatible between versions of PostgreSQL. The\ndevelopers could change the hinting language at every release and people\ncan just re-write the hints without changing their application.\n\n> I really can't see why a query-level hinting mechanism is so evil, why\n> it couldn't be kept forever, and augmented with the possibility of\n> correlation hinting, or table level hinting. \n\nWell, I wouldn't say \"evil\". Query hints are certainly against the\nprinciples of a relational database, which separate the logical query\nfrom the physical storage.\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Thu, 12 Oct 2006 12:07:11 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hints proposal"
},
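The per-user half of this proposal is partly reachable with what already exists: settings can be attached to a role and take effect in every session that role opens. This is only a workaround sketch (the role name is hypothetical), not a substitute for the CREATE HINT idea:

    ALTER ROLE webuser SET enable_nestloop = off;
    ALTER ROLE webuser SET random_page_cost = 2;
    -- and to undo it later:
    ALTER ROLE webuser RESET enable_nestloop;
    ALTER ROLE webuser RESET random_page_cost;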
{
"msg_contents": "On Thu, Oct 12, 2006 at 09:40:30AM -0700, Josh Berkus wrote:\n> Jim,\n> \n> >>>These hints would outright force the planner to do things a certain way.\n> >>>... FROM table /* ACCESS {SEQSCAN | [[NO] BITMAP] INDEX index_name} */\n> >>This proposal seems to deliberately ignore every point that has been\n> >>made *against* doing things that way. It doesn't separate the hints\n> >>from the queries, it doesn't focus on fixing the statistical or cost\n> >>misestimates that are at the heart of the issue, and it takes no account\n> >>of the problem of hints being obsoleted by system improvements.\n> > \n> >Yes, but it does one key thing: allows DBAs to fix problems *NOW*. See\n> >also my comment below.\n> \n> I don't see how adding extra tags to queries is easier to implement than \n> an ability to modify the system catalogs. Quite the opposite, really.\n> \n> And, as I said, if you're going to push for a feature that will be \n> obsolesced in one version, then you're going to have a really rocky row \n> to hoe.\n \nUnless you've got a time machine or a team of coders in your back\npocket, I don't see how the planner will suddenly become perfect in\n8.4...\n\n> >Yes, but as I mentioned the idea here was to come up with something that\n> >is (hopefully) easy to define and implement. In other words, something\n> >that should be doable for 8.3. Because this proposal essentially amounts\n> >to limiting plans the planner will consider and tweaking it's cost\n> >estimates, I'm hoping that it should be (relatively) easy to implement.\n> \n> Even I, the chief marketing geek, am more concerned with getting a \n> feature that we will still be proud of in 5 years than getting one in \n> the next nine months. Keep your pants on!\n \nHey, I wrote that email while dressed! :P\n\nWe've been seeing the same kinds of problems that are very difficult (or\nimpossible) to fix cropping up for literally years... it'd be really\ngood to at least be able to force the planner to do the sane thing even\nif we don't have the manpower to fix it right now...\n\n> I actually think the way to attack this issue is to discuss the kinds of \n> errors the planner makes, and what tweaks we could do to correct them. \n> Here's the ones I'm aware of:\n> \n> -- Incorrect selectivity of WHERE clause\n> -- Incorrect selectivity of JOIN\n> -- Wrong estimate of rows returned from SRF\n> -- Incorrect cost estimate for index use\n> \n> Can you think of any others?\n \nThere's a range of correlations where the planner will incorrectly\nchoose a seqscan over an indexscan.\n\nFunction problems aren't limited to SRFs... we have 0 statistics ability\nfor functions.\n\nThere's the whole issue of multi-column statistics.\n\n> I also feel that a tenet of the design of the \"planner tweaks\" system \n> ought to be that the tweaks are collectible and analyzable in some form. \n> This would allow DBAs to mail in their tweaks to -performance or \n> -hackers, and then allow us to continue improving the planner.\n\nWell, one nice thing about the per-query method is you can post before\nand after EXPLAIN ANALYZE along with the hints. But yes, as we move\ntowards a per-table/index/function solution, there should be an easy way\nto see how those hints are affecting the system and to report that data\nback to the community.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Thu, 12 Oct 2006 14:24:10 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Hints proposal"
},
{
"msg_contents": "On Thu, Oct 12, 2006 at 09:42:55AM -0700, Jeff Davis wrote:\n> On Thu, 2006-10-12 at 10:14 -0500, Jim C. Nasby wrote:\n> > The syntax these hints is something arbitrary. I'm borrowing Oracle's\n> > idea of embedding hints in comments, but we can use some other method if\n> > desired. Right now I'm more concerned with getting the general idea\n> > across.\n> > \n> \n> Is there any advantage to having the hints in the queries? To me that's\n> asking for trouble with no benefit at all. It would seem to me to be\n> better to have a system catalog that defined hints as something like:\n> \n> \"If user A executes a query matching regex R, then coerce (or force) the\n> planner in this way.\"\n> \n> I'm not suggesting that we do that, but it seems better then embedding\n> the hints in the queries themselves.\n\nMy experience is that on the occasions when I want to beat the planner\ninto submission, it's usually a pretty complex query that's the issue,\nand that it's unlikely to have more than a handful of them in the\napplication. That makes me think a regex facility would just get in the\nway, but perhaps others have much more extensive need of hinting.\n\nI also suspect that writing that regex could become a real bear.\n\nHaving said that... I see no reason why it couldn't work... but the real\nchallenge is defining the hints.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Thu, 12 Oct 2006 14:34:15 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hints proposal"
},
{
"msg_contents": "Jim,\n\n> > I don't see how adding extra tags to queries is easier to implement\n> > than an ability to modify the system catalogs. Quite the opposite,\n> > really.\n> >\n> > And, as I said, if you're going to push for a feature that will be\n> > obsolesced in one version, then you're going to have a really rocky\n> > row to hoe.\n>\n> Unless you've got a time machine or a team of coders in your back\n> pocket, I don't see how the planner will suddenly become perfect in\n> 8.4...\n\nSince you're not a core code contributor, I really don't see why you \ncontinue to claim that query hints are going to be easier to implement \nthan relation-level statistics modification. You think it's easier, but \nthe people who actually work on the planner don't believe that it is.\n\n> We've been seeing the same kinds of problems that are very difficult (or\n> impossible) to fix cropping up for literally years... it'd be really\n> good to at least be able to force the planner to do the sane thing even\n> if we don't have the manpower to fix it right now...\n\nAs I've said to other people on this thread, you keep making the incorrect \nassumption that Oracle-style query hints are the only possible way of \nmanual nuts-and-bolts query tuning. They are not.\n\n> > I actually think the way to attack this issue is to discuss the kinds\n> > of errors the planner makes, and what tweaks we could do to correct\n> > them. Here's the ones I'm aware of:\n> >\n> > -- Incorrect selectivity of WHERE clause\n> > -- Incorrect selectivity of JOIN\n> > -- Wrong estimate of rows returned from SRF\n> > -- Incorrect cost estimate for index use\n> >\n> > Can you think of any others?\n>\n> There's a range of correlations where the planner will incorrectly\n> choose a seqscan over an indexscan.\n\nPlease list some if you have ones which don't fall into one of the four \nproblems above.\n\n> Function problems aren't limited to SRFs... we have 0 statistics ability\n> for functions.\n>\n> There's the whole issue of multi-column statistics.\n\nSure, but again that falls into the category of \"incorrect selectivity for \nWHERE/JOIN\". Don't make things more complicated than they need to be.\n\n> Well, one nice thing about the per-query method is you can post before\n> and after EXPLAIN ANALYZE along with the hints.\n\nOne bad thing is that application designers will tend to use the hint, fix \nthe immediate issue, and never report a problem at all. And query hints \nwould not be collectable in any organized way except the query log, which \nwould then require very sophisticated text parsing to get any useful \ninformation at all.\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n",
"msg_date": "Thu, 12 Oct 2006 13:58:22 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hints proposal"
},
{
"msg_contents": "On 12-10-2006 21:07 Jeff Davis wrote:\n> On Thu, 2006-10-12 at 19:15 +0200, Csaba Nagy wrote:\n> \n> To formalize the proposal a litte, you could have syntax like:\n> \n> CREATE HINT [FOR USER username] MATCHES regex APPLY HINT some_hint;\n> \n> Where \"some_hint\" would be a hinting language perhaps like Jim's, except\n> not guaranteed to be compatible between versions of PostgreSQL. The\n> developers could change the hinting language at every release and people\n> can just re-write the hints without changing their application.\n\nThere are some disadvantages of not writing the hints in a query. But of \ncourse there are disadvantages to do as well ;)\n\nOne I can think of is that it can be very hard to define which hint \nshould apply where. Especially in complex queries, defining at which \npoint exaclty you'd like your hint to work is not a simple matter, \nunless you can just place a comment right at that position.\n\nSay you have a complex query with several joins of the same table. And \nin all but one of those joins postgresql actually chooses the best \noption, but somehow you keep getting some form of join while a nested \nloop would be best. How would you pinpoint just that specific clause, \nwhile the others remain \"unhinted\" ?\n\nYour approach seems to be a bit similar to aspect oriented programming \n(in java for instance). You may need a large amount of information about \nthe queries and it is likely a \"general\" regexp with \"general\" hint will \nnot do much good (at least I expect a hinting-system to be only useable \nin corner cases and very specific points in a query).\n\nBy the way, wouldn't it be possible if the planner learned from a query \nexecution, so it would know if a choice for a specific plan or estimate \nwas actually correct or not for future reference? Or is that in the line \nof DB2's complexity and a very hard problem and/or would it add too much \noverhead?\n\nBest regards,\n\nArjen\n",
"msg_date": "Thu, 12 Oct 2006 23:07:12 +0200",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hints proposal"
},
{
"msg_contents": "> > Well, one nice thing about the per-query method is you can post\nbefore\n> > and after EXPLAIN ANALYZE along with the hints.\n> \n> One bad thing is that application designers will tend to use the hint,\nfix\n> the immediate issue, and never report a problem at all. And query\nhints\n> would not be collectable in any organized way except the query log,\nwhich\n> would then require very sophisticated text parsing to get any useful\n> information at all.\n> \nOr they'll report it when the next version of Postgres \"breaks\" their\napp because the hints changed, or because the planner does something\nelse which makes those hints obsolete.\n\nMy main concern with hints (aside from the fact I'd rather see more\nintelligence in the planner/stats) is managing them appropriately. I\nhave two general types of SQL where I'd want to use hints- big OLAP\nstuff (where I have a lot of big queries, so it's not just one or two\nwhere I'd need them) or large dynamically generated queries (Users\nbuilding custom queries). Either way, I don't want to put them on a\nquery itself.\n\nWhat about using regular expressions, plus, if you have a function\n(views, or any other statement that is stored), you can assign a rule to\nthat particular function. So you get matching, plus explicit selection.\nThis way it's easy to find all your hints, turn them off, manage them,\netc. (Not to mention dynamically generated SQL is ugly enough without\nhaving to put hints in there).\n\n- Bucky\n",
"msg_date": "Thu, 12 Oct 2006 17:19:29 -0400",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hints proposal"
},
{
"msg_contents": "[ trying once again to push this thread over to -hackers where it belongs ]\n\nArjen van der Meijden <[email protected]> writes:\n> On 12-10-2006 21:07 Jeff Davis wrote:\n>> On Thu, 2006-10-12 at 19:15 +0200, Csaba Nagy wrote:\n>> To formalize the proposal a litte, you could have syntax like:\n>> CREATE HINT [FOR USER username] MATCHES regex APPLY HINT some_hint;\n>> \n>> Where \"some_hint\" would be a hinting language perhaps like Jim's, except\n>> not guaranteed to be compatible between versions of PostgreSQL. The\n>> developers could change the hinting language at every release and people\n>> can just re-write the hints without changing their application.\n\nDo you have any idea how much push-back there would be to that? In\npractice we'd be bound by backwards-compatibility concerns for the hints\ntoo.\n\n> There are some disadvantages of not writing the hints in a query. But of \n> course there are disadvantages to do as well ;)\n\n> One I can think of is that it can be very hard to define which hint \n> should apply where. Especially in complex queries, defining at which \n> point exaclty you'd like your hint to work is not a simple matter, \n> unless you can just place a comment right at that position.\n\nThe problems that you are seeing all come from the insistence that a\nhint should be textually associated with a query. Using a regex is a\nlittle better than putting it right into the query, but the only thing\nthat really fixes is not having the hints directly embedded into\nclient-side code. It's still wrong at the conceptual level.\n\nThe right way to think about it is to ask why is the planner not picking\nthe right plan to start with --- is it missing a statistical\ncorrelation, or are its cost parameters wrong for a specific case, or\nis it perhaps unable to generate the desired plan at all? (If the\nlatter, no amount of hinting is going to help.) If it's a statistics or\ncosting problem, I think the right thing is to try to fix it with hints\nat that level. You're much more likely to fix the behavior across a\nclass of queries than you will be with a hint textually matched to a\nspecific query.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 12 Oct 2006 17:28:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hints proposal "
},
{
"msg_contents": "> By the way, wouldn't it be possible if the planner learned from a query \n> execution, so it would know if a choice for a specific plan or estimate \n> was actually correct or not for future reference? Or is that in the line \n> of DB2's complexity and a very hard problem and/or would it add too much \n> overhead?\n\nJust thinking out-loud here...\n\nWow, a learning cost based planner sounds a-lot like problem for control & dynamical systems\ntheory. As I understand it, much of the advice given for setting PostgreSQL's tune-able\nparameters are from \"RULES-OF-THUMB.\" I am sure that effect on server performance from all of the\nparameters could be modeled and an adaptive feed-back controller could be designed to tuned these\nparameters as demand on the server changes.\n\nAl-thought, I suppose that a controller like this would have limited success since some of the\nmost affective parameters are non-run-time tune-able.\n\nIn regards to query planning, I wonder if there is way to model a controller that could\nadjust/alter query plans based on a comparison of expected and actual query execution times.\n\n\nRegards,\n\nRichard Broersma Jr.\n",
"msg_date": "Thu, 12 Oct 2006 14:41:11 -0700 (PDT)",
"msg_from": "Richard Broersma Jr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hints proposal"
},
{
"msg_contents": "Bucky Jordan wrote:\n\n> What about using regular expressions, plus, if you have a function\n> (views, or any other statement that is stored), you can assign a rule to\n> that particular function. So you get matching, plus explicit selection.\n> This way it's easy to find all your hints, turn them off, manage them,\n> etc. (Not to mention dynamically generated SQL is ugly enough without\n> having to put hints in there).\n\nThe regular expression idea that's being floated around makes my brain\nfeel like somebody is screeching a blackboard nearby. I don't think\nit's a sane idea. I think you could achieve something similar by using\nstored plan representations, like we do for rewrite rules. So you'd\nlook for, say, a matching join combination in a catalog, and get a\nselectivity from a function that would get the selectivities of the\nconditions on the base tables. Or something like that anyway.\n\nThat gets ugly pretty fast when you have to extract selectivities for\nall the possible join paths in any given query.\n\nBut please don't talk about regular expressions.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Thu, 12 Oct 2006 18:02:07 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hints proposal"
},
{
"msg_contents": "On Thu, 2006-10-12 at 17:28 -0400, Tom Lane wrote:\n> [ trying once again to push this thread over to -hackers where it belongs ]\n> \n> Arjen van der Meijden <[email protected]> writes:\n> > On 12-10-2006 21:07 Jeff Davis wrote:\n> >> On Thu, 2006-10-12 at 19:15 +0200, Csaba Nagy wrote:\n> >> To formalize the proposal a litte, you could have syntax like:\n> >> CREATE HINT [FOR USER username] MATCHES regex APPLY HINT some_hint;\n> >> \n> >> Where \"some_hint\" would be a hinting language perhaps like Jim's, except\n> >> not guaranteed to be compatible between versions of PostgreSQL. The\n> >> developers could change the hinting language at every release and people\n> >> can just re-write the hints without changing their application.\n> \n> Do you have any idea how much push-back there would be to that? In\n> practice we'd be bound by backwards-compatibility concerns for the hints\n> too.\n> \n\nNo, I don't have any idea, except that it would be less push-back than\nchanging a language that's embedded in client code. Also, I see no\nreason to think that a hint would not be obsolete upon a new release\nanyway.\n\n> The problems that you are seeing all come from the insistence that a\n> hint should be textually associated with a query. Using a regex is a\n> little better than putting it right into the query, but the only thing\n\n\"Little better\" is all I was going for. I was just making the\nobservation that we can separate two concepts:\n(1) Embedding code in the client's queries, which I see as very\nundesirable and unnecessary\n(2) Providing very specific hints\n\nwhich at least gives us a place to talk about the debate more\nreasonably.\n\n> that really fixes is not having the hints directly embedded into\n> client-side code. It's still wrong at the conceptual level.\n> \n\nI won't disagree with that. I will just say it's no more wrong than\napplying the same concept in addition to embedding the hints in client\nqueries.\n\n> The right way to think about it is to ask why is the planner not picking\n> the right plan to start with --- is it missing a statistical\n> correlation, or are its cost parameters wrong for a specific case, or\n> is it perhaps unable to generate the desired plan at all? (If the\n> latter, no amount of hinting is going to help.) If it's a statistics or\n> costing problem, I think the right thing is to try to fix it with hints\n> at that level. You're much more likely to fix the behavior across a\n> class of queries than you will be with a hint textually matched to a\n> specific query.\n> \n\nAgreed.\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Thu, 12 Oct 2006 15:15:03 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hints proposal"
},
{
"msg_contents": "On Thu, 2006-10-12 at 14:34 -0500, Jim C. Nasby wrote:\n> On Thu, Oct 12, 2006 at 09:42:55AM -0700, Jeff Davis wrote:\n> > On Thu, 2006-10-12 at 10:14 -0500, Jim C. Nasby wrote:\n> > > The syntax these hints is something arbitrary. I'm borrowing Oracle's\n> > > idea of embedding hints in comments, but we can use some other method if\n> > > desired. Right now I'm more concerned with getting the general idea\n> > > across.\n> > > \n> > \n> > Is there any advantage to having the hints in the queries? To me that's\n> > asking for trouble with no benefit at all. It would seem to me to be\n> > better to have a system catalog that defined hints as something like:\n> > \n> > \"If user A executes a query matching regex R, then coerce (or force) the\n> > planner in this way.\"\n> > \n> > I'm not suggesting that we do that, but it seems better then embedding\n> > the hints in the queries themselves.\n> \n> My experience is that on the occasions when I want to beat the planner\n> into submission, it's usually a pretty complex query that's the issue,\n> and that it's unlikely to have more than a handful of them in the\n> application. That makes me think a regex facility would just get in the\n> way, but perhaps others have much more extensive need of hinting.\n> \n> I also suspect that writing that regex could become a real bear.\n> \n\nWell, writing the regex is just matching criteria to apply the hint. If\nyou really need a quick fix, you can just write a comment with a query\nid number in the query. The benefit there is that when the hint is\nobsolete later (as the planner improves, or data changes\ncharacteristics) you drop the hint and the query is planned without\ninterference. No application changes required.\n\nAlso, and perhaps more importantly, let's say you are trying to improve\nthe performance of an existing application where it's impractical to\nchange the query text (24/7 app, closed source, etc.). You can still\napply a hint if you're willing to write the regex. Just enable query\nlogging or some such to capture the query, and copy it verbatim except\nfor a few parameters which are unknown. Instant regex. If you have to\nchange the query text to apply the hint, it would be impossible in this\ncase.\n\n> Having said that... I see no reason why it couldn't work... but the real\n> challenge is defining the hints.\n\nRight. The only thing I was trying to solve was the problems associated\nwith the hint itself embedded in the client code. I view that as a\nproblem that doesn't need to exist.\n\nI'll leave it to smarter people to either improve the planner or develop\na hinting language. I don't even need hints myself, just offering a\nsuggestion.\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Thu, 12 Oct 2006 15:41:00 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hints proposal"
},
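To make the "instant regex" idea in the preceding message concrete, here is a small sketch of my own: the query text is captured from the log, the one unknown parameter is generalized, and the match can be checked with PostgreSQL's existing ~ operator. The table, column, values and user name are invented for illustration, and the CREATE HINT line in the comment reuses the purely hypothetical syntax quoted earlier in the thread; it does not exist in any PostgreSQL release.

  -- Query captured verbatim from the server log:
  --   SELECT * FROM orders WHERE customer_id = 42
  -- Generalize the parameter into a POSIX regex; the bracket expression [*]
  -- keeps the asterisk literal without needing any backslash escapes:
  SELECT 'SELECT * FROM orders WHERE customer_id = 7'
         ~ '^SELECT [*] FROM orders WHERE customer_id = [0-9]+$';  -- returns true
  -- The hypothetical catalog rule discussed above would then look something like:
  --   CREATE HINT FOR USER webapp
  --     MATCHES '^SELECT [*] FROM orders WHERE customer_id = [0-9]+$'
  --     APPLY HINT some_hint;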
{
"msg_contents": "Quoth [email protected] (Richard Broersma Jr):\n>> By the way, wouldn't it be possible if the planner learned from a query \n>> execution, so it would know if a choice for a specific plan or estimate \n>> was actually correct or not for future reference? Or is that in the line \n>> of DB2's complexity and a very hard problem and/or would it add too much \n>> overhead?\n>\n> Just thinking out-loud here...\n>\n> Wow, a learning cost based planner sounds a-lot like problem for\n> control & dynamical systems theory.\n\nAlas, dynamic control theory, home of considerable numbers of\nHamiltonian equations, as well as Pontryagin's Minimum Principle, is\nreplete with:\n a) Gory multivariate calculus\n b) Need for all kinds of continuity requirements (e.g. - continuous,\n smooth functions with no discontinuities or other \"nastiness\") \n otherwise the math gets *really* nasty\n\nWe don't have anything even resembling \"continuous\" because our\nmeasures are all discrete (e.g. - the base values are all integers).\n\n> As I understand it, much of the advice given for setting\n> PostgreSQL's tune-able parameters are from \"RULES-OF-THUMB.\" I am\n> sure that effect on server performance from all of the parameters\n> could be modeled and an adaptive feed-back controller could be\n> designed to tuned these parameters as demand on the server changes.\n\nOptimal control theory loves the \"bang-bang\" control, where you go to\none extreme or another, which requires all those continuity conditions\nI mentioned, and is almost certainly not the right answer here.\n\n> Al-thought, I suppose that a controller like this would have limited\n> success since some of the most affective parameters are non-run-time\n> tune-able.\n>\n> In regards to query planning, I wonder if there is way to model a\n> controller that could adjust/alter query plans based on a comparison\n> of expected and actual query execution times.\n\nI think there would be something awesomely useful about recording\nexpected+actual statistics along with some of the plans.\n\nThe case that is easiest to argue for is where Actual >>> Expected\n(e.g. - Actual \"was a whole lot larger than\" Expected); in such cases,\nyou've already spent a LONG time on the query, which means that\nspending millisecond recording the moral equivalent to \"Explain\nAnalyze\" output should be an immaterial cost.\n\nIf we could record a whole lot of these cases, and possibly, with some\nanonymization / permissioning, feed the data to a central place, then\nsome analysis could be done to see if there's merit to particular\nmodifications to the query plan cost model.\n\nPart of the *really* fundamental query optimization problem is that\nthere seems to be some evidence that the cost model isn't perfectly\nreflective of the costs of queries. Improving the quality of the cost\nmodel is one of the factors that would improve the performance of the\nquery optimizer. That would represent a fundamental improvement.\n-- \nlet name=\"cbbrowne\" and tld=\"gmail.com\" in name ^ \"@\" ^ tld;;\nhttp://linuxdatabases.info/info/languages.html\n\"If I can see farther it is because I am surrounded by dwarves.\"\n-- Murray Gell-Mann \n",
"msg_date": "Thu, 12 Oct 2006 22:54:02 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hints proposal"
},
{
"msg_contents": "Quoth [email protected] (Jeff Davis):\n> On Thu, 2006-10-12 at 17:28 -0400, Tom Lane wrote:\n>> [ trying once again to push this thread over to -hackers where it belongs ]\n>> \n>> Arjen van der Meijden <[email protected]> writes:\n>> > On 12-10-2006 21:07 Jeff Davis wrote:\n>> >> On Thu, 2006-10-12 at 19:15 +0200, Csaba Nagy wrote:\n>> >> To formalize the proposal a litte, you could have syntax like:\n>> >> CREATE HINT [FOR USER username] MATCHES regex APPLY HINT some_hint;\n>> >> \n>> >> Where \"some_hint\" would be a hinting language perhaps like\n>> >> Jim's, except not guaranteed to be compatible between versions\n>> >> of PostgreSQL. The developers could change the hinting language\n>> >> at every release and people can just re-write the hints without\n>> >> changing their application.\n>> \n>> Do you have any idea how much push-back there would be to that? In\n>> practice we'd be bound by backwards-compatibility concerns for the\n>> hints too.\n>\n> No, I don't have any idea, except that it would be less push-back\n> than changing a language that's embedded in client code. Also, I see\n> no reason to think that a hint would not be obsolete upon a new\n> release anyway.\n\nI see *plenty* of reason.\n\n1. Suppose the scenario where Hint h was useful hasn't been affected\n by *any* changes in how the query planner works in the new\n version, it *obviously* continues to be necessary.\n\n2. If Version n+0.1 hasn't resolved all/most cases where Hint h was\n useful in Version n, then people will entirely reasonably expect\n for Hint h to continue to be in effect in version n+0.1\n\n3. Suppose support for Hint h is introduced in PostgreSQL version\n n, and an optimization that makes it obsolete does not arrive\n until version n+0.3, which is quite possible. That hint has been\n carried forward for 2 versions already, long enough for client\n code that contains it to start to ossify. (After all, if\n developers get promoted to new projects every couple of years,\n two versions is plenty of time for the original programmer to \n be gone...)\n\nThat's not just one good reason, but three.\n\n>> The problems that you are seeing all come from the insistence that a\n>> hint should be textually associated with a query. Using a regex is a\n>> little better than putting it right into the query, but the only thing\n>\n> \"Little better\" is all I was going for. I was just making the\n> observation that we can separate two concepts:\n> (1) Embedding code in the client's queries, which I see as very\n> undesirable and unnecessary\n> (2) Providing very specific hints\n>\n> which at least gives us a place to talk about the debate more\n> reasonably.\n\nIt seems to me that there is a *LOT* of merit in trying to find\nalternatives to embedding code into client queries, to be sure.\n\n>> that really fixes is not having the hints directly embedded into\n>> client-side code. It's still wrong at the conceptual level.\n>\n> I won't disagree with that. I will just say it's no more wrong than\n> applying the same concept in addition to embedding the hints in client\n> queries.\n>\n>> The right way to think about it is to ask why is the planner not\n>> picking the right plan to start with --- is it missing a\n>> statistical correlation, or are its cost parameters wrong for a\n>> specific case, or is it perhaps unable to generate the desired plan\n>> at all? 
(If the latter, no amount of hinting is going to help.)\n>> If it's a statistics or costing problem, I think the right thing is\n>> to try to fix it with hints at that level. You're much more likely\n>> to fix the behavior across a class of queries than you will be with\n>> a hint textually matched to a specific query.\n>\n> Agreed.\n\nThat's definitely a useful way to look at the issue, which seems to be\nlacking in many of the cries for hints.\n\nPerhaps I'm being unfair, but it often seems that people demanding\nhinting systems are uninterested in why the planner is getting things\nwrong. Yes, they have an immediate problem (namely the wrong plan\nthat is getting generated) that they want to resolve.\n\nBut I'm not sure that you can get anything out of hinting without\ncoming close to answering \"why the planner got it wrong.\"\n-- \n\"cbbrowne\",\"@\",\"gmail.com\"\nhttp://linuxfinances.info/info/lsf.html\n\"Optimization hinders evolution.\" -- Alan Perlis\n",
"msg_date": "Thu, 12 Oct 2006 23:12:29 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Hints proposal"
},
{
"msg_contents": "\n> The _other_ way it's going to be used is as a stealthy \n> alteration to regular behaviour, to solve a particular nasty \n> performance problem that happens to result on a given day. \n> And every single time I've seen anything like that done, the \n> long term effect is always monstrous.\n\nFunny, I very seldom use Informix hints (mostly none, maybe 2 per\nproject),\nbut I have yet to see one that backfires on me, even lightly.\nI use hints like: don't use that index, use that join order, use that\nindex\n\nCan you give us an example that had such a monstrous effect in Oracle,\nother than that the hint was a mistake in the first place ?\n\nAndreas\n",
"msg_date": "Fri, 13 Oct 2006 10:41:36 +0200",
"msg_from": "\"Zeugswetter Andreas ADI SD\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Hints proposal"
},
{
"msg_contents": "\n> I actually think the way to attack this issue is to discuss the kinds\nof errors the planner makes, and what tweaks we could do to correct\nthem. \n> Here's the ones I'm aware of:\n> \n> -- Incorrect selectivity of WHERE clause\n> -- Incorrect selectivity of JOIN\n> -- Wrong estimate of rows returned from SRF\n> -- Incorrect cost estimate for index use\n> \n> Can you think of any others?\n\nI think your points are too generic, there is no way to get them all\n100% correct from statistical\ndata even with data hints (and it is usually not at all necessary for\ngood enough plans).\nI think we need to more precisely define the problems of our system with\npoint in time statistics\n\n-- no reaction to degree of other concurrent activity\n-- no way to react to abnormal skew that only persists for a very short\nduration\n-- too late reaction to changing distribution (e.g. current date column\nwhen a new year starts)\n\tand the variant: too late adaption when a table is beeing filled\n-- missing cost/selectivity estimates for several parts of the system \n\nAndreas\n",
"msg_date": "Fri, 13 Oct 2006 11:07:29 +0200",
"msg_from": "\"Zeugswetter Andreas ADI SD\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hints proposal"
},
{
"msg_contents": "On Fri, Oct 13, 2006 at 10:41:36AM +0200, Zeugswetter Andreas ADI SD wrote:\n> Can you give us an example that had such a monstrous effect in Oracle,\n> other than that the hint was a mistake in the first place ?\n\nOf course the hint was a mistake in the first place; the little story\nI told was exactly an example of such a case. The hint shouldn't\nhave been put in place at the beginning; instead, the root cause\nshould have been uncovered. It was not, the DBA added a hint, and\nlater that hint turned out to have unfortunate consequences for\nsome other use case. And it's a long-term monstrosity, remember,\nnot a short one: the problem is in maintenance overall.\n\nThis is a particularly sensitive area for PostgreSQL, because the\nplanner has been making giant leaps forward with every release. \nIndeed, as Oracle's planner got better, the hints people had in place\nsometimes started to cause them to have to re-tune everything. My\nOracle-using acquaintances tell me this has gotten better in recent\nreleases; but in the last two days, one person pointed out that hints\nare increasingly relied on by one part of Oracle, even as another\nOracle application insists that they never be used. That's exactly\nthe sort of disagreement I'd expect to see when people have come to\nrely on what is basically a kludge in the first place.\n\nAnd remember, the places where PostgreSQL is getting used most\nheavily are still the sort of environments where people will take a\nlot of short cuts to achieve an immediate result, and be annoyed when\nthat short cut later turns out to have been expensive. Postgres will\nget a black eye from that (\"Too hard to manage! Upgrades cause all\nsorts of breakage!\").\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nThe plural of anecdote is not data.\n\t\t--Roger Brinner\n",
"msg_date": "Fri, 13 Oct 2006 10:09:15 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Hints proposal"
},
{
"msg_contents": "> And remember, the places where PostgreSQL is getting used most\n> heavily are still the sort of environments where people will take a\n> lot of short cuts to achieve an immediate result, and be annoyed when\n> that short cut later turns out to have been expensive. Postgres will\n> get a black eye from that (\"Too hard to manage! Upgrades cause all\n> sorts of breakage!\").\n\nThose guys will do their shortcuts anyway, and possibly reject postgres\nas not suitable even before that if they can't take any shortcuts.\n\nAnd upgrades are always causing breakage, I didn't have one upgrade\nwithout some things to fix, so I would expect people is expecting that.\nAnd that's true for Oracle too, our oracle guys always have something to\nfix after an upgrade. And I repeat, I always had something to fix for\npostgres too on all upgrades I've done till now.\n\nCheers,\nCsaba.\n\n\n",
"msg_date": "Fri, 13 Oct 2006 16:20:08 +0200",
"msg_from": "Csaba Nagy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Hints proposal"
},
{
"msg_contents": "\n> > Can you give us an example that had such a monstrous effect in\nOracle, \n> > other than that the hint was a mistake in the first place ?\n> \n> Of course the hint was a mistake in the first place; the \n> little story I told was exactly an example of such a case. \n> The hint shouldn't have been put in place at the beginning; \n> instead, the root cause should have been uncovered.\n\nThis is not an example. For us to understand, we need an actual case\nwith syntax and all, and what happened.\n\nImho the use of a stupid hint, that was added without analyzing\nthe cause and background of the problem is no proof that statement hints\nare bad,\nonly that the person involved was not doing his job.\n\nAndreas\n",
"msg_date": "Fri, 13 Oct 2006 17:04:18 +0200",
"msg_from": "\"Zeugswetter Andreas ADI SD\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Hints proposal"
},
{
"msg_contents": "On Thu, Oct 12, 2006 at 01:58:22PM -0700, Josh Berkus wrote:\n> > Unless you've got a time machine or a team of coders in your back\n> > pocket, I don't see how the planner will suddenly become perfect in\n> > 8.4...\n> \n> Since you're not a core code contributor, I really don't see why you \n> continue to claim that query hints are going to be easier to implement \n> than relation-level statistics modification. You think it's easier, but \n> the people who actually work on the planner don't believe that it is.\n \nWell, that's not what I said (my point being that until the planner and\nstats are perfect you need a way to over-ride them)... but I've also\nnever said hints would be faster or easier than stats modification (I\nsaid I hope they would). But we'll never know which will be faster or\neasier until there's actually a proposal for improving the stats.\n\n> > We've been seeing the same kinds of problems that are very difficult (or\n> > impossible) to fix cropping up for literally years... it'd be really\n> > good to at least be able to force the planner to do the sane thing even\n> > if we don't have the manpower to fix it right now...\n> \n> As I've said to other people on this thread, you keep making the incorrect \n> assumption that Oracle-style query hints are the only possible way of \n> manual nuts-and-bolts query tuning. They are not.\n\nNo, I've never said that. What I've said is a) I doubt that any system\nwill always be correct for every query, meaning you need to be able to\nchange things on a per-query basis, and b) I'm hoping that simple hints\nwill be easy enough to implement that they can go into 8.3.\n\nI completely agree that it's much better *in the long run* to improve\nthe planner and the statistics system so that we don't need hints. But\nthere's been no plan put forward for how to do that, which means we also\nhave no idea when some of these problems will be resolved. If someone\ncomes up with a plan for that, then we can actually look at which options\nare better and how soon we can get fixes for these problems in place.\n\nUnfortunately, this problem is difficult enough that I suspect it could\ntake a long time just to come up with an idea of how to fix these\nproblems, which means that without some way to override the planner our\nusers are stuck in the same place for the foreseeable future. If that\nturns out to be the case, then I think we should implement per-query\nhints now so that users can handle bad plans while we focus on how to\nimprove the stats and planner so that in the future hints will become\npointless.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Fri, 13 Oct 2006 11:16:14 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Hints proposal"
},
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> I completely agree that it's much better *in the long run* to improve\n> the planner and the statistics system so that we don't need hints. But\n> there's been no plan put forward for how to do that, which means we also\n> have no idea when some of these problems will be resolved.\n\nYou keep arguing on the assumption that the planner is static and\nthere's no one working on it. That is false --- although this thread\nis certainly wasting a lot of time that could have been used more\nproductively ;-).\n\nI also dispute your assumption that hints of the style you propose\nwill be easier to implement or maintain than the sort of\nstatistical-assumption tweaking that's been counter-proposed. Just for\nstarters, how are you going to get those hints through the parser and\nrewriter? That's going to take an entire boatload of very ugly code\nthat isn't needed at all in a saner design.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 13 Oct 2006 12:36:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hints proposal "
},
{
"msg_contents": "On Thu, 2006-10-12 at 23:12 -0400, Christopher Browne wrote:\n> > No, I don't have any idea, except that it would be less push-back\n> > than changing a language that's embedded in client code. Also, I see\n> > no reason to think that a hint would not be obsolete upon a new\n> > release anyway.\n> \n> I see *plenty* of reason.\n> \n> 1. Suppose the scenario where Hint h was useful hasn't been affected\n> by *any* changes in how the query planner works in the new\n> version, it *obviously* continues to be necessary.\n> \n> 2. If Version n+0.1 hasn't resolved all/most cases where Hint h was\n> useful in Version n, then people will entirely reasonably expect\n> for Hint h to continue to be in effect in version n+0.1\n> \n\nFair enough. I had considered those situations, but a lot of people are\ntalking about \"I need a better plan now, can't wait for planner\nimprovements\". Also, even if the hint is still useful, I would think\nthat on a new version you'd want to test to see how useful it still is.\n\n> 3. Suppose support for Hint h is introduced in PostgreSQL version\n> n, and an optimization that makes it obsolete does not arrive\n> until version n+0.3, which is quite possible. That hint has been\n> carried forward for 2 versions already, long enough for client\n> code that contains it to start to ossify. (After all, if\n> developers get promoted to new projects every couple of years,\n> two versions is plenty of time for the original programmer to \n> be gone...)\n\nOk, that is a good reason. But it's not helped at all by putting the\nhints in the queries themselves.\n\n> > \"Little better\" is all I was going for. I was just making the\n> > observation that we can separate two concepts:\n> > (1) Embedding code in the client's queries, which I see as very\n> > undesirable and unnecessary\n> > (2) Providing very specific hints\n> >\n> > which at least gives us a place to talk about the debate more\n> > reasonably.\n> \n> It seems to me that there is a *LOT* of merit in trying to find\n> alternatives to embedding code into client queries, to be sure.\n> \n\nI think almost any alternative to client query hints is worth\nconsidering.\n\n> >> The right way to think about it is to ask why is the planner not\n> >> picking the right plan to start with --- is it missing a\n> >> statistical correlation, or are its cost parameters wrong for a\n> >> specific case, or is it perhaps unable to generate the desired plan\n> >> at all? (If the latter, no amount of hinting is going to help.)\n> >> If it's a statistics or costing problem, I think the right thing is\n> >> to try to fix it with hints at that level. You're much more likely\n> >> to fix the behavior across a class of queries than you will be with\n> >> a hint textually matched to a specific query.\n> >\n> > Agreed.\n> \n> That's definitely a useful way to look at the issue, which seems to be\n> lacking in many of the cries for hints.\n> \n> Perhaps I'm being unfair, but it often seems that people demanding\n> hinting systems are uninterested in why the planner is getting things\n> wrong. Yes, they have an immediate problem (namely the wrong plan\n> that is getting generated) that they want to resolve.\n> \n> But I'm not sure that you can get anything out of hinting without\n> coming close to answering \"why the planner got it wrong.\"\n\nRight. And it's not always easy to determine why the planner got it\nwrong without making it execute other plans through hinting :)\n\nNote: I'll restate this just to be clear. 
I'm not advocating an overly-\nspecific, band-aid style hinting language. My only real concern is that\nif one appears, I would not like it to appear in the client's queries. \n\nSame goes for more general kinds of hints. We don't want a bunch of\nclient queries to contain comments like \"table foo has a\nrandom_page_cost of 1.1\". That belongs in the system catalogs.\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Fri, 13 Oct 2006 09:45:54 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Hints proposal"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n\n>> > I actually think the way to attack this issue is to discuss the kinds\n>> > of errors the planner makes, and what tweaks we could do to correct\n>> > them. Here's the ones I'm aware of:\n>> >\n>> > -- Incorrect selectivity of WHERE clause\n>> > -- Incorrect selectivity of JOIN\n>> > -- Wrong estimate of rows returned from SRF\n>> > -- Incorrect cost estimate for index use\n>> >\n>> > Can you think of any others?\n\n -- Incorrect estimate for result of DISTINCT or GROUP BY.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Fri, 13 Oct 2006 12:46:45 -0400",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hints proposal"
},
{
"msg_contents": "On Thu, 2006-10-12 at 18:02 -0400, Alvaro Herrera wrote:\n> Bucky Jordan wrote:\n> \n> > What about using regular expressions, plus, if you have a function\n> > (views, or any other statement that is stored), you can assign a rule to\n> > that particular function. So you get matching, plus explicit selection.\n> > This way it's easy to find all your hints, turn them off, manage them,\n> > etc. (Not to mention dynamically generated SQL is ugly enough without\n> > having to put hints in there).\n> \n> The regular expression idea that's being floated around makes my brain\n> feel like somebody is screeching a blackboard nearby. I don't think\n> it's a sane idea. I think you could achieve something similar by using\n> stored plan representations, like we do for rewrite rules. So you'd\n> look for, say, a matching join combination in a catalog, and get a\n> selectivity from a function that would get the selectivities of the\n> conditions on the base tables. Or something like that anyway.\n> \n> That gets ugly pretty fast when you have to extract selectivities for\n> all the possible join paths in any given query.\n> \n> But please don't talk about regular expressions.\n> \n\nIt sounds horrible to me too, and I'm the one that thought of it (or at\nleast I'm the one that introduced it to this thread).\n\nHowever, everything is relative. Since the other idea floating around is\nto put the same hinting information into the client queries themselves,\nregexes look great by comparison (in my opinion).\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Fri, 13 Oct 2006 10:00:29 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Hints proposal"
},
{
"msg_contents": "\n> -----Original Message-----\n> From: Jeff Davis [mailto:[email protected]]\n> Sent: Friday, October 13, 2006 1:00 PM\n> To: Alvaro Herrera\n> Cc: Bucky Jordan; [email protected]; Jim C. Nasby; pgsql-\n> [email protected]\n> Subject: Re: [HACKERS] [PERFORM] Hints proposal\n> \n> On Thu, 2006-10-12 at 18:02 -0400, Alvaro Herrera wrote:\n> > Bucky Jordan wrote:\n> >\n> > > What about using regular expressions, plus, if you have a function\n> > > (views, or any other statement that is stored), you can assign a\nrule\n> to\n> > > that particular function. So you get matching, plus explicit\n> selection.\n> > > This way it's easy to find all your hints, turn them off, manage\nthem,\n> > > etc. (Not to mention dynamically generated SQL is ugly enough\nwithout\n> > > having to put hints in there).\n> >\n> > The regular expression idea that's being floated around makes my\nbrain\n> > feel like somebody is screeching a blackboard nearby. I don't think\n> > it's a sane idea. I think you could achieve something similar by\nusing\n> > stored plan representations, like we do for rewrite rules. So you'd\n> > look for, say, a matching join combination in a catalog, and get a\n> > selectivity from a function that would get the selectivities of the\n> > conditions on the base tables. Or something like that anyway.\n> >\n> > That gets ugly pretty fast when you have to extract selectivities\nfor\n> > all the possible join paths in any given query.\n> >\n> > But please don't talk about regular expressions.\n> >\n> \n> It sounds horrible to me too, and I'm the one that thought of it (or\nat\n> least I'm the one that introduced it to this thread).\n> \n> However, everything is relative. Since the other idea floating around\nis\n> to put the same hinting information into the client queries\nthemselves,\n> regexes look great by comparison (in my opinion).\n\nI was merely expressing the same opinion. But I'm not one of those\nworking on the planner, and all I can say to those of you who are is\nyour efforts on good design are most appreciated, even if they do take\nlonger than we users would like at times.\n\nMy only point was that they should *NOT* be put in queries themselves as\nthis scatters the nightmare into user code as well. Of course, other\nmore sane ideas are most welcome. I don't like screeching on blackboards\neither. (regular expressions, although very valuable at times, seem to\nhave that effect quite often...)\n\n- Bucky\n",
"msg_date": "Fri, 13 Oct 2006 13:08:27 -0400",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Hints proposal"
},
{
"msg_contents": "On Fri, 2006-10-13 at 13:08 -0400, Bucky Jordan wrote:\n> > It sounds horrible to me too, and I'm the one that thought of it (or\n> at\n> > least I'm the one that introduced it to this thread).\n> > \n> > However, everything is relative. Since the other idea floating around\n> is\n> > to put the same hinting information into the client queries\n> themselves,\n> > regexes look great by comparison (in my opinion).\n> \n> I was merely expressing the same opinion. But I'm not one of those\n\nI didn't mean to imply otherwise.\n\n> working on the planner, and all I can say to those of you who are is\n> your efforts on good design are most appreciated, even if they do take\n> longer than we users would like at times.\n> \n> My only point was that they should *NOT* be put in queries themselves as\n> this scatters the nightmare into user code as well. Of course, other\n> more sane ideas are most welcome. I don't like screeching on blackboards\n> either. (regular expressions, although very valuable at times, seem to\n> have that effect quite often...)\n\nRight. And I think the sane ideas are along the lines of estimate & cost\ncorrections (like Tom is saying).\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Fri, 13 Oct 2006 10:23:31 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Hints proposal"
},
{
"msg_contents": "On Fri, Oct 13, 2006 at 10:23:31AM -0700, Jeff Davis wrote:\n> Right. And I think the sane ideas are along the lines of estimate & cost\n> corrections (like Tom is saying).\n\nLet me ask this... how long do you (and others) want to wait for those?\nIt's great that the planner is continually improving, but it also\nappears that there's still a long road ahead. Having a dune-buggy to get\nto your destination ahead of the road might not be a bad idea... :)\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Fri, 13 Oct 2006 12:30:24 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Hints proposal"
},
{
"msg_contents": "Jim C. Nasby wrote:\n> On Fri, Oct 13, 2006 at 10:23:31AM -0700, Jeff Davis wrote:\n>> Right. And I think the sane ideas are along the lines of estimate & cost\n>> corrections (like Tom is saying).\n> \n> Let me ask this... how long do you (and others) want to wait for those?\n> It's great that the planner is continually improving, but it also\n> appears that there's still a long road ahead. Having a dune-buggy to get\n> to your destination ahead of the road might not be a bad idea... :)\n\nIt's all about resources Jim.. I have yet to see anyone step up and\noffer to help work on the planner in this thread (except Tom of course).\n\nSincerely,\n\nJoshua D. Drake\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Fri, 13 Oct 2006 10:33:24 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Hints proposal"
},
{
"msg_contents": "> I completely agree that it's much better *in the long run* to improve\n> the planner and the statistics system so that we don't need hints. But\n> there's been no plan put forward for how to do that, which means we\nalso\n> have no idea when some of these problems will be resolved. If someone\n> comes up with a plan for that, then we can actually look at which\noptions\n> are better and how soon we can get fixes for these problems in place.\n> \n\nWould it be helpful to have a database of EXPLAIN ANALYZE results and\nrelated details that developers could search through? I guess we sort of\nhave that on the mailing list, but search/reporting features on that are\npretty limited. Something like the \"Report Bug\" feature that seems to be\ngrowing popular in other software (Windows, OS X, Firefox, etc) might\nallow collection of useful data. The goal would be to identify the most\ncommon problems, and some hints at what's causing them.\n\nMaybe have a form based submission so you could ask the user some\nrequired questions, ensure that they aren't just submitting EXPLAIN\nresults (parse and look for times maybe?), etc?\n\nI guess the general question is, what information could the users\nprovide developers to help with this, and how can it be made easy for\nthe users to submit the information, and easy for the developers to\naccess in a meaningful way?\n\nAs a developer/contributor, what questions would you want to ask a user?\n>From reading the mailing lists, these seem to be common ones:\n- Copy of your postgres.conf\n- Basic hardware info\n- Explain Analyze Results of poor performing query\n- Explain Analyze Results of anything you've gotten to run better\n- Comments\n\nIf there's interest- web development is something I can actually do\n(unlike pg development) so I might actually be able to help with\nsomething like this.\n\n- Bucky\n\n",
"msg_date": "Fri, 13 Oct 2006 13:33:43 -0400",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hints proposal"
},
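A minimal sketch of what the submission store described in the preceding message could look like, assuming plain text columns are enough since the immediate goal is just collecting and searching reports; the table and column names here are my own invention, not anything proposed on the list.

  CREATE TABLE plan_report (
      report_id     serial PRIMARY KEY,
      submitted_at  timestamptz NOT NULL DEFAULT now(),
      pg_version    text NOT NULL,   -- output of SELECT version()
      hardware      text,            -- basic hardware info
      conf          text,            -- copy of postgresql.conf
      bad_plan      text NOT NULL,   -- EXPLAIN ANALYZE of the poorly performing query
      better_plan   text,            -- EXPLAIN ANALYZE of anything that ran better
      comments      text
  );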
{
"msg_contents": "On Fri, Oct 13, 2006 at 12:30:24PM -0500, Jim C. Nasby wrote:\n> On Fri, Oct 13, 2006 at 10:23:31AM -0700, Jeff Davis wrote:\n> > Right. And I think the sane ideas are along the lines of estimate\n> > & cost corrections (like Tom is saying).\n> \n> Let me ask this... how long do you (and others) want to wait for\n> those?\n\nThat's a good question, but see below.\n\n> It's great that the planner is continually improving, but it\n> also appears that there's still a long road ahead. Having a\n> dune-buggy to get to your destination ahead of the road might not be\n> a bad idea... :)\n\nWhat evidence do you have that adding per-query hints would take less\ntime and be less work, even in the short term, than the current\nstrategy of continuously improving the planner and optimizer?\n\nCheers,\nD\n-- \nDavid Fetter <[email protected]> http://fetter.org/\nphone: +1 415 235 3778 AIM: dfetter666\n Skype: davidfetter\n\nRemember to vote!\n",
"msg_date": "Fri, 13 Oct 2006 10:43:57 -0700",
"msg_from": "David Fetter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Hints proposal"
},
{
"msg_contents": "On Fri, 2006-10-13 at 12:30 -0500, Jim C. Nasby wrote:\n> On Fri, Oct 13, 2006 at 10:23:31AM -0700, Jeff Davis wrote:\n> > Right. And I think the sane ideas are along the lines of estimate & cost\n> > corrections (like Tom is saying).\n> \n> Let me ask this... how long do you (and others) want to wait for those?\n> It's great that the planner is continually improving, but it also\n> appears that there's still a long road ahead. Having a dune-buggy to get\n> to your destination ahead of the road might not be a bad idea... :)\n\nFair enough. I can wait indefinitely right now, because I don't have any\nserious problems with the planner as-is.\n\nI am trying to empathize with people who are desperate to force plans\nsometimes. Your original proposal included hints in the client queries.\nI suggested that regexes on the server can accomplish the same goal\nwhile avoiding a serious drawback. Don't you think some kind of server\nmatching rule is better?\n\nI think an idea to get the hints into the server, regardless of the\ntypes of hints you want to use, make it more likely to be accepted.\nDon't you think that's a better road to take?\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Fri, 13 Oct 2006 10:44:41 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Hints proposal"
},
{
"msg_contents": "Joshua D. Drake wrote:\n> Jim C. Nasby wrote:\n> \n>> On Fri, Oct 13, 2006 at 10:23:31AM -0700, Jeff Davis wrote:\n>> \n>>> Right. And I think the sane ideas are along the lines of estimate & cost\n>>> corrections (like Tom is saying).\n>>> \n>> Let me ask this... how long do you (and others) want to wait for those?\n>> It's great that the planner is continually improving, but it also\n>> appears that there's still a long road ahead. Having a dune-buggy to get\n>> to your destination ahead of the road might not be a bad idea... :)\n>> \n>\n> It's all about resources Jim.. I have yet to see anyone step up and\n> offer to help work on the planner in this thread (except Tom of course).\n>\n>\n> \n\nIt's worse than that. Dune buggies do not run cost free. They require \noil, petrol, and maintenance. Somebody wants to build a Maserati and you \nwant to divert resources to maintaining dune buggies?\n\ncheers\n\nandrew\n\n",
"msg_date": "Fri, 13 Oct 2006 13:58:33 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Hints proposal"
},
{
"msg_contents": "Jim C. Nasby wrote:\n> On Fri, Oct 13, 2006 at 10:23:31AM -0700, Jeff Davis wrote:\n>> Right. And I think the sane ideas are along the lines of estimate & cost\n>> corrections (like Tom is saying).\n> \n> Let me ask this... how long do you (and others) want to wait for those?\n\nwell - we waited and got other features in the past and we will wait and\nget new features in the future too...\n\n> It's great that the planner is continually improving, but it also\n> appears that there's still a long road ahead. Having a dune-buggy to get\n> to your destination ahead of the road might not be a bad idea... :)\n\nwell the planner has improved dramatically in the last years - we have\napps here that are magnitudes faster with 8.1(even faster with 8.2) then\nthey were with 7.3/7.4 - most of that is pure plain planner improvements.\n\nAnd I don't really believe that adding proper per statement hint support\nis any easier then continuing to improve the planner or working on (imho\nmuch more useful) improvements to the statistics infrastructure or\nfunctionality to tweak the statistics usage of the planner.\nThe later however would prove to be much more useful for most of the\ncurrent \"issues\" and have benefits for most of the userbase.\n\n\nStefan\n",
"msg_date": "Fri, 13 Oct 2006 20:07:17 +0200",
"msg_from": "Stefan Kaltenbrunner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Hints proposal"
},
{
"msg_contents": "Jim,\n\n> Well, that's not what I said (my point being that until the planner and\n> stats are perfect you need a way to over-ride them)... but I've also\n> never said hints would be faster or easier than stats modification (I\n> said I hope they would).\n\nYes, you did. Repeatedly. On this and other threads, you've made the \nstatement at least three times that per-query hints are the only way to go \nfor 8.3. Your insistence on this view has been so strident that if I \ndidn't know you better, I would assume some kind of hidden agenda.\n\nStop harping on the \"per-query hints are the true way and the only way\", or \nprepare to have people start simply ignoring you.\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n",
"msg_date": "Fri, 13 Oct 2006 15:57:23 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hints proposal"
},
{
"msg_contents": "Andreas,\n\n> I think we need to more precisely define the problems of our system with\n> point in time statistics\n>\n> -- no reaction to degree of other concurrent activity\n> -- no way to react to abnormal skew that only persists for a very short\n> duration\n> -- too late reaction to changing distribution (e.g. current date column\n> when a new year starts)\n> \tand the variant: too late adaption when a table is beeing filled\n> -- missing cost/selectivity estimates for several parts of the system\n\nHow would we manage point-in-time statistics? How would we collect them & \nstore them? I think this is an interesting idea, but very, very hard to \ndo ...\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n",
"msg_date": "Fri, 13 Oct 2006 15:59:17 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hints proposal"
},
{
"msg_contents": "Csaba,\n\n> And upgrades are always causing breakage, I didn't have one upgrade\n> without some things to fix, so I would expect people is expecting that.\n> And that's true for Oracle too, our oracle guys always have something to\n> fix after an upgrade. And I repeat, I always had something to fix for\n> postgres too on all upgrades I've done till now.\n\nReally? Since 7.4, I've been able to do most upgrades without any \ntroubleshooting. \n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n",
"msg_date": "Fri, 13 Oct 2006 16:34:32 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Hints proposal"
},
{
"msg_contents": "On Fri, Oct 13, 2006 at 03:57:23PM -0700, Josh Berkus wrote:\n> Jim,\n> \n> > Well, that's not what I said (my point being that until the planner and\n> > stats are perfect you need a way to over-ride them)... but I've also\n> > never said hints would be faster or easier than stats modification (I\n> > said I hope they would).\n> \n> Yes, you did. Repeatedly. On this and other threads, you've made the \n> statement at least three times that per-query hints are the only way to go \n> for 8.3. Your insistence on this view has been so strident that if I \n> didn't know you better, I would assume some kind of hidden agenda.\n\nLet me clarify, because that's not what I meant. Right now, there's not\neven a shadow of a design for anything else, and this is a tough nut to\ncrack. That means it doesn't appear that anything else could be done for\n8.3. If I'm wrong, great. If not, we should get something in place for\nusers now while we come up with something better.\n\nSo, does anyone out there have a plan for how we could give user's the\nability to control the planner at a per-table level in 8.3 or even 8.4?\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Fri, 13 Oct 2006 18:36:15 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Hints proposal"
},
{
"msg_contents": "Jim C. Nasby wrote:\n\n> So, does anyone out there have a plan for how we could give user's the\n> ability to control the planner at a per-table level in 8.3 or even 8.4?\n\nPer-table level? Some of the problems that have been put forward have\nto do with table combinations (for example selectivity of joins), so not\nall problems will be solved with a per-table design.\n\nI think if it were per table, you could get away with storing stuff in\npg_statistics or some such. But how do you express statistics for\njoins? How do you express cross-column correlation?\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Fri, 13 Oct 2006 21:29:01 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hints proposal"
},
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> Let me clarify, because that's not what I meant. Right now, there's not\n> even a shadow of a design for anything else, and this is a tough nut to\n> crack.\n\nI think you are not exactly measuring on a level playing field. On the\ntextually-embedded-hints side, I see a very handwavy suggestion of a\nsyntax and absolutely nothing about how it might be implemented --- in\nparticular, nothing about how the information would be transmitted\nthrough to the planner, and nothing about exactly how the planner would\nuse it if it had it. (No, I don't think \"the planner will obey the\nhints\" is an implementation sketch.) On the other side, the concept of\nsystem catalog(s) containing overrides for statistical or costing\nestimates is pretty handwavy too, but at least it's perfectly clear\nwhere it would plug into the planner: before running one of the current\nstats estimation or costing functions, we'd look for a matching override\ncommand in the catalogs. The main question seems to be what we'd like\nto be able to match on ... but that doesn't sound amazingly harder than\nspecifying what an embedded hint does.\n\nIMO a textual hint facility will actually require *more* infrastructure\ncode to be written than what's being suggested for alternatives.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 14 Oct 2006 00:19:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hints proposal "
},
{
"msg_contents": "Josh Berkus wrote:\n> I actually think the way to attack this issue is to discuss the kinds of \n> errors the planner makes, and what tweaks we could do to correct them. \n> Here's the ones I'm aware of:\n> \n> -- Incorrect selectivity of WHERE clause\n> -- Incorrect selectivity of JOIN\n> -- Wrong estimate of rows returned from SRF\n> -- Incorrect cost estimate for index use\n> \n> Can you think of any others?\n\nThe one that started this discussion: Lack of cost information for functions. I think this feature is a good idea independent of the whole HINTS discussion.\n\nAt a minimum, a rough categorization is needed, such as \"Lighning fast / Fast / Medium / Slow / Ludicrously slow\", with some sort if milliseconds or CPU cycles associated with each category. Or perhaps something like, \"This is (much faster|faster|same as|slower|much slower) than reading a block from the disk.\"\n\nIf I understand Tom and others, the planner already is capable of taking advantage of this information, it just doesn't have it yet. It could be part of the CREATE FUNCTION command.\n\n CREATE OR REPLACE FUNCTION foobar(text, text, text) RETURNS text\n AS '/usr/local/pgsql/lib/foobar.so', 'foobar'\n COST LUDICROUSLY_SLOW\n LANGUAGE 'C' STRICT;\n\nBetter yet ('tho I have no idea how hard this would be to implement...) would be an optional second function with the same parameter signature as the main function, but it would return a cost estimate:\n\n CREATE OR REPLACE FUNCTION foobar(text, text, text) RETURNS text\n AS '/usr/local/pgsql/lib/foobar.so', 'foobar'\n COST foobar_cost\n LANGUAGE 'C' STRICT;\n\nThe planner could call it with the same parameters it was about to use, and get an accurate estimate for the specific operation that is about to be done. In my particular case (running an NP-complete problem), there are cases where I can determine ahead of time that the function will be fast, but in most cases it is *really* slow.\n\nCraig\n",
"msg_date": "Sun, 15 Oct 2006 16:55:54 -0700",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hints proposal"
},
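Riffing on the second example in the preceding message: the COST-function clause itself is hypothetical, but the estimator Craig describes could already be written as an ordinary SQL function. Everything below - the name foobar_cost, the signature, and the "long third argument means expensive" heuristic - is an invented illustration of the shape such an estimator might take, not something any released planner knows how to call.

  CREATE OR REPLACE FUNCTION foobar_cost(text, text, text) RETURNS float8 AS $$
      -- crude static estimate: a large third argument implies the NP-complete
      -- search blows up, so report a huge cost; otherwise report a cheap one
      SELECT CASE WHEN length($3) > 100
                  THEN 1e9::float8
                  ELSE 10::float8
             END;
  $$ LANGUAGE sql IMMUTABLE STRICT;

The interesting design question is the one raised above: whether the planner would invoke such a function per call with the actual arguments, or fall back to a flat per-call cost when the arguments are not known until run time.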
{
"msg_contents": "So let's cut to the bone: If someone thinks a proposal is a bad idea, and they're volunteering their time on an open-source project, why would they implement the proposal?\n\nIn all the heat and smoke, I believe there are two basic conclusions we all agree on.\n\n1. Optimizer:\n a) A perfect optimizer would be a wonderful thing\n b) Optimization is a hard problem\n c) Any problem that can be solve by improving the optimizer *should*\n be solved by improving the optimizer.\n\n2. Hints\n a) On a aesthetic/theoretical level, hints suck. They're ugly and rude\n b) On a practical level, introducing hints will cause short- and long-term problems\n c) Hints would help DBAs solve urgent problems for which there is no other solution\n\nThe disagreements revolve around the degree to which 1 conflicts with 2.\n\n1. Developers feel very strongly about 2(a) and 2(b).\n2. DBAs \"in the trenches\" feel very strongly about 2(c).\n\nSo my question is: Is there any argument that can be made to persuade those of you who are volunteering your time on the optimizer to even consider a HINTS proposal? Has all this discussion changed your perspective on 2(c), and why it really matters to some of us? Are we just wasting our time, or is this a fruitful discussion?\n\nThanks,\nCraig\n",
"msg_date": "Sun, 15 Oct 2006 17:25:31 -0700",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hints proposal"
},
{
"msg_contents": "On Sun, Oct 15, 2006 at 05:25:31PM -0700, Craig A. James wrote:\n> So my question is: Is there any argument that can be made to persuade those \n> of you who are volunteering your time on the optimizer to even consider a \n> HINTS proposal? Has all this discussion changed your perspective on 2(c), \n> and why it really matters to some of us? Are we just wasting our time, or \n> is this a fruitful discussion?\n\nThey're waiting for an idea that captures their imagination. So far,\nit seems like a re-hashing of old ideas that have been previously shot\ndown, none of which seem overly imaginative, or can be shown to\nprovide significant improvement short term or long term... :-)\n\nHaha. That's my take on it. Sorry if it is harsh.\n\nTo get very competent people to volunteer their time, you need to make\nthem believe. They need to dream about it, and wake up the next morning\nfilled with a desire to try out some of their ideas.\n\nYou need to brain wash them... :-)\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n",
"msg_date": "Mon, 16 Oct 2006 07:00:23 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Hints proposal"
},
{
"msg_contents": "Craig A. James wrote:\n>\n> \n> 2. Hints\n> a) On a aesthetic/theoretical level, hints suck. They're ugly and rude\n> b) On a practical level, introducing hints will cause short- and \n> long-term problems\n> c) Hints would help DBAs solve urgent problems for which there is no \n> other solution\n\nPretty good summary!\n\nMaybe there should be a 2d), 2e) and 2f).\n\n2d) Hints will damage the ongoing development of the optimizer by \nreducing or eliminating test cases for its improvement.\n2e) Hints will divert developer resource away from ongoing development \nof the optimizer.\n2f) Hints may demoralize the developer community - many of whom will \nhave been attracted to Postgres precisely because this was a realm where \ncrude solutions were discouraged.\n\nI understand that these points may seem a bit 'feel-good' and intangible \n- especially for the DBA's moving to Pg from Oracle, but I think they \nillustrate the mindset of the Postgres developer community, and the \ndeveloper community is, after all - the primary reason why Pg is such a \ngood product.\n\nOf course - if we can find a way to define 'hint like' functionality \nthat is more in keeping with the 'Postgres way' (e.g. some of the \nrelation level statistical additions as discussed), then some of 2d-2f) \nneed not apply.\n\nBest wishes\n\nMark\n\n\n\n\n\n\n",
"msg_date": "Tue, 17 Oct 2006 01:34:30 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hints proposal"
},
{
"msg_contents": "> 2d) Hints will damage the ongoing development of the optimizer by \n> reducing or eliminating test cases for its improvement.\n\nYou have no evidence for this. The mindset of the postgres community you\ncite further below usually mandates that you say things if you have\nevidence for them... and this one could be even backwards, by putting\nsuch a tool in normal mortals hands that they can experiment with\nexecution plans to see which one works better, thus giving more data to\nthe developers than it is possible now. This is of course a speculation\ntoo, but not at all weaker than yours.\n\n> 2e) Hints will divert developer resource away from ongoing development \n> of the optimizer.\n\nThis is undebatable, although the long term cost/benefit is not clear.\nAnd I would guess simple hinting would not need a genius to implement it\nas planner optimizations mostly do... so it could possibly be done by\nsomebody else than the core planner hackers (is there any more of them\nthan Tom ?), and such not detract them too much from the planner\noptimization tasks.\n\n> 2f) Hints may demoralize the developer community - many of whom will \n> have been attracted to Postgres precisely because this was a realm where \n> crude solutions were discouraged.\n\nI still don't get it why are you so against hints. Hints are a crude\nsolution only if you design them to be like that... otherwise they are\njust yet another tool to get the work done, preferably now. \n\n> I understand that these points may seem a bit 'feel-good' and intangible \n> - especially for the DBA's moving to Pg from Oracle, but I think they \n> illustrate the mindset of the Postgres developer community, and the \n> developer community is, after all - the primary reason why Pg is such a \n> good product.\n\nI fail to see why would be a \"hinted\" postgres an inferior product...\n\n> Of course - if we can find a way to define 'hint like' functionality \n> that is more in keeping with the 'Postgres way' (e.g. some of the \n> relation level statistical additions as discussed), then some of 2d-2f) \n> need not apply.\n\nI bet most of the users who wanted hints are perfectly fine with any\nvariations of it, if it solves the problems at hand. \n\nCheers,\nCsaba.\n\n\n",
"msg_date": "Mon, 16 Oct 2006 15:27:46 +0200",
"msg_from": "Csaba Nagy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hints proposal"
},
{
"msg_contents": "I haven't weighed in on this because 1) I'm not a postgresql developer, \nand am firmly of the opinion that they who are doing the work get to \ndecide how the work gets done (especially when you aren't paying them \nfor the work), and 2) I don't have any experience as a developer with \nhints, and thus don't know any of the pluses or minuses. I do, however, \nknow my fellow developers. As general rules:\n\n1) If you give developers a feature, they will use it. The implicit \nassumption seems to be that if you're given a feature, you've been given \nit for a good reason, use it whenever possible. Therefor, any hints \nfeature *will* be used widely an in \"inappropriate\" circumstances. \nProtestations that this wasn't what the feature was meant for will fall \non deaf ears.\n\n2) Taking away a feature is painfull. Of course the developers will \n*say* that they're doing it in a portable way that'll be easy to change \nin the future, but we lie like cheap rugs. This is is often just a case \nof stupidity and/or ignorance, but even the best developers can get \ncaught- 99 out of 100 uses of the feature are portable and easy to \nupdate, it's #100 that's a true pain, and #100 was an accident, or a \nkludge to get the app out the door under shipping schedule, etc. Taking \naway, or breaking, a feature then just becomes a strong disincentive to \nupgrade.\n\n3) Developers are often astonishingly bad at predicting what is or is \nnot good for performance. A good example of this for databases is the \nassumption that index scans are always faster than sequential scans. \nThe plan the programmer thinks they want is often not the plan the \nprogrammer really wants. Especially considering generally the program \nhas so many other things they're having to deal with (the \"it's hard to \nremember you're out to drain the swamp when you're up to your ass in \nalligators\" problem) that we generally don't have the spare brainpower \nleft over for query optimization. Thus the strong tendancy to want to \nadopt simple, rough and ready, mostly kinda true rules (like \"index \nscans are always faster than sequential scans\") about what is or is not \ngood for performance.\n\nOr, in shorter forms:\n1) If you make it convient to use, expect it to be used a lot. If it \nshouldn't be used a lot, don't make it convient.\n2) Breaking features means that people won't upgrade.\n3) Programmers are idiots- design accordingly.\n\nBrian\n\n",
"msg_date": "Mon, 16 Oct 2006 11:36:23 -0400",
"msg_from": "Brian Hurt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hints proposal"
},
{
"msg_contents": "Brian Hurt wrote:\n> Or, in shorter forms:\n> 1) If you make it convient to use, expect it to be used a lot. If it \n> shouldn't be used a lot, don't make it convient.\n> 2) Breaking features means that people won't upgrade.\n> 3) Programmers are idiots- design accordingly.\n\nThe PostgreSQL project has had a philosophy of trying to limit user\nchoice when we can _usually_ make the right choice automatically. \nHistorically Oracle and others have favored giving users more choices,\nbut this adds complexity when using the database.\n\n-- \n Bruce Momjian [email protected]\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Mon, 16 Oct 2006 12:43:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hints proposal"
},
{
"msg_contents": "On Monday 16 October 2006 10:36, Brian Hurt wrote:\n\n> ... Therefor, any hints feature *will* be used widely\n> and in \"inappropriate\" circumstances. Protestations that\n> this wasn't what the feature was meant for will fall on \n> deaf ears. \n\nI don't really care about this topic, as I've used Oracle and never \nactually made use of its hint system, but I liked knowing it was there. \nBut what's better here, asking the optimizer to use what is tested with \nexplain analyze to be a better plan, or to convolute a query so \nhorribly it's hardly recognizable, in an effort to \"trick\" the \noptimizer?\n\nSomeone made a note earlier that any hints made irrelevant by optimizer \nimprovements would probably need to be removed, citing that as a \nmaintenence nightmare. But the same point holds for queries that have \nbeen turned into unmaintainable spaghetti or a series of cursors to \ncircumvent the optimizer. Personally, I'd rather grep my code for a \ncouple deprecated key-words than re-check every big query between \nupgrades to see if any optimizer improvements have been implemented.\n\nQuery planning is a very tough job, and SQL is a very high-level \nlanguage, making it doubly difficult to get the intended effect of a \nquery across to the optimizer. C allows inline assembler for exactly \nthis reason; sometimes the compiler is wrong about something, or \nexperience and testing shows a better way is available that no compiler \ntakes into account. As such a high-level language, SQL is inherently \nflawed for performace tuning, relying almost entirely on the optimizer \nknowing the best path. Here we have no recourse if the planner is just \nplain wrong.\n\nI almost wish the SQL standards committee would force syntax for sending \nlow-level commands to the optimizer for exactly this reason. C has \nthe \"inline\" keyword, so why can't SQL have something similar? I \nagree, hints are essentially retarded comments to try and persuade the \noptimizer to take a different action... what I'd actually like to see \nis some way of directly addressing the query-planner's API and \ncircumvent SQL entirely for really nasty or otherwise convoluted \nresult-sets, but of course I know that's rather unreasonable.\n\nC'mon, some of us DBAs have math degrees and know set theory... ;)\n\n-- \n\nShaun Thomas\nDatabase Administrator\n\nLeapfrog Online \n807 Greenwood Street \nEvanston, IL 60201 \nTel. 847-440-8253\nFax. 847-570-5750\nwww.leapfrogonline.com\n",
"msg_date": "Mon, 16 Oct 2006 12:00:01 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hints proposal"
},
{
"msg_contents": "Csaba Nagy wrote:\n>> 2d) Hints will damage the ongoing development of the optimizer by \n>> reducing or eliminating test cases for its improvement.\n> \n> You have no evidence for this. \n\nMy evidence (which I think I've mentioned in a couple of previous \npostings), is the experience with the optimizer of that... err.. other \ndatabase that has hints, plus the experience of that (different) other \ndatabase that does not allow them :-) Several others have posted similar \ncomments.\n\n> \n>> 2f) Hints may demoralize the developer community - many of whom will \n>> have been attracted to Postgres precisely because this was a realm where \n>> crude solutions were discouraged.\n> \n> I still don't get it why are you so against hints. Hints are a crude\n> solution only if you design them to be like that... otherwise they are\n> just yet another tool to get the work done, preferably now. \n> \n> \n> I fail to see why would be a \"hinted\" postgres an inferior product...\n> \n\nA rushed. and crude implementation will make it an inferior product - \nnow not every hint advocate is demanding them to be like that, but the \ntone of many of the messages is \"I need hints because they can help me \n*now*, whereas optimizer improvements will take too long...\". That \nsounds to me like a quick fix. I think if we provide hint-like \nfunctionality it must be *part of* our optimizer improvement plan, not \ninstead of it!\n\nNow I may have come on a bit strong about this - and apologies if that's \nthe case, but one of the things that attracted me to Postgres originally \nwas the community attitude of \"doing things properly or sensibly\", I \nthink it would be a great loss - for the product, not just for me - if \nthat changes to something more like \"doing things quickly\".\n\nbest wishes\n\nMark\n\n",
"msg_date": "Tue, 17 Oct 2006 09:25:05 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hints proposal"
},
{
"msg_contents": "On Friday 13 October 2006 12:46, Gregory Stark wrote:\n> Josh Berkus <[email protected]> writes:\n> >> > I actually think the way to attack this issue is to discuss the kinds\n> >> > of errors the planner makes, and what tweaks we could do to correct\n> >> > them. Here's the ones I'm aware of:\n> >> >\n> >> > -- Incorrect selectivity of WHERE clause\n> >> > -- Incorrect selectivity of JOIN\n> >> > -- Wrong estimate of rows returned from SRF\n> >> > -- Incorrect cost estimate for index use\n> >> >\n> >> > Can you think of any others?\n>\n> -- Incorrect estimate for result of DISTINCT or GROUP BY.\n\nYeah, that one is bad. I also ran into one the other day where the planner \ndid not seem to understand the distinctness of a columns values across table \npartitions... \n\n-- \nRobert Treat\nBuild A Brighter LAMP :: Linux Apache {middleware} PostgreSQL\n",
"msg_date": "Tue, 17 Oct 2006 22:18:52 -0400",
"msg_from": "Robert Treat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Hints proposal"
},
{
"msg_contents": "On Thursday 12 October 2006 12:40, Bucky Jordan wrote:\n> >What is it about hinting that makes it so easily breakable with new\n> > versions? I >don't have any experience with Oracle, so I'm not sure how\n> > they screwed logic like >this up. \n>\n> I don't have a ton of experience with oracle either, mostly DB2, MSSQL and\n> PG. So, I thought I'd do some googling, and maybe others might find this\n> useful info.\n>\n> http://asktom.oracle.com/pls/ask/f?p=4950:8:2177642270773127589::NO::F4950_\n>P8_DISPLAYID,F4950_P8_CRITERIA:7038986332061\n>\n> Interesting quote: \"In Oracle Applications development (11i apps - HR, CRM,\n> etc) Hints are strictly forbidden. We find the underlying cause and fix\n> it.\" and \"Hints -- only useful if you are in RBO and you want to make use\n> of an access path.\"\n>\n> Maybe because I haven't had access to hints before, I've never been tempted\n> to use them. However, I can't remember having to re-write SQL due to a PG\n> upgrade either.\n>\n\nWhen it happens it tends to look something like this:\nhttp://archives.postgresql.org/pgsql-performance/2006-01/msg00154.php\n\nFunny that for all the people who claim that improving the planner should be \nthe primary goal that no one ever took interest in the above case. \n\n-- \nRobert Treat\nBuild A Brighter LAMP :: Linux Apache {middleware} PostgreSQL\n",
"msg_date": "Tue, 17 Oct 2006 22:40:19 -0400",
"msg_from": "Robert Treat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hints proposal"
},
{
"msg_contents": "Robert Treat <[email protected]> writes:\n> When it happens it tends to look something like this:\n> http://archives.postgresql.org/pgsql-performance/2006-01/msg00154.php\n\n> Funny that for all the people who claim that improving the planner should be \n> the primary goal that no one ever took interest in the above case. \n\nWell, you didn't provide sufficient data for anyone else to reproduce\nthe problem ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Oct 2006 22:55:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hints proposal "
},
{
"msg_contents": "On Tuesday 17 October 2006 22:55, Tom Lane wrote:\n> Robert Treat <[email protected]> writes:\n> > When it happens it tends to look something like this:\n> > http://archives.postgresql.org/pgsql-performance/2006-01/msg00154.php\n> >\n> > Funny that for all the people who claim that improving the planner should\n> > be the primary goal that no one ever took interest in the above case.\n>\n> Well, you didn't provide sufficient data for anyone else to reproduce\n> the problem ...\n>\n\nGeez Tom, cut me some slack... no one even bothered to respond that that post \nwith a \"hey we can't tell cause we need more information\"... \n\nnot that it matters because here is where I reposted the problem with more \ninformation \nhttp://archives.postgresql.org/pgsql-performance/2006-01/msg00248.php\nwhere you'll note that Josh agreed with my thinking that there was an issue \nwith the planner and he specifically asked for comments from you. \n\nAnd here is where I reposted the problem to -bugs\nhttp://archives.postgresql.org/pgsql-bugs/2006-01/msg00134.php where I make \nnote of discussing this with several other people, got Bruce to hazard a \nguess which was debunked, and where I noted to Bruce about 10 days later that \nthere had been no further action and no one had asked for the _sample \ndatabase_ I was able to put together. \n\n-- \nRobert Treat\nBuild A Brighter LAMP :: Linux Apache {middleware} PostgreSQL\n",
"msg_date": "Wed, 18 Oct 2006 12:59:20 -0400",
"msg_from": "Robert Treat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hints proposal"
}
] |
[
{
"msg_contents": "\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Bucky\nJordan\nSent: Thursday, October 12, 2006 2:19 PM\nTo: [email protected]; Jim C. Nasby\nCc: [email protected]; [email protected]\nSubject: Re: [HACKERS] [PERFORM] Hints proposal\n\n> > Well, one nice thing about the per-query method is you can post\nbefore\n> > and after EXPLAIN ANALYZE along with the hints.\n> \n> One bad thing is that application designers will tend to use the hint,\nfix\n> the immediate issue, and never report a problem at all. And query\nhints\n> would not be collectable in any organized way except the query log,\nwhich\n> would then require very sophisticated text parsing to get any useful\n> information at all.\n> \nOr they'll report it when the next version of Postgres \"breaks\" their\napp because the hints changed, or because the planner does something\nelse which makes those hints obsolete.\n\nMy main concern with hints (aside from the fact I'd rather see more\nintelligence in the planner/stats) is managing them appropriately. I\nhave two general types of SQL where I'd want to use hints- big OLAP\nstuff (where I have a lot of big queries, so it's not just one or two\nwhere I'd need them) or large dynamically generated queries (Users\nbuilding custom queries). Either way, I don't want to put them on a\nquery itself.\n\nWhat about using regular expressions, plus, if you have a function\n(views, or any other statement that is stored), you can assign a rule to\nthat particular function. So you get matching, plus explicit selection.\nThis way it's easy to find all your hints, turn them off, manage them,\netc. (Not to mention dynamically generated SQL is ugly enough without\nhaving to put hints in there).\n\n- Bucky\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings\n\n\n",
"msg_date": "Sat, 14 Oct 2006 14:18:45 -0700",
"msg_from": "\"Mischa Sandberg\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Hints proposal"
}
] |
[
{
"msg_contents": "[Matthew T. O'Connor - Wed at 02:33:10PM -0400]\n> In addition autovacuum respects the work of manual or cron based \n> vacuums, so if you issue a vacuum right after a daily batch insert / \n> update, autovacuum won't repeat the work of that manual vacuum.\n\nI was experimenting a bit with autovacuum now. To make the least effect\npossible, I started with a too high cost_delay/cost_limit-ratio. The\neffect of this was that autovacuum \"never\" finished the transactions it\nstarted with, and this was actually causing the nightly vacuum to not do\nit's job good enough.\n",
"msg_date": "Sun, 15 Oct 2006 11:52:59 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] autovacuum on a -mostly- r/o table"
},
{
"msg_contents": "Tobias Brox wrote:\n> [Matthew T. O'Connor - Wed at 02:33:10PM -0400]\n> \n>> In addition autovacuum respects the work of manual or cron based \n>> vacuums, so if you issue a vacuum right after a daily batch insert / \n>> update, autovacuum won't repeat the work of that manual vacuum.\n>> \n>\n> I was experimenting a bit with autovacuum now. To make the least effect\n> possible, I started with a too high cost_delay/cost_limit-ratio. The\n> effect of this was that autovacuum \"never\" finished the transactions it\n> started with, and this was actually causing the nightly vacuum to not do\n> it's job good enough.\n\nYeah, I think if the delay settings are too high it can cause problems, \nthat's part of the reason we have yet to turn these on be default since \nwe won't have enough data to suggest good values. Can you tell us what \nsettings you finally settled on?\n\nBTW hopefully for 8.3 we are going to add the concept of maintenance \nwindows to autovacuum, during these periods you can lower the thresholds \nand perhaps even change the delay settings to make autovacuum more \naggressive during the maintenance window. This hopefully will just \nabout eliminate the need for nightly cron based vacuum runs.\n\n",
"msg_date": "Sun, 15 Oct 2006 10:42:34 -0400",
"msg_from": "\"Matthew T. O'Connor\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] autovacuum on a -mostly- r/o table"
},
{
"msg_contents": "[Matthew T. O'Connor - Sun at 10:42:34AM -0400]\n> Yeah, I think if the delay settings are too high it can cause problems, \n> that's part of the reason we have yet to turn these on be default since \n> we won't have enough data to suggest good values. Can you tell us what \n> settings you finally settled on?\n\nI'm still not yet settled, and the project manager is critical to\nautovacuum (adds complexity, no obvious benefits from it, we see from\nthe CPU graphs that it's causing iowait, iowait is bad). We're going to\nrun autovacuum biweekly now to see what effect it has on the server\nload.\n\nI've been using the cost/delay-setting of 200/200 for a week now, and\nI'm going to continue with 100/150 for a while. \n\nAre there any known disadvantages of lowering both values to the extreme\n- say, 20/20 instead of 200/200? That would efficiently mean \"sleep as\noften as possible, and sleep for 1 ms for each cost unit spent\" if\nI've understood the system right.\n\nAre there any logs that can help me, and eventually, are there any\nready-made scripts for checking when autovacuum is running, and\neventually for how long it keeps its transactions? I'll probably write\nup something myself if not.\n\n",
"msg_date": "Sun, 15 Oct 2006 16:52:12 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] autovacuum on a -mostly- r/o table"
},
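A minimal SQL sketch of the kind of ad-hoc check asked about above, for seeing whether a vacuum is currently running and how long its transaction has been open. It assumes the 8.1/8.2-era pg_stat_activity columns (procpid, current_query, query_start) and that stats_command_string is enabled; whether the autovacuum worker itself shows up here varies by version, so treat it as a starting point rather than a ready-made script.

    -- list sessions that appear to be running a VACUUM, with their start time
    SELECT procpid, usename, query_start, now() - query_start AS running_for, current_query
      FROM pg_stat_activity
     WHERE current_query ILIKE '%vacuum%'
       AND current_query NOT ILIKE '%pg_stat_activity%';

Comparing query_start against the cost_delay/cost_limit settings in use makes it easier to spot the "never finishes its transaction" situation described above.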
{
"msg_contents": "On Sun, Oct 15, 2006 at 04:52:12PM +0200, Tobias Brox wrote:\n> Are there any logs that can help me, and eventually, are there any\n> ready-made scripts for checking when autovacuum is running, and\n> eventually for how long it keeps its transactions? I'll probably write\n> up something myself if not.\n\n8.2 adds some stats on when autovac last ran, per-table. I don't\nremember if it reports how long it took to vacuum the table, but that\nwould probably be useful info.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 18 Oct 2006 14:10:27 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] autovacuum on a -mostly- r/o table"
},
{
"msg_contents": "Jim C. Nasby wrote:\n> On Sun, Oct 15, 2006 at 04:52:12PM +0200, Tobias Brox wrote:\n>> Are there any logs that can help me, and eventually, are there any\n>> ready-made scripts for checking when autovacuum is running, and\n>> eventually for how long it keeps its transactions? I'll probably\n>> write up something myself if not.\n> \n> 8.2 adds some stats on when autovac last ran, per-table. I don't\n> remember if it reports how long it took to vacuum the table, but that\n> would probably be useful info. \n\nIt does NOT. It's just the timestamp of the END of the vacuum / analyze.\n\n(I'm the author of the patch).\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 512-248-2683 E-Mail: [email protected]\nUS Mail: 430 Valona Loop, Round Rock, TX 78681-3893\n\n",
"msg_date": "Wed, 18 Oct 2006 15:59:22 -0500",
"msg_from": "\"Larry Rosenman\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] autovacuum on a -mostly- r/o table"
}
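For the per-table statistics added in 8.2, a hedged sketch of the query this refers to; the column names below (last_vacuum, last_autovacuum, last_analyze, last_autoanalyze) are assumed from this thread, and, as noted, each timestamp marks the end of the corresponding run, not how long it took.

    -- when did (auto)vacuum and (auto)analyze last finish, per table?
    SELECT schemaname, relname,
           last_vacuum, last_autovacuum,
           last_analyze, last_autoanalyze
      FROM pg_stat_all_tables
     ORDER BY schemaname, relname;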
] |
[
{
"msg_contents": "Hello,\n\nShridhar Daithankar and Josh Berkus write on\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\nshared_memory\n\n\"\"\"\nThere is one way to decide what is best for you. Set a high value of\nthis parameter and run the database for typical usage. Watch usage of\nshared memory using ipcs or similar tools. A recommended figure would\nbe between 1.2 to 2 times peak shared memory usage.\n\"\"\"\n\nI tried to find a way to do this on windows. Scanning all the lines of\nperfmon memory options, I could not see anithing like \"shared memory\nusage\".\n\nGoogling for \"shared memory usage\" just drove me to some ancient WRONG\ninformation that PostgreSQL is not possible on Windows because of\nlacking shared memory. (guess that was for Windows 95 or similiar)\n\nSo: has anybody a hint how I can check how much shared_memory is\nreally used by PostgreSQL on Windows, to fine tune this parameter?\n\nI learned the hard way that just rising it can lead to a hard\nperformance loss :)\n\nHarald\n\n\n-- \nGHUM Harald Massa\npersuadere et programmare\nHarald Armin Massa\nReinsburgstraße 202b\n70197 Stuttgart\n0173/9409607\n-\nPython: the only language with more web frameworks than keywords.\n",
"msg_date": "Mon, 16 Oct 2006 11:15:58 +0200",
"msg_from": "\"Harald Armin Massa\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "measuring shared memory usage on Windows"
},
{
"msg_contents": "> Hello,\n> \n> Shridhar Daithankar and Josh Berkus write on \n> http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n> \n> shared_memory\n> \n> \"\"\"\n> There is one way to decide what is best for you. Set a high \n> value of this parameter and run the database for typical \n> usage. Watch usage of shared memory using ipcs or similar \n> tools. A recommended figure would be between 1.2 to 2 times \n> peak shared memory usage.\n> \"\"\"\n> \n> I tried to find a way to do this on windows. Scanning all the \n> lines of perfmon memory options, I could not see anithing \n> like \"shared memory usage\".\n> \n> Googling for \"shared memory usage\" just drove me to some \n> ancient WRONG information that PostgreSQL is not possible on \n> Windows because of lacking shared memory. (guess that was for \n> Windows 95 or similiar)\n> \n> So: has anybody a hint how I can check how much shared_memory \n> is really used by PostgreSQL on Windows, to fine tune this parameter?\n> \n> I learned the hard way that just rising it can lead to a hard \n> performance loss :)\n\nNot really sure :) We're talking about anonymous mapped memory, and I\ndon't think perfmon lets you look at that. However, there is no limit to\nit as there often is on Unix - you can map up to whatever the virual RAM\nsize is (2Gb/3Gb dependingo n what boot flag you use, IIRC). You can\nmonitor it as a part of the total memory useage on the server, but\nthere's no way to automatically show the difference between them.\n\n//Magnus\n",
"msg_date": "Mon, 16 Oct 2006 12:17:28 +0200",
"msg_from": "\"Magnus Hagander\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: measuring shared memory usage on Windows"
},
{
"msg_contents": "Magnus,\n\n> So: has anybody a hint how I can check how much shared_memory\n> > is really used by PostgreSQL on Windows, to fine tune this parameter?\n> >\n> > I learned the hard way that just rising it can lead to a hard\n> > performance loss :)\n>\n> Not really sure :) We're talking about anonymous mapped memory, and I\n> don't think perfmon lets you look at that.\n\n\nthanks for the clarification. However,\n\n\"anonymous mapped memory\" site:microsoft.com\n\nturns out 0 (zero) results. And even splitting it up there seems to be\nnearly no information ... is the same thing by any chance also known by\ndifferent names?\n\n> However, there is no limit to it as there often is on Unix - you can map\nup to whatever the virtual RAM\n> size is (2Gb/3Gb dependingo n what boot flag you use, IIRC). You can\n> monitor it as a part of the total memory useage on the server, but\n> there's no way to automatically show the difference between them.\n\nSo the \"performance shock\" with high shared memory gets obvious: memory\nmapped files get swapped to disk. I assume that swapping is nearly\ntransparent for the application, leading to a nice trashing ...\n\nI'll keep on searching...\n\nHarald\n\n\n-- \nGHUM Harald Massa\npersuadere et programmare\nHarald Armin Massa\nReinsburgstraße 202b\n70197 Stuttgart\n0173/9409607\n-\nPython: the only language with more web frameworks than keywords.\n\nMagnus,> So: has anybody a hint how I can check how much shared_memory> is really used by PostgreSQL on Windows, to fine tune this parameter?\n>> I learned the hard way that just rising it can lead to a hard> performance loss :)Not really sure :) We're talking about anonymous mapped memory, and Idon't think perfmon lets you look at that. \nthanks for the clarification. However,\"anonymous mapped memory\" site:microsoft.comturns out 0 (zero) results. And even splitting it up there seems to be nearly no information ... is the same thing by any chance also known by different names? \n> However, there is no limit to it as there often is on Unix - you can map up to whatever the virtual RAM> size is (2Gb/3Gb dependingo n what boot flag you use, IIRC). You can> monitor it as a part of the total memory useage on the server, but\n> there's no way to automatically show the difference between them.So the \"performance shock\" with high shared memory gets obvious: memory mapped files get swapped to disk. I assume that swapping is nearly transparent for the application, leading to a nice trashing ...\nI'll keep on searching...Harald-- GHUM Harald Massapersuadere et programmareHarald Armin MassaReinsburgstraße 202b70197 Stuttgart0173/9409607-Python: the only language with more web frameworks than keywords.",
"msg_date": "Mon, 16 Oct 2006 13:18:40 +0200",
"msg_from": "\"Harald Armin Massa\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: measuring shared memory usage on Windows"
},
{
"msg_contents": "> \t> So: has anybody a hint how I can check how much shared_memory\n> \t> is really used by PostgreSQL on Windows, to fine tune \n> this parameter? \n> \t>\n> \t> I learned the hard way that just rising it can lead to a hard\n> \t> performance loss :)\n> \t\n> \tNot really sure :) We're talking about anonymous mapped \n> memory, and I\n> \tdon't think perfmon lets you look at that. \n> \n> \n> thanks for the clarification. However,\n> \n> \"anonymous mapped memory\" site:microsoft.com\n> \n> turns out 0 (zero) results. And even splitting it up there \n> seems to be nearly no information ... is the same thing by \n> any chance also known by different names? \n\nHmm. Yeah, most likely :) I may have grabbed that name from something\nelse. THe documentation for the call is on\nhttp://windowssdk.msdn.microsoft.com/en-us/library/ms685007(VS.80).aspx,\nwe specifu INVALID_HANDLE_VALUE for hFile, which means:\n\nIf hFile is INVALID_HANDLE_VALUE, the calling process must also specify\na mapping object size in the dwMaximumSizeHigh and dwMaximumSizeLow\nparameters. In this scenario, CreateFileMapping creates a file mapping\nobject of a specified size that the operating system paging file backs,\ninstead of by a named file in the file system.\n\n\n\n> > However, there is no limit to it as there often is on Unix \n> - you can \n> > map up to whatever the virtual RAM size is (2Gb/3Gb \n> dependingo n what \n> > boot flag you use, IIRC). You can monitor it as a part of the total \n> > memory useage on the server, but there's no way to \n> automatically show the difference between them.\n> \n> So the \"performance shock\" with high shared memory gets \n> obvious: memory mapped files get swapped to disk. I assume \n> that swapping is nearly transparent for the application, \n> leading to a nice trashing ... \n\nYes :-)\nThere is a performance manager counter for pages swapped out to disk. If\nthat one goes above very low numbers, you're in trouble...\n\n//Magnus\n",
"msg_date": "Mon, 16 Oct 2006 14:05:05 +0200",
"msg_from": "\"Magnus Hagander\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: measuring shared memory usage on Windows"
},
{
"msg_contents": "Magnus,\n\n> > \"anonymous mapped memory\" site:microsoft.com\n> > turns out 0 (zero) results. And even splitting it up there\n> > seems to be nearly no information ... is the same thing by\n> > any chance also known by different names?\n>\n> Hmm. Yeah, most likely :) I may have grabbed that name from something\n> else. THe documentation for the call is on\n> http://windowssdk.msdn.microsoft.com/en-us/library/ms685007(VS.80).aspx,\n> we specify INVALID_HANDLE_VALUE for hFile, which means:\n\n[...]\nCreateFileMapping creates a file mapping object of a specified size\nthat _the operating system paging file backs_\n[...]\n\nI assume that DWORD dwMaximumSizeHigh and DWORD dwMaximumSizeLow\nget filled with whatever I configure in shared_memory?\n\nMy reading of that function gives me the impression, that this kind of\nshared *memory* is essentially a shared disk file - \"_the operating\nsystem paging file backs_\"\n\nEspecially documentation lines like \"If an application specifies a\nsize for the file mapping object that is larger than the size of the\nactual named file on disk, the file on disk is increased to match the\nspecified size of the file mapping object.\"\n\nreally makes me think that that area is just a comfortable way to\naccess files on disk as memory areas; with the hope of propably better\ncaching then not-memory-mapped files.\n\nThat would explain my disturbing impressions of performance of\nPostgreSQL on win32 rising when lowering shared_memory...\n\nHarald\n-- \nGHUM Harald Massa\npersuadere et programmare\nHarald Armin Massa\nReinsburgstraße 202b\n70197 Stuttgart\n0173/9409607\n-\nPython: the only language with more web frameworks than keywords.\n",
"msg_date": "Mon, 16 Oct 2006 14:18:30 +0200",
"msg_from": "\"Harald Armin Massa\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: measuring shared memory usage on Windows"
},
{
"msg_contents": "\n> really makes me think that that area is just a comfortable way to\n> access files on disk as memory areas; with the hope of propably better\n> caching then not-memory-mapped files.\n\nNo, absolutely not. CreateFileMaping() does much the same thing\nas mmap() in Unix.\n\n> That would explain my disturbing impressions of performance of\n> PostgreSQL on win32 rising when lowering shared_memory...\n\nI don't know what your disturbing impressions are, but no it\ndoesn't explain them.\n\n\n",
"msg_date": "Mon, 16 Oct 2006 06:27:24 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: measuring shared memory usage on Windows"
},
{
"msg_contents": "> > > \"anonymous mapped memory\" site:microsoft.com turns out 0 (zero) \n> > > results. And even splitting it up there seems to be nearly no \n> > > information ... is the same thing by any chance also known by \n> > > different names?\n> >\n> > Hmm. Yeah, most likely :) I may have grabbed that name from \n> something \n> > else. THe documentation for the call is on \n> > \n> http://windowssdk.msdn.microsoft.com/en-us/library/ms685007(VS.80).asp\n> > x, we specify INVALID_HANDLE_VALUE for hFile, which means:\n> \n> [...]\n> CreateFileMapping creates a file mapping object of a \n> specified size that _the operating system paging file backs_ [...]\n> \n> I assume that DWORD dwMaximumSizeHigh and DWORD \n> dwMaximumSizeLow get filled with whatever I configure in \n> shared_memory?\n\nYes. See the code in src/backend/port/win32 for details ;)\n\n\n> My reading of that function gives me the impression, that \n> this kind of shared *memory* is essentially a shared disk \n> file - \"_the operating system paging file backs_\"\n\nYes. Note that this does *not* mean that it actually stores anything in\nthe file. All it means that *if* windows needs to *page out* this data,\nit will do so to the pagefile, so the pagefile has to have enough room\nfor it. With a normal file, it would be paged out to the file instead of\nthe pagefile. But as long as there is enough free memory around, it will\nstay in RAM.\n\nIf a specific part of shared memory (the mmaped pagefile) is not\naccessed in a long time, it will get swapped out to the pagefile, yes.\nAnd I don't beleive there is a way to make that not happen.\n\n\n> Especially documentation lines like \"If an application \n> specifies a size for the file mapping object that is larger \n> than the size of the actual named file on disk, the file on \n> disk is increased to match the specified size of the file \n> mapping object.\"\n\nThis is irrelevant, because we are not mapping a file.\n\n\n> really makes me think that that area is just a comfortable \n> way to access files on disk as memory areas; with the hope of \n> propably better caching then not-memory-mapped files.\n\nThat shows that you don't really know how the memory manager in NT+\nworks ;-) *ALL* normal file I/O is handled through the memory manager\n:-) So yes, they are both different access methods to the memory\nmanager, really.\n\n\n> That would explain my disturbing impressions of performance \n> of PostgreSQL on win32 rising when lowering shared_memory...\n\nNot exactly. I can still see such a thing happening in some cases, but\nnot because all our shared memory actually hit disks. We'd be *dead* on\nperformance if it did.\n\n//Magnus\n",
"msg_date": "Mon, 16 Oct 2006 14:30:56 +0200",
"msg_from": "\"Magnus Hagander\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: measuring shared memory usage on Windows"
},
{
"msg_contents": "\n> I learned the hard way that just rising it can lead to a hard\n> performance loss :)\n\nI looked back in the list archives to try to find your post on the\nunderlying problem, but could only find this rather terse sentence.\nIf you have more detailed information please post or point me at it.\n\nBut...my first thought is that you have configured the shared memory\nregion so large that the system as a whole can not fit all the working set\nsizes for all running processes in to physical memory. This is a common\npitfall for databases with caches implemented as mapped shared\nuser space regions (which is basically all databases).\n\nFor example, if you have 1G of RAM on the box, you can't\nconfigure a cache of 900 meg and expect things to work well.\nThis is because the OS and associated other stuff running on\nthe box will use ~300megs. The system will page as a result.\n\nThe only sure fire way I know of to find the absolute maximum\ncache size that can be safely configured is to experiment with\nlarger and larger sizes until paging occurs, then back off a bit.\n\n\n\n\n\n",
"msg_date": "Mon, 16 Oct 2006 06:55:58 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: measuring shared memory usage on Windows"
},
{
"msg_contents": "David,\n\n> For example, if you have 1G of RAM on the box, you can't\n> configure a cache of 900 meg and expect things to work well.\n> This is because the OS and associated other stuff running on\n> the box will use ~300megs. The system will page as a result.\n\nOvercommitting of memory leads to trashing, yes, that is also my experience.\n\n> The only sure fire way I know of to find the absolute maximum\n> cache size that can be safely configured is to experiment with\n> larger and larger sizes until paging occurs, then back off a bit.\n\nYeah, I know the trial and error method. But I also learned that\nreading the manuals and documentation often helps.\n\nSo after fastreading the various PostgreSQL tuning materials, I came\naccross formulas to calculate a fine starting point for shared memory\nsize; and the recommendation to check with shared_memory information\ntools if that size is okay.\n\nAnd THAT is exactly the challenge of this thread: I am searching for\ntools to check shared memory usage on Windows. ipcs is not available.\nAnd neither Magnus nor Dave, both main contributors of the win32 port\nof PostgreSQL, and both way wiser concerning Windows internas then me,\nknow of some :(\n\nThe challenge below that: I maintain a win32 PostgreSQL server, which\ngets slow every 3-4 weeks. After restarting it runs perfect, for again\n3-4 weeks.\n\nThe Oracle-guys at the same customer solved a similiar problem by\nsimply restarting Oracle every night. But that would be not good\nenough for my sence of honour :)\n\nThanks for your thoughts,\n\nHarald\n\n-- \nGHUM Harald Massa\npersuadere et programmare\nHarald Armin Massa\nReinsburgstraße 202b\n70197 Stuttgart\n0173/9409607\n-\nPython: the only language with more web frameworks than keywords.\n",
"msg_date": "Mon, 16 Oct 2006 22:13:38 +0200",
"msg_from": "\"Harald Armin Massa\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "measuring shared memory usage on Windows"
},
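One platform-independent way to approximate how much of the configured shared_buffers is actually being used — offered only as a sketch, and assuming the contrib/pg_buffercache module is installed (its exact column set differs between versions, but unused buffers show a NULL relfilenode):

    -- shared buffers currently holding a page vs. the total allocated
    SELECT count(relfilenode) AS buffers_in_use,
           count(*) AS buffers_total
      FROM pg_buffercache;

This does not answer the Windows-specific question about the mapped segment itself, but it at least shows whether the workload ever fills the buffers that shared_buffers reserves.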
{
"msg_contents": "Magnus,\n\n\n> That shows that you don't really know how the memory manager in NT+\n> works ;-) *ALL* normal file I/O is handled through the memory manager\n> :-) So yes, they are both different access methods to the memory\n> manager, really.\n\n\"don't really\" is a overstatement, I do not know at all how the memory\nmanager works in NT+. All I learned is \"Inside Windows NT\" of H.\nCuster from 1993 :)\n\nSo, just to make sure I understood correctly:\n\nIf PostgreSQL reads a file from disk, Windows NT does this file I/O\nthough the same memory manager than when PostgreSQL puts parts of this\nread file [for example an index segment] into shared memory - which is\nnothing but a file, that usually stays in main memory.\n\nCorrect so far?\n\nI continued from this thoughts:\n\nlets say there is 500MB memory available, we have 100MB of\nshared_memory configured.\nNow PostgreSQL reads 100MB from a file - memory manager takes 100MB\nmemory to fullfill this file access (optimizations aside)\n\nNow PostgreSQL reshuffles that 100MB and decides: \"hmmmm, that may be\nvaluable for ALL of the currently running postgres.exe\" and pushes\nthose 100MB into shared memory for all to use. It caches the 100MB - a\nfine chunk of an index.\n\n From this kind of understanding, memory manager has 200MB in use: the\n100MB from the file read, and the 100MB of shared memory.\n\nOf course the 100MB of the file in memory manager will get flushed soon.\n\nNow, lets restrict PostgreSQL: I only give the minimum amout of shared\nmemory. It will NOT cache those 100MB in shared memory.\n\nBut: PostgreSQL really was correct. The other 20 postgres.exe access\nthe same file on a regular basis. Won't memory manager keep that file\n\"cached\" in RAM anyway?\n\nI try my theories :)) and contrary to all wisdom from all PostgreSQL\ntuning recommendations reconfigured shared memory nearly to the\nminimum: 1000 for maximum of 400 concurrent connections. (800 would be\nminimum). Single user performance was fine, now I am looking forward\nto full user scenario tomorrow.\n\nI will keep you posted.\n\nHarald\n\n-- \nGHUM Harald Massa\npersuadere et programmare\nHarald Armin Massa\nReinsburgstraße 202b\n70197 Stuttgart\n0173/9409607\n-\nPython: the only language with more web frameworks than keywords.\n",
"msg_date": "Mon, 16 Oct 2006 22:30:27 +0200",
"msg_from": "\"Harald Armin Massa\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: measuring shared memory usage on Windows"
},
{
"msg_contents": "Harald Armin Massa wrote:\n>\n> Yeah, I know the trial and error method. But I also learned that\n> reading the manuals and documentation often helps.\n> \n> So after fastreading the various PostgreSQL tuning materials, I came\n> accross formulas to calculate a fine starting point for shared memory\n> size; and the recommendation to check with shared_memory information\n> tools if that size is okay.\n> \n> And THAT is exactly the challenge of this thread: I am searching for\n> tools to check shared memory usage on Windows. ipcs is not available.\n> And neither Magnus nor Dave, both main contributors of the win32 port\n> of PostgreSQL, and both way wiser concerning Windows internas then me,\n> know of some :(\n> \n>\n\nWould it help to have the postmaster report what the shared memory \nallocation is when it starts up? (this won't help with the activity \nstuff, but at least you would know for sure how *much* you have to use).\n\nIf I understand this correctly,\n\nShmemSegHdr->totalsize\n\nis what we have allocated, so making the startup section of bootstrap.c \nelog(LOG) could work.\n\nCheers\n\nMark\n\n",
"msg_date": "Thu, 19 Oct 2006 17:58:47 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: measuring shared memory usage on Windows"
},
{
"msg_contents": "Mark,\n\n> And THAT is exactly the challenge of this thread: I am searching for\n> > tools to check shared memory usage on Windows. ipcs is not available.\n> > And neither Magnus nor Dave, both main contributors of the win32 port\n> > of PostgreSQL, and both way wiser concerning Windows internas then me,\n> > know of some :(\n\n\n\nWould it help to have the postmaster report what the shared memory\n> allocation is when it starts up? (this won't help with the activity\n> stuff, but at least you would know for sure how *much* you have to use).\n>\n> That would be of no use ... I am quite sure that PostgreSQL allocates the\nmemory as specified; that is: as much as is written in postgresql.conf.\n\nThe interesting part is usage of shared memory through the workload.\n\nHarald\n\n-- \nGHUM Harald Massa\npersuadere et programmare\nHarald Armin Massa\nReinsburgstraße 202b\n70197 Stuttgart\n0173/9409607\n-\nPython: the only language with more web frameworks than keywords.\n\nMark,> And THAT is exactly the challenge of this thread: I am searching for\n> tools to check shared memory usage on Windows. ipcs is not available.> And neither Magnus nor Dave, both main contributors of the win32 port> of PostgreSQL, and both way wiser concerning Windows internas then me,\n> know of some :( Would it help to have the postmaster report what the shared memory\nallocation is when it starts up? (this won't help with the activitystuff, but at least you would know for sure how *much* you have to use).That would be of no use ... I am quite sure that PostgreSQL allocates the memory as specified; that is: as much as is written in \npostgresql.conf. The interesting part is usage of shared memory through the workload. Harald-- GHUM Harald Massapersuadere et programmareHarald Armin MassaReinsburgstraße 202b70197 Stuttgart\n0173/9409607-Python: the only language with more web frameworks than keywords.",
"msg_date": "Fri, 20 Oct 2006 10:00:17 +0200",
"msg_from": "\"Harald Armin Massa\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: measuring shared memory usage on Windows"
},
{
"msg_contents": "Performance readers ....\n\nI follow up my promise:\n\n> I try my theories :)) and contrary to all wisdom from all PostgreSQL\n> tuning recommendations reconfigured shared memory nearly to the\n> minimum: 1000 for maximum of 400 concurrent connections. (800 would be\n> minimum). Single user performance was fine, now I am looking forward\n> to full user scenario tomorrow.\n>\n> I will keep you posted.\n>\n> Harald\n\n\nI went even further down,\n\nmax_connections = 200 #\nshared_buffers = 400 # min 16 or max_connections*2, 8KB each\n\nto the minimum allowed value of shared_buffers. And the response times are\nbetter then ever before (with 10.000, 20.000, 5.000 and 40.000shared_buffers):\n\nAn application-level response dropped from 16 / 9.5 seconds (first run /\ncached run) to 12 / 6.5 average runtime. That response time is mainly\ndependend on SQL performance, and esp. the drop is caused by this change.\n\nMoreover, the columns \"swapped out memory\" in task manager stay low at ~26k\nper postgres.exe, compared to ~106k as my shared_buffers where at 10.000.\n\nThe \"memory\" column of postgres.exe in task manager process still grows up\nto >10.000K, while now \"virtual memory\" stays ~3.600k per process\n\nSo: in this configuration / workload:\n-> windows 2k3\n-> small (~0,4GB) Database\n-> rather complex queries\n-> ~1 GB memory\n-> running in virtual machine\n\nand\n-> windows xp\n-> small (~0,4GB) Database\n-> ~0,5 GB memory\n-> rather complex queries\n-> running native (laptops)\n\nI could verify a substantial speed gain in single and many user situations\nby lowering shared_buffers to the allowed minimum.\n\nHarald\n\n-- \nGHUM Harald Massa\npersuadere et programmare\nHarald Armin Massa\nReinsburgstraße 202b\n70197 Stuttgart\n0173/9409607\n-\nPython: the only language with more web frameworks than keywords.\n\nPerformance readers ....I follow up my promise:I try my theories :)) and contrary to all wisdom from all PostgreSQL\ntuning recommendations reconfigured shared memory nearly to theminimum: 1000 for maximum of 400 concurrent connections. (800 would beminimum). Single user performance was fine, now I am looking forwardto full user scenario tomorrow.\nI will keep you posted.HaraldI went even further down,max_connections = 200 # shared_buffers = 400 # min 16 or max_connections*2, 8KB each to the minimum allowed value of shared_buffers. And the response times are better then ever before (with \n10.000, 20.000, 5.000 and 40.000 shared_buffers):An application-level response dropped from 16 / 9.5 seconds (first run / cached run) to 12 / 6.5 average runtime. That response time is mainly dependend on SQL performance, and esp. 
the drop is caused by this change.\nMoreover, the columns \"swapped out memory\" in task manager stay low at ~26k per postgres.exe, compared to ~106k as my shared_buffers where at 10.000.The \"memory\" column of postgres.exe in task manager process still grows up to >\n10.000K, while now \"virtual memory\" stays ~3.600k per processSo: in this configuration / workload: -> windows 2k3-> small (~0,4GB) Database-> rather complex queries-> ~1 GB memory \n-> running in virtual machineand -> windows xp\n-> small (~0,4GB) Database\n-> ~0,5 GB memory -> rather complex queries-> running native (laptops)I could verify a substantial speed gain in single and many user situations by lowering shared_buffers to the allowed minimum.\nHarald\n-- GHUM Harald Massapersuadere et programmareHarald Armin MassaReinsburgstraße 202b70197 Stuttgart0173/9409607-Python: the only language with more web frameworks than keywords.",
"msg_date": "Fri, 20 Oct 2006 10:12:19 +0200",
"msg_from": "\"Harald Armin Massa\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: measuring shared memory usage on Windows"
},
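A rough way to sanity-check such a change from inside the database, assuming block-level statistics are being collected (stats_block_level = on in this era's postgresql.conf); the counters are cumulative since the last statistics reset, so this is only a sketch:

    -- shared-buffer cache hit percentage per database (the OS filesystem cache is invisible here)
    SELECT datname, blks_read, blks_hit,
           round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2) AS hit_pct
      FROM pg_stat_database
     ORDER BY datname;

If a very small shared_buffers still shows acceptable hit percentages and response times, that supports the observation above that the Windows filesystem cache is doing most of the caching work.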
{
"msg_contents": "Unfortunately often operating system virtual memory\nand filesystem caching code that does exactly the opposite of\nwhat a database application would like.\nFor some reason the kernel guys don't see it that way ;)\n\nOver the years there have been various kernel features added\nwith the overall goal of solving problems in this area : O_DIRECT,\nckrm, flags to mmap() and so on. So far I'm not sure any of\nthem has really succeeded. Hence the real-world wisdom to\n'let the filesystem cache do its thing' and configure a small\nshared memory cache in the application.\n\nIdeally one would want to use O_DIRECT or equivalent\nto bypass the OS's cleverness and manage the filesystem\ncaching oneself. However it turns out that enabling O_DIRECT\nmakes things much worse not better (YMMV obviously).\nIt's hard to achieve the level of concurrency that the kernel\ncan get for disk I/O, from user mode.\n\nAnother approach is to make the application cache size\ndynamic, with the goal that it can grow and shrink to\nreach the size that provides the best overall performance.\nI've seen attempts to drive the sizing using memory access\nlatency measurements done from user mode inside the application.\nHowever I'm not sure that anyone has taken this approach\nbeyond the science project stage.\n\nSo AFAIK this is still a generally unsolved problem.\n\nNT (Windows) is particularly interesting because it drives the\nfilesystem cache sizing with a signal that it mesures from the\nVM pages evicted per second counter. In order to keep its\nfeedback loop stable, the OS wants to see a non-zero value\nfor this signal at all times. So you will see that even under\nideal conditions the system will still page a little.\n(Unless that code has changed in Win2003 -- it's been a\nwhile since I checked). So don't drive yourself crazy trying\nto get it to stop paging ;)\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Fri, 20 Oct 2006 08:33:05 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: measuring shared memory usage on Windows"
}
] |
[
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nThis SQL sentence is very simple. I need to get better results. I have\ntried some posibilities and I didn't get good results.\n\nSELECT max(idcomment)\n FROM ficha vf\n INNER JOIN comment c ON (vf.idficha=c.idfile AND (idestado=3 OR\nidestado=4))\n WHERE idstatus=3\n AND ctype=1\n\n\nQUERY PLAN\n\nAggregate (cost=2730.75..2730.76 rows=1 width=4) (actual\ntime=188.463..188.469 rows=1 loops=1)\n\n -> Hash Join (cost=1403.44..2730.72 rows=11 width=4) (actual\ntime=141.464..185.404 rows=513 loops=1)\n\n Hash Cond: (\"outer\".idfile = \"inner\".idficha)\n\n -> Seq Scan on \"comment\" c (cost=0.00..1321.75 rows=1083\nwidth=8) (actual time=0.291..36.112 rows=642 loops=1)\n\n Filter: ((idstatus = 3) AND (ctype = 1))\n\n -> Hash (cost=1403.00..1403.00 rows=178 width=4) (actual\ntime=141.004..141.004 rows=6282 loops=1)\n\n -> Seq Scan on ficha vf (cost=0.00..1403.00 rows=178\nwidth=4) (actual time=0.071..97.885 rows=6282 loops=1)\n\n Filter: (((idestado)::text = '3'::text) OR\n((idestado)::text = '4'::text))\n\nTotal runtime: 188.809 ms\n\n\nThanks in advance,\nRuben Rubio\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.2.2 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD4DBQFFNJzfIo1XmbAXRboRAgPRAJ99+S9wL21b+JN14bQbAoREFXYUcQCYpfEZ\np1MCcDMWqTxzSdtssUFWOw==\n=rUHB\n-----END PGP SIGNATURE-----\n",
"msg_date": "Tue, 17 Oct 2006 11:05:35 +0200",
"msg_from": "Ruben Rubio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimization of this SQL sentence"
},
{
"msg_contents": "Off hanbd I can't recommend anything, bur perhaps you could post the details of the tables (columns, indexes),and some info on what version of postgres you are using.\n\nAre the tables recently analyzed ? How many rows in them ?\n\nGreg Williamson\nDBA\nGlobeXplorer LLC\n\n\n-----Original Message-----\nFrom:\[email protected] on behalf of Ruben Rubio\nSent:\tTue 10/17/2006 2:05 AM\nTo:\[email protected]\nCc:\t\nSubject:\t[PERFORM] Optimization of this SQL sentence\n\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nThis SQL sentence is very simple. I need to get better results. I have\ntried some posibilities and I didn't get good results.\n\nSELECT max(idcomment)\n FROM ficha vf\n INNER JOIN comment c ON (vf.idficha=c.idfile AND (idestado=3 OR\nidestado=4))\n WHERE idstatus=3\n AND ctype=1\n\n\nQUERY PLAN\n\nAggregate (cost=2730.75..2730.76 rows=1 width=4) (actual\ntime=188.463..188.469 rows=1 loops=1)\n\n -> Hash Join (cost=1403.44..2730.72 rows=11 width=4) (actual\ntime=141.464..185.404 rows=513 loops=1)\n\n Hash Cond: (\"outer\".idfile = \"inner\".idficha)\n\n -> Seq Scan on \"comment\" c (cost=0.00..1321.75 rows=1083\nwidth=8) (actual time=0.291..36.112 rows=642 loops=1)\n\n Filter: ((idstatus = 3) AND (ctype = 1))\n\n -> Hash (cost=1403.00..1403.00 rows=178 width=4) (actual\ntime=141.004..141.004 rows=6282 loops=1)\n\n -> Seq Scan on ficha vf (cost=0.00..1403.00 rows=178\nwidth=4) (actual time=0.071..97.885 rows=6282 loops=1)\n\n Filter: (((idestado)::text = '3'::text) OR\n((idestado)::text = '4'::text))\n\nTotal runtime: 188.809 ms\n\n\nThanks in advance,\nRuben Rubio\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.2.2 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD4DBQFFNJzfIo1XmbAXRboRAgPRAJ99+S9wL21b+JN14bQbAoREFXYUcQCYpfEZ\np1MCcDMWqTxzSdtssUFWOw==\n=rUHB\n-----END PGP SIGNATURE-----\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faq\n\n\n-------------------------------------------------------\nClick link below if it is SPAM [email protected]\n\"https://mailscanner.globexplorer.com/dspam/dspam.cgi?signatureID=45349c86275246672479766&[email protected]&retrain=spam&template=history&history_page=1\"\n!DSPAM:45349c86275246672479766!\n-------------------------------------------------------\n\n\n\n\n\n",
"msg_date": "Tue, 17 Oct 2006 02:21:58 -0700",
"msg_from": "\"Gregory S. Williamson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization of this SQL sentence"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n\n\nIndexes in comment\nComment rows: 17.250\n\nCREATE INDEX usuariofichaoncommnet\n ON \"comment\"\n USING btree\n (idusuarioficha);\n\nComment structure:\nCREATE TABLE \"comment\"\n(\n idcomment int4 NOT NULL DEFAULT\nnextval('comment_idcomment_seq'::regclass),\n score int4,\n title varchar,\n ctext varchar,\n idusuarioficha int4,\n galleta varchar,\n navlang int4,\n cdate timestamp DEFAULT now(),\n idstatus int4,\n ctype int4 NOT NULL,\n idfile int4 NOT NULL,\n nick varchar,\n nombre varchar,\n apellidos varchar,\n dni varchar,\n nacionalidad varchar,\n email varchar,\n telefono varchar,\n code varchar,\n memo varchar,\n c_ip varchar(30),\n codpais char(2),\n replay varchar,\n replaydate timestamp,\n advsent int4,\n usrwarn int4,\n nouserlink int4,\n aviso_confirmacion_15 timestamp,\n aviso_confirmacion_60 timestamp,\n CONSTRAINT comment_pkey PRIMARY KEY (idcomment)\n)\n\nFicha structure:\nNo indexes in ficha\nFicha rows: 17.850\n\nCREATE TABLE ficha\n(\n idficha int4 NOT NULL DEFAULT nextval('ficha_idficha_seq'::regclass),\n email varchar(255),\n web varchar(255),\n capacidadmin int4,\n capacidadmax int4,\n preciotb float4,\n preciota float4,\n cp varchar(20),\n telefono1 varchar(50),\n telefono2 varchar(50),\n fax varchar(50),\n uprecio varchar,\n udireccion varchar(512),\n comentarios varchar,\n ucapacidad varchar(512),\n upresentacion varchar,\n utipoaloj varchar(50),\n ulugares varchar,\n ucaracteristica varchar,\n idusuario int4,\n idlocacion int4,\n contacto varchar(255),\n fuente varchar(512),\n prefijopais varchar(10),\n idestado char(1),\n nombre varchar(255),\n idtipoalojamiento int4,\n ulocalidad varchar(255),\n creado timestamp DEFAULT now(),\n cachefault int4 DEFAULT 0,\n idpromotiontype_pc int4 NOT NULL DEFAULT 0,\n idpromotiontype_ant_pc int4,\n promostartdate_pc timestamp,\n promoenddate_pc timestamp,\n localidadruta varchar(255),\n urlsufix varchar(32),\n searchengine1 int4,\n searchengine2 int4,\n searchengine3 int4,\n searchengine4 int4,\n searchengine5 int4,\n searchengine6 int4,\n deseo1 int4,\n deseo2 int4,\n deseo3 int4,\n deseo4 int4,\n deseo5 int4,\n deseo6 int4,\n otherspecs varchar(510),\n lastchange timestamp,\n idsubestado int4,\n environment int4,\n prefijopais2 varchar,\n web_agencia varchar(255),\n lat varchar(25),\n long varchar(25),\n zoom int4,\n swzoombloq bool DEFAULT true,\n titulomapa_l0 varchar(255),\n titulomapa_l1 varchar(255),\n titulomapa_l2 varchar(255),\n titulomapa_l3 varchar(255),\n titulomapa_l4 varchar(255),\n titulomapa_l5 varchar(255),\n titulomapa_l6 varchar(255),\n titulomapa_l7 varchar(255),\n titulomapa_l8 varchar(255),\n titulomapa_l9 varchar(255),\n CONSTRAINT pk_ficha PRIMARY KEY (idficha),\n CONSTRAINT fk_ficha_geonivel6 FOREIGN KEY (idlocacion) REFERENCES\ngeonivel6 (idgeonivel6) ON UPDATE NO ACTION ON DELETE NO ACTION\n)\n\n\n\nGregory S. Williamson escribi�:\n> Off hanbd I can't recommend anything, bur perhaps you could post the details of the tables (columns, indexes),and some info on what version of postgres you are using.\n> \n> Are the tables recently analyzed ? How many rows in them ?\n> \n> Greg Williamson\n> DBA\n> GlobeXplorer LLC\n> \n> \n> -----Original Message-----\n> From:\[email protected] on behalf of Ruben Rubio\n> Sent:\tTue 10/17/2006 2:05 AM\n> To:\[email protected]\n> Cc:\t\n> Subject:\t[PERFORM] Optimization of this SQL sentence\n> \n> This SQL sentence is very simple. I need to get better results. 
I have\n> tried some posibilities and I didn't get good results.\n> \n> SELECT max(idcomment)\n> FROM ficha vf\n> INNER JOIN comment c ON (vf.idficha=c.idfile AND (idestado=3 OR\n> idestado=4))\n> WHERE idstatus=3\n> AND ctype=1\n> \n> Thanks in advance,\n> Ruben Rubio\n",
"msg_date": "Tue, 17 Oct 2006 11:33:18 +0200",
"msg_from": "Ruben Rubio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimization of this SQL sentence"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n\nIf just just realized that is a litlle faster (15% faster) with this:\n\nCREATE INDEX idx_statustype\n ON \"comment\" USING btree (idstatus, ctype);\n\nAny other ideas?\n\n\nGregory S. Williamson escribi�:\n> Off hanbd I can't recommend anything, bur perhaps you could post the details of the tables (columns, indexes),and some info on what version of postgres you are using.\n> \n> Are the tables recently analyzed ? How many rows in them ?\n> \n> Greg Williamson\n> DBA\n> GlobeXplorer LLC\n> \n> \n> -----Original Message-----\n> From:\[email protected] on behalf of Ruben Rubio\n> Sent:\tTue 10/17/2006 2:05 AM\n> To:\[email protected]\n> Cc:\t\n> Subject:\t[PERFORM] Optimization of this SQL sentence\n> \n> This SQL sentence is very simple. I need to get better results. I have\n> tried some posibilities and I didn't get good results.\n> \n> SELECT max(idcomment)\n> FROM ficha vf\n> INNER JOIN comment c ON (vf.idficha=c.idfile AND (idestado=3 OR\n> idestado=4))\n> WHERE idstatus=3\n> AND ctype=1\n> \n> \n> QUERY PLAN\n> \n> Aggregate (cost=2730.75..2730.76 rows=1 width=4) (actual\n> time=188.463..188.469 rows=1 loops=1)\n> \n> -> Hash Join (cost=1403.44..2730.72 rows=11 width=4) (actual\n> time=141.464..185.404 rows=513 loops=1)\n> \n> Hash Cond: (\"outer\".idfile = \"inner\".idficha)\n> \n> -> Seq Scan on \"comment\" c (cost=0.00..1321.75 rows=1083\n> width=8) (actual time=0.291..36.112 rows=642 loops=1)\n> \n> Filter: ((idstatus = 3) AND (ctype = 1))\n> \n> -> Hash (cost=1403.00..1403.00 rows=178 width=4) (actual\n> time=141.004..141.004 rows=6282 loops=1)\n> \n> -> Seq Scan on ficha vf (cost=0.00..1403.00 rows=178\n> width=4) (actual time=0.071..97.885 rows=6282 loops=1)\n> \n> Filter: (((idestado)::text = '3'::text) OR\n> ((idestado)::text = '4'::text))\n> \n> Total runtime: 188.809 ms\n> \n> \n> Thanks in advance,\n> Ruben Rubio\n\n- ---------------------------(end of broadcast)---------------------------\nTIP 3: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faq\n\n\n- -------------------------------------------------------\nClick link below if it is SPAM [email protected]\n\"https://mailscanner.globexplorer.com/dspam/dspam.cgi?signatureID=45349c86275246672479766&[email protected]&retrain=spam&template=history&history_page=1\"\n!DSPAM:45349c86275246672479766!\n- -------------------------------------------------------\n\n\n\n\n\n\n- ---------------------------(end of broadcast)---------------------------\nTIP 9: In versions below 8.0, the planner will ignore your desire to\n choose an index scan if your joining column's datatypes do not\n match\n\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.2.2 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFFNKT4Io1XmbAXRboRAurtAKC8YWjgzytaqkPjLfrohZ1aceZivwCgpDii\nwzxc4fktzIHTZRhPuJLi2Wc=\n=Korn\n-----END PGP SIGNATURE-----\n",
"msg_date": "Tue, 17 Oct 2006 11:40:08 +0200",
"msg_from": "Ruben Rubio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimization of this SQL sentence"
},
{
"msg_contents": "am Tue, dem 17.10.2006, um 11:33:18 +0200 mailte Ruben Rubio folgendes:\n> > \n> > SELECT max(idcomment)\n> > FROM ficha vf\n> > INNER JOIN comment c ON (vf.idficha=c.idfile AND (idestado=3 OR\n> > idestado=4))\n> > WHERE idstatus=3\n> > AND ctype=1\n\ncheck for indexes on vf.idficha, c.idfile, idstatus and ctype.\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47215, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n",
"msg_date": "Tue, 17 Oct 2006 11:48:41 +0200",
"msg_from": "\"A. Kretschmer\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization of this SQL sentence"
},
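A minimal sketch of the indexes this advice points at, using only the table and column names quoted in the thread. The index names are invented here, and whether each index actually pays off depends on the data distribution, so treat it as something to try rather than a definitive fix:

    -- comment.idfile is the join column; idstatus/ctype are the filter columns
    CREATE INDEX comment_idfile_idx ON "comment" (idfile);
    CREATE INDEX comment_status_type_idx ON "comment" (idstatus, ctype);
    -- ficha.idficha is already covered by its primary key (pk_ficha);
    -- an index on the filter column can still help:
    CREATE INDEX ficha_idestado_idx ON ficha (idestado);
    -- refresh planner statistics afterwards
    ANALYZE "comment";
    ANALYZE ficha;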
{
"msg_contents": "On Oct 17, 2006, at 11:33 , Ruben Rubio wrote:\n\n> CREATE TABLE \"comment\"\n> (\n> idcomment int4 NOT NULL DEFAULT\n> nextval('comment_idcomment_seq'::regclass),\n[snip 28 columns]\n> CONSTRAINT comment_pkey PRIMARY KEY (idcomment)\n> )\n>\n> Ficha structure:\n> No indexes in ficha\n> Ficha rows: 17.850\n>\n> CREATE TABLE ficha\n> (\n> idficha int4 NOT NULL DEFAULT nextval \n> ('ficha_idficha_seq'::regclass),\n[snip 67 (!) columns]\n> CONSTRAINT pk_ficha PRIMARY KEY (idficha),\n> CONSTRAINT fk_ficha_geonivel6 FOREIGN KEY (idlocacion) REFERENCES\n> geonivel6 (idgeonivel6) ON UPDATE NO ACTION ON DELETE NO ACTION\n> )\n\n These tables are particularly egregious examples of ignorant \ndatabase design. You need to understand the relational model (http:// \nen.wikipedia.org/wiki/Relational_model), specifically data \nnormalization (http://en.wikipedia.org/wiki/Database_normalization) \nand 3NF (http://en.wikipedia.org/wiki/3NF).\n\nThese columns are particularly telling:\n\n searchengine1 int4,\n searchengine2 int4,\n searchengine3 int4,\n searchengine4 int4,\n searchengine5 int4,\n searchengine6 int4,\n deseo1 int4,\n deseo2 int4,\n deseo3 int4,\n deseo4 int4,\n deseo5 int4,\n deseo6 int4,\n titulomapa_l0 varchar(255),\n titulomapa_l1 varchar(255),\n titulomapa_l2 varchar(255),\n titulomapa_l3 varchar(255),\n titulomapa_l4 varchar(255),\n titulomapa_l5 varchar(255),\n titulomapa_l6 varchar(255),\n titulomapa_l7 varchar(255),\n titulomapa_l8 varchar(255),\n titulomapa_l9 varchar(255),\n\nRefactor into three separate tables:\n\n create table searchengine (\n idficha int references ficha (idficha),\n searchengine int,\n primary key (idficha, searchengine)\n );\n\n create table deseo (\n idficha int references ficha (idficha),\n deseo int,\n primary key (idficha, deseo)\n );\n\n create table titulomapa (\n idficha int references ficha (idficha),\n titulomapa int,\n primary key (idficha, titulomapa)\n );\n\nNow you can find all search engines for a single ficha row:\n\n select searchengine from searchengine where idficha = n\n\nThis design allows for more than 5 search engines per ficha row, and \nallows expressive joins such as:\n\n select ficha.idficha, searchengine.searchengine\n inner join searchengine on searchengine.idfciha = ficha.idficha\n\nAlso, most of your columns are nullable. This alone shows that you \ndon't understand your own data.\n\nLastly, note that in PostgreSQL these length declarations are not \nnecessary:\n\n contacto varchar(255),\n fuente varchar(512),\n prefijopais varchar(10)\n\nInstead, use:\n\n contacto text,\n fuente text,\n prefijopais text\n\nSee the PostgreSQL manual for an explanation of varchar vs. text.\n\nAlexander.\n\n",
"msg_date": "Tue, 17 Oct 2006 11:52:54 +0200",
"msg_from": "Alexander Staubo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization of this SQL sentence"
},
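If the refactoring proposed above were adopted, the existing wide rows could be migrated with plain SQL. The following is only an illustrative sketch: it assumes the searchengine table defined in the message above and the searchengine1..searchengine6 columns quoted from ficha, and it treats a NULL in one of those columns as "no value":

    INSERT INTO searchengine (idficha, searchengine)
    SELECT DISTINCT idficha, s           -- DISTINCT protects the (idficha, searchengine) primary key
      FROM (SELECT idficha, searchengine1 AS s FROM ficha
            UNION ALL SELECT idficha, searchengine2 FROM ficha
            UNION ALL SELECT idficha, searchengine3 FROM ficha
            UNION ALL SELECT idficha, searchengine4 FROM ficha
            UNION ALL SELECT idficha, searchengine5 FROM ficha
            UNION ALL SELECT idficha, searchengine6 FROM ficha) AS wide
     WHERE s IS NOT NULL;

The deseo and titulomapa columns could be migrated the same way into their own tables.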
{
"msg_contents": "You could try rewriting the query like this:\n\nSELECT MAX(idcomment)\nFROM comment c\nWHERE idstatus=3 AND ctype=1\nAND EXISTS (SELECT 1 FROM ficha vf WHERE idestado IN ('3', '4') AND \nvf.idficha = c.idfile);\n\nThe planner can then try a backward scan on the comment_pkey index, \nwhich should be quicker than the seq scan assuming that there's a lot of \nrows that match the restrictions (idstatus=3 ctype=1 idestado IN ('3', \n'4')).\n\nBut see comments inline below:\n\nRuben Rubio wrote:\n> CREATE TABLE \"comment\"\n> (\n> idcomment int4 NOT NULL DEFAULT\n> nextval('comment_idcomment_seq'::regclass),\n> score int4,\n> title varchar,\n> ctext varchar,\n> idusuarioficha int4,\n> galleta varchar,\n> navlang int4,\n> cdate timestamp DEFAULT now(),\n> idstatus int4,\n> ctype int4 NOT NULL,\n> idfile int4 NOT NULL,\n> nick varchar,\n> nombre varchar,\n> apellidos varchar,\n> dni varchar,\n> nacionalidad varchar,\n> email varchar,\n> telefono varchar,\n> code varchar,\n> memo varchar,\n> c_ip varchar(30),\n> codpais char(2),\n> replay varchar,\n> replaydate timestamp,\n> advsent int4,\n> usrwarn int4,\n> nouserlink int4,\n> aviso_confirmacion_15 timestamp,\n> aviso_confirmacion_60 timestamp,\n> CONSTRAINT comment_pkey PRIMARY KEY (idcomment)\n> )\n\nWithout knowing anything about you're application, it looks like there's \na some fields in the comment-table that are duplicates of fields in the \nficha-table. Telefono and email for example. You should consider doing \nsome normalization.\n\n> No indexes in ficha\n\nExcept for the implicit idficha_pkey index.\n\n> CREATE TABLE ficha\n> (\n > ...\n > idestado char(1),\n\nIf idestado contains numbers (codes of some kind, I presume), you're \nbetter off using the smallint data type.\n\n > ....\n> searchengine1 int4,\n> searchengine2 int4,\n> searchengine3 int4,\n> searchengine4 int4,\n> searchengine5 int4,\n> searchengine6 int4,\n\nNormalization?!\n\n> deseo1 int4,\n> deseo2 int4,\n> deseo3 int4,\n> deseo4 int4,\n> deseo5 int4,\n> deseo6 int4,\n\nFor these as well...\n\n > ...\n> lat varchar(25),\n> long varchar(25),\n\nIsn't there's a better data type for latitude and longitude? Decimal, \nperhaps?\n\n> titulomapa_l0 varchar(255),\n> titulomapa_l1 varchar(255),\n> titulomapa_l2 varchar(255),\n> titulomapa_l3 varchar(255),\n> titulomapa_l4 varchar(255),\n> titulomapa_l5 varchar(255),\n> titulomapa_l6 varchar(255),\n> titulomapa_l7 varchar(255),\n> titulomapa_l8 varchar(255),\n> titulomapa_l9 varchar(255),\n\nAgain, normalization...\n\n- Heikki\n",
"msg_date": "Tue, 17 Oct 2006 11:00:18 +0100",
"msg_from": "\"Heikki Linnakangas\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization of this SQL sentence"
},
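If most comment rows do not satisfy idstatus=3 AND ctype=1, a partial index can make the backward-scan approach above cheap. This is an extra suggestion rather than something from the thread: the index name is invented, and whether the planner will use it for max() depends on the PostgreSQL version, so it is only a sketch to experiment with:

    -- Only rows that can ever satisfy the filter are indexed,
    -- so scanning it backwards finds the highest idcomment quickly.
    CREATE INDEX comment_status3_ctype1_idcomment
        ON "comment" (idcomment)
     WHERE idstatus = 3 AND ctype = 1;

    SELECT max(idcomment)
      FROM "comment" c
     WHERE idstatus = 3 AND ctype = 1
       AND EXISTS (SELECT 1 FROM ficha vf
                    WHERE vf.idestado IN ('3', '4')
                      AND vf.idficha = c.idfile);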
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nHi to everyone,\n\nFirst of all I have to say that I now the database is not ok. There was\na people before me that didn't do the thinks right. I would like to\nnormalize the database, but it takes too much time (there is is hundred\nof SQLs to change and there is not enough technical resources). Also,\ndatacolumns in some places has same names, but the data that is stores\nhas different usages.\n\nThanks everyone for all hints, I ll try to do my best performing the\ndatabase structure.\n\nBy other hand, I was able to create the corrects index and with this\n\nAND EXISTS (SELECT 1 FROM ficha vf WHERE idestado IN ('3', '4') AND\nvf.idficha = c.idfile);\n\nit is really fast.\n\nThanks to everybody.\n\nRegards,\nRuben Rubio\n\n\n\nHeikki Linnakangas escribi�:\n> You could try rewriting the query like this:\n> \n> SELECT MAX(idcomment)\n> FROM comment c\n> WHERE idstatus=3 AND ctype=1\n> AND EXISTS (SELECT 1 FROM ficha vf WHERE idestado IN ('3', '4') AND\n> vf.idficha = c.idfile);\n> \n> The planner can then try a backward scan on the comment_pkey index,\n> which should be quicker than the seq scan assuming that there's a lot of\n> rows that match the restrictions (idstatus=3 ctype=1 idestado IN ('3',\n> '4')).\n> \n> But see comments inline below:\n> \n> Ruben Rubio wrote:\n>> CREATE TABLE \"comment\"\n>> (\n>> idcomment int4 NOT NULL DEFAULT\n>> nextval('comment_idcomment_seq'::regclass),\n>> score int4,\n>> title varchar,\n>> ctext varchar,\n>> idusuarioficha int4,\n>> galleta varchar,\n>> navlang int4,\n>> cdate timestamp DEFAULT now(),\n>> idstatus int4,\n>> ctype int4 NOT NULL,\n>> idfile int4 NOT NULL,\n>> nick varchar,\n>> nombre varchar,\n>> apellidos varchar,\n>> dni varchar,\n>> nacionalidad varchar,\n>> email varchar,\n>> telefono varchar,\n>> code varchar,\n>> memo varchar,\n>> c_ip varchar(30),\n>> codpais char(2),\n>> replay varchar,\n>> replaydate timestamp,\n>> advsent int4,\n>> usrwarn int4,\n>> nouserlink int4,\n>> aviso_confirmacion_15 timestamp,\n>> aviso_confirmacion_60 timestamp,\n>> CONSTRAINT comment_pkey PRIMARY KEY (idcomment)\n>> )\n> \n> Without knowing anything about you're application, it looks like there's\n> a some fields in the comment-table that are duplicates of fields in the\n> ficha-table. Telefono and email for example. You should consider doing\n> some normalization.\n> \n>> No indexes in ficha\n> \n> Except for the implicit idficha_pkey index.\n> \n>> CREATE TABLE ficha\n>> (\n>> ...\n>> idestado char(1),\n> \n> If idestado contains numbers (codes of some kind, I presume), you're\n> better off using the smallint data type.\n> \n>> ....\n>> searchengine1 int4,\n>> searchengine2 int4,\n>> searchengine3 int4,\n>> searchengine4 int4,\n>> searchengine5 int4,\n>> searchengine6 int4,\n> \n> Normalization?!\n> \n>> deseo1 int4,\n>> deseo2 int4,\n>> deseo3 int4,\n>> deseo4 int4,\n>> deseo5 int4,\n>> deseo6 int4,\n> \n> For these as well...\n> \n>> ...\n>> lat varchar(25),\n>> long varchar(25),\n> \n> Isn't there's a better data type for latitude and longitude? 
Decimal,\n> perhaps?\n> \n>> titulomapa_l0 varchar(255),\n>> titulomapa_l1 varchar(255),\n>> titulomapa_l2 varchar(255),\n>> titulomapa_l3 varchar(255),\n>> titulomapa_l4 varchar(255),\n>> titulomapa_l5 varchar(255),\n>> titulomapa_l6 varchar(255),\n>> titulomapa_l7 varchar(255),\n>> titulomapa_l8 varchar(255),\n>> titulomapa_l9 varchar(255),\n> \n> Again, normalization...\n> \n> - Heikki\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n> \n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.2.2 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFFNK+jIo1XmbAXRboRAu6cAKCMUWHjcAYwN4DhVl1tSjMirgRAawCgvk8c\ngSB/4p1ZBOrDEwU9EW/yxw8=\n=yFoD\n-----END PGP SIGNATURE-----\n",
"msg_date": "Tue, 17 Oct 2006 12:25:39 +0200",
"msg_from": "Ruben Rubio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimization of this SQL sentence (SOLVED)"
},
{
"msg_contents": "\n> These tables are particularly egregious examples of ignorant database \n> design. You need to understand the relational model \n\nThis email is a *particularly* egregious example of rudeness. You owe Mr. Staubo, and the Postgress community, an apology.\n\nThere is absolutely no reason to insult people who come to this forum for help. That's why the forum is here, to help people who are \"ignorant\" and want to improve their knowledge.\n\nCraig\n\n",
"msg_date": "Tue, 17 Oct 2006 08:10:53 -0700",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization of this SQL sentence"
},
{
"msg_contents": "Am Dienstag, 17. Oktober 2006 11:52 schrieb Alexander Staubo:\n> Lastly, note that in PostgreSQL these length declarations are not \n> necessary:\n>\n> contacto varchar(255),\n> fuente varchar(512),\n> prefijopais varchar(10)\n>\n> Instead, use:\n>\n> contacto text,\n> fuente text,\n> prefijopais text\n>\n> See the PostgreSQL manual for an explanation of varchar vs. text.\n\nEnforcing length constraints with varchar(xyz) is good database design, not a \nbad one. Using text everywhere might be tempting because it works, but it's \nnot a good idea.\n",
"msg_date": "Tue, 17 Oct 2006 17:29:19 +0200",
"msg_from": "Mario Weilguni <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization of this SQL sentence"
},
{
"msg_contents": "On Oct 17, 2006, at 17:10 , Craig A. James wrote:\n\n>> These tables are particularly egregious examples of ignorant \n>> database design. You need to understand the relational model\n>\n> This email is a *particularly* egregious example of rudeness. You \n> owe Mr. Staubo, and the Postgress community, an apology.\n\nI'm sorry you feel that way, but I don't think I was out of line. I \ndid point to several informative sources of documentation, and \ndescribed some of the problems (but by no means all) with the \nperson's schema and how to solve them. If you think the database \ndesign in question is *not* ignorant database design, please do \nexplain why, but on technical grounds. (Ignorance, of course, is not \na sin.)\n\nAlexander.\n",
"msg_date": "Tue, 17 Oct 2006 17:32:12 +0200",
"msg_from": "Alexander Staubo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization of this SQL sentence"
},
{
"msg_contents": "On Oct 17, 2006, at 17:29 , Mario Weilguni wrote:\n\n> Am Dienstag, 17. Oktober 2006 11:52 schrieb Alexander Staubo:\n>> Lastly, note that in PostgreSQL these length declarations are not\n>> necessary:\n>>\n>> contacto varchar(255),\n>> fuente varchar(512),\n>> prefijopais varchar(10)\n>\n> Enforcing length constraints with varchar(xyz) is good database \n> design, not a\n> bad one. Using text everywhere might be tempting because it works, \n> but it's\n> not a good idea.\n\nEnforcing length constraints is generally a bad idea because it \nassumes you know the data domain as expressed in a quantity of \ncharacters. Off the top of your head, do you know the maximum length \nof a zip code? A street address? The name of a city?\n\nIn almost all cases the limit you invent is arbitrary, and the \nprobability of being incompatible with any given input is inversely \nproportional to that arbitrary limit.\n\nEncoding specific length constraints in the database makes sense when \nthey relate explicitly to business logic, but I can think of only a \nfew cases where it would make sense: restricting the length of \npasswords, user names, and so on. In a few cases you do know with \n100% certainty the limit of your field, such as with standardized \nabbreviations: ISO 3166 country codes, for example. And sometimes you \nwant to cap data due to storage or transmission costs.\n\nThe length constraint on text fields is primarily a historical \nartifact stemming from the way databases have traditionally been \nimplemented, as fixed-length fields in fixed-length row structures. \nThe inexplicable, improbable space-padded (!) \"character\" data type \nin ANSI SQL is a vestige of this legacy. PostgreSQL's variable-length \nrows and TOAST mechanism makes the point moot.\n\nQuoth the PostgreSQL manual, section 8.3:\n\n> There are no performance differences between these three types, \n> apart from the increased storage size when using the blank-padded \n> type. While character(n) has performance advantages in some other \n> database systems, it has no such advantages in PostgreSQL. In most \n> situations text or character varying should be used instead.\n\nAlexander.\n",
"msg_date": "Tue, 17 Oct 2006 17:50:41 +0200",
"msg_from": "Alexander Staubo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization of this SQL sentence"
},
{
"msg_contents": "On 10/17/06, Mario Weilguni <[email protected]> wrote:\n> Am Dienstag, 17. Oktober 2006 11:52 schrieb Alexander Staubo:\n> > Lastly, note that in PostgreSQL these length declarations are not\n> > necessary:\n> >\n> > contacto varchar(255),\n> > fuente varchar(512),\n> > prefijopais varchar(10)\n> >\n> > Instead, use:\n> >\n> > contacto text,\n> > fuente text,\n> > prefijopais text\n> >\n> > See the PostgreSQL manual for an explanation of varchar vs. text.\n>\n> Enforcing length constraints with varchar(xyz) is good database design, not a\n> bad one. Using text everywhere might be tempting because it works, but it's\n> not a good idea.\n\nwhile you are correct, i think the spirit of the argument is wrong\nbecuase there is no constraint to be enforced in those fields. a\nlength constraint of n is only valid is n + 1 characters are an error\nand should be rejected by the database. anything else is IMO bad\nform. There are practial exceptions to this rule though, for example\nclient technology that might require a length.\n\nso, imo alexander is correct:\ncontacto varchar(255)\n\n...is a false constraint, why exactly 255? is that were the dart landed?\n\nspecifically limiting text fields so users 'don't enter too much data'\nis a manifestation c programmer's disease :)\n\nnote I am not picking on the OP here, just weighing in on the\nconstraint argument.\n\nmerlin\n",
"msg_date": "Tue, 17 Oct 2006 12:51:19 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization of this SQL sentence"
},
{
"msg_contents": "[email protected] (Alexander Staubo) writes:\n\n> On Oct 17, 2006, at 17:29 , Mario Weilguni wrote:\n>\n>> Am Dienstag, 17. Oktober 2006 11:52 schrieb Alexander Staubo:\n>>> Lastly, note that in PostgreSQL these length declarations are not\n>>> necessary:\n>>>\n>>> contacto varchar(255),\n>>> fuente varchar(512),\n>>> prefijopais varchar(10)\n>>\n>> Enforcing length constraints with varchar(xyz) is good database\n>> design, not a\n>> bad one. Using text everywhere might be tempting because it works,\n>> but it's\n>> not a good idea.\n>\n> Enforcing length constraints is generally a bad idea because it\n> assumes you know the data domain as expressed in a quantity of\n> characters. Off the top of your head, do you know the maximum length\n> of a zip code? A street address? The name of a city?\n\nIn the case of a zip code? Sure. US zip codes are integer values\neither 5 or 9 characters long.\n\nIn the case of some of our internal applications, we need to conform\nto some IETF and ITU standards which actually do enforce some maximum\nlengths on these sorts of things.\n\n> In almost all cases the limit you invent is arbitrary, and the\n> probability of being incompatible with any given input is inversely\n> proportional to that arbitrary limit.\n\nI'd be quite inclined to limit things like addresses to somewhat\nsmaller sizes than you might expect. If addresses are to be used to\ngenerate labels for envelopes, for instance, it's reasonably important\nto limit sizes to those that might fit on a label or an envelope.\n\n> Encoding specific length constraints in the database makes sense\n> when they relate explicitly to business logic, but I can think of\n> only a few cases where it would make sense: restricting the length\n> of passwords, user names, and so on. In a few cases you do know with\n> 100% certainty the limit of your field, such as with standardized\n> abbreviations: ISO 3166 country codes, for example. And sometimes\n> you want to cap data due to storage or transmission costs.\n\nThere's another reason: Open things up wide, and some people will fill\nthe space with rubbish.\n-- \n\"cbbrowne\",\"@\",\"acm.org\"\nhttp://linuxfinances.info/info/internet.html\n\"The Amiga is proof that if you build a better mousetrap, the rats\nwill gang up on you.\" -- Bill Roberts [email protected]\n",
"msg_date": "Tue, 17 Oct 2006 14:04:47 -0400",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization of this SQL sentence"
},
{
"msg_contents": "[email protected] (\"Merlin Moncure\") writes:\n> On 10/17/06, Mario Weilguni <[email protected]> wrote:\n>> Am Dienstag, 17. Oktober 2006 11:52 schrieb Alexander Staubo:\n>> > Lastly, note that in PostgreSQL these length declarations are not\n>> > necessary:\n>> >\n>> > contacto varchar(255),\n>> > fuente varchar(512),\n>> > prefijopais varchar(10)\n>> >\n>> > Instead, use:\n>> >\n>> > contacto text,\n>> > fuente text,\n>> > prefijopais text\n>> >\n>> > See the PostgreSQL manual for an explanation of varchar vs. text.\n>>\n>> Enforcing length constraints with varchar(xyz) is good database design, not a\n>> bad one. Using text everywhere might be tempting because it works, but it's\n>> not a good idea.\n>\n> while you are correct, i think the spirit of the argument is wrong\n> becuase there is no constraint to be enforced in those fields. a\n> length constraint of n is only valid is n + 1 characters are an error\n> and should be rejected by the database. anything else is IMO bad\n> form. There are practial exceptions to this rule though, for example\n> client technology that might require a length.\n>\n> so, imo alexander is correct:\n> contacto varchar(255)\n>\n> ...is a false constraint, why exactly 255? is that were the dart landed?\n\nYeah, 255 seems silly to me.\n\nIf I'm going to be arbitrary, there are two better choices:\n\n1. 80, because that's how many characters one can fit across a piece\n of paper whilst keeping things pretty readable;\n\n2. 64, because that will fit on a screen, and leave some space for a\n field name/description.\n\n> specifically limiting text fields so users 'don't enter too much\n> data' is a manifestation c programmer's disease :)\n\nNo, I can't agree. I'm pretty accustomed to languages that don't\npinch you the ways C does, and I still dislike having over-wide\ncolumns because it makes it more difficult to generate readable\nreports.\n-- \noutput = (\"cbbrowne\" \"@\" \"linuxfinances.info\")\nhttp://linuxdatabases.info/info/unix.html\n\"Instant coffee is like pouring hot water over the cremated remains of\na good friend.\"\n",
"msg_date": "Tue, 17 Oct 2006 14:09:41 -0400",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization of this SQL sentence"
},
{
"msg_contents": "Chris Browne wrote:\n> In the case of a zip code? Sure. US zip codes are integer values\n> either 5 or 9 characters long.\n\nSo your app will only work in the US?\nAnd only for US companies that only have US clients?\n\n\nSorry had to dig at that ;-P\n\n\n-- \n\nShane Ambler\[email protected]\n\nGet Sheeky @ http://Sheeky.Biz\n",
"msg_date": "Wed, 18 Oct 2006 04:40:54 +0930",
"msg_from": "Shane Ambler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization of this SQL sentence"
},
{
"msg_contents": "The world rejoiced as [email protected] (Shane Ambler) wrote:\n> Chris Browne wrote:\n>> In the case of a zip code? Sure. US zip codes are integer values\n>> either 5 or 9 characters long.\n>\n> So your app will only work in the US?\n> And only for US companies that only have US clients?\n>\n>\n> Sorry had to dig at that ;-P\n\nHeh. I'm not in the US, so that's not the sort of mistake I'd be\nlikely to make...\n\nThe thing is, the only place where they call this sort of thing a \"zip\ncode\" is the US. Elsewhere, it's called a postal code.\n-- \n(reverse (concatenate 'string \"gro.mca\" \"@\" \"enworbbc\"))\nhttp://linuxfinances.info/info/finances.html\nRules of the Evil Overlord #159. \"If I burst into rebel headquarters\nand find it deserted except for an odd, blinking device, I will not\nwalk up and investigate; I'll run like hell.\"\n<http://www.eviloverlord.com/>\n",
"msg_date": "Tue, 17 Oct 2006 15:23:28 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization of this SQL sentence"
},
{
"msg_contents": "Christopher Browne wrote:\n> The world rejoiced as [email protected] (Shane Ambler) wrote:\n>> Chris Browne wrote:\n>>> In the case of a zip code? Sure. US zip codes are integer values\n>>> either 5 or 9 characters long.\n>> So your app will only work in the US?\n>> And only for US companies that only have US clients?\n>>\n>>\n>> Sorry had to dig at that ;-P\n> \n> Heh. I'm not in the US, so that's not the sort of mistake I'd be\n> likely to make...\n> \n> The thing is, the only place where they call this sort of thing a \"zip\n> code\" is the US. Elsewhere, it's called a postal code.\n\nSame meaning/use different name (that's a locale issue for the client \ndisplaying the data) they will all use the same column for that data.\n\n\n-- \n\nShane Ambler\[email protected]\n\nGet Sheeky @ http://Sheeky.Biz\n",
"msg_date": "Wed, 18 Oct 2006 05:11:03 +0930",
"msg_from": "Shane Ambler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization of this SQL sentence"
},
{
"msg_contents": "Alexander Staubo wrote:\n> On Oct 17, 2006, at 17:10 , Craig A. James wrote:\n> \n>>> These tables are particularly egregious examples of ignorant \n>>> database design. You need to understand the relational model\n>>\n>> This email is a *particularly* egregious example of rudeness. You owe \n>> Mr. Staubo, and the Postgress community, an apology.\n> \n> I'm sorry you feel that way, but I don't think I was out of line. \n> ... If you think the database design in question is *not* \n> ignorant database design, please do explain why, but on technical \n> grounds. (Ignorance, of course, is not a sin.)\n\nThis is not about design. It's about someone who came for help, and got a derogatory remark. Is it really so hard to be helpful *and* use polite, encouraging language?\n\nCraig\n\n\n",
"msg_date": "Tue, 17 Oct 2006 19:47:41 -0700",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization of this SQL sentence"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n\n\n>> so, imo alexander is correct:\n>> contacto varchar(255)\n\nWhy do we have limits on this, for example?\ncontacto varchar(255)\n\n1) First of all, this is a web application. People use to enter really\nstrange thinks there, and a lot of rubbish. So, as someone commented\nbefore, I am ok with the idea of limit the field.\n\n2) Again, this is a web application. We could just limit the \"field\nfront end length\" but this will be not secure. We could limit the field\nfront end, and limit it with the code that process the data, but is much\nsecure if we just limit field in database.\n\nWhy 255?\nThis is free field. We are interested in users enter as much data as\nneeded (with a limit, we no not want \"The Quijote\" in that field)\n\nPeople use to imput as spected:\n\"Contact: Baltolom� Peralez.\"\nBut people also inserts thinks as:\n\"Contact: In the mornings, Bartolom� Perales, in the afternoons Juan\nPerales in the same telephone number\"\n\nIn the context of the application, we are not interested in stopping the\nuser with a message \"Hey, Your contact is too long\". We want the user to\ngo on. That's why 255. We could insert 260. Or maybe 250. But someone\ndecided 255 and for me its OK.\n\nPlease remember that I just put some data of a large applications, and\nthere is thinks there that has sense in context.\n\n\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.2.2 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFFNdJsIo1XmbAXRboRAuGVAKCupfXOHwxXOPHFdq+K6S0lXWNZUwCgml1i\nCS0eEJcQndEJb7h7Nsfh1CM=\n=0gpW\n-----END PGP SIGNATURE-----\n",
"msg_date": "Wed, 18 Oct 2006 09:06:20 +0200",
"msg_from": "Ruben Rubio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimization of this SQL sentence"
},
{
"msg_contents": "Am Dienstag, 17. Oktober 2006 17:50 schrieb Alexander Staubo:\n> On Oct 17, 2006, at 17:29 , Mario Weilguni wrote:\n> >\n> > Enforcing length constraints with varchar(xyz) is good database\n> > design, not a\n> > bad one. Using text everywhere might be tempting because it works,\n> > but it's\n> > not a good idea.\n>\n> Enforcing length constraints is generally a bad idea because it\n> assumes you know the data domain as expressed in a quantity of\n> characters. Off the top of your head, do you know the maximum length\n> of a zip code? A street address? The name of a city?\n\nIt's not a bad idea. Usually I use postal codes with 25 chars, and never had \nany problem. With text, the limit would be ~1 GB. No matter how much testing \nin the application happens, the varchar(25) as last resort is a good idea.\n\nAnd in most cases, the application itself limits the length, and thus it's \ngood to reflect this in the database design.\n\nFeel free to use text anywhere for your application, and feel free to use \nnumeric(1000) instead of numeric(4) if you want to be prepared for really \nlong numbers, but don't tell other people it's bad database design - it \nisn't.\n\n\n\n\n\n",
"msg_date": "Wed, 18 Oct 2006 11:31:44 +0200",
"msg_from": "Mario Weilguni <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization of this SQL sentence"
},
{
"msg_contents": "Mario Weilguni wrote:\n\n>>\n>> � � contacto varchar(255),\n>> � � fuente varchar(512),\n>> � � prefijopais varchar(10)\n>>\n>> Instead, use:\n>>\n>> � � contacto text,\n>> � � fuente text,\n>> � � prefijopais text\n>>\n>> See the PostgreSQL manual for an explanation of varchar vs. text.\n> \n> Enforcing length constraints with varchar(xyz) is good database design, not a \n> bad one. Using text everywhere might be tempting because it works, but it's \n> not a good idea.\n> \n\nI've always used the rationale:\n\nIf you *know* that the data is length constrained, then it is ok to \nreflect this in the domain you use - err, thats why they have length \nlimits! e.g. if you know that 'prefijopais' can *never* be > 10 chars in \nlength, then varchar(10) is a good choice.\n\nIf the data length is unknown or known to be unlimited, then reflect \nthat in the domain you use - e.g if 'fuente' or 'contacto' have no \nreason to be constrained, then just use text.\n\nbest wishes\n\nMark\n\n",
"msg_date": "Wed, 18 Oct 2006 23:19:44 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization of this SQL sentence"
},
{
"msg_contents": "On Wed, Oct 18, 2006 at 11:31:44AM +0200, Mario Weilguni wrote:\n> It's not a bad idea. Usually I use postal codes with 25 chars, and never had \n> any problem. With text, the limit would be ~1 GB. No matter how much testing \n> in the application happens, the varchar(25) as last resort is a good idea.\n\n> And in most cases, the application itself limits the length, and thus it's \n> good to reflect this in the database design.\n\n> Feel free to use text anywhere for your application, and feel free to use \n> numeric(1000) instead of numeric(4) if you want to be prepared for really \n> long numbers, but don't tell other people it's bad database design - it \n> isn't.\n\nIt's unnecessary design.\n\nSuggestions in this regard lead towards the user seeing a database error,\ninstead of a nice specific message provided by the application.\n\nI used to use varchar instead of text, but have since softened, as the\nnumber of times it has ever actually saved me is zero, and the number of\ntimes it has screwed me up (picking too small of a limit too early) has\nbeen a few.\n\nIt's kind of like pre-optimization before there is a problem. Sometimes\nit works for you, sometimes it works against.\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n",
"msg_date": "Wed, 18 Oct 2006 08:50:09 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Optimization of this SQL sentence"
},
{
"msg_contents": "On Tue, Oct 17, 2006 at 12:51:19PM -0400, Merlin Moncure wrote:\n> so, imo alexander is correct:\n> contacto varchar(255)\n> \n> ...is a false constraint, why exactly 255? is that were the dart landed?\n\nBTW, if we get variable-length varlena headers at some point, then\nsetting certain limits might make sense to keep performance more\nconsistent.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 18 Oct 2006 14:17:34 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization of this SQL sentence"
},
{
"msg_contents": "On Tue, Oct 17, 2006 at 12:25:39PM +0200, Ruben Rubio wrote:\n> First of all I have to say that I now the database is not ok. There was\n> a people before me that didn't do the thinks right. I would like to\n> normalize the database, but it takes too much time (there is is hundred\n> of SQLs to change and there is not enough technical resources). Also,\n> datacolumns in some places has same names, but the data that is stores\n> has different usages.\n\nFWIW, things like views and rules make those transations a lot easier to\ntackle.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 18 Oct 2006 14:19:01 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization of this SQL sentence (SOLVED)"
},
{
"msg_contents": "On 10/18/06, Jim C. Nasby <[email protected]> wrote:\n> On Tue, Oct 17, 2006 at 12:51:19PM -0400, Merlin Moncure wrote:\n> > so, imo alexander is correct:\n> > contacto varchar(255)\n> >\n> > ...is a false constraint, why exactly 255? is that were the dart landed?\n>\n> BTW, if we get variable-length varlena headers at some point, then\n> setting certain limits might make sense to keep performance more\n> consistent.\n\nI would argue that it is assumptions about the underlying architecture\nthat got everyone into trouble in the first place :). I would prefer\nto treat length constraint as a constraint (n + 1 = error), unless\nthere was a *compelling* reason to do otherwise, which currently there\nisn't (or hasn't been since we got toast) a lot of this stuff s due\nto legacy thinking, a lot of dbf products had limts to varchar around\n255 or so.\n\nimo, a proper constraint system would apply everything at the domain\nlevel, and minlength and maxlength would get equal weight, and be\noptional for all types.\n\nmerlin\n",
"msg_date": "Wed, 18 Oct 2006 15:37:07 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization of this SQL sentence"
}
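PostgreSQL's CREATE DOMAIN can already express the "constraint at the domain level" idea discussed above. The domain name and the limits below are invented for illustration; the point is only that a CHECK on a domain documents and enforces an actual business rule instead of an arbitrary varchar(255):

    -- Reusable type: free-form contact text, non-empty and capped at 300 characters.
    CREATE DOMAIN contact_text AS text
        CHECK (VALUE IS NULL OR char_length(VALUE) BETWEEN 1 AND 300);

    -- Used like any other column type, e.g.:
    --   contacto contact_text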
] |
[
{
"msg_contents": "Hello,\n\n\n\nI was going through the Performance Enhancements of 8.1.0, in that I have\nread \"Bitmap Scan\"\n\n\n\n\"*Bitmap Scan:* indexes will be dynamically converted to bitmaps in memory\nwhen appropriate, giving up to twenty times faster index performance on\ncomplex queries against very large tables. This also helps simplify database\nmanagement by greatly reducing the need for multi-column indexes.\"\n\n\n\nI didn't understand the \"Bitmap Scan\" and the sentence \"indexes will be\ndynamically converted to bitmaps in memory\". What does mean by \"Bitmap Scan\"\nin database?\n\n\n\nCan anybody help us regarding above query?\n\n\n\nThanks,\n\nSoni\n\n\nHello,\n \nI was going through the Performance Enhancements of 8.1.0, in that I have read \"Bitmap Scan\"\n\n \n\"Bitmap Scan: indexes will be dynamically converted to bitmaps in memory when appropriate, giving up to twenty times faster index performance on complex queries against very large tables. This also helps simplify database management by greatly reducing the need for multi-column indexes.\"\n\n \nI didn't understand the \"Bitmap Scan\" and the sentence \"indexes will be dynamically converted to bitmaps in memory\". What does mean by \"Bitmap Scan\" in database?\n\n \nCan anybody help us regarding above query?\n \nThanks,\nSoni",
"msg_date": "Tue, 17 Oct 2006 17:09:29 +0530",
"msg_from": "\"soni de\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Regarding Bitmap Scan"
},
{
"msg_contents": "am Tue, dem 17.10.2006, um 17:09:29 +0530 mailte soni de folgendes:\n> I didn't understand the \"Bitmap Scan\" and the sentence \"indexes will be\n> dynamically converted to bitmaps in memory\". What does mean by \"Bitmap Scan\" in\n> database?\n\nFor instance, you have a large table with 5 indexes on this and a query\nthat checks conditions on this 5 columns.\n\nPG is now able to combine this 5 indexes and performs only 1 bitmap\nindex scan on this table, and not 5 independet nested bitmap scans.\n\nA realy very great performance-boost!\n\n\nHTH, Andreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47215, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n",
"msg_date": "Tue, 17 Oct 2006 13:45:11 +0200",
"msg_from": "\"A. Kretschmer\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Regarding Bitmap Scan"
},
{
"msg_contents": "On 10/17/06, soni de <[email protected]> wrote:\n>\n> I didn't understand the \"Bitmap Scan\" and the sentence \"indexes will be\n> dynamically converted to bitmaps in memory\". What does mean by \"Bitmap Scan\"\n> in database?\n>\n>\n>\n> Can anybody help us regarding above query?\n>\n\nAssume you have a table:\nCREATE TABLE foo (\n some_key int,\n some_time timestamp with time zone,\n some_data text\n);\nAnd two indexes:\nCREATE INDEX foo_key ON foo (some_key);\nCREATE INDEX foo_time ON foo (some_time);\n\nNow, you make a query:\nSELECT * from foo WHERE some_key > 10 AND some_time >\n'2006-10-01'::timestamptz;\n\n...originally planner would choose only one index to use -- and would use\nthe\none which it think its best.\n\nThe 8.1 version does differently: It will scan foo_key index -- make a\nbitmap out of it,\nscan foo_time index -- make another bitmap out of it, binary AND these\nbitmaps,\nand will read the data from the table using such combined bitmap. It could\nas well\nuse \"OR\" if you used OR in your query.\n\nHence -- it can be faster, especially for large tables and selective\nqueries.\n\nRegards,\n DAwid\n\nOn 10/17/06, soni de <[email protected]> wrote: \n\n\nI\ndidn't understand the \"Bitmap Scan\" and the sentence \"indexes\nwill be dynamically converted to bitmaps in memory\". What does mean by\n\"Bitmap Scan\" in database?\n\n\n \n\nCan anybody help us regarding above query?\nAssume you have a table:\nCREATE TABLE foo (\n some_key int,\n some_time timestamp with time zone,\n some_data text\n);\nAnd two indexes:\nCREATE INDEX foo_key ON foo (some_key);\nCREATE INDEX foo_time ON foo (some_time);\n\nNow, you make a query:\nSELECT * from foo WHERE some_key > 10 AND some_time > '2006-10-01'::timestamptz;\n\n...originally planner would choose only one index to use -- and would use the\none which it think its best.\n\nThe 8.1 version does differently: It will scan foo_key index -- make a bitmap out of it,\nscan foo_time index -- make another bitmap out of it, binary AND these bitmaps,\nand will read the data from the table using such combined bitmap. It could as well\nuse \"OR\" if you used OR in your query.\n\nHence -- it can be faster, especially for large tables and selective queries.\n\nRegards,\n DAwid",
"msg_date": "Tue, 17 Oct 2006 14:27:31 +0200",
"msg_from": "\"Dawid Kuroczko\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Regarding Bitmap Scan"
},
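To watch this happen on the example schema above, one can EXPLAIN the same query on 8.1. No actual plan output is reproduced here, since the costs and row counts depend entirely on the data; when both conditions are selective enough, the plan typically has the shape sketched in the comment:

    EXPLAIN ANALYZE
    SELECT * FROM foo
     WHERE some_key > 10
       AND some_time > '2006-10-01'::timestamptz;

    -- Typical node shape when both indexes are combined:
    --   Bitmap Heap Scan on foo
    --     Recheck Cond: (some_key > 10) AND (some_time > ...)
    --     ->  BitmapAnd
    --           ->  Bitmap Index Scan on foo_key
    --           ->  Bitmap Index Scan on foo_time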
{
"msg_contents": "Thanks a lot for your help.\n\nThanks,\nSoni\n\n\nOn 10/17/06, Dawid Kuroczko <[email protected]> wrote:\n>\n> On 10/17/06, soni de <[email protected]> wrote:\n> >\n> > I didn't understand the \"Bitmap Scan\" and the sentence \"indexes will be\n> > dynamically converted to bitmaps in memory\". What does mean by \"Bitmap Scan\"\n> > in database?\n> >\n> >\n> >\n> > Can anybody help us regarding above query?\n> >\n>\n> Assume you have a table:\n> CREATE TABLE foo (\n> some_key int,\n> some_time timestamp with time zone,\n> some_data text\n> );\n> And two indexes:\n> CREATE INDEX foo_key ON foo (some_key);\n> CREATE INDEX foo_time ON foo (some_time);\n>\n> Now, you make a query:\n> SELECT * from foo WHERE some_key > 10 AND some_time >\n> '2006-10-01'::timestamptz;\n>\n> ...originally planner would choose only one index to use -- and would use\n> the\n> one which it think its best.\n>\n> The 8.1 version does differently: It will scan foo_key index -- make a\n> bitmap out of it,\n> scan foo_time index -- make another bitmap out of it, binary AND these\n> bitmaps,\n> and will read the data from the table using such combined bitmap. It\n> could as well\n> use \"OR\" if you used OR in your query.\n>\n> Hence -- it can be faster, especially for large tables and selective\n> queries.\n>\n> Regards,\n> DAwid\n>\n>\n>\n\nThanks a lot for your help.\n \nThanks,\nSoni \nOn 10/17/06, Dawid Kuroczko <[email protected]> wrote:\nOn 10/17/06, soni de <\[email protected]> wrote:\n \n\n\nI didn't understand the \"Bitmap Scan\" and the sentence \"indexes will be dynamically converted to bitmaps in memory\". What does mean by \"Bitmap Scan\" in database? \n\n \nCan anybody help us regarding above query?\nAssume you have a table:CREATE TABLE foo ( some_key int, some_time timestamp with time zone, some_data text);And two indexes:CREATE INDEX foo_key ON foo (some_key);CREATE INDEX foo_time ON foo (some_time);\nNow, you make a query:SELECT * from foo WHERE some_key > 10 AND some_time > '2006-10-01'::timestamptz;...originally planner would choose only one index to use -- and would use theone which it think its best.\nThe 8.1 version does differently: It will scan foo_key index -- make a bitmap out of it,scan foo_time index -- make another bitmap out of it, binary AND these bitmaps,and will read the data from the table using such combined bitmap. It could as well\nuse \"OR\" if you used OR in your query.Hence -- it can be faster, especially for large tables and selective queries.Regards, DAwid",
"msg_date": "Fri, 27 Oct 2006 18:42:35 +0530",
"msg_from": "\"soni de\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Regarding Bitmap Scan"
}
] |
[
{
"msg_contents": "Hi\n\nWe are facing performance problems in postgres while executing a query.\nWhen I execute this query on the server it takes 5-10 seconds. Also I\nget good performance while executing this query from my code in java\nwith the hard codes values. I face severe performance problems when I\nrun it using a prepared statement. \n\nThe query is as follows:\nSelect events.event_id, ctrl.real_name, events.tsds, events.value,\nevents.lds, events.correction, ctrl.type, ctrl.freq from\niso_midw_data_update_events events, iso_midw_control ctrl where\nevents.obj_id = ctrl.obj_id and events.event_id > 68971124 order by\nevents.event_id limit 2000\n\nThe above query executes in 5-10 seconds.\n\nHowever the below query executes in 8 mins:\n\nSelect events.event_id, ctrl.real_name, events.tsds, events.value,\nevents.lds, events.correction, ctrl.type, ctrl.freq from table events,\niso_midw_control ctrl where events.obj_id = ctrl.obj_id and\nevents.event_id > ?::bigint order by events.event_id limit ?\n\nsetLong(1, 68971124);\nsetInt(2, 2000);\n\nThe table has close to 5 million rows. The table has the following\nindex:\n\niso_midw_data_update_events_event_id_key\niso_midw_data_update_events_lds_idx\niso_midw_data_update_events_obj_id_idx\n\n\nThe table is described as follows:\n\nColumns_name data_type type_name\tcolumn_size\nlds\t\t2\tnumeric\t\t13\nobj_id\t\t2\tnumeric\t\t6\ntsds\t\t2\tnumeric\t\t13\nvalue\t\t12\tvarchar\t\t22\ncorrection\t2\tnumeric\t\t1\ndelta_lds_tsds\t2\tnumeric\t\t13\nevent_id\t-5\tbigserial\t8\n\nPlease tell me what I am missing while setting the prepared statement. I\nam using postgres7.4.2. and postgresql-8.1-407.jdbc3.jar.\n\n\nThanks\n\n\nRegards\n\nRohit\n\n\n\n\n\n\nJdbc/postgres performance\n\n\n\nHi\nWe are facing performance problems in postgres while executing a query. When I execute this query on the server it takes 5-10 seconds. Also I get good performance while executing this query from my code in java with the hard codes values. I face severe performance problems when I run it using a prepared statement. \nThe query is as follows:\nSelect events.event_id, ctrl.real_name, events.tsds, events.value, events.lds, events.correction, ctrl.type, ctrl.freq from iso_midw_data_update_events events, iso_midw_control ctrl where events.obj_id = ctrl.obj_id and events.event_id > 68971124 order by events.event_id limit 2000\nThe above query executes in 5-10 seconds.\nHowever the below query executes in 8 mins:\nSelect events.event_id, ctrl.real_name, events.tsds, events.value, events.lds, events.correction, ctrl.type, ctrl.freq from table events, iso_midw_control ctrl where events.obj_id = ctrl.obj_id and events.event_id > ?::bigint order by events.event_id limit ?\nsetLong(1, 68971124);\nsetInt(2, 2000);\nThe table has close to 5 million rows. The table has the following index:\niso_midw_data_update_events_event_id_key\niso_midw_data_update_events_lds_idx\niso_midw_data_update_events_obj_id_idx\n\nThe table is described as follows:\nColumns_name data_type type_name column_size\nlds 2 numeric 13\nobj_id 2 numeric 6\ntsds 2 numeric 13\nvalue 12 varchar 22\ncorrection 2 numeric 1\ndelta_lds_tsds 2 numeric 13\nevent_id -5 bigserial 8\nPlease tell me what I am missing while setting the prepared statement. I am using postgres7.4.2. and postgresql-8.1-407.jdbc3.jar.\n\nThanks\n\nRegards\nRohit",
"msg_date": "Tue, 17 Oct 2006 20:05:28 +0100",
"msg_from": "\"Behl, Rohit \\(Infosys\\)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Jdbc/postgres performance"
},
{
"msg_contents": "On 17-Oct-06, at 3:05 PM, Behl, Rohit ((Infosys)) wrote:\n\n> Hi\n>\n> We are facing performance problems in postgres while executing a \n> query. When I execute this query on the server it takes 5-10 \n> seconds. Also I get good performance while executing this query \n> from my code in java with the hard codes values. I face severe \n> performance problems when I run it using a prepared statement.\n>\n> The query is as follows:\n>\n> Select events.event_id, ctrl.real_name, events.tsds, events.value, \n> events.lds, events.correction, ctrl.type, ctrl.freq from \n> iso_midw_data_update_events events, iso_midw_control ctrl where \n> events.obj_id = ctrl.obj_id and events.event_id > 68971124 order by \n> events.event_id limit 2000\n>\n> The above query executes in 5-10 seconds.\n>\n> However the below query executes in 8 mins:\n>\n> Select events.event_id, ctrl.real_name, events.tsds, events.value, \n> events.lds, events.correction, ctrl.type, ctrl.freq from table \n> events, iso_midw_control ctrl where events.obj_id = ctrl.obj_id and \n> events.event_id > ?::bigint order by events.event_id limit ?\n> setLong(1, 68971124);\n>\n> setInt(2, 2000);\n>\n> The table has close to 5 million rows. The table has the following \n> index:\n>\n> iso_midw_data_update_events_event_id_key\n>\n> iso_midw_data_update_events_lds_idx\n>\n> iso_midw_data_update_events_obj_id_idx\n>\n>\n> The table is described as follows:\n>\n> Columns_name data_type type_name column_size\n>\n> lds 2 numeric 13\n>\n> obj_id 2 numeric 6\n>\n> tsds 2 numeric 13\n>\n> value 12 varchar 22\n>\n> correction 2 numeric 1\n>\n> delta_lds_tsds 2 numeric 13\n>\n> event_id -5 bigserial 8\n>\n> Please tell me what I am missing while setting the prepared \n> statement. I am using postgres7.4.2. and postgresql-8.1-407.jdbc3.jar.\n\nTry the same query with protocolVersion=2. There are some issues with \nprepared statements being slower if the parameters are not the same \ntype as the column being compared to.\n\nprotocol version 2 will issue the query exactly the same as psql \ndoes. Also note that your two queries are not identical. In the \nprepared query you cast to bigint ?\n\nVersion 8.1.x handles this better I think.\n>\n> Thanks\n>\n>\n> Regards\n>\n> Rohit\n>\n\n\nOn 17-Oct-06, at 3:05 PM, Behl, Rohit ((Infosys)) wrote: HiWe are facing performance problems in postgres while executing a query. When I execute this query on the server it takes 5-10 seconds. Also I get good performance while executing this query from my code in java with the hard codes values. I face severe performance problems when I run it using a prepared statement. The query is as follows:Select events.event_id, ctrl.real_name, events.tsds, events.value, events.lds, events.correction, ctrl.type, ctrl.freq from iso_midw_data_update_events events, iso_midw_control ctrl where events.obj_id = ctrl.obj_id and events.event_id > 68971124 order by events.event_id limit 2000The above query executes in 5-10 seconds.However the below query executes in 8 mins:Select events.event_id, ctrl.real_name, events.tsds, events.value, events.lds, events.correction, ctrl.type, ctrl.freq from table events, iso_midw_control ctrl where events.obj_id = ctrl.obj_id and events.event_id > ?::bigint order by events.event_id limit ?setLong(1, 68971124);setInt(2, 2000);The table has close to 5 million rows. 
The table has the following index:iso_midw_data_update_events_event_id_keyiso_midw_data_update_events_lds_idxiso_midw_data_update_events_obj_id_idx The table is described as follows:Columns_name data_type type_name column_sizelds 2 numeric 13obj_id 2 numeric 6tsds 2 numeric 13value 12 varchar 22correction 2 numeric 1delta_lds_tsds 2 numeric 13event_id -5 bigserial 8Please tell me what I am missing while setting the prepared statement. I am using postgres7.4.2. and postgresql-8.1-407.jdbc3.jar.Try the same query with protocolVersion=2. There are some issues with prepared statements being slower if the parameters are not the same type as the column being compared to.protocol version 2 will issue the query exactly the same as psql does. Also note that your two queries are not identical. In the prepared query you cast to bigint ?Version 8.1.x handles this better I think. Thanks RegardsRohit",
"msg_date": "Sun, 22 Oct 2006 11:11:17 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Jdbc/postgres performance"
}
] |
[
{
"msg_contents": "\n Hi\r\n\nWe are facing performance problems in postgres while executing a query. When I execute this query on the server it takes 5-10 seconds. Also I get good performance while executing this query from my code in java with the hard codes values. I face severe performance problems when I run it using a prepared statement.\r\n\nThe query is as follows:\n\nSelect events.event_id, ctrl.real_name, events.tsds, events.value, events.lds, events.correction, ctrl.type, ctrl.freq from iso_midw_data_update_events events, iso_midw_control ctrl where events.obj_id = ctrl.obj_id and events.event_id > 68971124 order by events.event_id limit 2000\n\nThe above query executes in 5-10 seconds.\n\nHowever the below query executes in 8 mins:\n\nSelect events.event_id, ctrl.real_name, events.tsds, events.value, events.lds, events.correction, ctrl.type, ctrl.freq from table events, iso_midw_control ctrl where events.obj_id = ctrl.obj_id and events.event_id > ?::bigint order by events.event_id limit ?\n\nsetLong(1, 68971124);\n\nsetInt(2, 2000);\n\nThe table has close to 5 million rows. The table has the following index:\n\niso_midw_data_update_events_event_id_key\n\niso_midw_data_update_events_lds_idx\n\niso_midw_data_update_events_obj_id_idx\n\n\nThe table is described as follows:\n\nColumns_name data_type type_name column_size\n\nlds 2 numeric 13\n\nobj_id 2 numeric 6\n\ntsds 2 numeric 13\n\nvalue 12 varchar 22\n\ncorrection 2 numeric 1\n\ndelta_lds_tsds 2 numeric 13\n\nevent_id -5 bigserial 8\n\nPlease tell me what I am missing while setting the prepared statement. I am using postgres7.4.2. and postgresql-8.1-407.jdbc3.jar.\n\n\nThanks\n\n\nRegards\n\nRohit\n\n\n**************** CAUTION - Disclaimer *****************\nThis e-mail contains PRIVILEGED AND CONFIDENTIAL INFORMATION intended solely for the use of the addressee(s). If you are not the intended recipient, please notify the sender by e-mail and delete the original message. Further, you are not to copy, disclose, or distribute this e-mail or its contents to any other person and any such actions are unlawful. This e-mail may contain viruses. Infosys has taken every reasonable precaution to minimize this risk, but is not liable for any damage you may sustain as a result of any virus in this e-mail. You should carry out your own virus checks before opening the e-mail or attachment. Infosys reserves the right to monitor and review the content of all messages sent to or from this e-mail address. Messages sent to or from this e-mail address may be stored on the Infosys e-mail system.\n***INFOSYS******** End of Disclaimer ********INFOSYS***\n",
"msg_date": "Wed, 18 Oct 2006 01:15:40 +0530",
"msg_from": "\"Rohit_Behl\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Jdbc/postgres performance"
},
{
"msg_contents": "On 10/17/06, Rohit_Behl <[email protected]> wrote:\n> Select events.event_id, ctrl.real_name, events.tsds, events.value, events.lds, events.correction, ctrl.type, ctrl.freq from table events, iso_midw_control ctrl where events.obj_id = ctrl.obj_id and events.event_id > ?::bigint order by events.event_id limit ?\n\nunfortunately parameterized limit statements cause problems due to the\nfact the planner has a hard coded 'guess' of 10% of rows returned when\nthe plan is generated. I mention this everyime query hints proposal\ncomes up :-).\n\nbest you can do is to try turning off seqscan and possibly bitmap scan\nwhen the plan is generated.\n\nmerlin\n",
"msg_date": "Tue, 17 Oct 2006 16:28:49 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Jdbc/postgres performance"
},
{
"msg_contents": "\n> -----Original Message-----\n> From: [email protected]\n[mailto:pgsql-performance-\n> [email protected]] On Behalf Of Merlin Moncure\n> Sent: Tuesday, October 17, 2006 4:29 PM\n> To: Rohit_Behl\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Jdbc/postgres performance\n> \n> On 10/17/06, Rohit_Behl <[email protected]> wrote:\n> > Select events.event_id, ctrl.real_name, events.tsds, events.value,\n> events.lds, events.correction, ctrl.type, ctrl.freq from table events,\n> iso_midw_control ctrl where events.obj_id = ctrl.obj_id and\n> events.event_id > ?::bigint order by events.event_id limit ?\n> \n> unfortunately parameterized limit statements cause problems due to the\n> fact the planner has a hard coded 'guess' of 10% of rows returned when\n> the plan is generated. I mention this everyime query hints proposal\n> comes up :-).\n\nI'm not sure that this has anything to do with hints (yes, I know hints\nare a popular topic as of late..) but from the 8.1 Manual:\n\n\"This is because when the statement is planned and the planner attempts\nto determine the optimal query plan, the actual values of any parameters\nspecified in the statement are unavailable.\"\n\nAfter a quick search on the JDBC list, it looks like there's some recent\ndiscussion on the subject of how to give the planner better insight for\nprepared statements (the subject is \"Blind Message\" if you're\nlooking...). \n\nSo, I'm off to go read there and perhaps join the jdbc mailing list too.\n\n\nBut, a more general postgres question. I assume if I want to turn\nprepared statements off altogether (say I'm using a jdbc abstraction\nlayer that likes parameterized statements, and there's other benefits to\nparameterizing other than just saving on db parse/plan) can I set\nmax_prepared_transactions to 0? Is there any other option outside of\nJDBC? (I'll be moving my other questions over to the JDBC list...)\n\nAlso, others might be interested in the JDBC documentation, which is\nseparate from the main Postgres manual and can be found at:\nhttp://jdbc.postgresql.org/documentation/\n\n\n- Bucky\n \n\n> best you can do is to try turning off seqscan and possibly bitmap scan\n> when the plan is generated.\n> \n\n",
"msg_date": "Tue, 17 Oct 2006 20:47:03 -0400",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Jdbc/postgres performance"
},
{
"msg_contents": "On 10/18/06, Bucky Jordan <[email protected]> wrote:\n> > On 10/17/06, Rohit_Behl <[email protected]> wrote:\n> > > Select events.event_id, ctrl.real_name, events.tsds, events.value,\n> > events.lds, events.correction, ctrl.type, ctrl.freq from table events,\n> > iso_midw_control ctrl where events.obj_id = ctrl.obj_id and\n> > events.event_id > ?::bigint order by events.event_id limit ?\n> >\n\n> After a quick search on the JDBC list, it looks like there's some recent\n> discussion on the subject of how to give the planner better insight for\n> prepared statements (the subject is \"Blind Message\" if you're\n> looking...).\n>\n> So, I'm off to go read there and perhaps join the jdbc mailing list too.\n\nthis is not really a jdbc issue, just a practical problem with\nprepared statements...except for the mechanism if any the jdbc driver\nallows you to choose if a statement is prepared.\n\n> But, a more general postgres question. I assume if I want to turn\n> prepared statements off altogether (say I'm using a jdbc abstraction\n\nyou turn off prepared statements by not invoking sql prepare or\nPQprepare. (or, if jdbc implements its own protocol client, it's\nversion of PQprepare).\n\n> layer that likes parameterized statements, and there's other benefits to\n> parameterizing other than just saving on db parse/plan) can I set\n> max_prepared_transactions to 0? Is there any other option outside of\n\nthis setting is for 2pc and is not relevent to the discussion :) even\nif it were, im not so sure about a setting designed to enforce a\npartcular method of querying.\n\nyes, you are correct this is not exactly the use case for hints being\ndiscussed in -hackers. however, imho, this is much more important and\nrelevant so long as prepared statements continue to work the way they\ndo.\n\nmerlin\n",
"msg_date": "Wed, 18 Oct 2006 09:11:40 +0530",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Jdbc/postgres performance"
},
{
"msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> this is not really a jdbc issue, just a practical problem with\n> prepared statements...\n\nSpecifically, that the OP is running a 7.4 backend, which was our\nfirst venture into prepared parameterized statements. PG 8.1 will\ndo better, 8.2 should do better yet.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Oct 2006 01:19:39 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Jdbc/postgres performance "
},
{
"msg_contents": "On 10/18/06, Tom Lane <[email protected]> wrote:\n> \"Merlin Moncure\" <[email protected]> writes:\n> > this is not really a jdbc issue, just a practical problem with\n> > prepared statements...\n>\n> Specifically, that the OP is running a 7.4 backend, which was our\n> first venture into prepared parameterized statements. PG 8.1 will\n> do better, 8.2 should do better yet.\n\nI haven't looked at 8.2 because I no longer work at my previous\nposition, but I was significantly affected by this problem through the\n8.1 release. The speed advantages of preparing certain types queries\nare dramatic and there are some decent use cases for pramaterizing\nlimit and other input parameters that are difficult to guess.\n\nmerlin\n",
"msg_date": "Wed, 18 Oct 2006 09:31:41 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Jdbc/postgres performance"
}
] |
[
{
"msg_contents": "\nHi Merlin\n\nI have disabled seq-scan and now it works like a charm. Thanks it was a saver.\n\n\r\n\nRegards\n\nRohit\n\n\r\n\nOn 10/18/06, Bucky Jordan <[email protected]> wrote:\n\n> > On 10/17/06, Rohit_Behl <[email protected]> wrote:\n\n> > > Select events.event_id, ctrl.real_name, events.tsds, events.value,\n\n> > events.lds, events.correction, ctrl.type, ctrl.freq from table\r\n\n> > events, iso_midw_control ctrl where events.obj_id = ctrl.obj_id and\r\n\n> > events.event_id > ?::bigint order by events.event_id limit ?\n\n> >\n\n> After a quick search on the JDBC list, it looks like there's some\r\n\n> recent discussion on the subject of how to give the planner better\r\n\n> insight for prepared statements (the subject is \"Blind Message\" if\r\n\n> you're looking...).\n\n>\n\n> So, I'm off to go read there and perhaps join the jdbc mailing list too.\n\nthis is not really a jdbc issue, just a practical problem with prepared statements...except for the mechanism if any the jdbc driver allows you to choose if a statement is prepared.\n\n> But, a more general postgres question. I assume if I want to turn\r\n\n> prepared statements off altogether (say I'm using a jdbc abstraction\n\nyou turn off prepared statements by not invoking sql prepare or PQprepare. (or, if jdbc implements its own protocol client, it's version of PQprepare).\n\n> layer that likes parameterized statements, and there's other benefits\r\n\n> to parameterizing other than just saving on db parse/plan) can I set\r\n\n> max_prepared_transactions to 0? Is there any other option outside of\n\nthis setting is for 2pc and is not relevent to the discussion :) even if it were, im not so sure about a setting designed to enforce a partcular method of querying.\n\nyes, you are correct this is not exactly the use case for hints being discussed in -hackers. however, imho, this is much more important and relevant so long as prepared statements continue to work the way they do.\n\nmerlin\n\n\r\n\n**************** CAUTION - Disclaimer *****************\nThis e-mail contains PRIVILEGED AND CONFIDENTIAL INFORMATION intended solely for the use of the addressee(s). If you are not the intended recipient, please notify the sender by e-mail and delete the original message. Further, you are not to copy, disclose, or distribute this e-mail or its contents to any other person and any such actions are unlawful. This e-mail may contain viruses. Infosys has taken every reasonable precaution to minimize this risk, but is not liable for any damage you may sustain as a result of any virus in this e-mail. You should carry out your own virus checks before opening the e-mail or attachment. Infosys reserves the right to monitor and review the content of all messages sent to or from this e-mail address. Messages sent to or from this e-mail address may be stored on the Infosys e-mail system.\n***INFOSYS******** End of Disclaimer ********INFOSYS***\n",
"msg_date": "Wed, 18 Oct 2006 15:40:01 +0530",
"msg_from": "\"Rohit_Behl\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Jdbc/postgres performance"
},
{
"msg_contents": "\nHi\n\r\nI made the following changes to the conf file:\n\r\nenable_indexscan = true\n\nenable_seqscan = false\n\nWe also have a large amount of data being inserted into our tables. I was just wondering if this could have an impact on the inserts since I guess this change is on the database.\n\nPlease let me know.\n\nThanks\n\nRegards\n\nRohit\n\n\n________________________________\n\nFrom: Rohit_Behl\nSent: Wed 18/10/2006 11:10\nTo: Merlin Moncure\nCc: [email protected]\nSubject: Re: [PERFORM] Jdbc/postgres performance\n\n\n\nHi Merlin\n\nI have disabled seq-scan and now it works like a charm. Thanks it was a saver.\n\n\r\n\nRegards\n\nRohit\n\n\r\n\nOn 10/18/06, Bucky Jordan <[email protected]> wrote:\n\n> > On 10/17/06, Rohit_Behl <[email protected]> wrote:\n\n> > > Select events.event_id, ctrl.real_name, events.tsds, events.value,\n\n> > events.lds, events.correction, ctrl.type, ctrl.freq from table\r\n\n> > events, iso_midw_control ctrl where events.obj_id = ctrl.obj_id and\r\n\n> > events.event_id > ?::bigint order by events.event_id limit ?\n\n> >\n\n> After a quick search on the JDBC list, it looks like there's some\r\n\n> recent discussion on the subject of how to give the planner better\r\n\n> insight for prepared statements (the subject is \"Blind Message\" if\r\n\n> you're looking...).\n\n>\n\n> So, I'm off to go read there and perhaps join the jdbc mailing list too.\n\nthis is not really a jdbc issue, just a practical problem with prepared statements...except for the mechanism if any the jdbc driver allows you to choose if a statement is prepared.\n\n> But, a more general postgres question. I assume if I want to turn\r\n\n> prepared statements off altogether (say I'm using a jdbc abstraction\n\nyou turn off prepared statements by not invoking sql prepare or PQprepare. (or, if jdbc implements its own protocol client, it's version of PQprepare).\n\n> layer that likes parameterized statements, and there's other benefits\r\n\n> to parameterizing other than just saving on db parse/plan) can I set\r\n\n> max_prepared_transactions to 0? Is there any other option outside of\n\nthis setting is for 2pc and is not relevent to the discussion :) even if it were, im not so sure about a setting designed to enforce a partcular method of querying.\n\nyes, you are correct this is not exactly the use case for hints being discussed in -hackers. however, imho, this is much more important and relevant so long as prepared statements continue to work the way they do.\n\nmerlin\n\n\r\n\n**************** CAUTION - Disclaimer *****************\nThis e-mail contains PRIVILEGED AND CONFIDENTIAL INFORMATION intended solely for the use of the addressee(s). If you are not the intended recipient, please notify the sender by e-mail and delete the original message. Further, you are not to copy, disclose, or distribute this e-mail or its contents to any other person and any such actions are unlawful. This e-mail may contain viruses. Infosys has taken every reasonable precaution to minimize this risk, but is not liable for any damage you may sustain as a result of any virus in this e-mail. You should carry out your own virus checks before opening the e-mail or attachment. Infosys reserves the right to monitor and review the content of all messages sent to or from this e-mail address. Messages sent to or from this e-mail address may be stored on the Infosys e-mail system.\n***INFOSYS******** End of Disclaimer ********INFOSYS***\n",
"msg_date": "Wed, 18 Oct 2006 17:28:29 +0530",
"msg_from": "\"Rohit_Behl\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Jdbc/postgres performance"
},
{
"msg_contents": "Rohit_Behl wrote:\n> Hi\n> \n> I made the following changes to the conf file:\n> \n> enable_indexscan = true\n> \n> enable_seqscan = false\n> \n> We also have a large amount of data being inserted into our tables. I was just wondering if this could have an impact on the inserts since I guess this change is on the database.\n\nenable_seqscan shouldn't affect plain inserts, but it will affect \n*every* query in the system.\n\nI would suggest using setting \"prepareThreshold=0\" in the JDBC driver \nconnection URL, or calling pstmt.setPrepareThreshold(0) in the \napplication. That tells the driver not to use server-side prepare, and \nthe query will be re-planned every time you execute it with the real \nvalues of the parameters.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Wed, 18 Oct 2006 13:51:47 +0100",
"msg_from": "\"Heikki Linnakangas\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Jdbc/postgres performance"
},
{
"msg_contents": "On 10/18/06, Heikki Linnakangas <[email protected]> wrote:\n> I would suggest using setting \"prepareThreshold=0\" in the JDBC driver\n> connection URL, or calling pstmt.setPrepareThreshold(0) in the\n> application. That tells the driver not to use server-side prepare, and\n> the query will be re-planned every time you execute it with the real\n> values of the parameters.\n\nthat works. I think another alternative is to just turn off seqscan\ntemporarily for the session:\nset enable_seqscan=false;\n\nand re-enable it after prepareing the statement. however I agree that\nseqscan should be enabled for normal operation. in fact, this becomes\nmore and more important as your database becomes really big due to\npoor random i/o of hard drives.\n\nmerlin\n",
"msg_date": "Wed, 18 Oct 2006 09:20:54 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Jdbc/postgres performance"
}
] |
[
{
"msg_contents": "Hi list !\n\nI have two table with a 2-column index on both of them.\nIn the first table, the first colum of the index is the primary key, \nthe second one is an integer field.\nIn the second table, the two columns are the primary key.\nWhen I join these two tables, the 2-column index of the first table is \nnot used.\nWhy does the query planner think that this plan is better ?\n\nALTER TABLE geo.subcities_names\n ADD CONSTRAINT subcities_names_pkey PRIMARY KEY(subcity_gid, \nlanguage_id);\n\nCREATE INDEX subcities_gid_language_id\n ON geo.subcities\n USING btree\n (gid, official_language_id);\n\nEXPLAIN ANALYZE\nSELECT * FROM geo.subcities sc, geo.subcities_names scn\nWHERE sc.gid = scn.subcity_gid AND sc.official_language_id = \nscn.language_id;\n\nResult :\n\n Merge Join (cost=0.00..4867.91 rows=37917 width=240) (actual \ntime=0.037..149.022 rows=39323 loops=1)\n Merge Cond: (\"outer\".gid = \"inner\".subcity_gid)\n Join Filter: (\"outer\".official_language_id = \"inner\".language_id)\n -> Index Scan using subcities_pkey on subcities sc \n(cost=0.00..1893.19 rows=39357 width=200) (actual time=0.015..43.430 \nrows=39357 loops=1)\n -> Index Scan using subcities_names_pkey on subcities_names scn \n(cost=0.00..2269.39 rows=40517 width=40) (actual time=0.012..35.465 \nrows=40517 loops=1)\n Total runtime: 157.389 ms\n(6 rows)\n\n\nThanks for your suggestions !\nRegards\n--\nArnaud\n",
"msg_date": "Wed, 18 Oct 2006 13:21:09 +0200",
"msg_from": "Arnaud Lesauvage <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index on two columns not used"
},
{
"msg_contents": "Arnaud Lesauvage wrote:\n> I have two table with a 2-column index on both of them.\n> In the first table, the first colum of the index is the primary key, the \n> second one is an integer field.\n> In the second table, the two columns are the primary key.\n> When I join these two tables, the 2-column index of the first table is \n> not used.\n> Why does the query planner think that this plan is better ?\n> \n> ALTER TABLE geo.subcities_names\n> ADD CONSTRAINT subcities_names_pkey PRIMARY KEY(subcity_gid, \n> language_id);\n> \n> CREATE INDEX subcities_gid_language_id\n> ON geo.subcities\n> USING btree\n> (gid, official_language_id);\n> \n> EXPLAIN ANALYZE\n> SELECT * FROM geo.subcities sc, geo.subcities_names scn\n> WHERE sc.gid = scn.subcity_gid AND sc.official_language_id = \n> scn.language_id;\n\nMy theory:\n\nThere's no additional restrictions besides the join condition, so the \nsystem has to scan both tables completely. It chooses to use a full \nindex scan instead of a seq scan to be able to do a merge join. Because \nit's going to have to scan the indexes completely anyway, it chooses the \nsmallest index which is subcities_pkey.\n\nYou'd think that the system could do the merge using just the indexes, \nand only fetch the heap tuples for matches. If that were the case, using \nthe 2-column index would indeed be a good idea. However, PostgreSQL \ncan't use the values stored in the index to check the join condition, so \nall the heap tuples are fetched anyway. There was just recently \ndiscussion about this on this list: \nhttp://archives.postgresql.org/pgsql-performance/2006-09/msg00080.php.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Wed, 18 Oct 2006 12:40:05 +0100",
"msg_from": "\"Heikki Linnakangas\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index on two columns not used"
},
{
"msg_contents": "Heikki Linnakangas a �crit :\n> Arnaud Lesauvage wrote:\n>> I have two table with a 2-column index on both of them.\n>> In the first table, the first colum of the index is the primary key, the \n>> second one is an integer field.\n>> In the second table, the two columns are the primary key.\n>> When I join these two tables, the 2-column index of the first table is \n>> not used.\n>> Why does the query planner think that this plan is better ?\n> \n> You'd think that the system could do the merge using just the indexes, \n> and only fetch the heap tuples for matches. If that were the case, using \n> the 2-column index would indeed be a good idea. However, PostgreSQL \n> can't use the values stored in the index to check the join condition, so \n> all the heap tuples are fetched anyway. There was just recently \n> discussion about this on this list: \n> http://archives.postgresql.org/pgsql-performance/2006-09/msg00080.php.\n> \n\n\nThanks for your answer Heikki.\nI did not know that joins were not using index values, and \nthat PostgreSQL had to fecth the heap tuples anyway.\nDoes this mean that this 2-column index is useless ? (I \ncreated it for the join, I don't often filter on both \ncolumns otherwise)\n\nThis query was taken from my \"adminsitrative areas\" model \n(continents, countries, etc...). Whenever I query this \nmodel, I have to join many tables.\nI don't really know what the overhead of reading the \nheap-tuples is, but would it be a good idea to add \ndata-redundancy in my tables to avoid joins ? (adding \ncountry_id, continent_id, etc... in the \"cities\" table)\n\nRegards\n--\nArnaud\n\n",
"msg_date": "Wed, 18 Oct 2006 14:12:29 +0200",
"msg_from": "Arnaud Lesauvage <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index on two columns not used"
},
{
"msg_contents": "Arnaud Lesauvage wrote:\n> I did not know that joins were not using index values, and that \n> PostgreSQL had to fecth the heap tuples anyway.\n> Does this mean that this 2-column index is useless ? (I created it for \n> the join, I don't often filter on both columns otherwise)\n\nWell, if no-one is using the index, it is useless..\n\n> This query was taken from my \"adminsitrative areas\" model (continents, \n> countries, etc...). Whenever I query this model, I have to join many \n> tables.\n> I don't really know what the overhead of reading the heap-tuples is, but \n> would it be a good idea to add data-redundancy in my tables to avoid \n> joins ? (adding country_id, continent_id, etc... in the \"cities\" table)\n\nIt depends. I would advise not to denormalize unless you really really \nhave to. It's hard to say without more knowledge of the application.\n\nIs the query you showed a typical one? It ran in about 160 ms, is that \ngood enough? It's doesn't sound too bad, considering that it returned \nalmost 40000 rows.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Wed, 18 Oct 2006 14:57:25 +0100",
"msg_from": "\"Heikki Linnakangas\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index on two columns not used"
},
{
"msg_contents": "Heikki Linnakangas a �crit :\n> Arnaud Lesauvage wrote:\n>> This query was taken from my \"adminsitrative areas\" model (continents, \n>> countries, etc...). Whenever I query this model, I have to join many \n>> tables.\n>> I don't really know what the overhead of reading the heap-tuples is, but \n>> would it be a good idea to add data-redundancy in my tables to avoid \n>> joins ? (adding country_id, continent_id, etc... in the \"cities\" table)\n> \n> It depends. I would advise not to denormalize unless you really really \n> have to. It's hard to say without more knowledge of the application.\n> \n> Is the query you showed a typical one? It ran in about 160 ms, is that \n> good enough? It's doesn't sound too bad, considering that it returned \n> almost 40000 rows.\n\n\nIt is quite typical, yes. It is the base query of a view. In \nfact, most views have a lot more joins (they join with all \nthe upper-level tables).\nBut 150ms is OK, indeed.\n",
"msg_date": "Wed, 18 Oct 2006 16:15:13 +0200",
"msg_from": "Arnaud Lesauvage <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index on two columns not used"
},
{
"msg_contents": "Arnaud Lesauvage wrote:\n> It is quite typical, yes. It is the base query of a view. In fact, most \n> views have a lot more joins (they join with all the upper-level tables).\n> But 150ms is OK, indeed.\n\nIf the query using the view does anything more than a \"SELECT * FROM \nview\", you should do an explain analyze of the query instead of the \ndefinition of the view. The access plan might look very different.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Wed, 18 Oct 2006 15:50:35 +0100",
"msg_from": "\"Heikki Linnakangas\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index on two columns not used"
},
{
"msg_contents": "Heikki Linnakangas a �crit :\n> Arnaud Lesauvage wrote:\n>> It is quite typical, yes. It is the base query of a view. In fact, most \n>> views have a lot more joins (they join with all the upper-level tables).\n>> But 150ms is OK, indeed.\n> \n> If the query using the view does anything more than a \"SELECT * FROM \n> view\", you should do an explain analyze of the query instead of the \n> definition of the view. The access plan might look very different.\n\nThe views are used as linked tables in an Access Frontend.\nSome accesses are \"select * from view\", others might filter \non a country_id or something similar.\nFor the moment performance is good, so I think I'll keep a \nnormalized database as long as it is possible !\n\nThanks for your help Heikki !\n",
"msg_date": "Wed, 18 Oct 2006 17:05:48 +0200",
"msg_from": "Arnaud Lesauvage <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index on two columns not used"
},
{
"msg_contents": "Arnaud Lesauvage <[email protected]> writes:\n> When I join these two tables, the 2-column index of the first table is \n> not used.\n> Why does the query planner think that this plan is better ?\n\nHm, is gid by itself nearly unique in these tables? If so, the merge\njoin would get only marginally more efficient by using both columns as\nmerge conditions. Heikki's probably right to guess that the planner\nthinks it's better to use the smaller index.\n\nHowever, if there are lots of duplicate gids, then it ought to have\npreferred the two-column merge ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Oct 2006 13:31:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index on two columns not used "
},
{
"msg_contents": "Tom Lane a �crit :\n> Arnaud Lesauvage <[email protected]> writes:\n>> When I join these two tables, the 2-column index of the first table is \n>> not used.\n>> Why does the query planner think that this plan is better ?\n> \n> Hm, is gid by itself nearly unique in these tables? If so, the merge\n> join would get only marginally more efficient by using both columns as\n> merge conditions. Heikki's probably right to guess that the planner\n> thinks it's better to use the smaller index.\n> \n> However, if there are lots of duplicate gids, then it ought to have\n> preferred the two-column merge ...\n\ngid is the primary key of the first table, so absolutely unique.\nThanks for the information Tom !\n\n--\nArnaud\n",
"msg_date": "Wed, 18 Oct 2006 19:51:07 +0200",
"msg_from": "Arnaud Lesauvage <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index on two columns not used"
},
{
"msg_contents": "Sorry for the amateurish question, but what are \"heap tuples\"?\n\nAlso, my understanding is that the following statement applies only for \ncomposite indexes: \"PostgreSQL can't use the values stored in the index \nto check the join condition\". I assume that PostgreSQL will be able to \nuse single-column-indexes for join conditions. Is this correct?\n\nThank you,\nPeter\n\nHeikki Linnakangas wrote:\n> Arnaud Lesauvage wrote:\n>> I have two table with a 2-column index on both of them.\n>> In the first table, the first colum of the index is the primary key, \n>> the second one is an integer field.\n>> In the second table, the two columns are the primary key.\n>> When I join these two tables, the 2-column index of the first table \n>> is not used.\n>> Why does the query planner think that this plan is better ?\n>>\n>> ALTER TABLE geo.subcities_names\n>> ADD CONSTRAINT subcities_names_pkey PRIMARY KEY(subcity_gid, \n>> language_id);\n>>\n>> CREATE INDEX subcities_gid_language_id\n>> ON geo.subcities\n>> USING btree\n>> (gid, official_language_id);\n>>\n>> EXPLAIN ANALYZE\n>> SELECT * FROM geo.subcities sc, geo.subcities_names scn\n>> WHERE sc.gid = scn.subcity_gid AND sc.official_language_id = \n>> scn.language_id;\n>\n> My theory:\n>\n> There's no additional restrictions besides the join condition, so the \n> system has to scan both tables completely. It chooses to use a full \n> index scan instead of a seq scan to be able to do a merge join. \n> Because it's going to have to scan the indexes completely anyway, it \n> chooses the smallest index which is subcities_pkey.\n>\n> You'd think that the system could do the merge using just the indexes, \n> and only fetch the heap tuples for matches. If that were the case, \n> using the 2-column index would indeed be a good idea. However, \n> PostgreSQL can't use the values stored in the index to check the join \n> condition, so all the heap tuples are fetched anyway. There was just \n> recently discussion about this on this list: \n> http://archives.postgresql.org/pgsql-performance/2006-09/msg00080.php.\n>\n",
"msg_date": "Sat, 21 Oct 2006 12:22:25 +0200",
"msg_from": "=?ISO-8859-1?Q?P=E9ter_Kov=E1cs?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index on two columns not used"
},
{
"msg_contents": "Hi, Peter,\n\nPéter Kovács wrote:\n> Sorry for the amateurish question, but what are \"heap tuples\"?\n> \n> Also, my understanding is that the following statement applies only for\n> composite indexes: \"PostgreSQL can't use the values stored in the index\n> to check the join condition\". I assume that PostgreSQL will be able to\n> use single-column-indexes for join conditions. Is this correct?\n\nBoth questions are tightly related:\n\nFirst, the \"heap\" is the part of the table where the actual tuples are\nstored.\n\nPostgreSQL uses an MVCC system, that means that multiple versions (with\ntheir transaction information) of a single row can coexist in the heap.\nThis allows for higher concurrency in the backend.\n\nNow, the index basically stores pointers like \"pages 23 and 42 contain\nrows with value 'foo'\", but version information is not replicated to the\nindex pages, this keeps the index' size requirements low.\n\nAdditionally, in most UPDATE cases, the new row version will fit into\nthe same page as the old version. In this case, the index does not have\nto be changed, which is an additional speed improvement.\n\nBut when accessing the data via the index, it can only give a\npreselection of pages that contain interesting data, and PostgreSQL has\nto look into the actual heap pages to check whether there really are row\nversions that are visible in the current transaction.\n\n\nA further problem is that some GIST index types are lossy, that means\nthe index does not retain the full information, but only an\napproximation, for efficiency reasons.\n\nA prominent example are the PostGIS geometry indices, they only store\nthe bounding box (4 float values) instead of the whole geometry (may be\nmillions of double precision coordinates). So it may be necessary to\nre-check the condition with the real data, using the lossy index for\npreselection.\n\nHTH,\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in Europe! www.ffii.org\nwww.nosoftwarepatents.org\n\n",
"msg_date": "Mon, 23 Oct 2006 15:25:49 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index on two columns not used"
},
{
"msg_contents": "Markus Schaber wrote:\n\n> Additionally, in most UPDATE cases, the new row version will fit into\n> the same page as the old version. In this case, the index does not have\n> to be changed, which is an additional speed improvement.\n\nActually, when the UPDATE puts a new row version in the same heap page,\nthe index must be updated anyway. All the rest of what you said is\ncorrect.\n\nThere is another reason not to put visibility info in the index, which\nis that it would be extremely complex to update all indexes to contain\nthe right visibility (and maybe impossible without risking deadlocks).\nUpdating only the heap is very simple.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Mon, 23 Oct 2006 10:50:11 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index on two columns not used"
},
{
"msg_contents": "Hi, Alvaro,\n\nAlvaro Herrera wrote:\n\n>> Additionally, in most UPDATE cases, the new row version will fit into\n>> the same page as the old version. In this case, the index does not have\n>> to be changed, which is an additional speed improvement.\n> Actually, when the UPDATE puts a new row version in the same heap page,\n> the index must be updated anyway.\n\nAFAICS only, when the index covers (directly or via function) a column\nthat's actually changed.\n\nChanging columns the index does not depend on should not need any write\naccess to that index.\n\nCorrect me if I'm wrong.\n\nThanks,\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in Europe! www.ffii.org\nwww.nosoftwarepatents.org\n",
"msg_date": "Mon, 23 Oct 2006 17:17:06 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index on two columns not used"
},
{
"msg_contents": "Markus Schaber <[email protected]> writes:\n> Alvaro Herrera wrote:\n>> Actually, when the UPDATE puts a new row version in the same heap page,\n>> the index must be updated anyway.\n\n> AFAICS only, when the index covers (directly or via function) a column\n> that's actually changed.\n> Changing columns the index does not depend on should not need any write\n> access to that index.\n> Correct me if I'm wrong.\n\nYou're wrong. An UPDATE always writes a new version of the row (if it\noverwrote the row in-place, it wouldn't be rollback-able). The new\nversion has a different TID and therefore the index entry must change.\nTo support MVCC, our approach is to always insert a new index entry\npointing at the new TID --- the old one remains in place so that the old\nversion can still be found by transactions that need it. Once the old\nrow version is entirely dead, VACUUM is responsible for removing both it\nand the index entry pointing at it.\n\nOther DBMSes use other approaches that shift the overhead to other\nplaces, but that's how Postgres does it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 23 Oct 2006 12:01:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index on two columns not used "
},
{
"msg_contents": "Markus,\n\nThank you for your kind explanation.\n\nPeter\n\nMarkus Schaber wrote:\n> Hi, Peter,\n>\n> Pᅵter Kovᅵcs wrote:\n> \n>> Sorry for the amateurish question, but what are \"heap tuples\"?\n>>\n>> Also, my understanding is that the following statement applies only for\n>> composite indexes: \"PostgreSQL can't use the values stored in the index\n>> to check the join condition\". I assume that PostgreSQL will be able to\n>> use single-column-indexes for join conditions. Is this correct?\n>> \n>\n> Both questions are tightly related:\n>\n> First, the \"heap\" is the part of the table where the actual tuples are\n> stored.\n>\n> PostgreSQL uses an MVCC system, that means that multiple versions (with\n> their transaction information) of a single row can coexist in the heap.\n> This allows for higher concurrency in the backend.\n>\n> Now, the index basically stores pointers like \"pages 23 and 42 contain\n> rows with value 'foo'\", but version information is not replicated to the\n> index pages, this keeps the index' size requirements low.\n>\n> Additionally, in most UPDATE cases, the new row version will fit into\n> the same page as the old version. In this case, the index does not have\n> to be changed, which is an additional speed improvement.\n>\n> But when accessing the data via the index, it can only give a\n> preselection of pages that contain interesting data, and PostgreSQL has\n> to look into the actual heap pages to check whether there really are row\n> versions that are visible in the current transaction.\n>\n>\n> A further problem is that some GIST index types are lossy, that means\n> the index does not retain the full information, but only an\n> approximation, for efficiency reasons.\n>\n> A prominent example are the PostGIS geometry indices, they only store\n> the bounding box (4 float values) instead of the whole geometry (may be\n> millions of double precision coordinates). So it may be necessary to\n> re-check the condition with the real data, using the lossy index for\n> preselection.\n>\n> HTH,\n> Markus\n> \n",
"msg_date": "Tue, 24 Oct 2006 01:07:35 +0200",
"msg_from": "=?ISO-8859-15?Q?P=E9ter_Kov=E1cs?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index on two columns not used"
},
{
"msg_contents": "Hi, Tom,\n\nTom Lane wrote:\n\n> You're wrong. An UPDATE always writes a new version of the row (if it\n> overwrote the row in-place, it wouldn't be rollback-able). The new\n> version has a different TID and therefore the index entry must change.\n> To support MVCC, our approach is to always insert a new index entry\n> pointing at the new TID --- the old one remains in place so that the old\n> version can still be found by transactions that need it.\n\nOK, good you corrected me.\n\nI had the weird impression that both row versions have the same tuple ID\n(as they are different versions of the same tuple), and so an index\nchange is not necessary when both versions fit on the same page.\n\nThanks,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in Europe! www.ffii.org\nwww.nosoftwarepatents.org\n",
"msg_date": "Tue, 24 Oct 2006 11:29:36 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index on two columns not used"
}
] |
[
{
"msg_contents": "Hi!\n\nI have a problem with ACCESS EXCLUSIVE lock when I drop a reference in \ntransaction. I have 2 tables: \ncreate table a(id SERIAL primary key);\ncreate table b(id SERIAL primary key references a(id));\n\nAfter that I have 2 processes: P1, P2\nIn P1:\nbegin;\nALTER TABLE b DROP CONSTRAINT b_id_fkey;\n\nIn P2:\nSELECT * FROM a;\n\nAnd I'm waiting for the result, but I don't get until P1 finishes.\nI know the DROP CONSTRAINT put an ACCESS EXCLUSIVE table LOCK into the \nTABLE a, and the SELECT is stopped by this LOCK in P2.\nNote: I cannot commit the P1 earlier, because it's a very long \ntransaction (more hours, data conversion transaction)\nMy question: Why need this strict locking?\n\nIn my opinion there isn't exclusion between the DROP CONSTRAINT and the \nSELECT.\n\nThanks for your suggestions!\nRegards,\nAntal Attila\n\n\n",
"msg_date": "Wed, 18 Oct 2006 16:24:30 +0200",
"msg_from": "Atesz <[email protected]>",
"msg_from_op": true,
"msg_subject": "ACCESS EXCLUSIVE lock"
},
{
"msg_contents": "On Wed, 2006-10-18 at 09:24, Atesz wrote:\n> Hi!\n> \n> I have a problem with ACCESS EXCLUSIVE lock when I drop a reference in \n> transaction. I have 2 tables: \n> create table a(id SERIAL primary key);\n> create table b(id SERIAL primary key references a(id));\n> \n> After that I have 2 processes: P1, P2\n> In P1:\n> begin;\n> ALTER TABLE b DROP CONSTRAINT b_id_fkey;\n> \n> In P2:\n> SELECT * FROM a;\n> \n> And I'm waiting for the result, but I don't get until P1 finishes.\n> I know the DROP CONSTRAINT put an ACCESS EXCLUSIVE table LOCK into the \n> TABLE a, and the SELECT is stopped by this LOCK in P2.\n> Note: I cannot commit the P1 earlier, because it's a very long \n> transaction (more hours, data conversion transaction)\n> My question: Why need this strict locking?\n> \n> In my opinion there isn't exclusion between the DROP CONSTRAINT and the \n> SELECT.\n\nWhat if, a minute or two after the drop contraint, you issue a rollback?\n",
"msg_date": "Wed, 18 Oct 2006 10:28:20 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ACCESS EXCLUSIVE lock"
},
{
"msg_contents": "Atesz <[email protected]> writes:\n> My question: Why need this strict locking?\n> In my opinion there isn't exclusion between the DROP CONSTRAINT and the \n> SELECT.\n\nThis isn't going to be changed, because the likely direction of future\ndevelopment is that the planner will start making use of constraints\neven for SELECT queries. This means that a DROP CONSTRAINT operation\ncould invalidate the plan of a SELECT query, so the locking will be\nessential.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Oct 2006 12:56:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ACCESS EXCLUSIVE lock "
},
{
"msg_contents": "Scott Marlowe wrote:\n> What if, a minute or two after the drop contraint, you issue a rollback?\n> \nAfter the DROP CONSTRAINT I insert 4 million rekords into the TABLE b. \nAfter the inserts I remake the dropped constraints, and commit the \ntransaction (P1). This solution is faster then the conventional method \nwithout the constraint's trick.\nIn my work the table A is a dictionary table (key-value pairs) with \n100-200 records, and the TABLE b has 20 columns with 10 references to \nTABLE a. So my experience is that I have to drop constraints before the \n4 million inserts and remake those after it.\n\nIf there is an error in my transaction (P1) and I have to rollback, \nthere isn't problem, because my inserts lost from TABLE b and the \ndropped constraints may be rolled back. In my opinion there isn't \nexclusion between a dropped constraint (reference from b to a) and a \nselect on TABLE a. If I think well the dropped constraint have to seem \nin other transation (for example: P2). And it doesn't have to seem in my \ntransaction, because it has already dropped.\n\n\nThanks your suggestions!\nRegards,\nAntal Attila\n",
"msg_date": "Thu, 19 Oct 2006 15:44:10 +0200",
"msg_from": "Atesz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ACCESS EXCLUSIVE lock"
},
{
"msg_contents": "Tom Lane wrote:\n> This isn't going to be changed, because the likely direction of future\n> development is that the planner will start making use of constraints\n> even for SELECT queries. This means that a DROP CONSTRAINT operation\n> could invalidate the plan of a SELECT query, so the locking will be\n> essential.\n> \nHi!\n\nI also think the constraints can increase performance of queries, if the \nplanner can use them. It will be a great feature in the future! But I \nhave more questions about the coherency between a constraint and a \ntransaction. Can a constraint live in differenet isolation levels? If I \ndrop a constraint in a transaction (T1), it doesn't seem after the drop \noperation in T1. But it should seem in another transaction (T2) in line \nwith T1 (if T2 is started between T1's begin and commit!). If T1 start \nafter T1's commit, our constraint doesn't have to seem in T2, so the \nplanner cannot use it. If I think well, these predicates means the \nconstraint follows its isolation level of the transaction.\n\nHow does it works in the current release?\n\nIf the constraints adapt its transaction why could it invalidate the \nplan of a SELECT query? A SELECT could use a given constraint, if it's \ndropped without comitting or exists when the SELECT or the tansaction of \nthe SELECT starts. I know we have to examine which rows can affect the \nresult of the SELECT. The main question in this case is that: A wrong \nrow (which break the dropped constraint) can affect the result of the \nSELECT? In my opininon there isn't wrong rows. Do you know such special \ncase when it can happen? So some wrong rows can seem in the SELECT?\n\nI know my original problem is not too common, but the parallel \nperformance of the PostgreSQL is very important in multiprocessor \nenvironment. I see, you follow this direction! So you make better \nlocking conditions in 8.2 in more cases. Generally the drop constraints \nare running in itself or in short transactions.\n\nWe have an optimalization trick when we have to insert more million rows \ninto a table in same transaction. Before inserting them we drop the \nforeign key constraints after the begin of the transaction, and remake \ntem after insertations. This method is faster then the conventional \nsolution. These trasactions are longer (5-40 minutes on a SunFireV40z).\n\nI read the TODO list and I found more features about deferrability. \nWould you like to implement the deferrable foreign key constraints? If \nyou want, in my opinion my posings will thouch it.\n\nThank you in anticipation!\n\nRegards,\nAntal Attila\n",
"msg_date": "Wed, 25 Oct 2006 19:41:06 +0200",
"msg_from": "Atesz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ACCESS EXCLUSIVE lock"
}
] |
[
{
"msg_contents": "Hi everyone,\n\nI am doing a test for a scenario where I have 2\nschemas one (public) for the operational data and\nanother one (archive) for old, archived data. So\nbasically I want to split the data from some huge\ntables in two. All data before 2006 in archive and all\ndata after and including 2006 in public. \n\nLet's say I have a table named public.AllTransactions\nwith data before and including 2006.\nI want to move all the data < 2006 into a new table\nnamed archive.transaction (in archive schema)\nI also want to move all data >= 2006 into a new table\nnamed public.transaction (in public schema).\n\nIn order to make this transparent for the developers I\nwant to drop the original table public.AllTransactions\n and to create a view with the same name that is a\nunion between the two new tables:\n\ncreate view public.AllTransactions as\nselect * from public.transaction\nunion all \nselect * from archive.transaction\n\nOn this view I will create rules for insert, update,\ndelete...\n\nTesting some selects I know we have in the application\nI got into a scenario where my plan does not work\nwithout doing code change. This scenario is:\n\nselect max(transid) from alltransaction;\n\nbecause the planner does not use the existent indexes\non the 2 new tables: public.transaction and\narchive.transaction\n\nHere are the results of the explain analyze:\n\n1. Select only from one table is OK:\n-------------------------------------\n\n# explain select max(transid) from public.transaction;\n\n QUERY\nPLAN \n \n--------------------------------------------------------------------------------\n \n----------------------\n Result (cost=0.04..0.05 rows=1 width=0)\n InitPlan\n -> Limit (cost=0.00..0.04 rows=1 width=8)\n -> Index Scan Backward using\npk_transaction on transaction (cost=0.00..357870.46 \n \nrows=9698002 width=8)\n Filter: (transid IS NOT NULL)\n(5 rows)\n\n\n2. Select from the view is doing a sequential scan:\n---------------------------------------------------\n# explain analyze select max(transid) from\nalltransaction;\n\nQUERY PLAN \n \n---------------------------------------------------------------------------------------------------------------------------\n -----------------\n Aggregate (cost=200579993.70..200579993.71 rows=1\nwidth=8) (actual time=115778.101..115778.103 rows=1\nloops=1)\n -> Append (cost=100000000.00..200447315.74\nrows=10614237 width=143) (actual time=0.082..95146.144\nrows=10622206 loops= 1)\n -> Seq Scan transaction\n(cost=100000000.00..100312397.02 rows=9698002\nwidth=143) (actual time=0.078..56002.778 rows= \n 9706475 loops=1)\n -> Seq Scan on transaction \n(cost=100000000.00..100028776.35 rows=916235\nwidth=143) (actual time=8.822..2799.496 rows= \n 915731 loops=1)\n Total runtime: 115778.200 ms\n(5 rows)\n\nIs this a bug or this is how the planner is suppose to\nwork?\n\nThe same problem I have on the following select:\nselect transid from alltransaction order by transid\ndesc limit 1;\n\nThank you for your time,\nIoana\n\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n",
"msg_date": "Wed, 18 Oct 2006 15:51:34 -0400 (EDT)",
"msg_from": "Ioana Danes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgresql 8.1.4 - performance issues for select on view using max"
},
{
"msg_contents": "Hi,\n\nLe mercredi 18 octobre 2006 21:51, Ioana Danes a écrit :\n> I am doing a test for a scenario where I have 2\n> schemas one (public) for the operational data and\n> another one (archive) for old, archived data. So\n> basically I want to split the data from some huge\n> tables in two. All data before 2006 in archive and all\n> data after and including 2006 in public.\n[...]\n> I got into a scenario where my plan does not work\n> without doing code change.\n\nThis sounds a lot as a ddl partitionning, you may want to add some checks to \nyour schema and set constraint_exclusion to on in your postgresql.conf.\n\nPlease read following documentation material :\n http://www.postgresql.org/docs/8.1/interactive/ddl-partitioning.html\n\nRegards,\n-- \nDimitri Fontaine\nhttp://www.dalibo.com/",
"msg_date": "Wed, 18 Oct 2006 22:13:17 +0200",
"msg_from": "Dimitri Fontaine <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql 8.1.4 - performance issues for select on view using\n max"
},
{
"msg_contents": "On 10/18/06, Ioana Danes <[email protected]> wrote:\n\n>\n> # explain select max(transid) from public.transaction;\n>\n> QUERY\n> PLAN\n>\n>\n> --------------------------------------------------------------------------------\n>\n> ----------------------\n> Result (cost=0.04..0.05 rows=1 width=0)\n> InitPlan\n> -> Limit (cost=0.00..0.04 rows=1 width=8)\n> -> Index Scan Backward using\n> pk_transaction on transaction (cost=0.00..357870.46\n>\n> rows=9698002 width=8)\n> Filter: (transid IS NOT NULL)\n> (5 rows)\n\n\nThis works fine because i recognizes the index for that table and can simply\nuse it to find the max.\n\n\n2. Select from the view is doing a sequential scan:\n> ---------------------------------------------------\n> # explain analyze select max(transid) from\n> alltransaction;\n>\n> QUERY PLAN\n>\n>\n> ---------------------------------------------------------------------------------------------------------------------------\n> -----------------\n> Aggregate (cost=200579993.70..200579993.71 rows=1\n> width=8) (actual time=115778.101..115778.103 rows=1\n> loops=1)\n> -> Append (cost=100000000.00..200447315.74\n> rows=10614237 width=143) (actual time=0.082..95146.144\n> rows=10622206 loops= 1)\n> -> Seq Scan transaction\n> (cost=100000000.00..100312397.02 rows=9698002\n> width=143) (actual time=0.078..56002.778 rows=\n> 9706475 loops=1)\n> -> Seq Scan on transaction\n> (cost=100000000.00..100028776.35 rows=916235\n> width=143) (actual time=8.822..2799.496 rows=\n> 915731 loops=1)\n> Total runtime: 115778.200 ms\n> (5 rows)\n>\n>\nBecause this is a view, it cannot use the indexes from the other tables.\nEverytime you run a query against a view, it recreates itself based on the\nunderlying data. From there it must sort the table based on the i and then\nreturn your max.\n\nIt's probably not a great idea to make a view this way if you are planning\non using queries like this regularly because you can't create an index for a\nview. You could try a query that pulls the max from each table and then\ngrabs the max of these:\n\nselect max (foo.transid) from (select max(transid) as id from\npublic.transaction union select max(transid) from archive.transaction) as\nfoo;\n\n-- \nThis E-mail is covered by the Electronic Communications Privacy Act, 18\nU.S.C. 2510-2521 and is legally privileged.\n\nThis information is confidential information and is intended only for the\nuse of the individual or entity named above. If the reader of this message\nis not the intended recipient, you are hereby notified that any\ndissemination, distribution or copying of this communication is strictly\nprohibited.\n\nOn 10/18/06, Ioana Danes <[email protected]> wrote:\n# explain select max(transid) from public.transaction; QUERYPLAN--------------------------------------------------------------------------------\n---------------------- Result (cost=0.04..0.05 rows=1 width=0) InitPlan -> Limit (cost=0.00..0.04 rows=1 width=8) -> Index Scan Backward usingpk_transaction on transaction (cost=\n0.00..357870.46rows=9698002 width=8) Filter: (transid IS NOT NULL)(5 rows)This works fine because i recognizes the index for that table and can simply use it to find the max. \n 2. 
Select from the view is doing a sequential scan:---------------------------------------------------\n# explain analyze select max(transid) fromalltransaction;QUERY PLAN--------------------------------------------------------------------------------------------------------------------------- -----------------\n Aggregate (cost=200579993.70..200579993.71 rows=1width=8) (actual time=115778.101..115778.103 rows=1loops=1) -> Append (cost=100000000.00..200447315.74rows=10614237 width=143) (actual time=0.082..95146.144\nrows=10622206 loops= 1) -> Seq Scan transaction(cost=100000000.00..100312397.02 rows=9698002width=143) (actual time=0.078..56002.778 rows= 9706475 loops=1) -> Seq Scan on transaction\n(cost=100000000.00..100028776.35 rows=916235width=143) (actual time=8.822..2799.496 rows= 915731 loops=1) Total runtime: 115778.200 ms(5 rows)Because this is a view, it cannot use the indexes from the other tables. Everytime you run a query against a view, it recreates itself based on the underlying data. From there it must sort the table based on the i and then return your max.\nIt's probably not a great idea to make a view this way if you are planning on using queries like this regularly because you can't create an index for a view. You could try a query that pulls the max from each table and then grabs the max of these:\nselect max (foo.transid) from (select max(transid) as id from public.transaction union select max(transid) from archive.transaction) as foo;-- This E-mail is covered by the Electronic Communications Privacy Act, 18 \nU.S.C. 2510-2521 and is legally privileged.This information is confidential information and is intended only for the use of the individual or entity named above. If the reader of this message is not the intended recipient, you are hereby notified that any dissemination, distribution or copying of this communication is strictly prohibited.",
"msg_date": "Wed, 18 Oct 2006 14:21:05 -0600",
"msg_from": "\"Joshua Marsh\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql 8.1.4 - performance issues for select on view using\n max"
},
{
"msg_contents": "Thanks a lot I will give it a try. \n\n--- Dimitri Fontaine <[email protected]> wrote:\n\n> Hi,\n> \n> Le mercredi 18 octobre 2006 21:51, Ioana Danes a\n> �crit :\n> > I am doing a test for a scenario where I have 2\n> > schemas one (public) for the operational data and\n> > another one (archive) for old, archived data. So\n> > basically I want to split the data from some huge\n> > tables in two. All data before 2006 in archive and\n> all\n> > data after and including 2006 in public.\n> [...]\n> > I got into a scenario where my plan does not work\n> > without doing code change.\n> \n> This sounds a lot as a ddl partitionning, you may\n> want to add some checks to \n> your schema and set constraint_exclusion to on in\n> your postgresql.conf.\n> \n> Please read following documentation material :\n> \n>\nhttp://www.postgresql.org/docs/8.1/interactive/ddl-partitioning.html\n> \n> Regards,\n> -- \n> Dimitri Fontaine\n> http://www.dalibo.com/\n> \n\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n",
"msg_date": "Wed, 18 Oct 2006 16:34:34 -0400 (EDT)",
"msg_from": "Ioana Danes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql 8.1.4 - performance issues for select on view using\n max"
},
{
"msg_contents": "Hello,\n\nI tried the partitioning scenario but I've got into\nthe same problem. The max function is not using the\nindexes on the two partitioned tables...\n\nAny other thoughts?\n\n--- Ioana Danes <[email protected]> wrote:\n\n> Thanks a lot I will give it a try. \n> \n> --- Dimitri Fontaine <[email protected]> wrote:\n> \n> > Hi,\n> > \n> > Le mercredi 18 octobre 2006 21:51, Ioana Danes a\n> > �crit :\n> > > I am doing a test for a scenario where I have 2\n> > > schemas one (public) for the operational data\n> and\n> > > another one (archive) for old, archived data. So\n> > > basically I want to split the data from some\n> huge\n> > > tables in two. All data before 2006 in archive\n> and\n> > all\n> > > data after and including 2006 in public.\n> > [...]\n> > > I got into a scenario where my plan does not\n> work\n> > > without doing code change.\n> > \n> > This sounds a lot as a ddl partitionning, you may\n> > want to add some checks to \n> > your schema and set constraint_exclusion to on in\n> > your postgresql.conf.\n> > \n> > Please read following documentation material :\n> > \n> >\n>\nhttp://www.postgresql.org/docs/8.1/interactive/ddl-partitioning.html\n> > \n> > Regards,\n> > -- \n> > Dimitri Fontaine\n> > http://www.dalibo.com/\n> > \n> \n> \n> __________________________________________________\n> Do You Yahoo!?\n> Tired of spam? Yahoo! Mail has the best spam\n> protection around \n> http://mail.yahoo.com \n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n",
"msg_date": "Wed, 18 Oct 2006 17:02:01 -0400 (EDT)",
"msg_from": "Ioana Danes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql 8.1.4 - performance issues for select on view using\n max"
},
{
"msg_contents": "On Wed, 2006-10-18 at 15:51 -0400, Ioana Danes wrote:\n> Hi everyone,\n> Testing some selects I know we have in the application\n> I got into a scenario where my plan does not work\n> without doing code change. This scenario is:\n> \n> select max(transid) from alltransaction;\n> \n> because the planner does not use the existent indexes\n> on the 2 new tables: public.transaction and\n> archive.transaction\n> \n\nFirst, the query is expanded into something like (I'm being inexact\nhere):\n\nSELECT max(transid) FROM (SELECT * FROM public.transaction UNION SELECT\n* FROM archive.transaction);\n\nPostgreSQL added a hack to the max() aggregate so that, in the simple\ncase, it can recognize that what it really wants to do is use the index.\nUsing the index for an aggregate only works in special cases, like min()\nand max(). What PostgreSQL actually does is to transform a query from:\n\nSELECT max(value) FROM sometable;\n\nInto:\n\nSELECT value FROM sometable ORDER BY value DESC LIMIT 1;\n\nIn your case, it would need to transform the query into something more\nlike:\n\nSELECT max(transid) FROM (\n SELECT transid FROM (\n SELECT transid FROM public.transaction ORDER BY transid DESC\n LIMIT 1\n ) t1 \n UNION \n SELECT transid FROM (\n SELECT transid FROM archive.transaction ORDER BY transid DESC\n LIMIT 1\n ) t2 \n) t;\n\nThe reason for that is because PostgreSQL (apparently) isn't smart\nenough to do a mergesort on the two indexes to sort the result of the\nUNION. At least, I can't get PostgreSQL to sort over two UNIONed tables\nusing an index; perhaps I'm missing it.\n\nRegards,\n\tJeff Davis\n\n\n",
"msg_date": "Wed, 18 Oct 2006 14:07:28 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql 8.1.4 - performance issues for select on"
},
{
"msg_contents": "Le mercredi 18 octobre 2006 23:02, Ioana Danes a écrit :\n> I tried the partitioning scenario but I've got into\n> the same problem. The max function is not using the\n> indexes on the two partitioned tables...\n>\n> Any other thoughts?\n\nDid you make sure your test included table inheritance?\nI'm not sure the planner benefits from constraint_exclusion without selecting \nthe empty parent table (instead of your own union based view).\n\n-- \nDimitri Fontaine\nhttp://www.dalibo.com/",
"msg_date": "Wed, 18 Oct 2006 23:19:39 +0200",
"msg_from": "Dimitri Fontaine <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql 8.1.4 - performance issues for select on view using\n max"
},
{
"msg_contents": "On Wed, 2006-10-18 at 23:19 +0200, Dimitri Fontaine wrote:\n> Le mercredi 18 octobre 2006 23:02, Ioana Danes a écrit :\n> > I tried the partitioning scenario but I've got into\n> > the same problem. The max function is not using the\n> > indexes on the two partitioned tables...\n> >\n> > Any other thoughts?\n> \n> Did you make sure your test included table inheritance?\n> I'm not sure the planner benefits from constraint_exclusion without selecting \n> the empty parent table (instead of your own union based view).\n> \n\nconstraint exclusion and inheritance won't help him.\n\nThe problem is that he has two indexes, and he needs to find the max\nbetween both of them. PostgreSQL isn't smart enough to recognize that it\ncan use two indexes, find the max in each one, and find the max of those\ntwo values.\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Wed, 18 Oct 2006 14:33:49 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql 8.1.4 - performance issues for select on"
},
{
"msg_contents": "On Wed, Oct 18, 2006 at 02:33:49PM -0700, Jeff Davis wrote:\n> On Wed, 2006-10-18 at 23:19 +0200, Dimitri Fontaine wrote:\n> > Le mercredi 18 octobre 2006 23:02, Ioana Danes a ??crit :\n> > > I tried the partitioning scenario but I've got into\n> > > the same problem. The max function is not using the\n> > > indexes on the two partitioned tables...\n> > >\n> > > Any other thoughts?\n> > \n> > Did you make sure your test included table inheritance?\n> > I'm not sure the planner benefits from constraint_exclusion without selecting \n> > the empty parent table (instead of your own union based view).\n> > \n> \n> constraint exclusion and inheritance won't help him.\n> \n> The problem is that he has two indexes, and he needs to find the max\n> between both of them. PostgreSQL isn't smart enough to recognize that it\n> can use two indexes, find the max in each one, and find the max of those\n> two values.\n\nSorry, don't have the earlier part of this thread, but what about...\n\nSELECT greatest(max(a), max(b)) ...\n\n?\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 18 Oct 2006 17:10:46 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql 8.1.4 - performance issues for select on"
},
{
"msg_contents": "On Wed, 2006-10-18 at 17:10 -0500, Jim C. Nasby wrote:\n> Sorry, don't have the earlier part of this thread, but what about...\n> \n> SELECT greatest(max(a), max(b)) ...\n> \n> ?\n\nTo fill you in, we're trying to get the max of a union (a view across\ntwo physical tables).\n\nIt can be done if you're creative with the query; I suggested a query\nthat selected the max of the max()es of the individual tables. Your\nquery could work too. However, the trick would be getting postgresql to\nrecognize that it can transform \"SELECT max(x) FROM foo\" into that,\nwhere foo is a view of a union.\n\nIf PostgreSQL could sort the result of a union by merging the results of\ntwo index scans, I think the problem would be solved. Is there something\npreventing this, or is it just something that needs to be added to the\nplanner?\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Wed, 18 Oct 2006 15:32:15 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql 8.1.4 - performance issues for select on"
},
{
"msg_contents": "On Wed, Oct 18, 2006 at 03:32:15PM -0700, Jeff Davis wrote:\n> On Wed, 2006-10-18 at 17:10 -0500, Jim C. Nasby wrote:\n> > Sorry, don't have the earlier part of this thread, but what about...\n> > \n> > SELECT greatest(max(a), max(b)) ...\n> > \n> > ?\n> \n> To fill you in, we're trying to get the max of a union (a view across\n> two physical tables).\n\nUNION or UNION ALL? You definitely don't want to do a plain UNION if you\ncan possibly avoid it.\n\n> It can be done if you're creative with the query; I suggested a query\n> that selected the max of the max()es of the individual tables. Your\n> query could work too. However, the trick would be getting postgresql to\n> recognize that it can transform \"SELECT max(x) FROM foo\" into that,\n> where foo is a view of a union.\n> \n> If PostgreSQL could sort the result of a union by merging the results of\n> two index scans, I think the problem would be solved. Is there something\n> preventing this, or is it just something that needs to be added to the\n> planner?\n\nHrm... it'd be worth trying the old ORDER BY ... LIMIT 1 trick just to\nsee if that worked in this case, but I don't have much hope for that.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 18 Oct 2006 17:35:51 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql 8.1.4 - performance issues for select on"
},
{
"msg_contents": "Jeff Davis <[email protected]> writes:\n> If PostgreSQL could sort the result of a union by merging the results of\n> two index scans, I think the problem would be solved. Is there something\n> preventing this, or is it just something that needs to be added to the\n> planner?\n\nIt's something on the wish-list. Personally I'd be inclined to try to\nrewrite the query as a plain MAX() across rewritten per-table indexed\nqueries, rather than worry about mergesort or anything like that.\nThere wasn't any very good way to incorporate that idea when planagg.c\nwas first written, but now that the planner has an explicit notion of\n\"append relations\" it might be relatively straightforward.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Oct 2006 19:05:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql 8.1.4 - performance issues for select on "
},
{
"msg_contents": "On Wed, 2006-10-18 at 17:35 -0500, Jim C. Nasby wrote:\n> On Wed, Oct 18, 2006 at 03:32:15PM -0700, Jeff Davis wrote:\n> > On Wed, 2006-10-18 at 17:10 -0500, Jim C. Nasby wrote:\n> > > Sorry, don't have the earlier part of this thread, but what about...\n> > > \n> > > SELECT greatest(max(a), max(b)) ...\n> > > \n> > > ?\n> > \n> > To fill you in, we're trying to get the max of a union (a view across\n> > two physical tables).\n> \n> UNION or UNION ALL? You definitely don't want to do a plain UNION if you\n> can possibly avoid it.\n\nOops, of course he must be doing UNION ALL, but for some reason I ran my\ntest queries with plain UNION (thanks for reminding me). However, it\ndidn't make a difference, see below.\n\n> > It can be done if you're creative with the query; I suggested a query\n> > that selected the max of the max()es of the individual tables. Your\n> > query could work too. However, the trick would be getting postgresql to\n> > recognize that it can transform \"SELECT max(x) FROM foo\" into that,\n> > where foo is a view of a union.\n> > \n> > If PostgreSQL could sort the result of a union by merging the results of\n> > two index scans, I think the problem would be solved. Is there something\n> > preventing this, or is it just something that needs to be added to the\n> > planner?\n> \n> Hrm... it'd be worth trying the old ORDER BY ... LIMIT 1 trick just to\n> see if that worked in this case, but I don't have much hope for that.\n\nYeah, that's the solution. Here's the problem:\n\n=> set enable_seqscan = false;\nSET\n=> EXPLAIN SELECT i FROM (SELECT i FROM t10 UNION ALL SELECT i FROM t11)\nt ORDER BY i DESC;\n QUERY PLAN\n------------------------------------------------------------------------------------\n Sort (cost=200026772.96..200027272.96 rows=200000 width=4)\n Sort Key: t.i\n -> Append (cost=100000000.00..200004882.00 rows=200000 width=4)\n -> Seq Scan on t10 (cost=100000000.00..100001441.00\nrows=100000 width=4)\n -> Seq Scan on t11 (cost=100000000.00..100001441.00\nrows=100000 width=4)\n(5 rows)\n\n=> EXPLAIN SELECT i FROM (SELECT i FROM t10) t ORDER BY i DESC;\n QUERY PLAN\n------------------------------------------------------------------------------------\n Index Scan Backward using t10_idx on t10 (cost=0.00..1762.00\nrows=100000 width=4)\n(1 row)\n\n=> EXPLAIN SELECT i FROM (SELECT i FROM t11) t ORDER BY i DESC;\n QUERY PLAN\n------------------------------------------------------------------------------------\n Index Scan Backward using t11_idx on t11 (cost=0.00..1762.00\nrows=100000 width=4)\n(1 row)\n\n=>\n\nBut if PostgreSQL could just merge the index scan results, it could\n\"ORDER BY i\" the result of a UNION ALL without a problem. But it can't\ndo that, so the syntactical trick introduced for min/max won't work in\nhis case :(\n\nHe'll probably have to change his application to make that query perform\ndecently if the tables are split.\n\nIdeas?\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Wed, 18 Oct 2006 16:05:49 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql 8.1.4 - performance issues for select on"
},
{
"msg_contents": "Hello,\n\nActually what I expected from the planner for this\nquery (select max(transid) from view) was something\nlike this :\n\nselect max(transid) from (select max(transid) from\narchive.transaction union all select max(transid) from\npublic.transaction)\n \nand to apply the max function to each query of the\nunion. This is what is happening when you use a where\ncondition, it is using the indexes on each subquery of\nthe view...\nex: select transid from view where transid = 12;\n\nThis way it would be fast enough.\n\nAlso for order by and limit I was expecting the same\nthing.\n\n\nThank you for your time,\nIoana Danes\n\n> constraint exclusion and inheritance won't help him.\n> \n> The problem is that he has two indexes, and he needs\n> to find the max\n> between both of them. PostgreSQL isn't smart enough\n> to recognize that it\n> can use two indexes, find the max in each one, and\n> find the max of those\n> two values.\n> \n> Regards,\n> \tJeff Davis\n> \n> \n\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n",
"msg_date": "Thu, 19 Oct 2006 07:23:57 -0400 (EDT)",
"msg_from": "Ioana Danes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql 8.1.4 - performance issues for select on view using\n max"
},
{
"msg_contents": "Hi,\nI tried and this does does not work either.\n\nThank you,\nIoana\n--- \"Jim C. Nasby\" <[email protected]> wrote:\n\n> On Wed, Oct 18, 2006 at 03:32:15PM -0700, Jeff Davis\n> wrote:\n> > On Wed, 2006-10-18 at 17:10 -0500, Jim C. Nasby\n> wrote:\n> > > Sorry, don't have the earlier part of this\n> thread, but what about...\n> > > \n> > > SELECT greatest(max(a), max(b)) ...\n> > > \n> > > ?\n> > \n> > To fill you in, we're trying to get the max of a\n> union (a view across\n> > two physical tables).\n> \n> UNION or UNION ALL? You definitely don't want to do\n> a plain UNION if you\n> can possibly avoid it.\n> \n> > It can be done if you're creative with the query;\n> I suggested a query\n> > that selected the max of the max()es of the\n> individual tables. Your\n> > query could work too. However, the trick would be\n> getting postgresql to\n> > recognize that it can transform \"SELECT max(x)\n> FROM foo\" into that,\n> > where foo is a view of a union.\n> > \n> > If PostgreSQL could sort the result of a union by\n> merging the results of\n> > two index scans, I think the problem would be\n> solved. Is there something\n> > preventing this, or is it just something that\n> needs to be added to the\n> > planner?\n> \n> Hrm... it'd be worth trying the old ORDER BY ...\n> LIMIT 1 trick just to\n> see if that worked in this case, but I don't have\n> much hope for that.\n> -- \n> Jim Nasby \n> [email protected]\n> EnterpriseDB http://enterprisedb.com \n> 512.569.9461 (cell)\n> \n\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n",
"msg_date": "Thu, 19 Oct 2006 07:32:56 -0400 (EDT)",
"msg_from": "Ioana Danes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql 8.1.4 - performance issues for select on"
},
{
"msg_contents": "Hello,\n\nIt looks like some of you missed my first email but my\nproblem is not to find a replacement for this select:\nselect max(transid) from someunionview\nThare are plenty of solutions for doing this...\n\nMy point is to split a tale in two and to make this\ntransparent for the developers as a first step. On the\nsecond step they will improve some of the queries but\nthat is another story.\n\nSo I would like to know if there is any plan to\nimprove this type of query for views in the near\nfuture, or maybe it is alredy improved in 8.2 version?\nI have the same problem and question for: \nselect transid from someunionview order by transid\ndesc limit 1;\n \nThank you for your time,\nIoana Danes\n\n--- Tom Lane <[email protected]> wrote:\n\n> Jeff Davis <[email protected]> writes:\n> > If PostgreSQL could sort the result of a union by\n> merging the results of\n> > two index scans, I think the problem would be\n> solved. Is there something\n> > preventing this, or is it just something that\n> needs to be added to the\n> > planner?\n> \n> It's something on the wish-list. Personally I'd be\n> inclined to try to\n> rewrite the query as a plain MAX() across rewritten\n> per-table indexed\n> queries, rather than worry about mergesort or\n> anything like that.\n> There wasn't any very good way to incorporate that\n> idea when planagg.c\n> was first written, but now that the planner has an\n> explicit notion of\n> \"append relations\" it might be relatively\n> straightforward.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n",
"msg_date": "Thu, 19 Oct 2006 07:43:55 -0400 (EDT)",
"msg_from": "Ioana Danes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql 8.1.4 - performance issues for select on "
}
] |
[
{
"msg_contents": "Hello all,\n\nI read a paper, which is Query optimization in the presence of Foreign\nFunctions.\nAnd the paper , there is a paragraph like below.\n\nIn order to reduce the number of invocations, caching the results of\ninvocation was suggested in Postgres.\n\nI'd like to know in detail about how postgres is maintaing the cache of\nUDFs.\n\nThanks,\nJungmin\n\n\n-- \nJungmin Shin\n\nHello all,\n \nI read a paper, which is Query optimization in the presence of Foreign Functions.\nAnd the paper , there is a paragraph like below.\n \nIn order to reduce the number of invocations, caching the results of invocation was suggested in Postgres.\n \nI'd like to know in detail about how postgres is maintaing the cache of UDFs.\n \nThanks,\nJungmin\n-- Jungmin Shin",
"msg_date": "Wed, 18 Oct 2006 17:15:13 -0400",
"msg_from": "\"jungmin shin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "UDF and cache"
},
{
"msg_contents": "On Wed, Oct 18, 2006 at 05:15:13PM -0400, jungmin shin wrote:\n> Hello all,\n> \n> I read a paper, which is Query optimization in the presence of Foreign\n> Functions.\n> And the paper , there is a paragraph like below.\n> \n> In order to reduce the number of invocations, caching the results of\n> invocation was suggested in Postgres.\n> \n> I'd like to know in detail about how postgres is maintaing the cache of\n> UDFs.\n\nIt's not. See list archives for past discussions.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 18 Oct 2006 16:16:54 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] UDF and cache"
},
{
"msg_contents": "And PLEASE do not post something to 3 lists; it's a lot of extra traffic\nfor no reason.\n\nMoving to -hackers.\n\nOn Wed, Oct 18, 2006 at 05:15:13PM -0400, jungmin shin wrote:\n> Hello all,\n> \n> I read a paper, which is Query optimization in the presence of Foreign\n> Functions.\n> And the paper , there is a paragraph like below.\n> \n> In order to reduce the number of invocations, caching the results of\n> invocation was suggested in Postgres.\n> \n> I'd like to know in detail about how postgres is maintaing the cache of\n> UDFs.\n> \n> Thanks,\n> Jungmin\n> \n> \n> -- \n> Jungmin Shin\n\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 18 Oct 2006 16:19:29 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UDF and cache"
}
] |
[
{
"msg_contents": "\n Hi\r\n\n\r\n\nThanks everybody. I have confirmed that this does not affect inserts. But the query performance has improved a lot.\n\n\r\n\nRegards\n\n\r\n\nRohit\n\n\r\n\n\r\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Heikki Linnakangas\nSent: 18 October 2006 13:52\nTo: Rohit_Behl\nCc: Merlin Moncure; [email protected]\nSubject: Re: [PERFORM] Jdbc/postgres performance\n\n\r\n\nRohit_Behl wrote:\n\n> Hi\n\n>\r\n\n> I made the following changes to the conf file:\n\n>\r\n\n> enable_indexscan = true\n\n>\r\n\n> enable_seqscan = false\n\n>\r\n\n> We also have a large amount of data being inserted into our tables. I was just wondering if this could have an impact on the inserts since I guess this change is on the database.\n\n\r\n\nenable_seqscan shouldn't affect plain inserts, but it will affect\r\n\n*every* query in the system.\n\n\r\n\nI would suggest using setting \"prepareThreshold=0\" in the JDBC driver\r\n\nconnection URL, or calling pstmt.setPrepareThreshold(0) in the\r\n\napplication. That tells the driver not to use server-side prepare, and\r\n\nthe query will be re-planned every time you execute it with the real\r\n\nvalues of the parameters.\n\n\r\n\n--\r\n\n Heikki Linnakangas\n\n EnterpriseDB http://www.enterprisedb.com\n\n\r\n\n---------------------------(end of broadcast)---------------------------\n\nTIP 6: explain analyze is your friend\n\n\n**************** CAUTION - Disclaimer *****************\nThis e-mail contains PRIVILEGED AND CONFIDENTIAL INFORMATION intended solely for the use of the addressee(s). If you are not the intended recipient, please notify the sender by e-mail and delete the original message. Further, you are not to copy, disclose, or distribute this e-mail or its contents to any other person and any such actions are unlawful. This e-mail may contain viruses. Infosys has taken every reasonable precaution to minimize this risk, but is not liable for any damage you may sustain as a result of any virus in this e-mail. You should carry out your own virus checks before opening the e-mail or attachment. Infosys reserves the right to monitor and review the content of all messages sent to or from this e-mail address. Messages sent to or from this e-mail address may be stored on the Infosys e-mail system.\n***INFOSYS******** End of Disclaimer ********INFOSYS***\n",
"msg_date": "Thu, 19 Oct 2006 18:06:52 +0530",
"msg_from": "\"Rohit_Behl\" <[email protected]>",
"msg_from_op": true,
"msg_subject": ""
}
] |
[
{
"msg_contents": "Hi,\n\nwe've got performance problems due to repeating SELECT, UPDATE, DELETE, \nINSERT statements. This statements have to be executed every 10 seconds, \nbut they run into a timeout.\nTo obviate problems regarding to our Java Software and JDBC drivers, we \nput the repeating sequence of statements to a file more than 100k times \n(half a million statements) and executed \"psql ourDB -f ourFile.sql -o \n/dev/null\". To accelerate the occurence of the performance drop, we \nstarted 6 instances of this command.\nThe performance drop occured after 10 minutes shifting the server to 0 \npercent idle and 85 - 95 percent user.\nFor tracing the statement which raised the load, we are using pg_locks, \npg_stat_activity with current_query enabled. The responsible statement is \nthe DELETE, it hangs until its canceled by timeout. The first run on an \nvacuumed DB took 300 - 600ms.\nIn a second test we removed the DELETE statements to see wich statements \nalso needs longer time by increasing the amount of data. After half an \nhour the SELECT statements timed out.\nAn additional VACUUM - every 1 minute - does extend the timeout occurence \nby factor 5 - 6.\nIt does not sound professional, but the database seems to be aging by the \nincrease of executed statements.\n\nThe Statements\n---------------\n// normal insert - does not seem to be the problem - runtime is ok\nINSERT INTO tbl_reg(idreg,idtype,attr1,...,attr6,loc1,...,loc3,register) \nVALUES(nextval('tbl_reg_idreg_seq'),1,[attr],[loc],1);\n\n// select finds out which one has not an twin\n// a twin is defined as record with the same attr* values\n// decreases speed over time until timeout by postgresql\nSELECT *\n FROM tbl_reg reg\nWHERE register <> loc1 AND\n\tidreg NOT IN\n\t\t(\n\t\tSELECT reg.idreg\n\t\tFROM tbl_reg reg, tbl_reg regtwin\n\t\tWHERE regtwin.register = 1 AND\n\t\t\tregtwin.type <> 20 AND\n\t\t\treg.attr1 = regtwin.attr1 AND\n\t\t\treg.attr2 = regtwin.attr2 AND\n\t\t\treg.attr3 = regtwin.attr3 AND\n\t\t\treg.attr4 = regtwin.attr4 AND\n\t\t\treg.attr5 = regtwin.attr5 AND\n\t\t\treg.attr6 = regtwin.attr6 AND\n\t\t\treg.idreg <> regtwin.idreg AND\n\t\t\treg.register = 2\n\t\t);\nI tried to optimize the seslect statement but the group by having count(*) \n> 1 solution is half as fast as this statement - relating to the query \nplan of EXPLAIN ANALYZE.\n\n// delete data without a twin\n// drastically decreases speed over time until timeout by postgresql\nDELETE\n FROM tbl_reg\nWHERE idregs IN\n\t(\n\tSELECT reg.idreg\n\tFROM tbl_reg reg, tbl_reg regtwin\n\tWHERE regtwin.register = 1 AND\n\t\tregtwin.type <> 20 AND\n\t\treg.attr1 = regtwin.attr1 AND\n\t\treg.attr2 = regtwin.attr2 AND\n\t\treg.attr3 = regtwin.attr3 AND\n\t\treg.attr4 = regtwin.attr4 AND\n\t\treg.attr5 = regtwin.attr5 AND\n\t\treg.attr6 = regtwin.attr6 AND\n\t\treg.idreg <> regtwin.idreg AND\n\t\treg.register = 2\n\t) OR\n\t(loc1 = '2' AND loc2 = '2');\nThe runtime of this statement increases until it will canceled by \nPostgreSQL.\n\n// the where clause of this update statement is normally build in java\nUPDATE tbl_reg SET loc1=2 WHERE idreg IN ('...',...,'...');\n\nThe Table\n---------------\nTested with: 20.000, 80.000, 500.000 records\n\nCREATE TABLE tbl_reg\n(\n idreg bigserial NOT NULL,\n idtype int8 DEFAULT 0,\n attr1 int4,\n attr2 int4,\n attr3 varchar(20),\n attr4 varchar(20),\n attr5 int4,\n attr6 varchar(140) DEFAULT ''::character varying,\n loc1 int2 DEFAULT 0,\n loc2 int2 DEFAULT 0,\n loc3 int2 DEFAULT 0,\n register int2 DEFAULT 1,\n 
\"timestamp\" timestamp DEFAULT now(),\n CONSTRAINT tbl_reg_pkey PRIMARY KEY (idreg)\n)\nWITHOUT OIDS;\n\nThe Hardware\n----------------\nDual Xeon 3.2GHz Hyperthreading\nSCSI harddrives\nRAID and non-RAID tested\n\n\nWe have the problem, that we cannot see any potential to improve SQL \nstatements. Indexing the attr* columns seems not to be an solution, \nbecause the data mustn't be unique (twins) and changes really often so \nreindexing will took too long.\n\n\nthanks,\nJens Schipkowski\n",
"msg_date": "Thu, 19 Oct 2006 14:38:54 +0200",
"msg_from": "\"Jens Schipkowski\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "DB Performance decreases due to often written/accessed table"
},
{
"msg_contents": "Jens Schipkowski wrote:\n> Hi,\n> \n> we've got performance problems due to repeating SELECT, UPDATE, DELETE, \n> INSERT statements. This statements have to be executed every 10 seconds, \n> but they run into a timeout.\n> To obviate problems regarding to our Java Software and JDBC drivers, we \n> put the repeating sequence of statements to a file more than 100k times \n> (half a million statements) and executed \"psql ourDB -f ourFile.sql -o \n> /dev/null\". To accelerate the occurence of the performance drop, we \n> started 6 instances of this command.\n> The performance drop occured after 10 minutes shifting the server to 0 \n> percent idle and 85 - 95 percent user.\n\nAfter 10 minutes of what? Did the half-million statements complete? If \nnot, how many got completed? Were they all in separate transactions or \ndid you batch them? How ofternwere you vacuuming here?\n\n> For tracing the statement which raised the load, we are using pg_locks, \n> pg_stat_activity with current_query enabled. The responsible statement \n> is the DELETE, it hangs until its canceled by timeout. The first run on \n> an vacuumed DB took 300 - 600ms.\n> In a second test we removed the DELETE statements to see wich statements \n> also needs longer time by increasing the amount of data. After half an \n> hour the SELECT statements timed out.\n> An additional VACUUM - every 1 minute - does extend the timeout \n> occurence by factor 5 - 6.\n\nAnd running vacuum every 30 seconds does what?\n\n> It does not sound professional, but the database seems to be aging by \n> the increase of executed statements.\n\nIt sounds very likely if you aren't vacuuming enough or the tables are \ngrowing rapidly.\n\n> The Statements\n> ---------------\n> // normal insert - does not seem to be the problem - runtime is ok\n> INSERT INTO tbl_reg(idreg,idtype,attr1,...,attr6,loc1,...,loc3,register) \n> VALUES(nextval('tbl_reg_idreg_seq'),1,[attr],[loc],1);\n> \n> // select finds out which one has not an twin\n> // a twin is defined as record with the same attr* values\n> // decreases speed over time until timeout by postgresql\n> SELECT *\n> FROM tbl_reg reg\n> WHERE register <> loc1 AND\n> idreg NOT IN\n> (\n> SELECT reg.idreg\n> FROM tbl_reg reg, tbl_reg regtwin\n> WHERE regtwin.register = 1 AND\n> regtwin.type <> 20 AND\n> reg.attr1 = regtwin.attr1 AND\n> reg.attr2 = regtwin.attr2 AND\n> reg.attr3 = regtwin.attr3 AND\n> reg.attr4 = regtwin.attr4 AND\n> reg.attr5 = regtwin.attr5 AND\n> reg.attr6 = regtwin.attr6 AND\n> reg.idreg <> regtwin.idreg AND\n> reg.register = 2\n> );\n> I tried to optimize the seslect statement but the group by having count(*)\n>> 1 solution is half as fast as this statement - relating to the query \n> plan of EXPLAIN ANALYZE.\n\nAnd what did EXPLAIN ANALYSE show here? I'm guessing you're getting time \nincreasing as the square of the number of rows in tbl_reg. So, if 50 \nrows takes 2500s then 100 rows will take 10000s. Now, if you had enough \nRAM (or an index) I'd expect the planner to process the table in a \nsorted manner so the query-time would increase linearly.\n\nOh, and you're doing one join more than you need to here (counting the \nNOT IN as a join). 
You could get by with a LEFT JOIN and a test for \nidreg being null on the right-hand table.\n\n> // delete data without a twin\n> // drastically decreases speed over time until timeout by postgresql\n[snip delete doing the same query as above]\n> The runtime of this statement increases until it will canceled by \n> PostgreSQL.\n> \n> // the where clause of this update statement is normally build in java\n> UPDATE tbl_reg SET loc1=2 WHERE idreg IN ('...',...,'...');\n> \n> The Table\n> ---------------\n> Tested with: 20.000, 80.000, 500.000 records\n> \n> CREATE TABLE tbl_reg\n> (\n> idreg bigserial NOT NULL,\n> idtype int8 DEFAULT 0,\n\nYou can have more than 4 billion \"types\"?\n\n> attr1 int4,\n> attr2 int4,\n> attr3 varchar(20),\n> attr4 varchar(20),\n> attr5 int4,\n> attr6 varchar(140) DEFAULT ''::character varying,\n> loc1 int2 DEFAULT 0,\n> loc2 int2 DEFAULT 0,\n> loc3 int2 DEFAULT 0,\n> register int2 DEFAULT 1,\n> \"timestamp\" timestamp DEFAULT now(),\n\nYou probably want timestamp with time zone.\n\n> CONSTRAINT tbl_reg_pkey PRIMARY KEY (idreg)\n> )\n> WITHOUT OIDS;\n> \n> The Hardware\n> ----------------\n> Dual Xeon 3.2GHz Hyperthreading\n> SCSI harddrives\n> RAID and non-RAID tested\n> \n> We have the problem, that we cannot see any potential to improve SQL \n> statements. Indexing the attr* columns seems not to be an solution, \n> because the data mustn't be unique (twins) and changes really often so \n> reindexing will took too long.\n\nEh? Why would an index force uniqueness? And are you sure that adding an \nindex makes updates too slow? What did your testing show as the \nslow-down? I'd be tempted to put an index on attr1,attr2,attr5 (or \nwhichever combination provides the most selectivity) then make sure your \nstatistics are up to date (ANALYSE) and see if the plans change.\n\nOf course, that's assuming your postgresql.conf has some reasonable \nperformance-related settings.\n\nOh, I'd also wonder whether, with \"twin-ness\" being such an important \nconcept it isn't its own thing and thus perhaps deserve its own table.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Thu, 19 Oct 2006 15:55:34 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DB Performance decreases due to often written/accessed"
},
{
"msg_contents": "On Thu, 19 Oct 2006 16:55:34 +0200, Richard Huxton <[email protected]> \nwrote:\n\n> Jens Schipkowski wrote:\n>> Hi,\n>> we've got performance problems due to repeating SELECT, UPDATE, \n>> DELETE, INSERT statements. This statements have to be executed every 10 \n>> seconds, but they run into a timeout.\n>> To obviate problems regarding to our Java Software and JDBC drivers, we \n>> put the repeating sequence of statements to a file more than 100k times \n>> (half a million statements) and executed \"psql ourDB -f ourFile.sql -o \n>> /dev/null\". To accelerate the occurence of the performance drop, we \n>> started 6 instances of this command.\n>> The performance drop occured after 10 minutes shifting the server to 0 \n>> percent idle and 85 - 95 percent user.\n>\n> After 10 minutes of what?\nstart testing using the command above.\n> Did the half-million statements complete? If not, how many got \n> completed? Were they all in separate transactions or did you batch them? \n> How ofternwere you vacuuming here?\nWe wrote a sql batch file which simulates the repeating cycle of SELECT, \nUPDATE, DELETE, INSERT. The INSERT is fired using another backend.\nThe half-million statements of this file will probably complete after all \nSELECT and DELETE statements timed out.\nWe had 6 seperate transactions executing the batch file.\n>\n>> For tracing the statement which raised the load, we are using pg_locks, \n>> pg_stat_activity with current_query enabled. The responsible statement \n>> is the DELETE, it hangs until its canceled by timeout. The first run on \n>> an vacuumed DB took 300 - 600ms.\n>> In a second test we removed the DELETE statements to see wich \n>> statements also needs longer time by increasing the amount of data. \n>> After half an hour the SELECT statements timed out.\n>> An additional VACUUM - every 1 minute - does extend the timeout \n>> occurence by factor 5 - 6.\n>\n> And running vacuum every 30 seconds does what?\nNot yet fully tested. It seems to lower the slow down.\nBut minimizing the gain of slow down doesn't solve the problem. The \nProblem is the decrease of execution speed of DELETE and SELECT statements \nby a table row count between 150k - 200k. The table starts growing first \nafter DELETE statements fails during timeout.\n>\n>> It does not sound professional, but the database seems to be aging by \n>> the increase of executed statements.\n>\n> It sounds very likely if you aren't vacuuming enough or the tables are \n> growing rapidly.\nvacuuming once a minute is not enough? We reach the execution of 3k \nstatements per minute (startup time of testing). 1/4 of them are INSERTs \nand DELETEs. 
After 5 minutes a DELETE will took about 50 seconds - \ncompared to startup time about 300 - 600ms.\n>\n>> The Statements\n>> ---------------\n>> // normal insert - does not seem to be the problem - runtime is ok\n>> INSERT INTO \n>> tbl_reg(idreg,idtype,attr1,...,attr6,loc1,...,loc3,register) \n>> VALUES(nextval('tbl_reg_idreg_seq'),1,[attr],[loc],1);\n>> // select finds out which one has not an twin\n>> // a twin is defined as record with the same attr* values\n>> // decreases speed over time until timeout by postgresql\n>> SELECT *\n>> FROM tbl_reg reg\n>> WHERE register <> loc1 AND\n>> idreg NOT IN\n>> (\n>> SELECT reg.idreg\n>> FROM tbl_reg reg, tbl_reg regtwin\n>> WHERE regtwin.register = 1 AND\n>> regtwin.type <> 20 AND\n>> reg.attr1 = regtwin.attr1 AND\n>> reg.attr2 = regtwin.attr2 AND\n>> reg.attr3 = regtwin.attr3 AND\n>> reg.attr4 = regtwin.attr4 AND\n>> reg.attr5 = regtwin.attr5 AND\n>> reg.attr6 = regtwin.attr6 AND\n>> reg.idreg <> regtwin.idreg AND\n>> reg.register = 2\n>> );\n>> I tried to optimize the seslect statement but the group by having \n>> count(*)\n>>> 1 solution is half as fast as this statement - relating to the query\n>> plan of EXPLAIN ANALYZE.\n>\n> And what did EXPLAIN ANALYSE show here? I'm guessing you're getting time \n> increasing as the square of the number of rows in tbl_reg. So, if 50 \n> rows takes 2500s then 100 rows will take 10000s. Now, if you had enough \n> RAM (or an index) I'd expect the planner to process the table in a \n> sorted manner so the query-time would increase linearly.\n\nEXPLAIN ANALYZE shows at startup\n QUERY PLAN\n-------------------------------------------------------------------------------------------------\n Seq Scan on tbl_reg (cost=25841.35..31433.17 rows=72424 width=6) (actual \ntime=673.877..673.877 rows=0 loops=1)\n Filter: ((hashed subplan) OR ((loc1 = 2::smallint) AND (loc2 = \n2::smallint)))\n SubPlan\n -> Merge Join (cost=22186.21..25646.57 rows=77913 width=8) (actual \ntime=285.624..285.624 rows=0 loops=1)\n Merge Cond: ((\"outer\".attr1 = \"inner\".attr1) AND \n(\"outer\".\"?column8?\" = \"inner\".\"?column8?\") AND (\"outer\".\"?column9?\" = \n\"inner\".\"?column9?\") AND (\"outer\".\"?column10?\" = \"inner\".\"?column10?\") AND \n(\"outer\".\"?column11?\" = \"inner\".\"?column11?\") AND (\"outer\".attr6 = \n\"inner\".attr6))\n Join Filter: (\"outer\".idreg <> \"inner\".idreg)\n -> Sort (cost=4967.06..4971.65 rows=1835 width=56) (actual \ntime=285.622..285.622 rows=0 loops=1)\n Sort Key: reg.attr1, (reg.attr2)::text, \n(reg.attr3)::text, (reg.attr4)::text, (reg.attr5)::text, reg.attr6\n -> Seq Scan on tbl_reg reg (cost=0.00..4867.59 \nrows=1835 width=56) (actual time=285.551..285.551 rows=0 loops=1)\n Filter: (register = 2)\n -> Sort (cost=17219.15..17569.77 rows=140247 width=56) (never \nexecuted)\n Sort Key: regtwin.attr1, (regtwin.attr2)::text, \n(regtwin.attr3)::text, (regtwin.attr4)::text, (regtwin.attr5)::text, \nregtwin.attr6\n -> Seq Scan on tbl_reg regtwin (cost=0.00..5229.70 \nrows=140247 width=56) (never executed)\n Filter: ((register = 1) AND (\"type\" <> 20))\n Total runtime: 604.410 ms\n(15 rows)\n\nEXPLAIN ANALYZE shows after 10 minutes load and 1x vacuuming\n QUERY PLAN\n----------------------------------------------------------------------------------------------------\n Seq Scan on tbl_reg (cost=25841.35..31433.17 rows=72424 width=6) (actual \ntime=43261.910..43261.910 rows=0 loops=1)\n Filter: ((hashed subplan) OR ((loc1 = 2::smallint) AND (loc2 = \n2::smallint)))\n SubPlan\n -> Merge Join 
(cost=22186.21..25646.57 rows=77913 width=8) (actual \ntime=43132.063..43132.063 rows=0 loops=1)\n Merge Cond: ((\"outer\".attr1 = \"inner\".attr1) AND \n(\"outer\".\"?column8?\" = \"inner\".\"?column8?\") AND (\"outer\".\"?column9?\" = \n\"inner\".\"?column9?\") AND (\"outer\".\"?column10?\" = \"inner\".\"?column10?\") AND \n(\"outer\".\"?column11?\" = \"inner\".\"?column11?\") AND (\"outer\".attr6 = \n\"inner\".attr6))\n Join Filter: (\"outer\".idreg <> \"inner\".idreg)\n -> Sort (cost=4967.06..4971.65 rows=1835 width=56) (actual \ntime=387.071..387.872 rows=1552 loops=1)\n Sort Key: reg.attr1, (reg.attr2)::text, \n(reg.attr3)::text, (reg.attr4)::text, (reg.attr5)::text, reg.attr6\n -> Seq Scan on tbl_reg reg (cost=0.00..4867.59 \nrows=1835 width=56) (actual time=303.966..325.614 rows=1552 loops=1)\n Filter: (register = 2)\n -> Sort (cost=17219.15..17569.77 rows=140247 width=56) \n(actual time=42368.934..42530.986 rows=145324 loops=1)\n Sort Key: regtwin.attr1, (regtwin.attr2)::text, \n(regtwin.attr3)::text, (regtwin.attr4)::text, (regtwin.attr5)::text, \nregtwin.attr6\n -> Seq Scan on tbl_reg regtwin (cost=0.00..5229.70 \nrows=140247 width=56) (actual time=0.015..1159.515 rows=145453 loops=1)\n Filter: ((register = 1) AND (\"type\" <> 20))\n Total runtime: 44073.127 ms\n(15 rows)\n\nI know that the second query plan executes the sort, because it finds \nmatching data. So maybe indexing will help.\n\n>\n> Oh, and you're doing one join more than you need to here (counting the \n> NOT IN as a join). You could get by with a LEFT JOIN and a test for \n> idreg being null on the right-hand table.\nIt sounds good. First tests doesn't improve runtime - it needs more \nextensive testing.\n>\n>> // delete data without a twin\n>> // drastically decreases speed over time until timeout by postgresql\n> [snip delete doing the same query as above]\n>> The runtime of this statement increases until it will canceled by \n>> PostgreSQL.\n>> // the where clause of this update statement is normally build in java\n>> UPDATE tbl_reg SET loc1=2 WHERE idreg IN ('...',...,'...');\n>> The Table\n>> ---------------\n>> Tested with: 20.000, 80.000, 500.000 records\n>> CREATE TABLE tbl_reg\n>> (\n>> idreg bigserial NOT NULL,\n>> idtype int8 DEFAULT 0,\n>\n> You can have more than 4 billion \"types\"?\nit seems so, or not?\n>\n>> attr1 int4,\n>> attr2 int4,\n>> attr3 varchar(20),\n>> attr4 varchar(20),\n>> attr5 int4,\n>> attr6 varchar(140) DEFAULT ''::character varying,\n>> loc1 int2 DEFAULT 0,\n>> loc2 int2 DEFAULT 0,\n>> loc3 int2 DEFAULT 0,\n>> register int2 DEFAULT 1,\n>> \"timestamp\" timestamp DEFAULT now(),\n>\n> You probably want timestamp with time zone.\nNo, just the server time is important. This is short living data.\n>\n>> CONSTRAINT tbl_reg_pkey PRIMARY KEY (idreg)\n>> )\n>> WITHOUT OIDS;\n>> The Hardware\n>> ----------------\n>> Dual Xeon 3.2GHz Hyperthreading\n>> SCSI harddrives\n>> RAID and non-RAID tested\n>> We have the problem, that we cannot see any potential to improve SQL \n>> statements. Indexing the attr* columns seems not to be an solution, \n>> because the data mustn't be unique (twins) and changes really often so \n>> reindexing will took too long.\n>\n> Eh? Why would an index force uniqueness? And are you sure that adding an \n> index makes updates too slow? What did your testing show as the \n> slow-down? 
I'd be tempted to put an index on attr1,attr2,attr5 (or \n> whichever combination provides the most selectivity) then make sure your \n> statistics are up to date (ANALYSE) and see if the plans change.\nOK, I misunderstood the PostgreSQL INDEX. Will test it using an \nmulticolumn index.\n>\n> Of course, that's assuming your postgresql.conf has some reasonable \n> performance-related settings.\npostgresql.conf settings have been optimized. Searched the web for useful \ninformation and got help from Mailing list by Tom Lane.\n>\n> Oh, I'd also wonder whether, with \"twin-ness\" being such an important \n> concept it isn't its own thing and thus perhaps deserve its own table.\n>\nIt's important due to software concept (conferencing groups).\n\nThank you for your suggestions. I will add indexes to the table and \noverhaul the SELECT and DELETE statements. After testing I will post \nresults.\n",
"msg_date": "Thu, 19 Oct 2006 19:05:22 +0200",
"msg_from": "\"Jens Schipkowski\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: DB Performance decreases due to often written/accessed table"
},
{
"msg_contents": "Jens Schipkowski wrote:\n>> And running vacuum every 30 seconds does what?\n> Not yet fully tested. It seems to lower the slow down.\n> But minimizing the gain of slow down doesn't solve the problem. The \n> Problem is the decrease of execution speed of DELETE and SELECT \n> statements by a table row count between 150k - 200k. The table starts \n> growing first after DELETE statements fails during timeout.\n>>\n>>> It does not sound professional, but the database seems to be aging by \n>>> the increase of executed statements.\n>>\n>> It sounds very likely if you aren't vacuuming enough or the tables are \n>> growing rapidly.\n> vacuuming once a minute is not enough? We reach the execution of 3k \n> statements per minute (startup time of testing). 1/4 of them are INSERTs \n> and DELETEs. After 5 minutes a DELETE will took about 50 seconds - \n> compared to startup time about 300 - 600ms.\n\nYou want to vacuum enough so that the deletes don't leave \never-increasing gaps in your table. If you run VACUUM VERBOSE it will \ntell you what it did - if the number of (non-)removable rows keeps \nincreasing you're not vacuuming more. The trick is to vacuum often but \nnot have to do a lot of work in each. The autovacuum tool in recent \nversions tries to estimate this for you, but might not cope in your case.\n\n>>> The Statements\n>>> ---------------\n>>> // normal insert - does not seem to be the problem - runtime is ok\n>>> INSERT INTO \n>>> tbl_reg(idreg,idtype,attr1,...,attr6,loc1,...,loc3,register) \n>>> VALUES(nextval('tbl_reg_idreg_seq'),1,[attr],[loc],1);\n>>> // select finds out which one has not an twin\n>>> // a twin is defined as record with the same attr* values\n>>> // decreases speed over time until timeout by postgresql\n>>> SELECT *\n>>> FROM tbl_reg reg\n>>> WHERE register <> loc1 AND\n>>> idreg NOT IN\n>>> (\n>>> SELECT reg.idreg\n>>> FROM tbl_reg reg, tbl_reg regtwin\n>>> WHERE regtwin.register = 1 AND\n>>> regtwin.type <> 20 AND\n>>> reg.attr1 = regtwin.attr1 AND\n>>> reg.attr2 = regtwin.attr2 AND\n>>> reg.attr3 = regtwin.attr3 AND\n>>> reg.attr4 = regtwin.attr4 AND\n>>> reg.attr5 = regtwin.attr5 AND\n>>> reg.attr6 = regtwin.attr6 AND\n>>> reg.idreg <> regtwin.idreg AND\n>>> reg.register = 2\n>>> );\n>>> I tried to optimize the seslect statement but the group by having \n>>> count(*)\n>>>> 1 solution is half as fast as this statement - relating to the query\n>>> plan of EXPLAIN ANALYZE.\n>>\n>> And what did EXPLAIN ANALYSE show here? I'm guessing you're getting \n>> time increasing as the square of the number of rows in tbl_reg. So, if \n>> 50 rows takes 2500s then 100 rows will take 10000s. 
Now, if you had \n>> enough RAM (or an index) I'd expect the planner to process the table \n>> in a sorted manner so the query-time would increase linearly.\n> \n> EXPLAIN ANALYZE shows at startup\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------- \n> \n> Seq Scan on tbl_reg (cost=25841.35..31433.17 rows=72424 width=6) \n> (actual time=673.877..673.877 rows=0 loops=1)\n> Filter: ((hashed subplan) OR ((loc1 = 2::smallint) AND (loc2 = \n> 2::smallint)))\n> SubPlan\n> -> Merge Join (cost=22186.21..25646.57 rows=77913 width=8) \n> (actual time=285.624..285.624 rows=0 loops=1)\n> Merge Cond: ((\"outer\".attr1 = \"inner\".attr1) AND \n> (\"outer\".\"?column8?\" = \"inner\".\"?column8?\") AND (\"outer\".\"?column9?\" = \n> \"inner\".\"?column9?\") AND (\"outer\".\"?column10?\" = \"inner\".\"?column10?\") \n> AND (\"outer\".\"?column11?\" = \"inner\".\"?column11?\") AND (\"outer\".attr6 = \n> \"inner\".attr6))\n> Join Filter: (\"outer\".idreg <> \"inner\".idreg)\n> -> Sort (cost=4967.06..4971.65 rows=1835 width=56) (actual \n> time=285.622..285.622 rows=0 loops=1)\n> Sort Key: reg.attr1, (reg.attr2)::text, \n> (reg.attr3)::text, (reg.attr4)::text, (reg.attr5)::text, reg.attr6\n> -> Seq Scan on tbl_reg reg (cost=0.00..4867.59 \n> rows=1835 width=56) (actual time=285.551..285.551 rows=0 loops=1)\n> Filter: (register = 2)\n> -> Sort (cost=17219.15..17569.77 rows=140247 width=56) \n> (never executed)\n> Sort Key: regtwin.attr1, (regtwin.attr2)::text, \n> (regtwin.attr3)::text, (regtwin.attr4)::text, (regtwin.attr5)::text, \n> regtwin.attr6\n> -> Seq Scan on tbl_reg regtwin (cost=0.00..5229.70 \n> rows=140247 width=56) (never executed)\n> Filter: ((register = 1) AND (\"type\" <> 20))\n> Total runtime: 604.410 ms\n> (15 rows)\n> \n> EXPLAIN ANALYZE shows after 10 minutes load and 1x vacuuming\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------- \n> \n> Seq Scan on tbl_reg (cost=25841.35..31433.17 rows=72424 width=6) \n> (actual time=43261.910..43261.910 rows=0 loops=1)\n> Filter: ((hashed subplan) OR ((loc1 = 2::smallint) AND (loc2 = \n> 2::smallint)))\n> SubPlan\n> -> Merge Join (cost=22186.21..25646.57 rows=77913 width=8) \n> (actual time=43132.063..43132.063 rows=0 loops=1)\n> Merge Cond: ((\"outer\".attr1 = \"inner\".attr1) AND \n> (\"outer\".\"?column8?\" = \"inner\".\"?column8?\") AND (\"outer\".\"?column9?\" = \n> \"inner\".\"?column9?\") AND (\"outer\".\"?column10?\" = \"inner\".\"?column10?\") \n> AND (\"outer\".\"?column11?\" = \"inner\".\"?column11?\") AND (\"outer\".attr6 = \n> \"inner\".attr6))\n> Join Filter: (\"outer\".idreg <> \"inner\".idreg)\n> -> Sort (cost=4967.06..4971.65 rows=1835 width=56) (actual \n> time=387.071..387.872 rows=1552 loops=1)\n> Sort Key: reg.attr1, (reg.attr2)::text, \n> (reg.attr3)::text, (reg.attr4)::text, (reg.attr5)::text, reg.attr6\n> -> Seq Scan on tbl_reg reg (cost=0.00..4867.59 \n> rows=1835 width=56) (actual time=303.966..325.614 rows=1552 loops=1)\n> Filter: (register = 2)\n> -> Sort (cost=17219.15..17569.77 rows=140247 width=56) \n> (actual time=42368.934..42530.986 rows=145324 loops=1)\n> Sort Key: regtwin.attr1, (regtwin.attr2)::text, \n> (regtwin.attr3)::text, (regtwin.attr4)::text, (regtwin.attr5)::text, \n> regtwin.attr6\n> -> Seq Scan on tbl_reg regtwin (cost=0.00..5229.70 \n> rows=140247 width=56) (actual time=0.015..1159.515 rows=145453 loops=1)\n> Filter: ((register = 1) AND (\"type\" <> 20))\n> Total runtime: 
44073.127 ms\n> (15 rows)\n\nOK - these plans look about the same, but the time is greatly different. \nBoth have rows=140247 as the estimated number of rows in tbl_reg. Either \n you have many more rows in the second case (in which case you're not \nrunning ANALYSE enough) or you have lots of gaps in the table (you're \nnot running VACUUM enough).\n\nI'd then try putting an index on (attr1,attr2,attr3...attr6) and see if \nthat helps reduce time.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Thu, 19 Oct 2006 18:19:16 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DB Performance decreases due to often written/accessed"
},
{
"msg_contents": "On Thu, Oct 19, 2006 at 06:19:16PM +0100, Richard Huxton wrote:\n> OK - these plans look about the same, but the time is greatly different. \n> Both have rows=140247 as the estimated number of rows in tbl_reg. Either \n> you have many more rows in the second case (in which case you're not \n> running ANALYSE enough) or you have lots of gaps in the table (you're \n> not running VACUUM enough).\n \nLook closer... the actual stats show that the sorts in the second case\nare returning far more rows. And yes, analyze probably needs to happen.\n\n> I'd then try putting an index on (attr1,attr2,attr3...attr6) and see if \n> that helps reduce time.\n\nWith bitmap index scans, I think it'd be much better to create 6 indexes\nand see which ones actually get used (and then drop the others).\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Thu, 19 Oct 2006 12:22:50 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DB Performance decreases due to often written/accessed"
},
{
"msg_contents": "On 10/19/06, Jens Schipkowski <[email protected]> wrote:\n> // select finds out which one has not an twin\n> // a twin is defined as record with the same attr* values\n> // decreases speed over time until timeout by postgresql\n> SELECT *\n> FROM tbl_reg reg\n> WHERE register <> loc1 AND\n> idreg NOT IN\n> (\n> SELECT reg.idreg\n> FROM tbl_reg reg, tbl_reg regtwin\n> WHERE regtwin.register = 1 AND\n> regtwin.type <> 20 AND\n> reg.attr1 = regtwin.attr1 AND\n> reg.attr2 = regtwin.attr2 AND\n> reg.attr3 = regtwin.attr3 AND\n> reg.attr4 = regtwin.attr4 AND\n> reg.attr5 = regtwin.attr5 AND\n> reg.attr6 = regtwin.attr6 AND\n> reg.idreg <> regtwin.idreg AND\n> reg.register = 2\n> );\n\n[...]\n\n> We have the problem, that we cannot see any potential to improve SQL\n> statements. Indexing the attr* columns seems not to be an solution,\n> because the data mustn't be unique (twins) and changes really often so\n> reindexing will took too long.\n\n1. your database design is the real culprit here. If you want things\nto run really quickly, solve the problem there by normalizing your\nschema. denomalization is the root cause of many, many, problems\nposted here on this list.\n2. barring that, the above query will run fastest by creating\nmulti-column indexes on regtwin (attr*) fields. and reg(attr*). the\nreal solution to problems like this is often proper idnexing,\nespecially multi column. saying indexes take to long to build is like\nsaying: 'i have a problem, so i am going to replace it with a much\nworse problem'.\n3. try where exists/not exists instead of where in/not in\n\nmerlin\n",
"msg_date": "Thu, 19 Oct 2006 13:32:22 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DB Performance decreases due to often written/accessed table"
},
{
"msg_contents": "Jim C. Nasby wrote:\n> On Thu, Oct 19, 2006 at 06:19:16PM +0100, Richard Huxton wrote:\n>> OK - these plans look about the same, but the time is greatly different. \n>> Both have rows=140247 as the estimated number of rows in tbl_reg. Either \n>> you have many more rows in the second case (in which case you're not \n>> running ANALYSE enough) or you have lots of gaps in the table (you're \n>> not running VACUUM enough).\n> \n> Look closer... the actual stats show that the sorts in the second case\n> are returning far more rows. And yes, analyze probably needs to happen.\n\nThe results are different, I agree, but the plans (and estimates) are \nthe same. Given the deletes and inserts I wasn't sure whether this was \njust lots more rows or a shift in values.\n\n>> I'd then try putting an index on (attr1,attr2,attr3...attr6) and see if \n>> that helps reduce time.\n> \n> With bitmap index scans, I think it'd be much better to create 6 indexes\n> and see which ones actually get used (and then drop the others).\n\nGood idea.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Thu, 19 Oct 2006 19:00:28 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DB Performance decreases due to often written/accessed"
},
{
"msg_contents": "On Thu, 19 Oct 2006 19:32:22 +0200, Merlin Moncure <[email protected]> \nwrote:\n\n> On 10/19/06, Jens Schipkowski <[email protected]> wrote:\n>> // select finds out which one has not an twin\n>> // a twin is defined as record with the same attr* values\n>> // decreases speed over time until timeout by postgresql\n>> SELECT *\n>> FROM tbl_reg reg\n>> WHERE register <> loc1 AND\n>> idreg NOT IN\n>> (\n>> SELECT reg.idreg\n>> FROM tbl_reg reg, tbl_reg regtwin\n>> WHERE regtwin.register = 1 AND\n>> regtwin.type <> 20 AND\n>> reg.attr1 = regtwin.attr1 AND\n>> reg.attr2 = regtwin.attr2 AND\n>> reg.attr3 = regtwin.attr3 AND\n>> reg.attr4 = regtwin.attr4 AND\n>> reg.attr5 = regtwin.attr5 AND\n>> reg.attr6 = regtwin.attr6 AND\n>> reg.idreg <> regtwin.idreg AND\n>> reg.register = 2\n>> );\n>\n> [...]\n>\n>> We have the problem, that we cannot see any potential to improve SQL\n>> statements. Indexing the attr* columns seems not to be an solution,\n>> because the data mustn't be unique (twins) and changes really often so\n>> reindexing will took too long.\n>\n> 1. your database design is the real culprit here. If you want things\n> to run really quickly, solve the problem there by normalizing your\n> schema. denomalization is the root cause of many, many, problems\n> posted here on this list.\nBelieve it is normalized. We also seperated configuration and runtime \ndata. And this is a runtime table.\nThis table holds short living data for devices to be registered by a \nregistration server. The INSERTs are triggered by external devices. The \nmaster data tables are perfectly normalized too. What you are seeing is \nnot the real column names. I changed it due to readability. attr* have \nreally different names and meanings. A \"twin\" (in real, initiator/member \nof the same conferencing group) is defined by these attributes. Due to \nhigh flexibility of this system (serverside configuration/ deviceside \nconfiguration for runtime) there is no other way to normalize.\n\n> 2. barring that, the above query will run fastest by creating\n> multi-column indexes on regtwin (attr*) fields. and reg(attr*). the\n> real solution to problems like this is often proper idnexing,\n> especially multi column. saying indexes take to long to build is like\n> saying: 'i have a problem, so i am going to replace it with a much\n> worse problem'.\nI will index it. Just prepared the test and will run it tomorrow.\n> 3. try where exists/not exists instead of where in/not in\nDid try it, before I switched to NOT IN. It was 10 times slower.\n>\n> merlin\n",
"msg_date": "Thu, 19 Oct 2006 20:34:00 +0200",
"msg_from": "\"Jens Schipkowski\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: DB Performance decreases due to often written/accessed table"
},
{
"msg_contents": "On 10/19/06, Jens Schipkowski <[email protected]> wrote:\n> On Thu, 19 Oct 2006 19:32:22 +0200, Merlin Moncure > > 1. your database design is the real culprit here. If you want things\n> > to run really quickly, solve the problem there by normalizing your\n> > schema. denomalization is the root cause of many, many, problems\n> > posted here on this list.\n> Believe it is normalized. We also seperated configuration and runtime\n> data. And this is a runtime table.\n> This table holds short living data for devices to be registered by a\n> registration server. The INSERTs are triggered by external devices. The\n> master data tables are perfectly normalized too. What you are seeing is\n> not the real column names. I changed it due to readability. attr* have\n> really different names and meanings. A \"twin\" (in real, initiator/member\n> of the same conferencing group) is defined by these attributes. Due to\n> high flexibility of this system (serverside configuration/ deviceside\n> configuration for runtime) there is no other way to normalize.\n\nok, fair enough =) still, it feels odd that you are relating two\ntables on all 6 attributes. istm there is something more elegant\npossible, hard to say.\n\n> > 2. barring that, the above query will run fastest by creating\n> > multi-column indexes on regtwin (attr*) fields. and reg(attr*). the\n> > real solution to problems like this is often proper idnexing,\n> > especially multi column. saying indexes take to long to build is like\n> > saying: 'i have a problem, so i am going to replace it with a much\n> > worse problem'.\n> I will index it. Just prepared the test and will run it tomorrow.\n> > 3. try where exists/not exists instead of where in/not in\n> Did try it, before I switched to NOT IN. It was 10 times slower.\n\ndouble check that when properly indexed.\n\nmerlin\n",
"msg_date": "Thu, 19 Oct 2006 14:59:25 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DB Performance decreases due to often written/accessed table"
}
] |
[
{
"msg_contents": "Hello!\n\nAfter upgrade PostgreSQL from 8.0 to 8.1.4 a VACUUM FULL ANALYZE\nprocess is much slower, from logs:\n\n8.0\n[13666][postgres][2006-10-06 01:13:38 CEST][1340121452] LOG: statement: VACUUM FULL ANALYZE;\n[13666][postgres][2006-10-06 01:39:15 CEST][0] LOG: duration: 1536862.425 ms\n\n\n8.1\n[4535][postgres][2006-10-10 01:08:51 CEST][6144112] LOG: statement: VACUUM FULL ANALYZE;\n[4535][postgres][2006-10-10 02:04:23 CEST][0] LOG: duration: 3332128.332 ms\n\nDatabases are equal.\nI'm not using autovacuum.\n\nLinux kernel is the same in two cases.\n$uname -a\nLinux zamczysko 2.6.13.4 #2 SMP Tue Oct 18 21:19:23 UTC 2005 x86_64 Dual_Core_AMD_Opteron(tm)_Processor_875 unknown PLD Linux\n\nWhy new PostgreSQL is slower?\n\n-- \nAndrzej Zawadzki\n",
"msg_date": "Thu, 19 Oct 2006 15:30:35 +0200",
"msg_from": "Andrzej Zawadzki <[email protected]>",
"msg_from_op": true,
"msg_subject": "VACUUM FULL ANALYZE on 8.1.4 is slower then on 8.0"
},
{
"msg_contents": "On Thu, Oct 19, 2006 at 03:30:35PM +0200, Andrzej Zawadzki wrote:\n> After upgrade PostgreSQL from 8.0 to 8.1.4 a VACUUM FULL ANALYZE\n> process is much slower, from logs:\n\nAre you sure you need VACUUM FULL? If you're vacuuming often enough\nand your free space map settings are adequate then plain VACUUM\n(without FULL) should suffice for routine use.\n\n> 8.0\n> [13666][postgres][2006-10-06 01:13:38 CEST][1340121452] LOG: statement: VACUUM FULL ANALYZE;\n> [13666][postgres][2006-10-06 01:39:15 CEST][0] LOG: duration: 1536862.425 ms\n> \n> \n> 8.1\n> [4535][postgres][2006-10-10 01:08:51 CEST][6144112] LOG: statement: VACUUM FULL ANALYZE;\n> [4535][postgres][2006-10-10 02:04:23 CEST][0] LOG: duration: 3332128.332 ms\n> \n> Databases are equal.\n\nEqual how? Number of tables? Number of tuples? Disk space used?\nActivity, especially updates and deletes? All of the above?\n\nHave you used VACUUM VERBOSE to see how much work each VACUUM is doing?\n\nIs it possible that 8.1 was built with --enable-cassert and 8.0\nwasn't? What does \"SHOW debug_assertions\" show on each server?\n\n-- \nMichael Fuhr\n",
"msg_date": "Thu, 19 Oct 2006 08:25:48 -0600",
"msg_from": "Michael Fuhr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM FULL ANALYZE on 8.1.4 is slower then on 8.0"
},
{
"msg_contents": "Michael Fuhr <[email protected]> writes:\n> On Thu, Oct 19, 2006 at 03:30:35PM +0200, Andrzej Zawadzki wrote:\n>> After upgrade PostgreSQL from 8.0 to 8.1.4 a VACUUM FULL ANALYZE\n>> process is much slower, from logs:\n\n> Is it possible that 8.1 was built with --enable-cassert and 8.0\n> wasn't? What does \"SHOW debug_assertions\" show on each server?\n\nDifferent vacuum delay settings maybe?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Oct 2006 10:32:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM FULL ANALYZE on 8.1.4 is slower then on 8.0 "
},
{
"msg_contents": "Hi, Andrzej,\n\nAndrzej Zawadzki wrote:\n> After upgrade PostgreSQL from 8.0 to 8.1.4 a VACUUM FULL ANALYZE\n> process is much slower, from logs:\n> \n> 8.0\n> [13666][postgres][2006-10-06 01:13:38 CEST][1340121452] LOG: statement: VACUUM FULL ANALYZE;\n> [13666][postgres][2006-10-06 01:39:15 CEST][0] LOG: duration: 1536862.425 ms\n> \n> \n> 8.1\n> [4535][postgres][2006-10-10 01:08:51 CEST][6144112] LOG: statement: VACUUM FULL ANALYZE;\n> [4535][postgres][2006-10-10 02:04:23 CEST][0] LOG: duration: 3332128.332 ms\n> \n> Databases are equal.\n\nAre they on equal disks? And in the same areas of those disks? Some\ncurrent disks tend to drop down their speed at the \"end\" of the LBA\naddress space drastically.\n\nHTH,\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in Europe! www.ffii.org\nwww.nosoftwarepatents.org\n",
"msg_date": "Fri, 20 Oct 2006 13:46:12 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM FULL ANALYZE on 8.1.4 is slower then on 8.0"
}
] |
[
{
"msg_contents": "I just came to think about /proc/sys/swappiness ...\n\nWhen this one is set to a high number (say, 100 - which is maximum), the\nkernel will aggressively swap out all memory that is not beeing\naccessed, to allow more memory for caches. For a postgres server, OS\ncaches are good, because postgres relies on the OS to cache indices,\netc. At the other hand, for any production server it's very bad to\nexperience bursts of iowait when/if the swapped out memory becomes\nneeded - particularly if the server is used for interactive queries,\nlike serving a web application.\n\nI know there are much religion on this topic in general, I'm just\ncurious if anyone has done any serious thoughts or (even better!)\nexperimenting with the swappiness setting on a loaded postgres server.\n\nI would assume that the default setting (60) is pretty OK and sane, and\nthat modifying the setting would have insignificant effect. My\nreligious belief is that, however insignificant, a higher setting would\nhave been better :-)\n\nWe're running linux kernel 2.6.17.7 (debian) on the postgres server, and\nour memory stats looks like this:\n total used free shared buffers cached\nMem: 6083M 5846M 236M 0 31M 5448M\n-/+ buffers/cache: 366M 5716M\nSwap: 2643M 2M 2640M\n\nIn addition to the postgres server we're running some few cronscripts\nand misc on it - nothing significant though.\n\n",
"msg_date": "Thu, 19 Oct 2006 15:54:28 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Swappiness setting on a linux pg server"
},
{
"msg_contents": "On Thu, Oct 19, 2006 at 03:54:28PM +0200, Tobias Brox wrote:\n> I just came to think about /proc/sys/swappiness ...\n> \n> When this one is set to a high number (say, 100 - which is maximum), the\n> kernel will aggressively swap out all memory that is not beeing\n> accessed, to allow more memory for caches. For a postgres server, OS\n> caches are good, because postgres relies on the OS to cache indices,\n> etc. At the other hand, for any production server it's very bad to\n> experience bursts of iowait when/if the swapped out memory becomes\n> needed - particularly if the server is used for interactive queries,\n> like serving a web application.\n> \n> I know there are much religion on this topic in general, I'm just\n> curious if anyone has done any serious thoughts or (even better!)\n> experimenting with the swappiness setting on a loaded postgres server.\n \nI think it'd be much better to experiment with using much larger\nshared_buffers settings. The conventional wisdom there is from 7.x days\nwhen you really didn't want a large buffer, but that doesn't really\napply with the new buffer management we got in 8.0. I know of one site\nthat doubled their performance by setting shared_buffers to 50% of\nmemory.\n\nSomething else to consider is that many people will put pg_xlog on the\nsame drives as the OS (and swap). It's pretty important that those\ndrives not have much activity other than pg_xlog, so any swap activity\nwould have an even larger than normal impact on performance.\n\n> I would assume that the default setting (60) is pretty OK and sane, and\n> that modifying the setting would have insignificant effect. My\n> religious belief is that, however insignificant, a higher setting would\n> have been better :-)\n> \n> We're running linux kernel 2.6.17.7 (debian) on the postgres server, and\n> our memory stats looks like this:\n> total used free shared buffers cached\n> Mem: 6083M 5846M 236M 0 31M 5448M\n> -/+ buffers/cache: 366M 5716M\n> Swap: 2643M 2M 2640M\n> \n> In addition to the postgres server we're running some few cronscripts\n> and misc on it - nothing significant though.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Thu, 19 Oct 2006 10:28:31 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Swappiness setting on a linux pg server"
},
{
"msg_contents": "\n> I just came to think about /proc/sys/swappiness ...\n>\n> When this one is set to a high number (say, 100 - which is maximum), the\n> kernel will aggressively swap out all memory that is not beeing\n> accessed, to allow more memory for caches. For a postgres server, OS\n> caches are good, because postgres relies on the OS to cache indices,\n> etc. At the other hand, for any production server it's very bad to\n> experience bursts of iowait when/if the swapped out memory becomes\n> needed - particularly if the server is used for interactive queries,\n> like serving a web application.\n\nThis is very useful on smaller systems where memory is a scarce commodity.\n I have a Xen virtual server with 128MB ram. I noticed a big improvement\nin query performance when I upped swappiness to 80. It gave just enough\nmore memory to fs buffers so my common queries ran in memory.\n\nYes, throwing more ram at it is usually the better solution, but it's nice\nlinux gives you that knob to turn when adding ram isn't an option, at\nleast for me.\n\n\n-- \nKevin\n\n",
"msg_date": "Thu, 19 Oct 2006 14:39:01 -0600 (MDT)",
"msg_from": "\"Kevin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Swappiness setting on a linux pg server"
}
] |
[
{
"msg_contents": "[Jim C. Nasby - Thu at 10:28:31AM -0500]\n> I think it'd be much better to experiment with using much larger\n> shared_buffers settings. The conventional wisdom there is from 7.x days\n> when you really didn't want a large buffer, but that doesn't really\n> apply with the new buffer management we got in 8.0. I know of one site\n> that doubled their performance by setting shared_buffers to 50% of\n> memory.\n\nOh, that's interessting. I will give it a shot. Our config is\n\"inheritated\" from the 7.x-days, so we have a fairly low setting\ncompared to available memory. From the 7.x-days the logic was that \"a\nlot of careful thought has been given when designing the OS cache/buffer\nsubsystem, we don't really want to reinvent the wheel\" or something like\nthat.\n\nSadly it's not easy to measure the overall performance impact of such\ntunings in a production environment, so such a setting tends to be tuned\nby religion rather than science :-)\n\n> Something else to consider is that many people will put pg_xlog on the\n> same drives as the OS (and swap). It's pretty important that those\n> drives not have much activity other than pg_xlog, so any swap activity\n> would have an even larger than normal impact on performance.\n\nHm ... that's actually our current setting, we placed the postgres\ndatabase itself on a separate disk, not the xlog. So we should have\ndone it the other way around? No wonder the performance is badly\naffected by backups etc ...\n\nWhat else, I gave the swappiness a second thought, compared to our\nactual memory usage statistics ... turning down the swappiness would\nhave no significant effect since we're only using 2M of swap (hardly\nsignificant) and our total memory usage by applications (including the\npg shared buffers) is less than 400M out of 6G. Maybe we could have\nmoved some 50M of this to swap, but that's not really significant\ncompared to our 6G of memory. \n",
"msg_date": "Thu, 19 Oct 2006 18:00:54 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Swappiness setting on a linux pg server"
},
{
"msg_contents": "On Thu, Oct 19, 2006 at 06:00:54PM +0200, Tobias Brox wrote:\n> [Jim C. Nasby - Thu at 10:28:31AM -0500]\n> > I think it'd be much better to experiment with using much larger\n> > shared_buffers settings. The conventional wisdom there is from 7.x days\n> > when you really didn't want a large buffer, but that doesn't really\n> > apply with the new buffer management we got in 8.0. I know of one site\n> > that doubled their performance by setting shared_buffers to 50% of\n> > memory.\n> \n> Oh, that's interessting. I will give it a shot. Our config is\n> \"inheritated\" from the 7.x-days, so we have a fairly low setting\n> compared to available memory. From the 7.x-days the logic was that \"a\n> lot of careful thought has been given when designing the OS cache/buffer\n> subsystem, we don't really want to reinvent the wheel\" or something like\n> that.\n \nYeah, test setups are a good thing to have...\n\n> Sadly it's not easy to measure the overall performance impact of such\n> tunings in a production environment, so such a setting tends to be tuned\n> by religion rather than science :-)\n> \n> > Something else to consider is that many people will put pg_xlog on the\n> > same drives as the OS (and swap). It's pretty important that those\n> > drives not have much activity other than pg_xlog, so any swap activity\n> > would have an even larger than normal impact on performance.\n> \n> Hm ... that's actually our current setting, we placed the postgres\n> database itself on a separate disk, not the xlog. So we should have\n> done it the other way around? No wonder the performance is badly\n> affected by backups etc ...\n \nWell, typically I see setups where people dedicate say 6 drives to the\ndata and 2 drives for the OS and pg_xlog.\n\nThe issue with pg_xlog is you don't need bandwidth... you need super-low\nlatency. The best way to accomplish that is to get a battery-backed RAID\ncontroller that you can enable write caching on. In fact, if the\ncontroller is good enough, you can theoretically get away with just\nbuilding one big RAID10 and letting the controller provide the\nlow-latency fsyncs that pg_xlog depends on.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Thu, 19 Oct 2006 11:31:26 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Swappiness setting on a linux pg server"
}
] |
[
{
"msg_contents": "[Jim C. Nasby - Thu at 10:28:31AM -0500]\n> I think it'd be much better to experiment with using much larger\n> shared_buffers settings. The conventional wisdom there is from 7.x days\n> when you really didn't want a large buffer, but that doesn't really\n> apply with the new buffer management we got in 8.0. I know of one site\n> that doubled their performance by setting shared_buffers to 50% of\n> memory.\n\nI've upped it a bit, but it would require a server restart to get the\nnew setting into effect. This is relatively \"expensive\" for us. Does\nanyone else share the viewpoint of Nasby, and does anyone have\nrecommendation for a good value? Our previous value was 200M, and I\ndon't want to go to the extremes just yet. We have 6G of memory\ntotally.\n",
"msg_date": "Thu, 19 Oct 2006 18:35:30 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Swappiness setting on a linux pg server"
},
{
"msg_contents": "At 12:35 PM 10/19/2006, Tobias Brox wrote:\n>[Jim C. Nasby - Thu at 10:28:31AM -0500]\n> > I think it'd be much better to experiment with using much larger\n> > shared_buffers settings. The conventional wisdom there is from 7.x days\n> > when you really didn't want a large buffer, but that doesn't really\n> > apply with the new buffer management we got in 8.0. I know of one site\n> > that doubled their performance by setting shared_buffers to 50% of\n> > memory.\n>\n>I've upped it a bit, but it would require a server restart to get the\n>new setting into effect. This is relatively \"expensive\" for us. Does\n>anyone else share the viewpoint of Nasby, and does anyone have\n>recommendation for a good value? Our previous value was 200M, and I\n>don't want to go to the extremes just yet. We have 6G of memory\n>totally.\n\nJim is correct that traditional 7.x folklore regarding shared buffer \nsize is nowhere near as valid for 8.x. Jim tends to know what he is \ntalking about when speaking about pg operational issues.\n\nNonetheless, \"YMMV\". The only sure way to know what is best for your \nSW running on your HW under your load conditions is to test, test, test.\n\nA= Find out how much RAM your OS image needs.\nUsually 1/3 to 2/3 of a GB is plenty.\n\nB= Find out how much RAM pg tasks need during typical peak usage and \nhow much each of those tasks is using.\nThis will tell you what work_mem should be.\nNote that you may well find out that you have not been using the best \nsize for work_mem for some tasks when you investigate this.\n\n(Total RAM) - A - B - (small amount for error margin) = 1st pass at \nshared_buffers setting.\n\nIf this results in better performance that your current settings, \neither declare victory and stop or cut the number in half and see \nwhat it does to performance.\n\nThen you can either set it to what experiment thus far has shown to \nbe best or use binary search to change the size of shared_buffers and \ndo experiments to your heart's content.\n\nRon \n\n",
"msg_date": "Thu, 19 Oct 2006 15:10:35 -0400",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Swappiness setting on a linux pg server"
},
{
"msg_contents": "On 10/19/06, Ron <[email protected]> wrote:\n> Nonetheless, \"YMMV\". The only sure way to know what is best for your\n> SW running on your HW under your load conditions is to test, test, test.\n\nanybody have/know of some data on shared buffer settings on 8.1+?\n\nmerlin\n",
"msg_date": "Thu, 19 Oct 2006 15:44:07 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Swappiness setting on a linux pg server"
},
{
"msg_contents": "[Ron - Thu at 03:10:35PM -0400]\n> Jim is correct that traditional 7.x folklore regarding shared buffer \n> size is nowhere near as valid for 8.x. Jim tends to know what he is \n> talking about when speaking about pg operational issues.\n\nI would not doubt it, but it's always better to hear it from more people\n:-)\n\n> Nonetheless, \"YMMV\". The only sure way to know what is best for your \n> SW running on your HW under your load conditions is to test, test, test.\n\nCertainly. My time and possibilities for testing is not\nthat great at the moment, and in any case I believe some small\nadjustments won't cause the really significant results. \n\nIn any case, our database server is not on fire at the moment and people\nare not angry because of slow reports at the moment. (actually, I\nstarted this thread out of nothing but curiousity ... triggered by\nsomebody complaining about his desktop windows computer swapping too\nmuch :-) So, for this round of tunings I'm more than satisfied just\nrelying on helpful rules of the thumb.\n\n> A= Find out how much RAM your OS image needs.\n> Usually 1/3 to 2/3 of a GB is plenty.\n\nA quick glance on \"free\" already revealed we are using less than 400 MB\nout of 6G totally (with the 7.x-mindset that the OS should take care of\ncacheing), and according to our previous settings, the shared buffers\nwas eating 200 MB of this - so most of our memory is free.\n\n> B= Find out how much RAM pg tasks need during typical peak usage and \n> how much each of those tasks is using.\n\nI believe we have quite good control of the queries ... there are\nsafeguards to prevent most of the heavy queries to run concurrently, and\nthe lighter queries shouldn't spend much memory, so it should be safe\nfor us to bump up the setting a bit.\n\nIn any case, I guess the OS is not that bad at handling the memory\nissue. Unused memory will be used relatively intelligently (i.e.\nbuffering the temp files used by sorts) and overuse of memory will cause\nsome swapping (which is probably quite much worse than using temp files\ndirectly, but a little bit of swapping is most probably not a disaster).\n\n",
"msg_date": "Thu, 19 Oct 2006 21:57:14 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Swappiness setting on a linux pg server"
}
] |
[
{
"msg_contents": "[Jim C. Nasby - Thu at 11:31:26AM -0500]\n> Yeah, test setups are a good thing to have...\n\nWe would need to replicate the production traffic as well to do reliable\ntests. Well, we'll get to that one day ...\n\n> The issue with pg_xlog is you don't need bandwidth... you need super-low\n> latency. The best way to accomplish that is to get a battery-backed RAID\n> controller that you can enable write caching on.\n\nSounds a bit risky to me :-)\n\n",
"msg_date": "Thu, 19 Oct 2006 18:39:22 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Swappiness setting on a linux pg server"
},
{
"msg_contents": "On Thu, Oct 19, 2006 at 06:39:22PM +0200, Tobias Brox wrote:\n> [Jim C. Nasby - Thu at 11:31:26AM -0500]\n> > Yeah, test setups are a good thing to have...\n> \n> We would need to replicate the production traffic as well to do reliable\n> tests. Well, we'll get to that one day ...\n \nMarginally reliable tests are usually better than none at all. :)\n\n> > The issue with pg_xlog is you don't need bandwidth... you need super-low\n> > latency. The best way to accomplish that is to get a battery-backed RAID\n> > controller that you can enable write caching on.\n> \n> Sounds a bit risky to me :-)\n\nWell, you do need to understand what happens if the machine does lose\npower... namely you have a limited amount of time to get power back to\nthe machine so that the controller can flush that data out. Other than\nthat, it's not very risky.\n\nAs for shared_buffers, conventional wisdom has been to use between 10%\nand 25% of memory, bounding towards the lower end as you get into larger\nquantities of memory. So in your case, 600M wouldn't be pushing things\nmuch at all. Even 1G wouldn't be that out of the ordinary. Also remember\nthat the more memory for shared_buffers, the less for\nsorting/hashes/etc. (work_mem)\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Thu, 19 Oct 2006 11:45:32 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Swappiness setting on a linux pg server"
}
] |
[
{
"msg_contents": "[Jim C. Nasby - Thu at 11:45:32AM -0500]\n> > > The issue with pg_xlog is you don't need bandwidth... you need super-low\n> > > latency. The best way to accomplish that is to get a battery-backed RAID\n> > > controller that you can enable write caching on.\n> > \n> > Sounds a bit risky to me :-)\n> \n> Well, you do need to understand what happens if the machine does lose\n> power... namely you have a limited amount of time to get power back to\n> the machine so that the controller can flush that data out. Other than\n> that, it's not very risky.\n\nWe have burned ourself more than once due to unreliable raid controllers\n...\n\n> quantities of memory. So in your case, 600M wouldn't be pushing things\n> much at all. Even 1G wouldn't be that out of the ordinary. Also remember\n> that the more memory for shared_buffers, the less for\n> sorting/hashes/etc. (work_mem)\n\nWhat do you mean, a high value for the shared_buffers implicates I\ncan/should lower the work_mem value? Or just that I should remember to\nhave more than enough memory for both work_mem, shared_buffers and OS\ncaches? What is a sane value for the work_mem? It's currently set to\n8M.\n\n",
"msg_date": "Thu, 19 Oct 2006 18:53:49 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Swappiness setting on a linux pg server"
},
{
"msg_contents": "On Thu, Oct 19, 2006 at 06:53:49PM +0200, Tobias Brox wrote:\n> [Jim C. Nasby - Thu at 11:45:32AM -0500]\n> > > > The issue with pg_xlog is you don't need bandwidth... you need super-low\n> > > > latency. The best way to accomplish that is to get a battery-backed RAID\n> > > > controller that you can enable write caching on.\n> > > \n> > > Sounds a bit risky to me :-)\n> > \n> > Well, you do need to understand what happens if the machine does lose\n> > power... namely you have a limited amount of time to get power back to\n> > the machine so that the controller can flush that data out. Other than\n> > that, it's not very risky.\n> \n> We have burned ourself more than once due to unreliable raid controllers\n> ...\n \nWell, if you're buying unreliable hardware, there's not much you can\ndo... you're setting yourself up for problems.\n\n> > quantities of memory. So in your case, 600M wouldn't be pushing things\n> > much at all. Even 1G wouldn't be that out of the ordinary. Also remember\n> > that the more memory for shared_buffers, the less for\n> > sorting/hashes/etc. (work_mem)\n> \n> What do you mean, a high value for the shared_buffers implicates I\n> can/should lower the work_mem value? Or just that I should remember to\n> have more than enough memory for both work_mem, shared_buffers and OS\n> caches? What is a sane value for the work_mem? It's currently set to\n> 8M.\n\nThe key is that there's enough memory for shared_buffers and work_mem\nwithout going to swapping. If you're consuming that much work_mem I\nwouldn't worry at all about OS caching.\n\nWhat's reasonable for work_mem depends on your workload. If you've got\nsome reporting queries that you know aren't run very concurrently they\nmight benefit from large values of work_mem. For stats.distributed.net,\nI set work_mem to something like 2MB in the config file, but the nightly\nbatch routines manually set it up to 256M or more, because I know that\nthose only run one at a time, and having that extra memory means a lot\nof stuff that would otherwise have to spill to disk now stays in memory.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Thu, 19 Oct 2006 12:00:39 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Swappiness setting on a linux pg server"
}
] |
[
{
"msg_contents": "[Jim C. Nasby - Thu at 12:00:39PM -0500]\n> What's reasonable for work_mem depends on your workload. If you've got\n> some reporting queries that you know aren't run very concurrently they\n> might benefit from large values of work_mem. For stats.distributed.net,\n> I set work_mem to something like 2MB in the config file, but the nightly\n> batch routines manually set it up to 256M or more, because I know that\n> those only run one at a time, and having that extra memory means a lot\n> of stuff that would otherwise have to spill to disk now stays in memory.\n\nThat sounds like a good idea; indeed we do have some few heavy reporting\nqueries and they are not run much concurrently (the application checks\nthe pg_stat_activity table and will disallow reports to be taken out if\nthere is too much activity there). We probably would benefit from\nraising the work mem just for those reports, and lower it for the rest\nof the connections.\n\n",
"msg_date": "Thu, 19 Oct 2006 19:10:23 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Swappiness setting on a linux pg server"
}
] |
[
{
"msg_contents": "[Jim C. Nasby - Thu at 12:00:39PM -0500]\n> Well, if you're buying unreliable hardware, there's not much you can\n> do... you're setting yourself up for problems.\n\nI'm luckily not responsible for the hardware, but my general experience\ntells that you never know anything about hardware reliability until the\nhardware actually breaks :-) It's not always so that more expensive\nequipment equals better equipment.\n\n",
"msg_date": "Thu, 19 Oct 2006 19:12:58 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Swappiness setting on a linux pg server"
}
] |
[
{
"msg_contents": "Hello friends,\n\nI am responsible for maintaining a high volume website using postgresql\n8.1.4. Given the amount of reads and writes, I vacuum full the server a\nfew times a week around 1, 2 AM shutting down the site for a few\nminutes. The next day morning around 10 - 11 AM the server slows down\nto death. It used to be that the error 'Too many clients' would be\nrecorded, until I increased the number of clients it can handle, and\nnow it simply slows down to death having lots and lots of postmaster\nprocesses running:\n\nTasks: 665 total, 10 running, 655 sleeping, 0 stopped, 0 zombie\nCpu(s): 14.9% us, 16.7% sy, 0.0% ni, 0.0% id, 68.4% wa, 0.0% hi,\n0.0% si\nMem: 2074932k total, 2051572k used, 23360k free, 2736k\nbuffers\nSwap: 2096440k total, 1844448k used, 251992k free, 102968k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 6420 postgres 15 0 26912 11m 10m R 3.6 0.6 0:00.11 postmaster\n 6565 postgres 16 0 26912 11m 10m S 3.6 0.6 0:00.12 postmaster\n 6707 postgres 15 0 26912 11m 10m S 3.3 0.6 0:00.10 postmaster\n 6715 postgres 15 0 26912 11m 10m S 3.3 0.6 0:00.11 postmaster\n 6765 postgres 15 0 26912 11m 10m S 3.3 0.6 0:00.11 postmaster\n 6147 postgres 15 0 26912 11m 10m R 3.0 0.6 0:00.15 postmaster\n 6311 postgres 15 0 26904 11m 10m R 3.0 0.6 0:00.10 postmaster\n 6551 postgres 15 0 26912 11m 10m R 3.0 0.6 0:00.09 postmaster\n 6803 postgres 16 0 26912 11m 10m R 3.0 0.6 0:00.09 postmaster\n 6255 postgres 15 0 26904 11m 10m R 2.6 0.6 0:00.14 postmaster\n 6357 postgres 15 0 26912 11m 10m R 2.6 0.6 0:00.11 postmaster\n 6455 postgres 15 0 26912 11m 10m S 2.6 0.6 0:00.10 postmaster\n 6457 postgres 15 0 26912 11m 10m S 2.6 0.6 0:00.11 postmaster\n 6276 postgres 15 0 26912 11m 10m S 2.3 0.6 0:00.10 postmaster\n 6475 postgres 15 0 26912 11m 10m R 2.3 0.6 0:00.11 postmaster\n 6868 postgres 15 0 26912 11m 10m S 2.3 0.6 0:00.07 postmaster\n 6891 postgres 15 0 26912 11m 10m S 1.3 0.6 0:00.19 postmaster\n\nThanks for your help in advance,\nMike\n\n",
"msg_date": "19 Oct 2006 18:24:15 -0700",
"msg_from": "\"Mike\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Vacuum and Memory Loss"
},
{
"msg_contents": "> Hello friends,\n> \n> I am responsible for maintaining a high volume website using \n> postgresql\n> 8.1.4. Given the amount of reads and writes, I vacuum full \n> the server a\n> few times a week around 1, 2 AM shutting down the site for a few\n> minutes. The next day morning around 10 - 11 AM the server slows down\n> to death. It used to be that the error 'Too many clients' would be\n> recorded, until I increased the number of clients it can handle, and\n> now it simply slows down to death having lots and lots of postmaster\n> processes running:\n\nIf you are saying that running the vacuum full helps your performance, then\nyou want to make sure you are running plain vacuum and analyze frequently\nenough. If you have a database which has lots of update and delete\nstatements, and you do not run vacuum regularly enough, you can end up with\nlots dead blocks slowing down database scans. If you do lots of updates and\ndeletes you should shedule vacuum and analyze more often, or you might want\nto look into running auto vacuum:\n\nhttp://www.postgresql.org/docs/8.1/interactive/maintenance.html#AUTOVACUUM\n\nIf you aren't doing lots of updates and deletes, then maybe you just have a\nbusy database. Lots of postmaster processes implies you have lots of\nclients connecting to your database. You can turn on stats_command_string\nand then check the pg_stat_activity table to see what these connections are\ndoing. If they are running queries, you can try to optimize them. Try\nturning on logging of long running queries with log_min_duration_statement.\nThen use EXPLAIN ANALYZE to see why the query is slow and if anything can be\ndone to speed it up.\n\n\n",
"msg_date": "Sun, 22 Oct 2006 16:30:53 -0500",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum and Memory Loss"
},
{
"msg_contents": "Mike wrote:\n> Hello friends,\n> \n> I am responsible for maintaining a high volume website using postgresql\n> 8.1.4. Given the amount of reads and writes, I vacuum full the server a\n> few times a week around 1, 2 AM shutting down the site for a few\n> minutes. The next day morning around 10 - 11 AM the server slows down\n> to death. It used to be that the error 'Too many clients' would be\n> recorded, until I increased the number of clients it can handle, and\n> now it simply slows down to death having lots and lots of postmaster\n> processes running:\n> \n> Tasks: 665 total, 10 running, 655 sleeping, 0 stopped, 0 zombie\n> Cpu(s): 14.9% us, 16.7% sy, 0.0% ni, 0.0% id, 68.4% wa, 0.0% hi,\n> 0.0% si\n> Mem: 2074932k total, 2051572k used, 23360k free, 2736k\n> buffers\n> Swap: 2096440k total, 1844448k used, 251992k free, 102968k cached\n\nThis seems to be saying you have 1.8GB of swap in use. I'd start by \nchecking with vmstat whether you're actively swapping. If so, you're \noverallocating memory.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Mon, 23 Oct 2006 09:45:59 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum and Memory Loss"
},
{
"msg_contents": "On Mon, Oct 23, 2006 at 09:45:59AM +0100, Richard Huxton wrote:\n> Mike wrote:\n> >Hello friends,\n> >\n> >I am responsible for maintaining a high volume website using postgresql\n> >8.1.4. Given the amount of reads and writes, I vacuum full the server a\n> >few times a week around 1, 2 AM shutting down the site for a few\n> >minutes. The next day morning around 10 - 11 AM the server slows down\n> >to death. It used to be that the error 'Too many clients' would be\n> >recorded, until I increased the number of clients it can handle, and\n> >now it simply slows down to death having lots and lots of postmaster\n> >processes running:\n> >\n> >Tasks: 665 total, 10 running, 655 sleeping, 0 stopped, 0 zombie\n> >Cpu(s): 14.9% us, 16.7% sy, 0.0% ni, 0.0% id, 68.4% wa, 0.0% hi,\n> >0.0% si\n> >Mem: 2074932k total, 2051572k used, 23360k free, 2736k\n> >buffers\n> >Swap: 2096440k total, 1844448k used, 251992k free, 102968k cached\n> \n> This seems to be saying you have 1.8GB of swap in use. I'd start by \n> checking with vmstat whether you're actively swapping. If so, you're \n> overallocating memory.\n\nWhich could easily be caused by a combination of trying to handle too\nmany database connections at once and setting work_mem too high.\n\nI've often gone into client sites to find they've set the database up to\naccept hundreds or thousands of connections, even though the hardware\nthey're running on would most likely fall over if they actually had that\nmany simultaneously active connections. In many cases, increasing the\nnumber of connections the the database will hurt performance rather than\nhelp it, because you're now asking an already overloaded server to do\neven more work.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Mon, 23 Oct 2006 16:36:41 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum and Memory Loss"
}
] |
[
{
"msg_contents": "I would like to understand what causes some of my indexes to be slower to\nuse than others with PostgreSQL 8.1. On a particular table, I have an int4\nprimary key, an indexed unique text 'name' column and a functional index of\ntype text. The function (person_sort_key()) is declared IMMUTABLE and\nRETURNS NULL ON NULL INPUT.\n\nA simple query ordering by each of these columns generates nearly identical\nquery plans, however runtime differences are significantly slower using the\nfunctional index. If I add a new column to the table containing the result\nof the function, index it and query ordering by this new column then the\nruntime is nearly an order of magnitude faster than using the functional\nindex (and again, query plans are nearly identical).\n\n(The following log is also at\nhttp://rafb.net/paste/results/vKVuyi47.nln.html if that is more readable)\n\ndemo=# vacuum full analyze person;\nVACUUM\ndemo=# reindex table person;\nREINDEX\ndemo=# cluster person_pkey on person;\nCLUSTER\ndemo=# explain analyze select * from person order by id offset 527000 limit 50;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=37039.09..37042.61 rows=50 width=531) (actual\ntime=1870.393..1870.643 rows=50 loops=1)\n -> Index Scan using person_pkey on person (cost=0.00..37093.42\nrows=527773 width=531) (actual time=0.077..1133.659 rows=527050 loops=1)\n Total runtime: 1870.792 ms\n(3 rows)\n\ndemo=# cluster person_name_key on person;\nCLUSTER\ndemo=# explain analyze select * from person order by name offset 527000\nlimit 50;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=63727.87..63733.91 rows=50 width=531) (actual\ntime=1865.769..1866.028 rows=50 loops=1)\n -> Index Scan using person_name_key on person (cost=0.00..63821.34\nrows=527773 width=531) (actual time=0.068..1138.649 rows=527050 loops=1)\n Total runtime: 1866.153 ms\n(3 rows)\n\ndemo=# cluster person_sorting_idx on person;\nCLUSTER\ndemo=# explain analyze select * from person order by\nperson_sort_key(displayname,name) offset 527000 limit 50;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=65806.62..65812.86 rows=50 width=531) (actual\ntime=13846.677..13848.102 rows=50 loops=1)\n -> Index Scan using person_sorting_idx on person (cost=0.00..65903.14\nrows=527773 width=531) (actual time=0.214..13093.090 rows=527050 loops=1)\n Total runtime: 13848.254 ms\n(3 rows)\n\ndemo=# alter table person add column sort_key text;\nALTER TABLE\ndemo=# update person set sort_key=person_sort_key(displayname,name);\nUPDATE 527773\ndemo=# create index person_sort_key_idx on person(sort_key);\nCREATE INDEX\ndemo=# vacuum analyze person;\nVACUUM\ndemo=# cluster person_sort_key_idx on person;\nCLUSTER\ndemo=# explain analyze select * from person order by sort_key offset 527000\nlimit 50;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=41069.28..41073.18 rows=50 width=553) (actual\ntime=1999.456..1999.724 rows=50 loops=1)\n -> Index Scan using person_sort_key_idx on person (cost=0.00..41129.52\nrows=527773 width=553) (actual time=0.079..1274.952 
rows=527050 loops=1)\n Total runtime: 1999.858 ms\n(3 rows)\n\n\n-- \nStuart Bishop <[email protected]> http://www.canonical.com/\nCanonical Ltd. http://www.ubuntu.com/",
"msg_date": "Fri, 20 Oct 2006 14:10:30 +0700",
"msg_from": "Stuart Bishop <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow functional indexes?"
},
{
"msg_contents": "On 10/20/06, Stuart Bishop <[email protected]> wrote:\n> I would like to understand what causes some of my indexes to be slower to\n> use than others with PostgreSQL 8.1. On a particular table, I have an int4\n> primary key, an indexed unique text 'name' column and a functional index of\n> type text. The function (person_sort_key()) is declared IMMUTABLE and\n> RETURNS NULL ON NULL INPUT.\n\ndatabase will not allow you to create index if the function is not immutable.\n\n> A simple query ordering by each of these columns generates nearly identical\n> query plans, however runtime differences are significantly slower using the\n> functional index. If I add a new column to the table containing the result\n> of the function, index it and query ordering by this new column then the\n> runtime is nearly an order of magnitude faster than using the functional\n> index (and again, query plans are nearly identical).\n\n> demo=# explain analyze select * from person order by id offset 527000 limit 50;\n> QUERY PLAN\n\nit looks you just turned up a bad interaction between a functional\nindex and 'offset' probably your function is getting executed extra\ntimes or there is a sort going on. however, I'd suggest not using\n'offset', because its bad design.\n\nmerlin\n",
"msg_date": "Sat, 21 Oct 2006 08:09:31 +0530",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow functional indexes?"
},
{
"msg_contents": "Stuart Bishop <[email protected]> writes:\n> I would like to understand what causes some of my indexes to be slower to\n> use than others with PostgreSQL 8.1.\n\nI was about to opine that it was all about different levels of\ncorrelation between the index order and physical table order ... but\nyour experiments with freshly clustered indexes seem to cast doubt\non that idea. Are you sure your function is really immutable? A buggy\nfunction could possibly lead to a \"clustered\" index not being in\nphysical order.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 20 Oct 2006 23:30:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow functional indexes? "
},
{
"msg_contents": "Tom Lane wrote:\n> Stuart Bishop <[email protected]> writes:\n>> I would like to understand what causes some of my indexes to be slower to\n>> use than others with PostgreSQL 8.1.\n> \n> I was about to opine that it was all about different levels of\n> correlation between the index order and physical table order ... but\n> your experiments with freshly clustered indexes seem to cast doubt\n> on that idea. Are you sure your function is really immutable? A buggy\n> function could possibly lead to a \"clustered\" index not being in\n> physical order.\n\nDefinitely immutable. Here is the function definition:\n\n\nCREATE OR REPLACE FUNCTION person_sort_key(displayname text, name text)\nRETURNS text\nLANGUAGE plpythonu IMMUTABLE RETURNS NULL ON NULL INPUT AS\n$$\n # NB: If this implementation is changed, the person_sort_idx needs to be\n # rebuilt along with any other indexes using it.\n import re\n\n try:\n strip_re = SD[\"strip_re\"]\n except KeyError:\n strip_re = re.compile(\"(?:[^\\w\\s]|[\\d_])\", re.U)\n SD[\"strip_re\"] = strip_re\n\n displayname, name = args\n\n # Strip noise out of displayname. We do not have to bother with\n # name, as we know it is just plain ascii.\n displayname = strip_re.sub('', displayname.decode('UTF-8').lower())\n return (\"%s, %s\" % (displayname.strip(), name)).encode('UTF-8')\n$$;\n\n\n-- \nStuart Bishop <[email protected]>\nhttp://www.stuartbishop.net/",
"msg_date": "Tue, 24 Oct 2006 15:00:01 +0800",
"msg_from": "Stuart Bishop <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow functional indexes?"
},
{
"msg_contents": "Stuart Bishop wrote:\n> I would like to understand what causes some of my indexes to be slower to\n> use than others with PostgreSQL 8.1. On a particular table, I have an int4\n> primary key, an indexed unique text 'name' column and a functional index of\n> type text. The function (person_sort_key()) is declared IMMUTABLE and\n> RETURNS NULL ON NULL INPUT.\n> \n> A simple query ordering by each of these columns generates nearly identical\n> query plans, however runtime differences are significantly slower using the\n> functional index. If I add a new column to the table containing the result\n> of the function, index it and query ordering by this new column then the\n> runtime is nearly an order of magnitude faster than using the functional\n> index (and again, query plans are nearly identical).\n>\n> (The following log is also at\n> http://rafb.net/paste/results/vKVuyi47.nln.html if that is more readable)\n\nHere is a minimal test case that demonstrates the issue. Can anyone else\nreproduce these results? Of the four EXPLAIN ANALYZE SELECT statements at\nthe end, the one that orders by a user created IMMUTABLE stored procedure is\nconsistently slower than the other three variants.\n\n\nBEGIN;\nDROP TABLE TestCase;\nCOMMIT;\nABORT;\n\nBEGIN;\nCREATE TABLE TestCase (name text, alt_name text);\n\nCREATE OR REPLACE FUNCTION munge(s text) RETURNS text\nIMMUTABLE RETURNS NULL ON NULL INPUT\nLANGUAGE plpgsql AS $$\nBEGIN\n RETURN lower(s);\nEND;\n$$;\n\n-- Fill the table with random strings\nCREATE OR REPLACE FUNCTION fill_testcase(num_rows int) RETURNS boolean\nLANGUAGE plpgsql AS\n$$\nDECLARE\n row_num int;\n char_num int;\n name text;\nBEGIN\n FOR row_num IN 1..num_rows LOOP\n name := '';\n FOR char_num IN 1..round(random() * 100) LOOP\n name := name || chr((\n round(random() * (ascii('z') - ascii('!'))) + ascii('!')\n )::int);\n END LOOP;\n INSERT INTO TestCase VALUES (name, lower(name));\n IF row_num % 20000 = 0 THEN\n RAISE NOTICE '% of % rows inserted', row_num, num_rows;\n END IF;\n END LOOP;\n RETURN TRUE;\nEND;\n$$;\n\nSELECT fill_testcase(500000);\n\nCREATE INDEX testcase__name__idx ON TestCase(name);\nCREATE INDEX testcase__lower__idx ON TestCase(lower(name));\nCREATE INDEX testcase__munge__idx ON TestCase(munge(name));\nCREATE INDEX testcase__alt_name__idx ON TestCase(alt_name);\n\nCOMMIT;\n\nANALYZE TestCase;\n\nEXPLAIN ANALYZE SELECT * FROM TestCase ORDER BY name;\nEXPLAIN ANALYZE SELECT * FROM TestCase ORDER BY lower(name);\nEXPLAIN ANALYZE SELECT * FROM TestCase ORDER BY munge(name);\nEXPLAIN ANALYZE SELECT * FROM TestCase ORDER BY alt_name;\n\nEXPLAIN ANALYZE SELECT * FROM TestCase ORDER BY name;\nEXPLAIN ANALYZE SELECT * FROM TestCase ORDER BY lower(name);\nEXPLAIN ANALYZE SELECT * FROM TestCase ORDER BY munge(name);\nEXPLAIN ANALYZE SELECT * FROM TestCase ORDER BY alt_name;\n\n\n-- \nStuart Bishop <[email protected]>\nhttp://www.stuartbishop.net/",
"msg_date": "Sun, 05 Nov 2006 12:34:15 -0800",
"msg_from": "Stuart Bishop <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow functional indexes?"
},
{
"msg_contents": "Stuart Bishop <[email protected]> writes:\n> Here is a minimal test case that demonstrates the issue. Can anyone else\n> reproduce these results? Of the four EXPLAIN ANALYZE SELECT statements at\n> the end, the one that orders by a user created IMMUTABLE stored procedure is\n> consistently slower than the other three variants.\n\nWow, interesting. I'm surprised we never realized this before, but\nhere's the deal: the generated plan computes the ORDER BY expressions\neven if we end up not needing them because the ordering is created by\nan indexscan rather than an explicit sort step. (Such a sort step would\nof course need the values as input.) So the differential you're seeing\nrepresents the time for all those useless evaluations of the function.\nThe difference in the estimated cost comes from that too --- the code\ndoing the estimation can see perfectly well that there's an extra\nfunction call in the plan ...\n\nNot sure whether there's a simple way to fix this; it might take some\nnontrivial rejiggering in the planner. Or maybe not, but I don't have\nany cute ideas about it at the moment.\n\nI wonder whether there are any other cases where we are doing useless\ncomputations of resjunk columns?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 05 Nov 2006 20:23:44 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow functional indexes? "
},
{
"msg_contents": "I have a varchar field which is most commonly queried like \"someField like\n'%abcd'\". Realizing that it wouldn't be able to use an index for this type\nof query I created a reverse() function and an index using the function\nreverse(someField) so that the query would be performed as\n\"reverse(someField) like reverse('%abcd')\". When I looked at the query plan\nit seemed like it was using the new reverse index properly but also seemed\nto run slower. Would this explain these bazaar results? I have since gone\nback to the method without using the reverse function. Thanks\n\nOn 11/5/06, Tom Lane <[email protected]> wrote:\n>\n> Stuart Bishop <[email protected]> writes:\n> > Here is a minimal test case that demonstrates the issue. Can anyone else\n> > reproduce these results? Of the four EXPLAIN ANALYZE SELECT statements\n> at\n> > the end, the one that orders by a user created IMMUTABLE stored\n> procedure is\n> > consistently slower than the other three variants.\n>\n> Wow, interesting. I'm surprised we never realized this before, but\n> here's the deal: the generated plan computes the ORDER BY expressions\n> even if we end up not needing them because the ordering is created by\n> an indexscan rather than an explicit sort step. (Such a sort step would\n> of course need the values as input.) So the differential you're seeing\n> represents the time for all those useless evaluations of the function.\n> The difference in the estimated cost comes from that too --- the code\n> doing the estimation can see perfectly well that there's an extra\n> function call in the plan ...\n>\n> Not sure whether there's a simple way to fix this; it might take some\n> nontrivial rejiggering in the planner. Or maybe not, but I don't have\n> any cute ideas about it at the moment.\n>\n> I wonder whether there are any other cases where we are doing useless\n> computations of resjunk columns?\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\n\n\n-- \nGene Hart\ncell: 443-604-2679\n\nI have a varchar field which is most commonly queried like \"someField like '%abcd'\". Realizing that it wouldn't be able to use an index for this type of query I created a reverse() function and an index using the function reverse(someField) so that the query would be performed as \"reverse(someField) like reverse('%abcd')\". When I looked at the query plan it seemed like it was using the new reverse index properly but also seemed to run slower. Would this explain these bazaar results? I have since gone back to the method without using the reverse function. Thanks\nOn 11/5/06, Tom Lane <[email protected]> wrote:\nStuart Bishop <[email protected]> writes:> Here is a minimal test case that demonstrates the issue. Can anyone else> reproduce these results? Of the four EXPLAIN ANALYZE SELECT statements at\n> the end, the one that orders by a user created IMMUTABLE stored procedure is> consistently slower than the other three variants.Wow, interesting. I'm surprised we never realized this before, but\nhere's the deal: the generated plan computes the ORDER BY expressionseven if we end up not needing them because the ordering is created byan indexscan rather than an explicit sort step. (Such a sort step would\nof course need the values as input.) 
So the differential you're seeingrepresents the time for all those useless evaluations of the function.The difference in the estimated cost comes from that too --- the code\ndoing the estimation can see perfectly well that there's an extrafunction call in the plan ...Not sure whether there's a simple way to fix this; it might take somenontrivial rejiggering in the planner. Or maybe not, but I don't have\nany cute ideas about it at the moment.I wonder whether there are any other cases where we are doing uselesscomputations of resjunk columns? regards, tom lane---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [email protected] so that your message can get through to the mailing list cleanly\n-- Gene Hartcell: 443-604-2679",
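For reference, a minimal sketch of the reverse-index trick being described (PostgreSQL of that era has no built-in reverse(), so this is just one possible definition; sometable/somefield are placeholders, and text_pattern_ops is only needed when the database locale is not C):

CREATE OR REPLACE FUNCTION reverse(text) RETURNS text
    IMMUTABLE STRICT LANGUAGE plpgsql AS $$
DECLARE
    result text := '';
    i int;
BEGIN
    FOR i IN 1 .. length($1) LOOP
        result := substr($1, i, 1) || result;
    END LOOP;
    RETURN result;
END;
$$;

CREATE INDEX sometable_somefield_rev_idx
    ON sometable ((reverse(somefield)) text_pattern_ops);

SELECT * FROM sometable
WHERE reverse(somefield) LIKE reverse('abcd') || '%';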
"msg_date": "Sun, 5 Nov 2006 20:33:55 -0500",
"msg_from": "Gene <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow functional indexes?"
}
] |
[
{
"msg_contents": "Hello all,\n\nI am currently working out the best type of machine for a high volume \npgsql database that I going to need for a project. I will be \npurchasing a new server specifically for the database, and it won't \nbe running any other applications. I will be using FreeBSD 6.1 Stable.\n\nI think it may be beneficial if I give a brief overview of the types \nof database access. There are several groups of tables and associated \naccesses to them.\n\nThe first can be thought of as users details and configuration \ntables. They will have low read and write access (say around 10 - 20 \na min). SIzed at around 1/2 Million rows.\n\nThe second part is logging, this will be used occasionally for reads \nwhen reports are run, but I will probably back that off to more \naggregated data tables, so can probably think of this as a write only \ntables. Several table will each have around 200-300 inserts a second. \nThe can be archived on a regular basis to keep the size down, may be \nonce a day, or once a week. Not sure yet.\n\nThe third part will be transactional and will have around 50 \ntransaction a second. A transaction is made up of a query followed by \nan update, followed by approx 3 inserts. In addition some of these \ntables will be read out of the transactions at approx once per second.\n\nThere will be around 50 simultaneous connections.\n\nI hope that overview is a) enough and b) useful background to this \ndiscussion.\n\nI have some thoughts but I need them validating / discussing. If I \nhad the money I could buy the hardware and sped time testing \ndifferent options, thing is I need to get this pretty much right on \nthe hardware front first time. I'll almost certainly be buying Dell \nkit, but could go for HP as an alternative.\n\nProcessor : I understand that pgsql is not CPU intensive, but that \neach connection uses its own process. The HW has an option of upto 4 \ndual core xeon processors. My thoughts would be that more lower spec \nprocessors would be better than fewer higher spec ones. But the \nquestion is 4 (8 cores) wasted because there will be so much blocking \non I/O. Is 2 (4 cores) processors enough. I was thinking 2 x 2.6G \ndual core Xeons would be enough.\n\nMemory : I know this is very important for pgsql, and the more you \nhave the more of the tables can reside in memory. I was thinking of \naround 8 - 12G, but the machine can hold a lot more. Thing is memory \nis still quite expensive, and so I don't to over spec it if its not \ngoing to get used.\n\nDisk : Ok so this is the main bottleneck of the system. And the thing \nI know least about, so need the most help with. I understand you get \ngood improvements if you keep the transaction log on a different disk \nfrom the database, and that raid 5 is not as good as people think \nunless you have lots of disks.\n\nMy option in disks is either 5 x 15K rpm disks or 8 x 10K rpm disks \n(all SAS), or if I pick a different server I can have 6 x 15K rpm or \n8 x 10K rpm (again SAS). In each case controlled by a PERC 5/i (which \nI think is an LSI Mega Raid SAS 8408E card).\n\nSo the question here is will more disks at a slower speed be better \nthan fewer disks as a higher speed?\n\nAssuming I was going to have a mirrored pair for the O/S and \ntransaction logs that would leave me with 3 or 4 15K rpm for the \ndatabase, 3 would mean raid 5 (not great at 3 disks), 4 would give me \nraid 10 option if I wanted it. 
Or I could have raid 5 across all 5/6 \ndisks and not separate the transaction and database onto different \ndisks. Better performance from raid 5 with more disks, but does \nhaving the transaction logs and database on the same disks \ncounteract / worsen the performance?\n\nIf I had the 8 10K disks, I could have 2 as a mirrored pair for O/S \nTransaction, and still have 6 for raid 5. But the disks are slower.\n\nAnybody have any good thoughts on my disk predicament, and which \noptions will serve me better.\n\nYour thoughts are much appreciated.\n\nRegards\n\nBen\n\n\n\n\n\n\n",
"msg_date": "Fri, 20 Oct 2006 08:49:22 +0100",
"msg_from": "Ben Suffolk <[email protected]>",
"msg_from_op": true,
"msg_subject": "New hardware thoughts"
},
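For the memory question raised in the post above, a minimal postgresql.conf starting point for a dedicated 8 GB box of that era (8.1-style integer settings). Every number here is an assumption to be validated against the real workload, and on FreeBSD the shared memory limits such as kern.ipc.shmmax will need raising first:

    shared_buffers = 50000          # ~400 MB, counted in 8 kB buffers on 8.1
    effective_cache_size = 786432   # ~6 GB of expected OS cache, in 8 kB pages
    work_mem = 16384                # kB per sort; keep modest with ~50 connections
    maintenance_work_mem = 262144   # kB; speeds index builds and vacuum
    checkpoint_segments = 32        # spread checkpoint I/O for the insert-heavy logging tables
    wal_buffers = 128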
{
"msg_contents": "Ben Suffolk wrote:\n> Hello all,\n> \n> I am currently working out the best type of machine for a high volume \n> pgsql database that I going to need for a project. I will be purchasing \n> a new server specifically for the database, and it won't be running any \n> other applications. I will be using FreeBSD 6.1 Stable.\n> \n> I think it may be beneficial if I give a brief overview of the types of \n> database access. There are several groups of tables and associated \n> accesses to them.\n> \n> The first can be thought of as users details and configuration tables. \n> They will have low read and write access (say around 10 - 20 a min). \n> SIzed at around 1/2 Million rows.\n> \n> The second part is logging, this will be used occasionally for reads \n> when reports are run, but I will probably back that off to more \n> aggregated data tables, so can probably think of this as a write only \n> tables. Several table will each have around 200-300 inserts a second. \n> The can be archived on a regular basis to keep the size down, may be \n> once a day, or once a week. Not sure yet.\n> \n > The third part will be transactional and will have around 50\n > transaction a second. A transaction is made up of a query followed by\n > an update, followed by approx 3 inserts. In addition some of these\n > tables will be read out of the transactions at approx once per second.\n >\n> There will be around 50 simultaneous connections.\n >\n > I hope that overview is a) enough and b) useful background to this\n > discussion.\n\nSounds like you have a very good idea of what to expect. Are these solid \nstats or certain estimates? Estimates can vary when it comes time to start.\n\n> Processor : I understand that pgsql is not CPU intensive, but that each \n> connection uses its own process. The HW has an option of upto 4 dual \n> core xeon processors. My thoughts would be that more lower spec \n> processors would be better than fewer higher spec ones. But the question \n> is 4 (8 cores) wasted because there will be so much blocking on I/O. Is \n> 2 (4 cores) processors enough. I was thinking 2 x 2.6G dual core Xeons \n> would be enough.\n\nI would think 2 will cope with what you describe but what about in 12 \nmonths time? Can you be sure your needs won't increase? And will the \ncost of 4 CPU's cut your other options? If all 50 users may be running \nthe 3rd part at the same time (or is that your 50 trans. a second?) then \nI'd consider the 4.\n\n> Memory : I know this is very important for pgsql, and the more you have \n> the more of the tables can reside in memory. I was thinking of around 8 \n> - 12G, but the machine can hold a lot more. Thing is memory is still \n> quite expensive, and so I don't to over spec it if its not going to get \n> used.\n\n8GB is a good starting point for a busy server but a few hundred $ on \nthe extra ram can make more difference than extra disks (more for the \nreading part than writing).\n\nWhat you describe plans several times 300 inserts to logging plus 150 \ninserts and 50 updates and 1 read a second plus occasional reads to the \nlogging and user data.\nWill it be raw data fed in and saved or will the server be calculating a \nmajority of the inserted data? If so go for the 4 cpu's.\n\nAgain allow room for expansion.\n\n> Disk : Ok so this is the main bottleneck of the system. And the thing I \n> know least about, so need the most help with. 
I understand you get good \n> improvements if you keep the transaction log on a different disk from \n> the database, and that raid 5 is not as good as people think unless you \n> have lots of disks.\n> \n> My option in disks is either 5 x 15K rpm disks or 8 x 10K rpm disks (all \n> SAS), or if I pick a different server I can have 6 x 15K rpm or 8 x 10K \n> rpm (again SAS). In each case controlled by a PERC 5/i (which I think is \n> an LSI Mega Raid SAS 8408E card).\n> \n> So the question here is will more disks at a slower speed be better than \n> fewer disks as a higher speed?\n\nGenerally more disks at slower speed - 2 10K disks in raid 0 is faster \nthan 1 15K disk. More disks also allow more options.\n\nChoosing the best RAID controller can make a lot of difference too.\n\n> Assuming I was going to have a mirrored pair for the O/S and transaction \n> logs that would leave me with 3 or 4 15K rpm for the database, 3 would \n> mean raid 5 (not great at 3 disks), 4 would give me raid 10 option if I \n> wanted it. Or I could have raid 5 across all 5/6 disks and not separate \n> the transaction and database onto different disks. Better performance \n> from raid 5 with more disks, but does having the transaction logs and \n> database on the same disks counteract / worsen the performance?\n> \n> If I had the 8 10K disks, I could have 2 as a mirrored pair for O/S \n> Transaction, and still have 6 for raid 5. But the disks are slower.\n> \n\nI might consider RAID 5 with 8 disks but would lean more for 2 RAID 10 \nsetups. This can give you the reliability and speed with system and xlog \non one and data on the other.\n\nSounds to me like you have it worked out even if you are a little \nindecisive on a couple of finer points.\n\n\n-- \n\nShane Ambler\[email protected]\n\nGet Sheeky @ http://Sheeky.Biz\n",
"msg_date": "Sat, 21 Oct 2006 00:12:59 +0930",
"msg_from": "Shane Ambler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New hardware thoughts"
},
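A minimal sketch of the xlog/data split Shane describes above: on 8.x the usual way to put pg_xlog on its own array is a symlink. The paths are illustrative only, assuming the cluster lives in /usr/local/pgsql/data and the second array is mounted at /wal:

    # with the postmaster stopped
    pg_ctl -D /usr/local/pgsql/data stop
    mv /usr/local/pgsql/data/pg_xlog /wal/pg_xlog
    ln -s /wal/pg_xlog /usr/local/pgsql/data/pg_xlog
    pg_ctl -D /usr/local/pgsql/data start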
{
"msg_contents": "Ben,\n\nOn 20-Oct-06, at 3:49 AM, Ben Suffolk wrote:\n\n> Hello all,\n>\n> I am currently working out the best type of machine for a high \n> volume pgsql database that I going to need for a project. I will be \n> purchasing a new server specifically for the database, and it won't \n> be running any other applications. I will be using FreeBSD 6.1 Stable.\n>\n> I think it may be beneficial if I give a brief overview of the \n> types of database access. There are several groups of tables and \n> associated accesses to them.\n>\n> The first can be thought of as users details and configuration \n> tables. They will have low read and write access (say around 10 - \n> 20 a min). SIzed at around 1/2 Million rows.\n>\n> The second part is logging, this will be used occasionally for \n> reads when reports are run, but I will probably back that off to \n> more aggregated data tables, so can probably think of this as a \n> write only tables. Several table will each have around 200-300 \n> inserts a second. The can be archived on a regular basis to keep \n> the size down, may be once a day, or once a week. Not sure yet.\n>\n> The third part will be transactional and will have around 50 \n> transaction a second. A transaction is made up of a query followed \n> by an update, followed by approx 3 inserts. In addition some of \n> these tables will be read out of the transactions at approx once \n> per second.\n>\n> There will be around 50 simultaneous connections.\n>\n> I hope that overview is a) enough and b) useful background to this \n> discussion.\n>\n> I have some thoughts but I need them validating / discussing. If I \n> had the money I could buy the hardware and sped time testing \n> different options, thing is I need to get this pretty much right on \n> the hardware front first time. I'll almost certainly be buying Dell \n> kit, but could go for HP as an alternative.\n>\n> Processor : I understand that pgsql is not CPU intensive, but that \n> each connection uses its own process. The HW has an option of upto \n> 4 dual core xeon processors. My thoughts would be that more lower \n> spec processors would be better than fewer higher spec ones. But \n> the question is 4 (8 cores) wasted because there will be so much \n> blocking on I/O. Is 2 (4 cores) processors enough. I was thinking 2 \n> x 2.6G dual core Xeons would be enough.\n>\n> Memory : I know this is very important for pgsql, and the more you \n> have the more of the tables can reside in memory. I was thinking of \n> around 8 - 12G, but the machine can hold a lot more. Thing is \n> memory is still quite expensive, and so I don't to over spec it if \n> its not going to get used.\n>\n> Disk : Ok so this is the main bottleneck of the system. And the \n> thing I know least about, so need the most help with. I understand \n> you get good improvements if you keep the transaction log on a \n> different disk from the database, and that raid 5 is not as good as \n> people think unless you have lots of disks.\n>\n> My option in disks is either 5 x 15K rpm disks or 8 x 10K rpm disks \n> (all SAS), or if I pick a different server I can have 6 x 15K rpm \n> or 8 x 10K rpm (again SAS). In each case controlled by a PERC 5/i \n> (which I think is an LSI Mega Raid SAS 8408E card).\n>\nYou mentioned a \"Perc\" controller, so I'll assume this is a Dell.\n\nMy advice is to find another supplier. 
check the archives for Dell.\n\nBasically you have no idea what the Perc controller is since it is \nwhatever Dell decides to ship that day.\n\nIn general though you are going down the right path here. Disks \nfirst, memory second, cpu third\n\nDave\n\n> So the question here is will more disks at a slower speed be better \n> than fewer disks as a higher speed?\n>\n> Assuming I was going to have a mirrored pair for the O/S and \n> transaction logs that would leave me with 3 or 4 15K rpm for the \n> database, 3 would mean raid 5 (not great at 3 disks), 4 would give \n> me raid 10 option if I wanted it. Or I could have raid 5 across \n> all 5/6 disks and not separate the transaction and database onto \n> different disks. Better performance from raid 5 with more disks, \n> but does having the transaction logs and database on the same disks \n> counteract / worsen the performance?\n>\n> If I had the 8 10K disks, I could have 2 as a mirrored pair for O/S \n> Transaction, and still have 6 for raid 5. But the disks are slower.\n>\n> Anybody have any good thoughts on my disk predicament, and which \n> options will serve me better.\n>\n> Your thoughts are much appreciated.\n>\n> Regards\n>\n> Ben\n>\n>\n>\n>\n>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\n",
"msg_date": "Fri, 20 Oct 2006 10:58:10 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New hardware thoughts"
},
{
"msg_contents": "Cheers Shane,\n\n> Sounds like you have a very good idea of what to expect. Are these \n> solid stats or certain estimates? Estimates can vary when it comes \n> time to start.\n\nThe figures all come from how my application interacts with the \ndatabase when an event happens, so the scaling of operations to each \nother is accurate, the number of operations is based on an estimate \nof the user interactions with the system, and the figures I quote are \nactually peak figures based on some fairly reliable research. If \nanything its more likely to be lower then higher, but I like to air \non the side of caution, and so its important for know that I can \nsustain this throughput, and have an easy upgrade path in the \nhardware I choose now to help if I do need to be able to cope with \nmore load in the future.\n\nAlthough I suspect the next step would be to move things like the \nlogging into a separate database to relieve some of the load.\n\n> I would think 2 will cope with what you describe but what about in \n> 12 months time? Can you be sure your needs won't increase? And will \n> the cost of 4 CPU's cut your other options? If all 50 users may be \n> running the 3rd part at the same time (or is that your 50 trans. a \n> second?) then I'd consider the 4.\n\nThe 50 connections is pretty much a constant from the distributes \napplication servers, and only some about 10 of them will be \nresponsible for running the transactions , the others being more \nrelated to the reading, and logging, and thus mainly staying in the \nidle state. So I would think I am better off keeping the CPU sockets \nspare, and adding them if needed. Thus enabling more budget for \nmemory / disks.\n\n> 8GB is a good starting point for a busy server but a few hundred $ \n> on the extra ram can make more difference than extra disks (more \n> for the reading part than writing).\n\nI guess any spare budget I have after the disks should be spend on as \nmuch memory as possible.\n\n> What you describe plans several times 300 inserts to logging plus \n> 150 inserts and 50 updates and 1 read a second plus occasional \n> reads to the logging and user data.\n> Will it be raw data fed in and saved or will the server be \n> calculating a majority of the inserted data? If so go for the 4 cpu's.\n\nThe inserts are all raw (pre calculated) data, so not work needed by \nthe database server its self bar the actual insert.\n\n> Generally more disks at slower speed - 2 10K disks in raid 0 is \n> faster than 1 15K disk. More disks also allow more options.\n\nYes I figured striped slow disks are faster then non striped fast \ndisks, but what about 8 striped slow disks vs 5 striped fast disks? \nHow do you calculate what the maximum throughput of a disk system \nwould be? I know that a bit academic really as I need to split the \ndisks up for the transfer log and the table data, so the large number \nof slower disks is as you suggest better anyway.\n\n> I might consider RAID 5 with 8 disks but would lean more for 2 RAID \n> 10 setups. This can give you the reliability and speed with system \n> and xlog on one and data on the other.\n\nAssuming I go with 8 disks, I guess the real question I have no idea \nabout is the speed relationship of the transfer log to the table \nspace data. 
In other words if I have 2 disks in a raid 1 mirrored \npair for the transfer log (and the O/S, but can't see it needing to \nuse disk once boots really - so long as it does not need swap space) \nand 6 disks in a raid 1 + 0 striped mirrored pair would that be \nbetter than having 2 equal raid 1 + 0 sets of 4 disks.\n\nClearly if the requirements on the transfer log are the same as the \ntable data then 2 equal 1+0 sets are better, but if the table data is \nat least 1/3 more intensive that the transfer log I think the 2 + 6 \nshould be better. Does anybody know which it is?\n\n> Sounds to me like you have it worked out even if you are a little \n> indecisive on a couple of finer points.\n\nThanks, I guess its more about validating my thoughts are more or \nless right, and helping tweak the bits that could be better.\n\nRegards\n\nBen\n\n\n",
"msg_date": "Fri, 20 Oct 2006 19:22:18 +0100",
"msg_from": "Ben Suffolk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New hardware thoughts"
},
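On the "how do you calculate the maximum throughput of a disk system" question above, the usual answer on this list is to measure it once the array is built rather than calculate it. A rough sketch, assuming the candidate array is mounted at /data (paths and sizes are illustrative; bonnie++ or iozone give more detail, and iostat flags differ slightly between Linux and FreeBSD):

    # sequential write, sized at roughly 2x RAM so the cache can't hide it
    time sh -c "dd if=/dev/zero of=/data/ddtest bs=8k count=2000000 && sync"

    # sequential read back
    time dd if=/data/ddtest of=/dev/null bs=8k

    # per-device throughput while a test run is going
    iostat -x 5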
{
"msg_contents": "> You mentioned a \"Perc\" controller, so I'll assume this is a Dell.\n>\n> My advice is to find another supplier. check the archives for Dell.\n>\n> Basically you have no idea what the Perc controller is since it is \n> whatever Dell decides to ship that day.\n>\n> In general though you are going down the right path here. Disks \n> first, memory second, cpu third\n>\n> Dave\n\nYes I am looking at either the 2950 or the 6850. I think the only \nthink that the 6850 really offers me over the 2950 is more \nexpandability in the spare processor, and additional memory\nsockets. In all other respects the config I am looking at would fit \neither chassis. Although the 2950, being slightly newer has the DRAC \n5 (dells implementation of IPMI) management, which may be useful.\n\nI hear what you say about the raid card, but how likely are they to \nchange it from the LSI Mega Raid one in reality? But I am open to \nsuggestions if you have any specific models from other manufacturers \nI should look at. I do need to be able to get the fast hardware \nsupport on it though that I can get from the likes of Dells 4 hours \non site call out, so rolling my own isn't an option on this one \nreally (unless it was so much cheaper I could have a hot standby or \nat least a cupboard of all the needed parts instantly available to me)\n\nRegards\n\nBen\n\n",
"msg_date": "Fri, 20 Oct 2006 19:34:30 +0100",
"msg_from": "Ben Suffolk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New hardware thoughts"
},
{
"msg_contents": "Ben Suffolk wrote:\n>> You mentioned a \"Perc\" controller, so I'll assume this is a Dell.\n>>\n>> My advice is to find another supplier. check the archives for Dell.\n>>\n>> Basically you have no idea what the Perc controller is since it is\n>> whatever Dell decides to ship that day.\n>>\n>> In general though you are going down the right path here. Disks first,\n>> memory second, cpu third\n>>\n>> Dave\n> \n> Yes I am looking at either the 2950 or the 6850. I think the only think\n> that the 6850 really offers me over the 2950 is more expandability in\n> the spare processor, and additional memory\n> sockets. In all other respects the config I am looking at would fit\n> either chassis. Although the 2950, being slightly newer has the DRAC 5\n> (dells implementation of IPMI) management, which may be useful.\n\nGet an HP with the 64* series. They are a good, well rounded machine for\nPostgreSQL.\n\nhttp://h10010.www1.hp.com/wwpc/pscmisc/vac/us/en/ss/proliant/proliant-dl.html?jumpid=re_R295_prodexp/busproducts/computing-server/proliant-dl\n\n> I hear what you say about the raid card, but how likely are they to\n> change it from the LSI Mega Raid one in reality? But I am open to\n\nHeh... very likely. I have a 6 drive Dell machine with a Perc controller\n(lsi rebrand). If I put it in RAID 5, it refuses to get more than 8 megs\na second. If I put it in RAID 10, it get about 50 megs a second.\n\nIf I get the offshelf LSI Megaraid withe the same configuration? You\ndon't want to know... it will just make you want to cry at the fact that\nyou bought a Dell.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n> suggestions if you have any specific models from other manufacturers I\n> should look at. I do need to be able to get the fast hardware support on\n> it though that I can get from the likes of Dells 4 hours on site call\n> out, so rolling my own isn't an option on this one really (unless it was\n> so much cheaper I could have a hot standby or at least a cupboard of all\n> the needed parts instantly available to me)\n> \n> Regards\n> \n> Ben\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Fri, 20 Oct 2006 11:52:09 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New hardware thoughts"
},
{
"msg_contents": "Hi Ben ,\n\n\n>> You mentioned a \"Perc\" controller, so I'll assume this is a Dell.\n>>\n>> My advice is to find another supplier. check the archives for Dell.\n>>\n>> Basically you have no idea what the Perc controller is since it is \n>> whatever Dell decides to ship that day.\n>>\n>> In general though you are going down the right path here. Disks \n>> first, memory second, cpu third\n>>\n>> Dave\n>\n> Yes I am looking at either the 2950 or the 6850. I think the only \n> think that the 6850 really offers me over the 2950 is more \n> expandability in the spare processor, and additional memory\nI see (in first mail) you plan to use bsd 6.1 on dell2950.\n--- flame on\nOff topic for postgresql performance , but i'd like to warn you neither \nperc5i crap nor network adapter got proper support for bsd 6.1 stable ( \ndell2950 box )\ndmesg -a | grep bce\nbce0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500\n inet6 fe80::213:72ff:fe61:2ef6%bce1 prefixlen 64 tentative scopeid 0x2\nbce0: link state changed to UP\nbce0: /usr/src/sys/dev/bce/if_bce.c(5032): Watchdog timeout occurred, \nresetting!\nbce0: link state changed to DOWN\nbce0: link state changed to UP\nuname -a\nFreeBSD xxx 6.1-STABLE FreeBSD 6.1-STABLE #0: \nxxx:/usr/obj/usr/src/sys/customkenelcompiled-30-Aug-2006 i386\nProblem with (latest?) raid perc is that only one logical volume is \nsupported.\nYou may find some bits of info on freebsd mailing lists.\nAt least for n/w card problem i see no solution until now.\n3 month old history: due to buggy firmware on maxtor disks sold by dell \n2 servers from our server farm having raid5 crashed and data on raid \narray was lost.\nWe were lucky to have proper replication solution.\nIf you decide to choose 2950, you have to use linux instead of bsd 6.1 . \nAlso buy 2 boxes instead of 1 and set up slony replication for redundancy.\ngo dell , go to hell.\n--- flame off\n\ngood luck!\n\nregards, alvis\n\n\n",
"msg_date": "Fri, 20 Oct 2006 23:26:02 +0300",
"msg_from": "alvis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New hardware thoughts"
},
{
"msg_contents": ">> Yes I am looking at either the 2950 or the 6850. I think the only \n>> think that the 6850 really offers me over the 2950 is more \n>> expandability in the spare processor, and additional memory\n> I see (in first mail) you plan to use bsd 6.1 on dell2950.\n> --- flame on\n> Off topic for postgresql performance , but i'd like to warn you \n> neither perc5i crap nor network adapter got proper support for bsd \n> 6.1 stable ( dell2950 box )\n> dmesg -a | grep bce\n> bce0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500\n> inet6 fe80::213:72ff:fe61:2ef6%bce1 prefixlen 64 tentative \n> scopeid 0x2\n> bce0: link state changed to UP\n> bce0: /usr/src/sys/dev/bce/if_bce.c(5032): Watchdog timeout \n> occurred, resetting!\n> bce0: link state changed to DOWN\n> bce0: link state changed to UP\n> uname -a\n> FreeBSD xxx 6.1-STABLE FreeBSD 6.1-STABLE #0: xxx:/usr/obj/usr/ \n> src/sys/customkenelcompiled-30-Aug-2006 i386\n> Problem with (latest?) raid perc is that only one logical volume is \n> supported.\n> You may find some bits of info on freebsd mailing lists.\n> At least for n/w card problem i see no solution until now.\n> 3 month old history: due to buggy firmware on maxtor disks sold by \n> dell 2 servers from our server farm having raid5 crashed and data \n> on raid array was lost.\n> We were lucky to have proper replication solution.\n> If you decide to choose 2950, you have to use linux instead of bsd \n> 6.1 . Also buy 2 boxes instead of 1 and set up slony replication \n> for redundancy.\n> go dell , go to hell.\n> --- flame off\n>\n> good luck!\n\nThanks Alvis, its good to hear this sort of problem before one \ncommits to a purchase decision!\n\nI guess it makes the HP's Joshua mentioned in a reply more promising. \nAre there any other suppliers I should be looking at do you think. \nI'm keen on FreeBSD to be honest rather than Linux (I don't want to \nstart any holy wars on this as its not the place) as then its the \nsame as all my other servers, so support / sysadmin is easier if they \nare all the same.\n\nHow about the Fujitsu Siemens Sun Clones? I have not really looked at \nthem but have heard the odd good thing about them.\n\nBen\n\n",
"msg_date": "Fri, 20 Oct 2006 21:33:43 +0100",
"msg_from": "Ben Suffolk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New hardware thoughts"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n[mailto:pgsql-performance-\n> [email protected]] On Behalf Of Joshua D. Drake\n> Sent: Friday, October 20, 2006 2:52 PM\n> To: Ben Suffolk\n> Cc: Dave Cramer; [email protected]\n> Subject: Re: [PERFORM] New hardware thoughts\n> \n> Ben Suffolk wrote:\n> >> You mentioned a \"Perc\" controller, so I'll assume this is a Dell.\n> >>\n> >> My advice is to find another supplier. check the archives for Dell.\n> >>\n> >> Basically you have no idea what the Perc controller is since it is\n> >> whatever Dell decides to ship that day.\n> >>\n> >> In general though you are going down the right path here. Disks\nfirst,\n> >> memory second, cpu third\n> >>\n> >> Dave\n> >\n> > Yes I am looking at either the 2950 or the 6850. I think the only\nthink\n> > that the 6850 really offers me over the 2950 is more expandability\nin\n> > the spare processor, and additional memory\n> > sockets. In all other respects the config I am looking at would fit\n> > either chassis. Although the 2950, being slightly newer has the DRAC\n5\n> > (dells implementation of IPMI) management, which may be useful.\n> \n> Get an HP with the 64* series. They are a good, well rounded machine\nfor\n> PostgreSQL.\n> \n> http://h10010.www1.hp.com/wwpc/pscmisc/vac/us/en/ss/proliant/proliant-\n>\ndl.html?jumpid=re_R295_prodexp/busproducts/computing-server/proliant-dl\n> \n> > I hear what you say about the raid card, but how likely are they to\n> > change it from the LSI Mega Raid one in reality? But I am open to\n> \n> Heh... very likely. I have a 6 drive Dell machine with a Perc\ncontroller\n> (lsi rebrand). If I put it in RAID 5, it refuses to get more than 8\nmegs\n> a second. If I put it in RAID 10, it get about 50 megs a second.\n> \n> If I get the offshelf LSI Megaraid withe the same configuration? You\n> don't want to know... it will just make you want to cry at the fact\nthat\n> you bought a Dell.\n\nI agree there's better platforms out there than Dell, but the above is\nsimply not true for the 2950. Raid 5, dd, on 6 disks, I get about\n260Mb/s sustained writes. Granted, this should be faster, but... it's a\nfar cry from 8 or 50MB/s. I posted some numbers here a while back on the\n2950, so you might want to dig those out of the archives. \n\nFor CPU, if that's a concern, make sure you get Woodcrest with 4MB\nshared cache per socket. These are extremely fast CPU's (Intel's 80%\nperformance improvements over the old Xeons actually seem close). Oh,\nand I would NOT recommend planning to add CPU's to a dell box after\nyou've purchased it. I've seen too many CPU upgrades go awry. Adding\ndisks, no biggie, adding ram, eh, don't mind, adding CPU, I try to stay\naway from for reliability purposes.\n\nAlso, I have had experience with at least half dozen 2850's and 2950's -\nall have had the LSI controllers re-branded as Perc. If this is a\nconcern, talk with dell, and I believe you get a 30 day money-back\nguarantee. I've used this before, and yes, they will take the server\nback. The sales guys aren't too bright, they'll promise anything, but as\nlong as you can give the server back... (true, we buy a lot of dell\nservers.. so... get confirmation from dell on what return policy applies\nto your purchase)\n\nIf you're not concerned about space, go for the 8 2.5\" disks. 
You'll get\nmore raw storage out of 300GB 3.5\", but unless you need it, you'd be\nbetter served with the additional spindles.\n\nAs for FreeBSD- I'd advise taking a good look at 6.2, its' in beta and\nthey've fixed quite a few problems with the 2950 (Raid controller and\nbce nic issues come to mind). \n\nLastly, if you have the money and rack space for an external disk cage,\ntake a look at Dell's MD1000 - not as good as some of the sun offerings,\nbut not too shabby for dell. (Note that I have not tested the MD1000 so\nI'm just going off of my 2950 experience and the specs for the MD1000).\n\nThe above comes from being stuck with dell and trying to make the best\nof it. Turns out it's not as bad as it used to be. Oh, and side note,\nthis may be obvious for some, but if you're running BSD and need\nsupport, ask to speak to the Linux guys (or simply tell them you're\nrunning Linux). Avoid Dell's windows support at all costs...\n\n- Bucky\n\n\n",
"msg_date": "Sun, 22 Oct 2006 14:59:39 -0400",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New hardware thoughts"
},
{
"msg_contents": "On 20-10-2006 16:58 Dave Cramer wrote:\n> Ben,\n> \n>> My option in disks is either 5 x 15K rpm disks or 8 x 10K rpm disks \n>> (all SAS), or if I pick a different server I can have 6 x 15K rpm or 8 \n>> x 10K rpm (again SAS). In each case controlled by a PERC 5/i (which I \n>> think is an LSI Mega Raid SAS 8408E card).\n>>\n> You mentioned a \"Perc\" controller, so I'll assume this is a Dell.\n> \n> My advice is to find another supplier. check the archives for Dell.\n> \n> Basically you have no idea what the Perc controller is since it is \n> whatever Dell decides to ship that day.\n\nAs far as I know, the later Dell PERC's have all been LSI \nLogic-controllers, to my knowledge Dell has been a major contributor to \nthe LSI-Linux drivers...\nAt least the 5/i and 5/e have LSI-logic controller chips. Although the \n5/e is not an exact copy of the LSI Mega raid 8480E, its board layout \nand BBU-memory module are quite different. It does share its \nfunctionality however and has afaik the same controller-chip on it.\n\nCurrently we're using a Dell 1950 with PERC 5/e connecting a MD1000 \nSAS-enclosure, filled with 15 36GB 15k rpm disks. And the Dell-card \neasily beats an ICP Vortex-card we also connected to that enclosure.\n\nOw and we do get much more than, say, 8-50 MB/sec out of it. WinBench99 \ngets about 644MB/sec in sequential reading tops from a 14-disk raid10 \nand although IOmeter is a bit less dramatic it still gets over \n240MB/sec. I have no idea how fast a simple dd would be and have no \nbonnie++ results (at hand) either.\nAt least in our benchmarks, we're convinced enough that it is a good \nset-up. There will be faster set-ups, but at this price-point it won't \nsurprise me if its the fastest disk-set you can get.\n\nBy the way, as far as I know, HP offers the exact same broadcom network \nchip in their systems as Dell does... So if that broadcom chip is \nunstable on a Dell in FreeBSD, it might very well be unstable in a HP too.\n\nBest regards,\n\nArjen\n",
"msg_date": "Mon, 23 Oct 2006 00:12:43 +0200",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New hardware thoughts"
},
{
"msg_contents": "\n>> If I get the offshelf LSI Megaraid withe the same configuration? You\n>> don't want to know... it will just make you want to cry at the fact\n> that\n>> you bought a Dell.\n> \n> I agree there's better platforms out there than Dell, but the above is\n> simply not true for the 2950. Raid 5, dd, on 6 disks, I get about\n> 260Mb/s sustained writes. Granted, this should be faster, but... it's a\n> far cry from 8 or 50MB/s. I posted some numbers here a while back on the\n> 2950, so you might want to dig those out of the archives. \n\nWell these are 3 year old machines, they could have improved a bit but \nit is quite true for the version of the Dells I have. I can duplicate it \non both machines.\n\nFrankly Dell has a *long* way to go to prove to me that they are a \nquality vendor for Server hardware.\n\nJoshua D. Drake\n\n> \n\n",
"msg_date": "Sun, 22 Oct 2006 20:02:06 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New hardware thoughts"
},
{
"msg_contents": "On 20-10-2006 22:33 Ben Suffolk wrote:\n> How about the Fujitsu Siemens Sun Clones? I have not really looked at \n> them but have heard the odd good thing about them.\n\nFujitsu doesn't build Sun clones! That really is insulting for them ;-) \nThey do offer Sparc-hardware, but that's a bit higher up the market.\n\nOn the other hand, they also offer nice x86-server hardware. We've had \nour hands on a RX300 (2U, dual woodcrest, six 3.5\" sas-bays, integraded \nlsi-logic raid-controller) and found it to be a very nice machine.\n\nBut again, they also offer (the same?) Broadcom networking on board. \nJust like Dell and HP. And it is a LSI Logic sas-controller on board, so \nif FBSD has trouble with either of those, its hard to find anything \nsuitable at all in the market.\n\nBest regards,\n\nArjen\n",
"msg_date": "Mon, 23 Oct 2006 09:32:57 +0200",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New hardware thoughts"
},
{
"msg_contents": "-logic raid-controller) and found it to be a very nice machine.\n> \n> But again, they also offer (the same?) Broadcom networking on board.\n> Just like Dell and HP. And it is a LSI Logic sas-controller on board,\nso\n> if FBSD has trouble with either of those, its hard to find anything\n> suitable at all in the market.\n> \nYou may want to search the bsd -stable and -hardware archives for\nconfirmation on this, but I believe the RAID/SAS issues have been fixed\nin -stable and 6.2-beta1. The bce0 driver appears to have been fixed\nmore recently, but it's looking like it'll be fixed for the next round\nof beta testing.\n\nWith any hardware for a critical server, you need to ensure redundancy\n(RAID, etc) and for a critical server, you probably want either an\nautomatic spare hd failover done by the RAID (the 2950 RAID can be\nconfigured to do this) or an entire spare server/replication solution.\nWhile x86 class dells aren't even in the same ballpark as say an IBM\niSeries/pSeries for reliability, I haven't found their more recent boxes\n(2850, 2950) to be significantly worse than other vendors (HP might be a\nlittle better, but it's still x86 class hardware).\n\nHTH\n- Bucky\n",
"msg_date": "Mon, 23 Oct 2006 16:28:44 -0400",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New hardware thoughts"
},
{
"msg_contents": "On Oct 20, 2006, at 10:58 AM, Dave Cramer wrote:\n\n> My advice is to find another supplier. check the archives for Dell.\n\nNot necessarily bad to go with Dell. There are *some* of their \ncontrollers that are wicked fast in some configurations. However, \nfinding which ones are fast is very tricky unless you buy + return \nthe box you want to test :-)\n\n>\n> Basically you have no idea what the Perc controller is since it is \n> whatever Dell decides to ship that day.\n\nFUD!!!\n\nThey don't randomly change the controllers under the same name. If \nyou order a PERC4e/Si controller you will get the same controller \nevery time. This particular controller (found in their PE1850) is \nincredibly fast, sustaining over 80Mb/sec writes to a mirror. I \nmeasured that during a DB mirror using slony.",
"msg_date": "Mon, 23 Oct 2006 17:02:56 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New hardware thoughts"
},
{
"msg_contents": "> \n> FUD!!!\n> \n> They don't randomly change the controllers under the same name. If you\n> order a PERC4e/Si controller you will get the same controller every\n> time. \n\nActually Vivek this isn't true. Yes the hardware will likely be the\nsame, but the firmware rev will likely be different and I have seen\nfirmware make an incredible difference for them.\n\n> This particular controller (found in their PE1850) is incredibly\n> fast, sustaining over 80Mb/sec writes to a mirror. I measured that\n> during a DB mirror using slony.\n\nO.k. but my experience shows that mirroring isn't where their problem\nis, raid 5 or 10 is :)\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Mon, 23 Oct 2006 14:08:33 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New hardware thoughts"
},
{
"msg_contents": "On Oct 23, 2006, at 5:08 PM, Joshua D. Drake wrote:\n\n>>\n>> They don't randomly change the controllers under the same name. \n>> If you\n>> order a PERC4e/Si controller you will get the same controller every\n>> time.\n>\n> Actually Vivek this isn't true. Yes the hardware will likely be the\n> same, but the firmware rev will likely be different and I have seen\n> firmware make an incredible difference for them.\n\nFair enough... but you don't expect LSI to never update their \nfirmware either, I suspect... not that I'm a big dell apologist.. \nthey're totally off of my personally approved server vendor for db \nservers.\n\n>\n>> This particular controller (found in their PE1850) is incredibly\n>> fast, sustaining over 80Mb/sec writes to a mirror. I measured that\n>> during a DB mirror using slony.\n>\n> O.k. but my experience shows that mirroring isn't where their problem\n> is, raid 5 or 10 is :)\n\nLike I said, for some configurations they're great! Finding those \nconfigs is difficult.",
"msg_date": "Mon, 23 Oct 2006 17:30:19 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New hardware thoughts"
},
{
"msg_contents": "Vivek Khera wrote:\n> \n> On Oct 23, 2006, at 5:08 PM, Joshua D. Drake wrote:\n> \n>>>\n>>> They don't randomly change the controllers under the same name. If you\n>>> order a PERC4e/Si controller you will get the same controller every\n>>> time.\n>>\n>> Actually Vivek this isn't true. Yes the hardware will likely be the\n>> same, but the firmware rev will likely be different and I have seen\n>> firmware make an incredible difference for them.\n> \n> Fair enough... but you don't expect LSI to never update their firmware\n> either, I suspect... \n\nTrue, but I have *never* had to update the firmware of the LSI (which\nwas my actual point :))\n\n> Like I said, for some configurations they're great! Finding those\n> configs is difficult.\n\nAgreed.\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Mon, 23 Oct 2006 14:33:59 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New hardware thoughts"
},
{
"msg_contents": "On Sat, Oct 21, 2006 at 12:12:59AM +0930, Shane Ambler wrote:\n> Generally more disks at slower speed - 2 10K disks in raid 0 is faster \n> than 1 15K disk. More disks also allow more options.\n\nNot at writing they're not (unless you're using RAID0... ugh).\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Mon, 23 Oct 2006 16:47:09 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New hardware thoughts"
}
] |
[
{
"msg_contents": "Hello Performancers,\n\nhas anyone a pgBench tool running on Windows?\n\nI want to experiment with various settings to tune; and would prefer using\nsomething ready made before coming up with my own misstakes.\n\nHarald\n\n\n-- \nGHUM Harald Massa\npersuadere et programmare\nHarald Armin Massa\nReinsburgstraße 202b\n70197 Stuttgart\n0173/9409607\n-\nPython: the only language with more web frameworks than keywords.\n\nHello Performancers,has anyone a pgBench tool running on Windows?I want to experiment with various settings to tune; and would prefer using something ready made before coming up with my own misstakes.\nHarald-- GHUM Harald Massapersuadere et programmareHarald Armin MassaReinsburgstraße 202b70197 Stuttgart0173/9409607-Python: the only language with more web frameworks than keywords.",
"msg_date": "Fri, 20 Oct 2006 10:13:58 +0200",
"msg_from": "\"Harald Armin Massa\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgBench on Windows"
},
{
"msg_contents": "> Hello Performancers,\n> \n> has anyone a pgBench tool running on Windows?\n\nDoes the one that ships in the installer not work?\n\n//Magnus\n",
"msg_date": "Sat, 21 Oct 2006 14:40:45 +0200",
"msg_from": "\"Magnus Hagander\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgBench on Windows"
},
{
"msg_contents": ">Does the one that ships in the installer not work?\n//Magnus\n\nit does work.\n\n*putting ashes on my head*\n\nGoogled around and only found pgbench.c; never looked in program directory.\n\nSorry, my mistake.\n\nHarald\n\n>\n>\n>\n\n\n-- \nGHUM Harald Massa\npersuadere et programmare\nHarald Armin Massa\nReinsburgstraße 202b\n70197 Stuttgart\n0173/9409607\n-\nPython: the only language with more web frameworks than keywords.\n\n>Does the one that ships in the installer not work?//Magnusit does work.*putting ashes on my head*Googled around and only found pgbench.c; never looked in program directory. Sorry, my mistake.\nHarald-- GHUM Harald Massa\npersuadere et programmareHarald Armin MassaReinsburgstraße 202b70197 Stuttgart0173/9409607-Python: the only language with more web frameworks than keywords.",
"msg_date": "Sat, 21 Oct 2006 15:07:32 +0200",
"msg_from": "\"Harald Armin Massa\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgBench on Windows"
}
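For anyone else looking for it: the pgbench shipped with the Windows installer is driven the same way as on other platforms. A minimal session, where the scale factor, client count and transaction count are arbitrary choices for illustration:

    createdb bench
    pgbench -i -s 10 bench            # initialize, scale 10 = ~1 million accounts rows
    pgbench -c 10 -t 1000 bench       # 10 clients, 1000 transactions each (TPC-B-like mix)
    pgbench -c 10 -t 1000 -S bench    # same load, SELECT-only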
] |
[
{
"msg_contents": "[Jim C. Nasby - Thu at 11:31:26AM -0500]\n> The issue with pg_xlog is you don't need bandwidth... you need super-low\n> latency. The best way to accomplish that is to get a battery-backed RAID\n> controller that you can enable write caching on. In fact, if the\n> controller is good enough, you can theoretically get away with just\n> building one big RAID10 and letting the controller provide the\n> low-latency fsyncs that pg_xlog depends on.\n\nI was talking a bit about our system administrator. We're running 4\ndisks in raid 1+0 for the database and 2 disks in raid 1 for the WALs\nand for OS. He wasn't really sure if we had write cacheing on the RAID\ncontroller or not. He pasted me some lines from the dmesg:\n\n sda: asking for cache data failed\n sda: assuming drive cache: write through\n failed line is expected from these controllers\n 0000:02:0e.0 RAID bus controller: Dell PowerEdge Expandable RAID controller 4 (rev 06)\n\nI think we're going to move system logs, temporary files and backup\nfiles from the wal-disks to the db-disks. Since our database aren't on\nfire for the moment, I suppose we'll wait moving the rest of the OS :-)\n\n",
"msg_date": "Fri, 20 Oct 2006 14:18:57 +0200",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Swappiness setting on a linux pg server"
}
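Since the thread subject is the swappiness setting itself, a quick sketch of inspecting and lowering it on a Linux box (the value 10 is just a commonly tried starting point, not a recommendation):

    cat /proc/sys/vm/swappiness                    # default is usually 60
    sysctl -w vm.swappiness=10                     # prefer keeping process pages over file cache
    echo "vm.swappiness = 10" >> /etc/sysctl.conf  # persist across reboots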
] |
[
{
"msg_contents": "What is the best COPY performance that you have gotten on a \"normal\" table?\n\nI know that this is question is almost too general, but it might help\nme out a bit, or at least give me the right things to tweak. Perhaps\nthe question can be rewritten as \"Where are the major bottlenecks in a\nCOPY?\" or \"How can I compute the max theoretical COPY performance for\nmy hardware?\". The two subquestions that I have from this are:\n -Are my ETL scripts (perl) maximizing the database COPY speeds?\n -Can I tweak my DB further to eek out a bit more performance?\n\nI'm using perl to ETL a decent sized data set (10 million records) and\nthen loading it through perl::DBI's copy. I am currently getting\nbetween 10K and 15K inserts/second. I've profiled the ETL scripts a\nbit and have performance-improved a lot of the code, but I'd like to\ndetermine whether it makes sense to try and further optimize my Perl\nor count it as \"done\" and look for improvements elsewhere.\n\nI ran trivial little insert into a table with a single integer row and\ncame close to 250K inserts/second using psql's \\copy, so I'm thinking\nthat my code could be optimized a bit more, but wanted to check around\nto see if that was the case.\n\nI am most interested in loading two tables, one with about 21 (small)\nVARCHARs where each record is about 200 bytes, and another with 7\nINTEGERs, 3 TIMESTAMPs, and 1 BYTEA where each record is about 350\nbytes.\n\nI have implemented most of the various bits of PG config advice that I\nhave seen, both here and with a some googling, such as:\n\n wal_buffers=128\n checkpoint_segments=128\n checkpoint_timeout=3000\n\nSoftware: PG 8.1.3 on RHEL 4.3 x86_64\nHardware: Quad Dual-core Opteron, Fibre Channel SAN with 256M BBC\n\nThanks!\n",
"msg_date": "Fri, 20 Oct 2006 16:05:33 -0400",
"msg_from": "\"Worky Workerson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Best COPY Performance"
},
{
"msg_contents": "On 10/21/06, Worky Workerson <[email protected]> wrote:\n> What is the best COPY performance that you have gotten on a \"normal\" table?\n>\n> I know that this is question is almost too general, but it might help\n> me out a bit, or at least give me the right things to tweak. Perhaps\n> the question can be rewritten as \"Where are the major bottlenecks in a\n> COPY?\" or \"How can I compute the max theoretical COPY performance for\n> my hardware?\". The two subquestions that I have from this are:\n> -Are my ETL scripts (perl) maximizing the database COPY speeds?\n> -Can I tweak my DB further to eek out a bit more performance?\n>\n> I'm using perl to ETL a decent sized data set (10 million records) and\n> then loading it through perl::DBI's copy. I am currently getting\n> between 10K and 15K inserts/second. I've profiled the ETL scripts a\n> bit and have performance-improved a lot of the code, but I'd like to\n> determine whether it makes sense to try and further optimize my Perl\n> or count it as \"done\" and look for improvements elsewhere.\n>\n> I ran trivial little insert into a table with a single integer row and\n> came close to 250K inserts/second using psql's \\copy, so I'm thinking\n> that my code could be optimized a bit more, but wanted to check around\n> to see if that was the case.\n>\n> I am most interested in loading two tables, one with about 21 (small)\n> VARCHARs where each record is about 200 bytes, and another with 7\n> INTEGERs, 3 TIMESTAMPs, and 1 BYTEA where each record is about 350\n> bytes.\n\nindexes/keys? more memory for sorting during index creation can have\na dramatic affect on bulk insert performance. check for pg_tmp\nfolders popping up during copy run.\n\n> I have implemented most of the various bits of PG config advice that I\n> have seen, both here and with a some googling, such as:\n>\n> wal_buffers=128\n> checkpoint_segments=128\n> checkpoint_timeout=3000\n>\n> Software: PG 8.1.3 on RHEL 4.3 x86_64\n> Hardware: Quad Dual-core Opteron, Fibre Channel SAN with 256M BBC\n\nfor table light on indexes, 10-15k for copy is pretty poor. you can\nget pretty close to that with raw inserts on good hardware. I would\nsuggest configuirng your perl script to read from stdin and write to\nstdout, and pipe it to psql using copy from stdin. then just\nbenchmark your perl script redirecting output to a file.\n\nmerlin\n",
"msg_date": "Sat, 21 Oct 2006 06:41:14 +0530",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best COPY Performance"
},
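A concrete sketch of the pipeline Merlin suggests, with the script, file and table names as placeholders: have the ETL script emit COPY-format rows on stdout, so the database can be taken in and out of the measurement:

    # ETL alone
    time ./etl.pl < raw_input.dat > /dev/null

    # ETL plus load in one pass
    time ./etl.pl < raw_input.dat | psql -c "COPY mytable FROM STDIN"

    # or capture the COPY text once and replay it to time the server side only
    ./etl.pl < raw_input.dat > copy.dat
    time psql -c "\copy mytable from copy.dat"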
{
"msg_contents": "Hi, Worky,\n\nWorky Workerson wrote:\n> I am currently getting\n> between 10K and 15K inserts/second. \n\n> I ran trivial little insert into a table with a single integer row and\n> came close to 250K inserts/second using psql's \\copy, so I'm thinking\n> that my code could be optimized a bit more, but wanted to check around\n> to see if that was the case.\n\nCould you COPY one of your tables out to disk via psql, and then COPY it\nback into the database, to reproduce this measurement with your real data?\n\nAlso, how much is the disk load, and CPU usage?\n\nAs long as psql is factor 20 better than your perl script, I think that\nthe perl interface is what should be optimized.\n\nOn a table with no indices, triggers and contstraints, we managed to\nCOPY about 7-8 megabytes/second with psql over our 100 MBit network, so\nhere the network was the bottleneck.\n\nYou should think about making your perl program writing the COPY\nstatement as text, and piping it into psql.\n\nHTH,\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in Europe! www.ffii.org\nwww.nosoftwarepatents.org\n",
"msg_date": "Mon, 23 Oct 2006 11:27:41 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "> > I am most interested in loading two tables, one with about 21 (small)\n> > VARCHARs where each record is about 200 bytes, and another with 7\n> > INTEGERs, 3 TIMESTAMPs, and 1 BYTEA where each record is about 350\n> > bytes.\n>\n> indexes/keys? more memory for sorting during index creation can have\n> a dramatic affect on bulk insert performance. check for pg_tmp\n> folders popping up during copy run.\n\nThe only index on load is a single IP4 btree primary key, which I\nfigure should function about like an INTEGER.\n\n> for table light on indexes, 10-15k for copy is pretty poor. you can\n> get pretty close to that with raw inserts on good hardware. I would\n> suggest configuirng your perl script to read from stdin and write to\n> stdout, and pipe it to psql using copy from stdin. then just\n> benchmark your perl script redirecting output to a file.\n\nSo simple and hadn't thought of that ... thanks. When I pre-create a\nCOPY file, I can load it at about 45K inserts/sec (file was 1.8GB or\n14.5 million records in 331 seconds), which looks like its about 5.5\nMB/s. I'm loading from a local 15K SCSI320 RAID10 (which also\ncontains the PG log files) to a 10K SCSI320 RAID10 on an FC SAN. Does\nthis look more consistent with \"decent\" performance, or should I go\nlooking into some hardware issues i.e. SAN configuration? I've\ncurrently got several hats including hardware/systems/security admin,\nas well as DBA and programmer, and my SAN setup skills could\ndefinitely use some more work.\n\nHardware aside, my perl can definitely use some work, and it seems to\nbe mostly the CSV stuff that I am using, mostly for convenience. I'll\nsee if I can't redo some of that to eliminate some CSV processing, or,\nbarring that, multithread the process to utilize more of the CPUs.\nPart of the reason that I hadn't used psql in the first place is that\nI'm loading the data into partitioned tables, and the loader keeps\nseveral COPY connections open at a time to load the data into the\nright table. I guess I could just as easily keep several psql pipes\nopen, but it seemed cleaner to go through DBI.\n",
"msg_date": "Mon, 23 Oct 2006 11:10:19 -0400",
"msg_from": "\"Worky Workerson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "Markus,\n\nOn 10/23/06 2:27 AM, \"Markus Schaber\" <[email protected]> wrote:\n\n> On a table with no indices, triggers and contstraints, we managed to\n> COPY about 7-8 megabytes/second with psql over our 100 MBit network, so\n> here the network was the bottleneck.\n\nWe routinely get 10-12MB/s on I/O hardware that can sustain a sequential\nwrite rate of 60+ MB/s with the WAL and data on the same disks.\n\nIt depends on a few things you might not consider, including the number and\ntype of columns in the table and the client and server encoding. The\nfastest results are with more columns in a table and when the client and\nserver encoding are the same.\n\n- Luke\n\n\n",
"msg_date": "Mon, 23 Oct 2006 08:11:00 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "Markus,\n\n> Could you COPY one of your tables out to disk via psql, and then COPY it\n> back into the database, to reproduce this measurement with your real data?\n\n$ psql -c \"COPY my_table TO STDOUT\" > my_data\n$ ls my_data\n2018792 edgescape_pg_load\n$ time cat my_data | psql -c \"COPY mytable FROM STDIN\"\nreal 5m43.194s\nuser 0m35.412s\nsys 0m9.567s\n\n> Also, how much is the disk load, and CPU usage?\n\n When I am loading via the perl (which I've established is a\nbottleneck), the one CPU core is at 99% for the perl and another is at\n30% for a postmaster, vs about 90% for the postmaster when going\nthrough psql.\n\nThe disk load is where I start to get a little fuzzy, as I haven't\nplayed with iostat to figure what is \"normal\". The local drives\ncontain PG_DATA as well as all the log files, but there is a\ntablespace on the FibreChannel SAN that contains the destination\ntable. The disk usage pattern that I see is that there is a ton of\nconsistent activity on the local disk, with iostat reporting an\naverage of 30K Blk_wrtn/s, which I assume is the log files. Every\nseveral seconds there is a massive burst of activity on the FC\npartition, to the tune of 250K Blk_wrtn/s.\n\n> On a table with no indices, triggers and contstraints, we managed to\n> COPY about 7-8 megabytes/second with psql over our 100 MBit network, so\n> here the network was the bottleneck.\n\nhmm, this makes me think that either my PG config is really lacking,\nor that the SAN is badly misconfigured, as I would expect it to\noutperform a 100Mb network. As it is, with a straight pipe to psql\nCOPY, I'm only working with a little over 5.5 MB/s. Could this be due\nto the primary key index updates?\n\nThanks!\n",
"msg_date": "Mon, 23 Oct 2006 11:40:57 -0400",
"msg_from": "\"Worky Workerson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Best COPY Performance"
},
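One way to answer the "could this be due to the primary key index" question above is to time the same COPY with and without it; dropping the constraint and rebuilding it after the load is often much cheaper than maintaining the index row by row. A sketch, assuming the key is a single ip column and the constraint name follows the usual table_pkey pattern (both assumptions):

    psql -c "ALTER TABLE mytable DROP CONSTRAINT mytable_pkey"
    time cat my_data | psql -c "COPY mytable FROM STDIN"
    time psql -c "ALTER TABLE mytable ADD PRIMARY KEY (ip)"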
{
"msg_contents": "On Mon, Oct 23, 2006 at 11:10:19AM -0400, Worky Workerson wrote:\n> >> I am most interested in loading two tables, one with about 21 (small)\n> >> VARCHARs where each record is about 200 bytes, and another with 7\n> >> INTEGERs, 3 TIMESTAMPs, and 1 BYTEA where each record is about 350\n> >> bytes.\n> >\n> >indexes/keys? more memory for sorting during index creation can have\n> >a dramatic affect on bulk insert performance. check for pg_tmp\n> >folders popping up during copy run.\n> \n> The only index on load is a single IP4 btree primary key, which I\n> figure should function about like an INTEGER.\n> \n> >for table light on indexes, 10-15k for copy is pretty poor. you can\n> >get pretty close to that with raw inserts on good hardware. I would\n> >suggest configuirng your perl script to read from stdin and write to\n> >stdout, and pipe it to psql using copy from stdin. then just\n> >benchmark your perl script redirecting output to a file.\n> \n> So simple and hadn't thought of that ... thanks. When I pre-create a\n> COPY file, I can load it at about 45K inserts/sec (file was 1.8GB or\n> 14.5 million records in 331 seconds), which looks like its about 5.5\n> MB/s. I'm loading from a local 15K SCSI320 RAID10 (which also\n> contains the PG log files) to a 10K SCSI320 RAID10 on an FC SAN. Does\n> this look more consistent with \"decent\" performance, or should I go\n> looking into some hardware issues i.e. SAN configuration? I've\n> currently got several hats including hardware/systems/security admin,\n> as well as DBA and programmer, and my SAN setup skills could\n> definitely use some more work.\n> \n> Hardware aside, my perl can definitely use some work, and it seems to\n> be mostly the CSV stuff that I am using, mostly for convenience. I'll\n> see if I can't redo some of that to eliminate some CSV processing, or,\n> barring that, multithread the process to utilize more of the CPUs.\n> Part of the reason that I hadn't used psql in the first place is that\n> I'm loading the data into partitioned tables, and the loader keeps\n> several COPY connections open at a time to load the data into the\n> right table. I guess I could just as easily keep several psql pipes\n> open, but it seemed cleaner to go through DBI.\n\nhttp://stats.distributed.net used to use a perl script to do some\ntransformations before loading data into the database. IIRC, when we\nswitched to using C we saw 100x improvement in speed, so I suspect that\nif you want performance perl isn't the way to go. I think you can\ncompile perl into C, so maybe that would help some.\n\nUltimately, you might be best of using triggers instead of rules for the\npartitioning since then you could use copy. Or go to raw insert commands\nthat are wrapped in a transaction.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Mon, 23 Oct 2006 16:59:13 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "\n> Ultimately, you might be best of using triggers instead of rules for the\n> partitioning since then you could use copy. Or go to raw insert commands\n> that are wrapped in a transaction.\n\nMy experience is that triggers are quite a bit faster than rules in any\nkind of partitioning that involves more than say 7 tables.\n\nSincerely,\n\nJoshua D. Drake\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Mon, 23 Oct 2006 15:10:33 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "Jim C. Nasby wrote:\n> http://stats.distributed.net used to use a perl script to do some\n> transformations before loading data into the database. IIRC, when we\n> switched to using C we saw 100x improvement in speed, so I suspect that\n> if you want performance perl isn't the way to go. I think you can\n> compile perl into C, so maybe that would help some.\n\nI use Perl extensively, and have never seen a performance problem. I suspect the perl-to-C \"100x improvement\" was due to some other factor, like a slight change in the schema, indexes, or the fundamental way the client (C vs Perl) handled the data during the transformation, or just plain bad Perl code.\n\nModern scripting languages like Perl and Python make programmers far, far more productive than the bad old days of C/C++. Don't shoot yourself in the foot by reverting to low-level languages like C/C++ until you've exhausted all other possibilities. I only use C/C++ for intricate scientific algorithms.\n\nIn many cases, Perl is *faster* than C/C++ code that I write, because I can't take the time (for example) to write the high-performance string manipulation that have been fine-tuned and extensively optimized in Perl.\n\nCraig\n\n",
"msg_date": "Mon, 23 Oct 2006 15:37:47 -0700",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "> http://stats.distributed.net used to use a perl script to do some\n> transformations before loading data into the database. IIRC, when we\n> switched to using C we saw 100x improvement in speed, so I suspect that\n> if you want performance perl isn't the way to go. I think you can\n> compile perl into C, so maybe that would help some.\n\nLike Craig mentioned, I have never seen those sorts of improvements\ngoing from perl->C, and developer efficiency is primo for me. I've\nprofiled most of the stuff, and have used XS modules and Inline::C on\nthe appropriate, often used functions, but I still think that it comes\ndown to my using CSV and Text::CSV_XS. Even though its XS, CSV is\nstill a pain in the ass.\n\n> Ultimately, you might be best of using triggers instead of rules for the\n> partitioning since then you could use copy. Or go to raw insert commands\n> that are wrapped in a transaction.\n\nEh, I've put the partition loading logic in the loader, which seems to\nwork out pretty well, especially since I keep things sorted and am the\nonly one inserting into the DB and do so with bulk loads. But I'll\nkeep this in mind for later use.\n\nThanks!\n",
"msg_date": "Tue, 24 Oct 2006 09:17:08 -0400",
"msg_from": "\"Worky Workerson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "On Mon, Oct 23, 2006 at 03:37:47PM -0700, Craig A. James wrote:\n> Jim C. Nasby wrote:\n> >http://stats.distributed.net used to use a perl script to do some\n> >transformations before loading data into the database. IIRC, when we\n> >switched to using C we saw 100x improvement in speed, so I suspect that\n> >if you want performance perl isn't the way to go. I think you can\n> >compile perl into C, so maybe that would help some.\n> \n> I use Perl extensively, and have never seen a performance problem. I \n> suspect the perl-to-C \"100x improvement\" was due to some other factor, like \n> a slight change in the schema, indexes, or the fundamental way the client \n> (C vs Perl) handled the data during the transformation, or just plain bad \n> Perl code.\n> \n> Modern scripting languages like Perl and Python make programmers far, far \n> more productive than the bad old days of C/C++. Don't shoot yourself in \n> the foot by reverting to low-level languages like C/C++ until you've \n> exhausted all other possibilities. I only use C/C++ for intricate \n> scientific algorithms.\n> \n> In many cases, Perl is *faster* than C/C++ code that I write, because I \n> can't take the time (for example) to write the high-performance string \n> manipulation that have been fine-tuned and extensively optimized in Perl.\n\nWell, the code is all at\nhttp://cvs.distributed.net/viewcvs.cgi/stats-proc/hourly/ (see logmod\ndirectory and logmod_*.pl). There have been changes made to the C code\nsince we changed over, but you can find the appropriate older versions\nin there. IIRC, nothing in the database changed when we went from perl\nto C (it's likely that was the *only* change that happened anywhere\naround that time).\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Tue, 24 Oct 2006 19:14:49 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "On Tue, Oct 24, 2006 at 09:17:08AM -0400, Worky Workerson wrote:\n> >http://stats.distributed.net used to use a perl script to do some\n> >transformations before loading data into the database. IIRC, when we\n> >switched to using C we saw 100x improvement in speed, so I suspect that\n> >if you want performance perl isn't the way to go. I think you can\n> >compile perl into C, so maybe that would help some.\n> \n> Like Craig mentioned, I have never seen those sorts of improvements\n> going from perl->C, and developer efficiency is primo for me. I've\n> profiled most of the stuff, and have used XS modules and Inline::C on\n> the appropriate, often used functions, but I still think that it comes\n> down to my using CSV and Text::CSV_XS. Even though its XS, CSV is\n> still a pain in the ass.\n> \n> >Ultimately, you might be best of using triggers instead of rules for the\n> >partitioning since then you could use copy. Or go to raw insert commands\n> >that are wrapped in a transaction.\n> \n> Eh, I've put the partition loading logic in the loader, which seems to\n> work out pretty well, especially since I keep things sorted and am the\n> only one inserting into the DB and do so with bulk loads. But I'll\n> keep this in mind for later use.\n\nWell, given that perl is using an entire CPU, it sounds like you should\nstart looking either at ways to remove some of the overhead from perl,\nor to split that perl into multiple processes.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Tue, 24 Oct 2006 19:17:10 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "Jim C. Nasby wrote:\n> Well, given that perl is using an entire CPU, it sounds like you should\n> start looking either at ways to remove some of the overhead from perl,\n> or to split that perl into multiple processes.\n\nI use Perl for big database copies (usually with some processing/transformation along the way) and I've never seen 100% CPU usage except for brief periods, even when copying BLOBS and such. My typical copy divides operations into blocks, for example doing\n\n N = 0\n while (more rows to go) {\n begin transaction\n select ... where primary_key > N order by primary_key limit 1000\n while (fetch a row)\n insert into ...\n N = (highest value found in last block)\n commit\n }\n\nDoing it like this in Perl should keep Postgres busy, with Perl using only moderate resources. If you're seeing high Perl CPU usage, I'd look first at the Perl code.\n\nCraig\n",
"msg_date": "Tue, 24 Oct 2006 22:36:04 -0700",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "On 10/25/06, Craig A. James <[email protected]> wrote:\n> Jim C. Nasby wrote:\n> > Well, given that perl is using an entire CPU, it sounds like you should\n> > start looking either at ways to remove some of the overhead from perl,\n> > or to split that perl into multiple processes.\n>\n> I use Perl for big database copies (usually with some processing/transformation along the\n> way) and I've never seen 100% CPU usage except for brief periods, even when copying\n> BLOBS and such. My typical copy divides operations into blocks, for example doing\n\nI'm just doing CSV style transformations (and calling a lot of\nfunctions along the way), but the end result is a straight bulk load\nof data into a blank database. And we've established that Postgres\ncan do *way* better than what I am seeing, so its not suprising that\nperl is using 100% of a CPU.\n\nHowever, I am still curious as to the rather slow COPYs from psql to\nlocal disks. Like I mentioned previously, I was only seeing about 5.7\nMB/s (1.8 GB / 330 seconds), where it seemed like others were doing\nsubstantially better. What sorts of things should I look into?\n\nThanks!\n",
"msg_date": "Wed, 25 Oct 2006 08:03:38 -0400",
"msg_from": "\"Worky Workerson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "On 23 Oct 2006, at 22:59, Jim C. Nasby wrote:\n> http://stats.distributed.net used to use a perl script to do some\n> transformations before loading data into the database. IIRC, when we\n> switched to using C we saw 100x improvement in speed, so I suspect \n> that\n> if you want performance perl isn't the way to go. I think you can\n> compile perl into C, so maybe that would help some.\n\n\nhttp://shootout.alioth.debian.org/gp4/benchmark.php? \ntest=all&lang=perl&lang2=gcc\n\n100x doesn't totally impossible if that is even vaguely accurate and \nyou happen to be using bits of Perl which are a lot slower than the C \nimplementation would be...\nThe slowest things appear to involve calling functions, all the \nslowest tests involve lots of function calls.\n",
"msg_date": "Wed, 25 Oct 2006 13:16:15 +0100",
"msg_from": "Alex Stapleton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "Hi, Worky,\n\nWorky Workerson wrote:\n\n> $ psql -c \"COPY my_table TO STDOUT\" > my_data\n> $ ls my_data\n> 2018792 edgescape_pg_load\n> $ time cat my_data | psql -c \"COPY mytable FROM STDIN\"\n> real 5m43.194s\n> user 0m35.412s\n> sys 0m9.567s\n\nThat's via PSQL, and you get about 5 MB/Sec.\n\n>> On a table with no indices, triggers and contstraints, we managed to\n>> COPY about 7-8 megabytes/second with psql over our 100 MBit network, so\n>> here the network was the bottleneck.\n> \n> hmm, this makes me think that either my PG config is really lacking,\n> or that the SAN is badly misconfigured, as I would expect it to\n> outperform a 100Mb network. As it is, with a straight pipe to psql\n> COPY, I'm only working with a little over 5.5 MB/s. Could this be due\n> to the primary key index updates?\n\nYes, index updates cause both CPU load, and random disk access (which is\nslow by nature).\n\n\nHTH,\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in Europe! www.ffii.org\nwww.nosoftwarepatents.org\n",
"msg_date": "Wed, 25 Oct 2006 14:18:52 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "On 10/23/06, Worky Workerson <[email protected]> wrote:\n> The disk load is where I start to get a little fuzzy, as I haven't\n> played with iostat to figure what is \"normal\". The local drives\n> contain PG_DATA as well as all the log files, but there is a\n> tablespace on the FibreChannel SAN that contains the destination\n> table. The disk usage pattern that I see is that there is a ton of\n> consistent activity on the local disk, with iostat reporting an\n> average of 30K Blk_wrtn/s, which I assume is the log files. Every\n> several seconds there is a massive burst of activity on the FC\n> partition, to the tune of 250K Blk_wrtn/s.\n>\n> > On a table with no indices, triggers and contstraints, we managed to\n> > COPY about 7-8 megabytes/second with psql over our 100 MBit network, so\n> > here the network was the bottleneck.\n\nI'm guessing the high bursts are checkpoints. Can you check your log\nfiles for pg and see if you are getting warnings about checkpoint\nfrequency? You can get some mileage here by increasing wal files.\n\nHave you determined that pg is not swapping? try upping maintenance_work_mem.\n\nWhat exactly is your architecture? is your database server direct\nattached to the san? if so, 2gb/4gb fc? what san? have you bonnie++\nthe san? basically, you can measure iowait to see if pg is waiting on\nyour disks.\n\nregarding perl, imo the language performance is really about which\nlibraries you use. the language itself is plenty fast.\n\nmerlin\n",
"msg_date": "Wed, 25 Oct 2006 09:20:02 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "Mr. Worky Workerson,\n\nOn 10/25/06 5:03 AM, \"Worky Workerson\" <[email protected]> wrote:\n\n> However, I am still curious as to the rather slow COPYs from psql to\n> local disks. Like I mentioned previously, I was only seeing about 5.7\n> MB/s (1.8 GB / 330 seconds), where it seemed like others were doing\n> substantially better. What sorts of things should I look into?\n\nIt's probable that you have a really poor performing disk configuration.\nJudging from earlier results, you may only be getting 3 x 5.7 = 17 MB/s of\nwrite performance to your disks, which is about 1/4 of a single disk drive.\n\nPlease run this test and report the time here:\n\n1) Calculate the size of 2x memory in 8KB blocks:\n # of blocks = 250,000 x memory_in_GB\n\nExample:\n 250,000 x 16GB = 4,000,000 blocks\n\n2) Benchmark the time taken to write 2x RAM sequentially to your disk:\n time bash -c \"dd if=/dev/zero of=bigfile bs=8k count=<# of blocks> &&\nsync\"\n\n3) Benchmark the time taken to read same:\n time dd if=bigfile of=/dev/null bs=8k\n\n- Luke\n\n\n",
"msg_date": "Wed, 25 Oct 2006 08:06:36 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "On Tue, Oct 24, 2006 at 10:36:04PM -0700, Craig A. James wrote:\n> Jim C. Nasby wrote:\n> >Well, given that perl is using an entire CPU, it sounds like you should\n> >start looking either at ways to remove some of the overhead from perl,\n> >or to split that perl into multiple processes.\n> \n> I use Perl for big database copies (usually with some \n> processing/transformation along the way) and I've never seen 100% CPU usage \n> except for brief periods, even when copying BLOBS and such. My typical \n> copy divides operations into blocks, for example doing\n> \n> N = 0\n> while (more rows to go) {\n> begin transaction\n> select ... where primary_key > N order by primary_key limit 1000\n> while (fetch a row)\n> insert into ...\n> N = (highest value found in last block)\n> commit\n> }\n> \n> Doing it like this in Perl should keep Postgres busy, with Perl using only \n> moderate resources. If you're seeing high Perl CPU usage, I'd look first \n> at the Perl code.\n\nWait... so you're using perl to copy data between two tables? And using\na cursor to boot? I can't think of any way that could be more\ninefficient...\n\nWhat's wrong with a plain old INSERT INTO ... SELECT? Or if you really\nneed to break it into multiple transaction blocks, at least don't\nshuffle the data from the database into perl and then back into the\ndatabase; do an INSERT INTO ... SELECT with that same where clause.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 25 Oct 2006 10:22:09 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "> I'm guessing the high bursts are checkpoints. Can you check your log\n> files for pg and see if you are getting warnings about checkpoint\n> frequency? You can get some mileage here by increasing wal files.\n\nNope, nothing in the log. I have set:\n wal_buffers=128\n checkpoint_segments=128\n checkpoint_timeout=3000\nwhich I thought was rather generous. Perhaps I should set it even\nhigher for the loads?\n\n> Have you determined that pg is not swapping? try upping maintenance_work_mem.\n\nmaintenance_work_mem = 524288 ... should I increase it even more?\nDoesn't look like pg is swapping ...\n\n> What exactly is your architecture? is your database server direct\n> attached to the san? if so, 2gb/4gb fc? what san? have you bonnie++\n> the san? basically, you can measure iowait to see if pg is waiting on\n> your disks.\n\nI'm currently running bonnie++ with the defaults ... should I change\nthe execution to better mimic Postgres' behavior?\n\nRHEL 4.3 x86_64\nHP DL585, 4 Dual Core Opteron 885s\n 16 GB RAM\n 2x300GB 10K SCSI320, RAID10\nHP MSA1000 SAN direct connected via single 2GB Fibre Channel Arbitrated Loop\n 10x300GB 10K SCSI320, RAID10\n",
"msg_date": "Wed, 25 Oct 2006 11:25:01 -0400",
"msg_from": "\"Worky Workerson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "On Wed, Oct 25, 2006 at 08:03:38AM -0400, Worky Workerson wrote:\n> I'm just doing CSV style transformations (and calling a lot of\n> functions along the way), but the end result is a straight bulk load\n> of data into a blank database. And we've established that Postgres\n> can do *way* better than what I am seeing, so its not suprising that\n> perl is using 100% of a CPU.\n\nIf you're loading into an empty database, there's a number of tricks\nthat will help you:\n\nTurn off fsync\nAdd constraints and indexes *after* you've loaded the data (best to add\nas much of them as possible on a per-table basis right after the table\nis loaded so that it's hopefully still in cache)\nCrank up maintenance_work_mem, especially for tables that won't fit into\ncache anyway\nBump up checkpoint segments and wal_buffers.\nDisable PITR\nCreate a table and load it's data in a single transaction (8.2 will\navoid writing any WAL data if you do this and PITR is turned off)\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 25 Oct 2006 10:28:13 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "On 10/25/06, Worky Workerson <[email protected]> wrote:\n> > I'm guessing the high bursts are checkpoints. Can you check your log\n> > files for pg and see if you are getting warnings about checkpoint\n> > frequency? You can get some mileage here by increasing wal files.\n>\n> Nope, nothing in the log. I have set:\n> wal_buffers=128\n> checkpoint_segments=128\n> checkpoint_timeout=3000\n> which I thought was rather generous. Perhaps I should set it even\n> higher for the loads?\n>\n> > Have you determined that pg is not swapping? try upping maintenance_work_mem.\n>\n> maintenance_work_mem = 524288 ... should I increase it even more?\n> Doesn't look like pg is swapping ...\n\nnah, you already addressed it. either pg is swapping or it isnt, and\ni'm guessing it isn't.\n\n> I'm currently running bonnie++ with the defaults ... should I change\n> the execution to better mimic Postgres' behavior?\n\njust post what you have...\n\n> RHEL 4.3 x86_64\n> HP DL585, 4 Dual Core Opteron 885s\n> 16 GB RAM\n> 2x300GB 10K SCSI320, RAID10\n> HP MSA1000 SAN direct connected via single 2GB Fibre Channel Arbitrated Loop\n> 10x300GB 10K SCSI320, RAID10\n\nin theory, with 10 10k disks in raid 10, you should be able to keep\nyour 2fc link saturated all the time unless your i/o is extremely\nrandom. random i/o is the wild card here, ideally you should see at\nleast 2000 seeks in bonnie...lets see what comes up.\n\nhopefully, bonnie will report close to 200 mb/sec. in extreme\nsequential cases, the 2fc link should be a bottleneck if the raid\ncontroller is doing its job.\n\nif you are having cpu issues, try breaking your process down to at\nleast 4 processes (you have quad dual core box after all)...thats a no\nbrainer.\n\nmerlin\n",
"msg_date": "Wed, 25 Oct 2006 11:38:46 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "Merlin,\n\nOn 10/25/06 8:38 AM, \"Merlin Moncure\" <[email protected]> wrote:\n\n> in theory, with 10 10k disks in raid 10, you should be able to keep\n> your 2fc link saturated all the time unless your i/o is extremely\n> random. random i/o is the wild card here, ideally you should see at\n> least 2000 seeks in bonnie...lets see what comes up.\n\nThe 2000 seeks/sec are irrelevant to Postgres with one user doing COPY.\nBecause the I/O is single threaded, you will get one disk worth of seeks for\none user, roughly 150/second on a 10K RPM drive.\n\nI suspect the problem here is the sequential I/O rate - let's wait and see\nwhat the dd test results look like.\n\n- Luke\n\n\n",
"msg_date": "Wed, 25 Oct 2006 09:34:01 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "Jim C. Nasby wrote:\n> Wait... so you're using perl to copy data between two tables? And using\n> a cursor to boot? I can't think of any way that could be more\n> inefficient...\n> \n> What's wrong with a plain old INSERT INTO ... SELECT? Or if you really\n> need to break it into multiple transaction blocks, at least don't\n> shuffle the data from the database into perl and then back into the\n> database; do an INSERT INTO ... SELECT with that same where clause.\n\nThe data are on two different computers, and I do processing of the data as it passes through the application. Otherwise, the INSERT INTO ... SELECT is my first choice.\n\nCraig\n",
"msg_date": "Wed, 25 Oct 2006 09:52:14 -0700",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "Merlin/Luke:\n\n> > in theory, with 10 10k disks in raid 10, you should be able to keep\n> > your 2fc link saturated all the time unless your i/o is extremely\n> > random. random i/o is the wild card here, ideally you should see at\n> > least 2000 seeks in bonnie...lets see what comes up.\n\n> I suspect the problem here is the sequential I/O rate - let's wait and see\n> what the dd test results look like.\n\nHere are the tests that you suggested that I do, on both the local\ndisks (WAL) and the SAN (tablespace). The random seeks seem to be far\nbelow what Merlin said was \"good\", so I am a bit concerned. There is\na bit of other activity on the box at the moment which is hard to\nstop, so that might have had an impact on the processing.\n\nHere is the bonnie++ output:\n\nVersion 1.03 ------Sequential Output------ --Sequential Input- --Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\n Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\nLocal Disks 31G 45119 85 56548 21 27527 8 35069 66 86506 13 499.6 1\nSAN 31G 53544 98 93385 35 18266 5 24970 47 57911 8 611.8 1\n\nAnd here are the dd results for 16GB RAM, i.e. 4,000,000 8K blocks:\n# Local Disks\n$ time bash -c \"dd if=/dev/zero of=/home/myhome/bigfile bs=8k\ncount=4000000 && sync\"\n4000000+0 records in\n4000000+0 records out\n\nreal 10m0.382s\nuser 0m1.117s\nsys 2m45.681s\n$ time dd if=/home/myhome/bigfile of=/dev/null bs=8k count=4000000\n4000000+0 records in\n4000000+0 records out\n\nreal 6m22.904s\nuser 0m0.717s\nsys 0m53.766s\n\n# Fibre Channel SAN\n$ time bash -c \"dd if=/dev/zero of=/data/test/bigfile bs=8k\ncount=4000000 && sync\"\n4000000+0 records in\n4000000+0 records out\n\nreal 5m58.846s\nuser 0m1.096s\nsys 2m18.026s\n$ time dd if=/data/test/bigfile of=/dev/null bs=8k count=4000000\n4000000+0 records in\n4000000+0 records out\n\nreal 14m9.560s\nuser 0m0.739s\nsys 0m53.806s\n",
"msg_date": "Wed, 25 Oct 2006 14:26:27 -0400",
"msg_from": "\"Worky Workerson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "Mr. Worky,\n\nOn 10/25/06 11:26 AM, \"Worky Workerson\" <[email protected]> wrote:\n\n> And here are the dd results for 16GB RAM, i.e. 4,000,000 8K blocks:\n\nSo, if we divide 32,000 MB by the real time, we get:\n\n/home (WAL):\n53 MB/s write\n84 MB/s read\n\n/data (data):\n89 MB/s write\n38 MB/s read\n\nThe write and read speeds on /home look like a single disk drive, which is\nnot good if you have more drives in a RAID. OTOH, it should be sufficient\nfor WAL writing and you should think that the COPY speed won't be limited by\nWAL.\n\nThe read speed on your /data volume is awful to the point where you should\nconsider it broken and find a fix. A quick comparison: the same number on a\n16 drive internal SATA array with 7200 RPM disks gets 950 MB/s read, about\n25 times faster for about 1/4 the price.\n\nBut again, this may not have anything to do with the speed of your COPY\nstatements.\n\nCan you provide about 10 seconds worth of \"vmstat 1\" while running your COPY\nso we can get a global view of the I/O and CPU?\n\n- Luke \n\n\n",
"msg_date": "Wed, 25 Oct 2006 13:13:25 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "On Wed, Oct 25, 2006 at 11:25:01AM -0400, Worky Workerson wrote:\n> >I'm guessing the high bursts are checkpoints. Can you check your log\n> >files for pg and see if you are getting warnings about checkpoint\n> >frequency? You can get some mileage here by increasing wal files.\n> \n> Nope, nothing in the log. I have set:\n> wal_buffers=128\n> checkpoint_segments=128\n> checkpoint_timeout=3000\n> which I thought was rather generous. Perhaps I should set it even\n> higher for the loads?\n\nBut depending on your shared_buffer and bgwriter settings (as well as\nhow much WAL traffic you're generating, you could still end up with big\nslugs of work to be done when checkpoints happen.\n\nIf you set checkpoint_warning to 3001, you'll see exactly when\ncheckpoints are happening, so you can determine if that's an issue.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 25 Oct 2006 22:34:48 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "I do have a dirty little secret, one which I wasn't completely aware\nof until a little while ago. Apparently, someone decided to install\nOracle on the server, and use the SAN as the primary tablespace, so\nthat might have something to do with the poor performance of the SAN.\nAt least, I'm hoping that's what is involved. I'll be able to tell\nfor sure when I can rerun the benchmarks on Monday without Oracle.\n\nThank you all for all your help so far. I've learned a ton about\nPostgres (and life!) from both this thread and this list in general,\nand the \"support\" has been nothing less than spectacular.\n\nI'm hoping that the corporate Oracle machine won't shut down my pg\nprojects. On total side note, if anyone knows how to best limit\nOracle's impact on a system (i.e. memory usage, etc), I'd be\ninterested.\n\nI hate shared DB servers.\n\nThanks!\n",
"msg_date": "Fri, 27 Oct 2006 14:39:57 -0400",
"msg_from": "\"Worky Workerson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "On 10/27/06, Worky Workerson <[email protected]> wrote:\n> I'm hoping that the corporate Oracle machine won't shut down my pg\n> projects. On total side note, if anyone knows how to best limit\n> Oracle's impact on a system (i.e. memory usage, etc), I'd be\n> interested.\n\n\nrm -rf /usr/local/oracle?\n\nmerlin\n",
"msg_date": "Fri, 27 Oct 2006 15:01:51 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "> The read speed on your /data volume is awful to the point where you should\n> consider it broken and find a fix. A quick comparison: the same number on a\n> 16 drive internal SATA array with 7200 RPM disks gets 950 MB/s read, about\n> 25 times faster for about 1/4 the price.\n\nI'm hoping that the poor performance is a result of Oracle doing\nrandom reads while I try and do the sequential read. If not (I'll\ntest on Monday), I'll start looking other places. Any immediate\nsuggestions?\n\n> Can you provide about 10 seconds worth of \"vmstat 1\" while running your COPY\n> so we can get a global view of the I/O and CPU?\n\nHere it is, taken from a spot about halfway through a 'cat file |\npsql' load, with the \"Oracle-is-installed-and-running\" caveat:\n\nr b swpd free buff cache si so bi bo in cs us sy id wa\n1 0 345732 29328 770980 12947212 0 0 20 16552 1223 3677 12 2 85 1\n1 0 345732 29840 770520 12946924 0 0 20 29244 1283 2955 11 2 85 1\n1 0 345732 32144 770560 12944436 0 0 12 16436 1204 2936 11 2 86 1\n1 0 345732 33744 770464 12942764 0 0 20 16460 1189 2005 10 2 86 1\n2 0 345732 32656 770140 12943972 0 0 16 7068 1057 3434 13 2 85 0\n1 0 345732 34832 770184 12941820 0 0 20 9368 1170 3120 11 2 86 1\n1 0 345732 36528 770228 12939804 0 0 16 32668 1297 2109 11 2 85 1\n1 0 345732 29304 770272 12946764 0 0 16 16428 1192 3105 12 2 85 1\n1 0 345732 30840 770060 12945480 0 0 20 16456 1196 3151 12 2 84 1\n1 0 345732 32760 769972 12943528 0 0 12 16460 1185 3103 11 2 86 1\n",
"msg_date": "Fri, 27 Oct 2006 15:08:32 -0400",
"msg_from": "\"Worky Workerson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "Worky (that your real name? :-)\n\n\nOn 10/27/06 12:08 PM, \"Worky Workerson\" <[email protected]> wrote:\n\n> Here it is, taken from a spot about halfway through a 'cat file |\n> psql' load, with the \"Oracle-is-installed-and-running\" caveat:\n> \n> r b swpd free buff cache si so bi bo in cs us sy id wa\n> 1 0 345732 29328 770980 12947212 0 0 20 16552 1223 3677 12 2 85 1\n> 1 0 345732 29840 770520 12946924 0 0 20 29244 1283 2955 11 2 85 1\n> 1 0 345732 32144 770560 12944436 0 0 12 16436 1204 2936 11 2 86 1\n> 1 0 345732 33744 770464 12942764 0 0 20 16460 1189 2005 10 2 86 1\n> 2 0 345732 32656 770140 12943972 0 0 16 7068 1057 3434 13 2 85 0\n> 1 0 345732 34832 770184 12941820 0 0 20 9368 1170 3120 11 2 86 1\n> 1 0 345732 36528 770228 12939804 0 0 16 32668 1297 2109 11 2 85 1\n> 1 0 345732 29304 770272 12946764 0 0 16 16428 1192 3105 12 2 85 1\n> 1 0 345732 30840 770060 12945480 0 0 20 16456 1196 3151 12 2 84 1\n> 1 0 345732 32760 769972 12943528 0 0 12 16460 1185 3103 11 2 86 1\n\nIt doesn't look like there's anything else running - the runnable \"r\" is\nabout 1. Your \"bo\" blocks output rate is about 16MB/s, so divide by 3 and\nyou're about in range with your 5MB/s COPY rate. The interesting thing is\nthat the I/O wait is pretty low.\n\nHow many CPUs on the machine? Can you send the result of \"cat\n/proc/cpuinfo\"?\n\nIs your \"cat file | psql\" being done on the DBMS server or is it on the\nnetwork?\n\n- Luke \n\n\n",
"msg_date": "Fri, 27 Oct 2006 18:00:46 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "On 10/28/06, Luke Lonergan <[email protected]> wrote:\n> Worky (that your real name? :-)\n>\n>\n> On 10/27/06 12:08 PM, \"Worky Workerson\" <[email protected]> wrote:\n>\n> > Here it is, taken from a spot about halfway through a 'cat file |\n> > psql' load, with the \"Oracle-is-installed-and-running\" caveat:\n> >\n> > r b swpd free buff cache si so bi bo in cs us sy id wa\n> > 1 0 345732 29328 770980 12947212 0 0 20 16552 1223 3677 12 2 85 1\n> > 1 0 345732 29840 770520 12946924 0 0 20 29244 1283 2955 11 2 85 1\n> > 1 0 345732 32144 770560 12944436 0 0 12 16436 1204 2936 11 2 86 1\n> > 1 0 345732 33744 770464 12942764 0 0 20 16460 1189 2005 10 2 86 1\n> > 2 0 345732 32656 770140 12943972 0 0 16 7068 1057 3434 13 2 85 0\n> > 1 0 345732 34832 770184 12941820 0 0 20 9368 1170 3120 11 2 86 1\n> > 1 0 345732 36528 770228 12939804 0 0 16 32668 1297 2109 11 2 85 1\n> > 1 0 345732 29304 770272 12946764 0 0 16 16428 1192 3105 12 2 85 1\n> > 1 0 345732 30840 770060 12945480 0 0 20 16456 1196 3151 12 2 84 1\n> > 1 0 345732 32760 769972 12943528 0 0 12 16460 1185 3103 11 2 86 1\n>\n> It doesn't look like there's anything else running - the runnable \"r\" is\n> about 1. Your \"bo\" blocks output rate is about 16MB/s, so divide by 3 and\n> you're about in range with your 5MB/s COPY rate. The interesting thing is\n> that the I/O wait is pretty low.\n>\n> How many CPUs on the machine? Can you send the result of \"cat\n> /proc/cpuinfo\"?\n>\n> Is your \"cat file | psql\" being done on the DBMS server or is it on the\n> network?\n\niirc, he is running quad opteron 885 (8 cores), so if my math is\ncorrect he can split up his process for an easy gain.\n\nmerlin\n",
"msg_date": "Sat, 28 Oct 2006 08:42:26 +0530",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "> Worky (that your real name? :-)\n\nNope, its Mike. worky.workerson is just the email that I use for \"work\" :)\n\n> How many CPUs on the machine? Can you send the result of \"cat\n> /proc/cpuinfo\"?\n\nNot at work at the moment, however I do have quad dual-core opterons,\nlike Merlin mentioned.\n\n> Is your \"cat file | psql\" being done on the DBMS server or is it on the\n> network?\n\nEverything is currently being done on the same DB server. The data\nthat is being loaded is on the same 2-disk \"RAID10\" as the OS and WAL.\n",
"msg_date": "Fri, 27 Oct 2006 23:43:25 -0400",
"msg_from": "\"Worky Workerson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "On 10/27/06, Merlin Moncure <[email protected]> wrote:\n> > > r b swpd free buff cache si so bi bo in cs us sy id wa\n> > > 1 0 345732 29328 770980 12947212 0 0 20 16552 1223 3677 12 2 85 1\n> > > 1 0 345732 29840 770520 12946924 0 0 20 29244 1283 2955 11 2 85 1\n> > > 1 0 345732 32144 770560 12944436 0 0 12 16436 1204 2936 11 2 86 1\n> > > 1 0 345732 33744 770464 12942764 0 0 20 16460 1189 2005 10 2 86 1\n> > > 2 0 345732 32656 770140 12943972 0 0 16 7068 1057 3434 13 2 85 0\n> > > 1 0 345732 34832 770184 12941820 0 0 20 9368 1170 3120 11 2 86 1\n> > > 1 0 345732 36528 770228 12939804 0 0 16 32668 1297 2109 11 2 85 1\n> > > 1 0 345732 29304 770272 12946764 0 0 16 16428 1192 3105 12 2 85 1\n> > > 1 0 345732 30840 770060 12945480 0 0 20 16456 1196 3151 12 2 84 1\n> > > 1 0 345732 32760 769972 12943528 0 0 12 16460 1185 3103 11 2 86 1\n> >\n> > It doesn't look like there's anything else running - the runnable \"r\" is\n> > about 1. Your \"bo\" blocks output rate is about 16MB/s, so divide by 3 and\n> > you're about in range with your 5MB/s COPY rate. The interesting thing is\n> > that the I/O wait is pretty low.\n> >\n> > How many CPUs on the machine? Can you send the result of \"cat\n> > /proc/cpuinfo\"?\n>\n> iirc, he is running quad opteron 885 (8 cores), so if my math is\n> correct he can split up his process for an easy gain.\n\nAre you saying that I should be able to issue multiple COPY commands\nbecause my I/O wait is low? I was under the impression that I am I/O\nbound, so multiple simeoultaneous loads would have a detrimental\neffect ...\n",
"msg_date": "Fri, 27 Oct 2006 23:47:06 -0400",
"msg_from": "\"Worky Workerson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "Worky,\n\nOn 10/27/06 8:47 PM, \"Worky Workerson\" <[email protected]> wrote:\n\n>>>> 1 0 345732 29304 770272 12946764 0 0 16 16428 1192 3105 12 2 85 1\n>>>> 1 0 345732 30840 770060 12945480 0 0 20 16456 1196 3151 12 2 84 1\n>>>> 1 0 345732 32760 769972 12943528 0 0 12 16460 1185 3103 11 2 86 1\n>> \n>> iirc, he is running quad opteron 885 (8 cores), so if my math is\n>> correct he can split up his process for an easy gain.\n> \n> Are you saying that I should be able to issue multiple COPY commands\n> because my I/O wait is low? I was under the impression that I am I/O\n> bound, so multiple simeoultaneous loads would have a detrimental\n> effect ...\n\nThe reason I asked how many CPUs was to make sense of the 12% usr CPU time\nin the above. That means you are CPU bound and are fully using one CPU. So\nyou aren't being limited by the I/O in this case, it's the CPU.\n\nI agree with Merlin that you can speed things up by breaking the file up.\nAlternately you can use the OSS Bizgres java loader, which lets you specify\nthe number of I/O threads with the \"-n\" option on a single file.\n\nOTOH, you should find that you will only double your COPY speed with this\napproach because your write speed as you previously found was limited to 30\nMB/s.\n\nFor now, you could simply split the file in two pieces and load two copies\nat once, then watch the same \"vmstat 1\" for 10 seconds and look at your \"bo\"\nrate.\n\nIf this does speed things up, you really should check out the Bizgres Java\nloader.\n\nThe other thing to wonder about though is why you are so CPU bound at 5\nMB/s. What version of Postgres is this?\n\n- Luke \n\n\n",
"msg_date": "Fri, 27 Oct 2006 21:07:18 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "> > Are you saying that I should be able to issue multiple COPY commands\n> > because my I/O wait is low? I was under the impression that I am I/O\n> > bound, so multiple simeoultaneous loads would have a detrimental\n> > effect ...\n>\n> The reason I asked how many CPUs was to make sense of the 12% usr CPU time\n> in the above. That means you are CPU bound and are fully using one CPU. So\n> you aren't being limited by the I/O in this case, it's the CPU.\n>\n> I agree with Merlin that you can speed things up by breaking the file up.\n> Alternately you can use the OSS Bizgres java loader, which lets you specify\n> the number of I/O threads with the \"-n\" option on a single file.\n\nThanks, I'll try that on Monday.\n\n> The other thing to wonder about though is why you are so CPU bound at 5\n> MB/s. What version of Postgres is this?\n\nI was wondering about that as well, and the only thing that I can\nthink of is that its the PK btree index creation on the IP4.\n\nPG 8.1.3 x86_64. I installed it via a RH rpm for their \"Web Services\nBeta\", or something like that. I know I'm a bit behind the times, but\ngetting stuff in (and out) of my isolated lab is a bit of a pain.\nI'll compile up a 8.2 beta as well and see how that works out.\n",
"msg_date": "Sat, 28 Oct 2006 08:03:10 -0400",
"msg_from": "\"Michael Artz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "Michael (aka Worky),\n\nOn 10/28/06 5:03 AM, \"Michael Artz\" <[email protected]> wrote:\n\n> PG 8.1.3 x86_64. I installed it via a RH rpm for their \"Web Services\n> Beta\", or something like that. I know I'm a bit behind the times, but\n> getting stuff in (and out) of my isolated lab is a bit of a pain.\n> I'll compile up a 8.2 beta as well and see how that works out.\n\nI think 8.1 and 8.2 should be the same in this regard.\n\nMaybe it is just the PK *build* that slows it down, but I just tried some\nsmall scale experiments on my MacBook Pro laptop (which has the same disk\nperformance as your server) and I get only a 10-15% slowdown from having a\nPK on an integer column. The 10-15% slowdown was on 8.1.5 MPP, so it used\nboth CPUs to build the index and load at about 15 MB/s.\n\nNote that the primary key is the first column.\n\n Table \"public.part\"\n Column | Type | Modifiers\n---------------+-----------------------+-----------\n p_partkey | integer | not null\n p_name | character varying(55) | not null\n p_mfgr | text | not null\n p_brand | text | not null\n p_type | character varying(25) | not null\n p_size | integer | not null\n p_container | text | not null\n p_retailprice | double precision | not null\n p_comment | character varying(23) | not null\nIndexes:\n \"part_pkey\" PRIMARY KEY, btree (p_partkey)\n\nWhat is your schema for the table?\n\n- Luke\n\n\n",
"msg_date": "Sat, 28 Oct 2006 08:41:55 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "> > And here are the dd results for 16GB RAM, i.e. 4,000,000 8K blocks:\n>\n> So, if we divide 32,000 MB by the real time, we get:\n> /data (data):\n> 89 MB/s write\n> 38 MB/s read\n... snip ...\n> The read speed on your /data volume is awful to the point where you should\n> consider it broken and find a fix. A quick comparison: the same number on a\n> 16 drive internal SATA array with 7200 RPM disks gets 950 MB/s read, about\n> 25 times faster for about 1/4 the price.\n\nI managed to get approval to shut down the Oracle instance and reran\nthe dd's on the SAN (/data) and came up with about 60MB/s write (I had\nmissed the 'sync' in the previous runs) and about 58 MB/s read, still\nno comparison on your SATA arrary. Any recommendations on what to\nlook at to find a fix? One thing which I never mentioned was that I\nam using ext3 mounted with noatime,data=writeback.\n\nAn interesting note (at least to me) is the inverse relationship\nbetween free memory and bo when writing with dd, i.e:\n\n$ vmstat 5\nr b swpd free buff cache si so bi bo in cs us sy id wa\n0 3 244664 320688 23588 15383120 0 0 0 28 1145 197 0 1 74 25\n2 6 244664 349488 22276 15204980 0 0 0 24 1137 188 0 1 75 25\n2 6 244664 28264 23024 15526552 0 0 0 65102 1152 335 0 12 60 28\n2 4 244664 28968 23588 15383120 0 0 1 384386 1134 372 0 19 34 47\n1 5 244664 28840 23768 15215728 0 0 1 438482 1144 494 0 24 33 43\n0 5 247256 41320 20144 15212788 0 524 0 57062 1142 388 0 6 43 51\n1 6 247256 29096 19588 15226788 0 0 5 60999 1140 391 0 15 42 43\n\nIs this because of the kernel attempting to cache the file in memory?\n",
"msg_date": "Tue, 31 Oct 2006 16:11:00 -0400",
"msg_from": "\"Worky Workerson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "Worky (!),\n\nOn 10/31/06 12:11 PM, \"Worky Workerson\" <[email protected]> wrote:\n\n> Any recommendations on what to\n> look at to find a fix? One thing which I never mentioned was that I\n> am using ext3 mounted with noatime,data=writeback.\n\nYou can try setting the max readahead like this:\n /sbin/blockdev --setra 16384 /dev/sd[a-z]\n\nIt will set the max readahead to 16MB for whatever devices are in\n/dev/sd[a-z] for the booted machine. You'd need to put the line(s) in\n/etc/rc.d/rc.local to have the setting persist on reboot.\n\n- Luke\n\n\n",
"msg_date": "Tue, 31 Oct 2006 12:16:24 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "> >I'm guessing the high bursts are checkpoints. Can you check your log\n> > >files for pg and see if you are getting warnings about checkpoint\n> > >frequency? You can get some mileage here by increasing wal files.\n> >\n> > Nope, nothing in the log. I have set:\n> > wal_buffers=128\n> > checkpoint_segments=128\n> > checkpoint_timeout=3000\n> > which I thought was rather generous. Perhaps I should set it even\n> > higher for the loads?\n>\n> But depending on your shared_buffer and bgwriter settings (as well as\n> how much WAL traffic you're generating, you could still end up with big\n> slugs of work to be done when checkpoints happen.\n>\n> If you set checkpoint_warning to 3001, you'll see exactly when\n> checkpoints are happening, so you can determine if that's an issue.\n\nI did that, and I get two log messages, seeing checkpoints happening\nat 316 and 147 seconds apart on my load of a 1.9 GB file. Is this \"an\nissue\"?\n\nThanks!\n",
"msg_date": "Tue, 31 Oct 2006 16:22:11 -0400",
"msg_from": "\"Worky Workerson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "> Maybe it is just the PK *build* that slows it down, but I just tried some\n> small scale experiments on my MacBook Pro laptop (which has the same disk\n> performance as your server) and I get only a 10-15% slowdown from having a\n> PK on an integer column. The 10-15% slowdown was on 8.1.5 MPP, so it used\n> both CPUs to build the index and load at about 15 MB/s.\n...snip...\n> What is your schema for the table?\n\nA single IP4 PK and 21 VARCHARs. It takes about 340 seconds to load a\n1.9GB file with the PK index, and about 230 seconds without it (ALTER\nTABLE mytable DROP CONSTRAINT mytable_pkey), which is a pretty\nsignificant (~30%) savings. If I read the vmstat output correctly\n(i.e. the cpu us column), I'm still at 12% and thus still cpu-bound,\nexcept for when the checkpoint occurs, i.e (everything is chugging\nalong similar to the first line, then stuff gets wonky):\n\nr b swpd free buff cache si so bi bo in cs us sy id wa\n2 0 279028 4620040 717940 9697664 0 0 0 19735 1242 7534 13 4 82 1\n1 2 279028 4476120 718120 9776840 0 0 0 2225483 1354 5269 13 6 71 11\n0 3 279028 4412928 718320 9866672 0 0 2 19746 1324 3978 10 2 69 18\n1 1 279028 4334112 718528 9971456 0 0 0 20615 1311 5912 10 3 69 18\n0 1 279028 4279904 718608 9995244 0 0 0 134946 1205 674 1 3 85 11\n0 2 279028 4307344 718616 9995304 0 0 0 54 1132 247 0 1 77 22\n1 0 279028 7411104 718768 6933860 0 0 0 9942 1148 3618 11 6 80 3\n1 0 279028 7329312 718964 7015536 0 0 1 19766 1232 5108 13 2 84 1\n\nAlso, as a semi-side note, I only have a single checkpoint without the\nindex, while I have 2 with the index.\n",
"msg_date": "Tue, 31 Oct 2006 16:45:16 -0400",
"msg_from": "\"Worky Workerson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "On Tuesday 31 October 2006 21:11, Worky Workerson wrote:\n> One thing which I never mentioned was that I\n> am using ext3 mounted with noatime,data=writeback.\n\nYou might also want to try with data=ordered. I have noticed that \nnowadays it seems to be a bit faster, but not much. I don't know why, \nmaybe it has got more optimization efforts.\n\nTeemu\n",
"msg_date": "Tue, 31 Oct 2006 22:03:57 +0100",
"msg_from": "Teemu Torma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "> >>>> 1 0 345732 29304 770272 12946764 0 0 16 16428 1192 3105 12 2 85 1\n> >>>> 1 0 345732 30840 770060 12945480 0 0 20 16456 1196 3151 12 2 84 1\n> >>>> 1 0 345732 32760 769972 12943528 0 0 12 16460 1185 3103 11 2 86 1\n> >>\n> >> iirc, he is running quad opteron 885 (8 cores), so if my math is\n> >> correct he can split up his process for an easy gain.\n> >\n> > Are you saying that I should be able to issue multiple COPY commands\n> > because my I/O wait is low? I was under the impression that I am I/O\n> > bound, so multiple simeoultaneous loads would have a detrimental\n> > effect ...\n>\n> The reason I asked how many CPUs was to make sense of the 12% usr CPU time\n> in the above. That means you are CPU bound and are fully using one CPU. So\n> you aren't being limited by the I/O in this case, it's the CPU.\n... snip ...\n> For now, you could simply split the file in two pieces and load two copies\n> at once, then watch the same \"vmstat 1\" for 10 seconds and look at your \"bo\"\n> rate.\n\nSignificantly higher on average, and a parallel loads were ~30% faster\nthat a single with index builds (240s vs 340s) and about ~45% (150s vs\n230s) without the PK index. I'll definitely look into the bizgres\njava loader.\n\nThanks!\n",
"msg_date": "Tue, 31 Oct 2006 17:13:59 -0400",
"msg_from": "\"Worky Workerson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "For the third time today, our server has crashed, or frozen, actually something in between. Normally there are about 30-50 connections because of mod_perl processes that keep connections open. After the crash, there are three processes remaining:\n\n# ps -ef | grep postgres\npostgres 23832 1 0 Nov11 pts/1 00:02:53 /usr/local/pgsql/bin/postmaster -D /postgres/main\npostgres 1200 23832 20 14:28 pts/1 00:58:14 postgres: pubchem pubchem 66.226.76.106(58882) SELECT\npostgres 4190 23832 25 14:33 pts/1 01:09:12 postgres: asinex asinex 66.226.76.106(56298) SELECT\n\nBut they're not doing anything: No CPU time consumed, no I/O going on, no progress. If I try to connect with psql(1), it says:\n\n psql: FATAL: the database system is in recovery mode\n\nAnd the server log has:\n\nLOG: background writer process (PID 23874) was terminated by signal 9\nLOG: terminating any other active server processes\nLOG: statistics collector process (PID 23875) was terminated by signal 9\nWARNING: terminating connection because of crash of another server process\nDETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited ab\nnormally and possibly corrupted shared memory.\nHINT: In a moment you should be able to reconnect to the database and repeat your command.\nWARNING: terminating connection because of crash of another server process\nDETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited ab\n... repeats about 50 times, one per process.\n\nQuestions:\n 1. Any idea what happened and how I can avoid this? It's a *big* problem.\n 2. Why didn't the database recover? Why are there two processes\n that couldn't be killed?\n 3. Where did the \"signal 9\" come from? (Nobody but me ever logs\n in to the server machine.)\n\nHelp!\n\nThanks,\nCraig\n\n",
"msg_date": "Wed, 15 Nov 2006 18:20:24 -0800",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Postgres server crash"
},
{
"msg_contents": "Craig A. James wrote:\n> For the third time today, our server has crashed, or frozen, actually \n> something in between. Normally there are about 30-50 connections \n> because of mod_perl processes that keep connections open. After the \n> crash, there are three processes remaining:\n>\n> # ps -ef | grep postgres\n> postgres 23832 1 0 Nov11 pts/1 00:02:53 \n> /usr/local/pgsql/bin/postmaster -D /postgres/main\n> postgres 1200 23832 20 14:28 pts/1 00:58:14 postgres: pubchem \n> pubchem 66.226.76.106(58882) SELECT\n> postgres 4190 23832 25 14:33 pts/1 01:09:12 postgres: asinex \n> asinex 66.226.76.106(56298) SELECT\n>\n> But they're not doing anything: No CPU time consumed, no I/O going on, \n> no progress. If I try to connect with psql(1), it says:\n>\n> psql: FATAL: the database system is in recovery mode\n>\n> And the server log has:\n>\n> LOG: background writer process (PID 23874) was terminated by signal 9\n> LOG: terminating any other active server processes\n> LOG: statistics collector process (PID 23875) was terminated by signal 9\n> WARNING: terminating connection because of crash of another server \n> process\n> DETAIL: The postmaster has commanded this server process to roll back \n> the current transaction and exit, because another server process \n> exited ab\n> normally and possibly corrupted shared memory.\n> HINT: In a moment you should be able to reconnect to the database and \n> repeat your command.\n> WARNING: terminating connection because of crash of another server \n> process\n> DETAIL: The postmaster has commanded this server process to roll back \n> the current transaction and exit, because another server process \n> exited ab\n> ... repeats about 50 times, one per process.\n>\n> Questions:\n> 1. Any idea what happened and how I can avoid this? It's a *big* \n> problem.\n> 2. Why didn't the database recover? Why are there two processes\n> that couldn't be killed?\n> 3. Where did the \"signal 9\" come from? (Nobody but me ever logs\n> in to the server machine.)\n>\nI would guess it's the linux OOM if you are running linux. You need to \nturn off killing of processes when you run out of memory. Are you \ngetting close to running out of memory?\n\n> Help!\n>\n> Thanks,\n> Craig\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n>\n\n",
"msg_date": "Thu, 16 Nov 2006 13:28:29 +1100",
"msg_from": "Russell Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres server crash"
},
{
"msg_contents": "Russell Smith wrote:\n> Craig A. James wrote:\n>> Questions:\n>> 1. Any idea what happened and how I can avoid this? It's a *big* \n>> problem.\n>> 2. Why didn't the database recover? Why are there two processes\n>> that couldn't be killed?\n\nI'm guessing it didn't recover *because* there were two processes that \ncouldn't be killed. Responsibility for that falls to the \noperating-system. I've seen it most often with faulty drivers or \nhardware that's being communicated with/written to. However, see below.\n\n>> 3. Where did the \"signal 9\" come from? (Nobody but me ever logs\n>> in to the server machine.)\n>>\n> I would guess it's the linux OOM if you are running linux. \n\nIf not, it means the server is hacked or haunted. Something outside PG \nissued a kill -9 and the OOM killer is the prime suspect I'd say.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Thu, 16 Nov 2006 10:48:45 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres server crash"
},
{
"msg_contents": "On 11/15/06, Craig A. James <[email protected]> wrote:\n\n> Questions:\n> 1. Any idea what happened and how I can avoid this? It's a *big* problem.\n> 2. Why didn't the database recover? Why are there two processes\n> that couldn't be killed?\n> 3. Where did the \"signal 9\" come from? (Nobody but me ever logs\n> in to the server machine.)\n\nhow much memory is in the box? maybe vmstat will give you some clues\nabout what is going on leading up to the crash...a spike in swap for\nexample. Maybe, if you can budget for it, query logging might give\nsome clues also. Try increasing swap.\n\nmerlin\n",
"msg_date": "Thu, 16 Nov 2006 09:01:28 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres server crash"
},
{
"msg_contents": "Russell Smith wrote:\n>> For the third time today, our server has crashed...\n>\n> I would guess it's the linux OOM if you are running linux. You need to \n> turn off killing of processes when you run out of memory. Are you \n> getting close to running out of memory?\n\nGood suggestion, it was a memory leak in an add-on library that we plug in to the Postgres server.\n\nOOM? Can you give me a quick pointer to what this acronym stands for and how I can reconfigure it? It sounds like a \"feature\" old UNIX systems like SGI IRIX had, where the system would allocate virtual memory that it didn't really have, then kill your process if you tried to use it. I.e. malloc() would never return NULL even if swap space was over allocated. Is this what you're talking about? Having this enabled on a server is deadly for reliability.\n\nThanks,\nCraig\n\n",
"msg_date": "Thu, 16 Nov 2006 09:00:46 -0800",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres server crash"
},
{
"msg_contents": "Craig A. James wrote:\n> Russell Smith wrote:\n>>> For the third time today, our server has crashed...\n>>\n>> I would guess it's the linux OOM if you are running linux. You need to \n>> turn off killing of processes when you run out of memory. Are you \n>> getting close to running out of memory?\n> \n> Good suggestion, it was a memory leak in an add-on library that we plug \n> in to the Postgres server.\n> \n> OOM? Can you give me a quick pointer to what this acronym stands for\n> and how I can reconfigure it? \n\nOut Of Memory\n\n > It sounds like a \"feature\" old UNIX\n> systems like SGI IRIX had, where the system would allocate virtual \n> memory that it didn't really have, then kill your process if you tried \n> to use it. \n\nThat's it.\n\n > I.e. malloc() would never return NULL even if swap space was\n> over allocated. Is this what you're talking about? Having this enabled \n> on a server is deadly for reliability.\n\nIndeed. See the manuals for details. Section 16.4.3\n\nhttp://www.postgresql.org/docs/8.1/static/kernel-resources.html#AEN18128\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Thu, 16 Nov 2006 17:09:45 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres server crash"
},
{
"msg_contents": "By the way, in spite of my questions and concerns, I was *very* impressed by the recovery process. I know it might seem like old hat to you guys to watch the WAL in action, and I know on a theoretical level it's supposed to work, but watching it recover 150 separate databases, and find and fix a couple of problems was very impressive. It gives me great confidence that I made the right choice to use Postgres.\n\nRichard Huxton wrote:\n>>> 2. Why didn't the database recover? Why are there two processes\n>>> that couldn't be killed?\n> \n> I'm guessing it didn't recover *because* there were two processes that \n> couldn't be killed. Responsibility for that falls to the \n> operating-system. I've seen it most often with faulty drivers or \n> hardware that's being communicated with/written to. However, see below.\n\nIt can't be a coincidence that these were the only two processes in a SELECT operation. Does the server disable signals at critical points?\n\nI'd make a wild guess that this is some sort of deadlock problem -- these two servers have disabled signals for a critical section of SELECT, and are waiting for something from the postmaster, but postmaster is dead.\n\nThis is an ordinary system, no hardware problems, stock RH FC3 kernel, stock PG 8.1.4, with 4 GB memory, and at the moment the database is running on a single SATA disk. I'm worried that a production server can get into a state that requires manual intervention to recover.\n\nCraig\n",
"msg_date": "Thu, 16 Nov 2006 09:15:54 -0800",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres server crash"
},
{
"msg_contents": "Craig A. James wrote:\n> By the way, in spite of my questions and concerns, I was *very* \n> impressed by the recovery process. I know it might seem like old hat to \n> you guys to watch the WAL in action, and I know on a theoretical level \n> it's supposed to work, but watching it recover 150 separate databases, \n> and find and fix a couple of problems was very impressive. It gives me \n> great confidence that I made the right choice to use Postgres.\n> \n> Richard Huxton wrote:\n>>>> 2. Why didn't the database recover? Why are there two processes\n>>>> that couldn't be killed?\n>>\n>> I'm guessing it didn't recover *because* there were two processes that \n>> couldn't be killed. Responsibility for that falls to the \n>> operating-system. I've seen it most often with faulty drivers or \n>> hardware that's being communicated with/written to. However, see below.\n> \n> It can't be a coincidence that these were the only two processes in a \n> SELECT operation. Does the server disable signals at critical points?\n\nIf a \"kill -9\" as root doesn't get rid of them, I think I'm right in \nsaying that it's a kernel-level problem rather than something else.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Thu, 16 Nov 2006 17:29:58 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres server crash"
},
{
"msg_contents": "Richard Huxton <[email protected]> writes:\n> Craig A. James wrote:\n>> It can't be a coincidence that these were the only two processes in a \n>> SELECT operation. Does the server disable signals at critical points?\n\n> If a \"kill -9\" as root doesn't get rid of them, I think I'm right in \n> saying that it's a kernel-level problem rather than something else.\n\nI didn't actually see Craig say anywhere that he'd tried \"kill -9\" on\nthose backends. If he did and it didn't do anything, then clearly they\nwere in some kind of uninterruptable wait, which is certainly evidence\nof a kernel or hardware issue. If he didn't do \"kill -9\" then we don't\nreally know what the issue was --- it's not improbable that they were\nstuck in some loop not containing a CHECK_FOR_INTERRUPTS() test.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Nov 2006 13:10:31 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres server crash "
},
{
"msg_contents": "\"Craig A. James\" <[email protected]> writes:\n> OOM? Can you give me a quick pointer to what this acronym stands for\n> and how I can reconfigure it?\n\nSee \"Linux Memory Overcommit\" at\nhttp://www.postgresql.org/docs/8.1/static/kernel-resources.html#AEN18128\nor try googling for \"OOM kill\" for non-Postgres-specific coverage.\n\n> It sounds like a \"feature\" old UNIX\n> systems like SGI IRIX had, where the system would allocate virtual\n> memory that it didn't really have, then kill your process if you tried\n> to use it. I.e. malloc() would never return NULL even if swap space\n> was over allocated. Is this what you're talking about? Having this\n> enabled on a server is deadly for reliability. \n\nNo kidding :-(. The default behavior in Linux is extremely unfortunate.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Nov 2006 13:14:01 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres server crash "
},
{
"msg_contents": "Craig A. James wrote:\n> Richard Huxton wrote:\n>> If a \"kill -9\" as root doesn't get rid of them, I think I'm right in \n>> saying that it's a kernel-level problem rather than something else.\n> \n> Sorry I didn't clarify that. \"kill -9\" did kill them. Other signals \n> did not. It wasn't until I manually intervened with the \"kill -9\" that \n> the system began the recovery process.\n\nAh, when you said \"unkillable\" in the first msg I jumped to the wrong \nconclusion.\n\nIn that case, see Tom's answer.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Thu, 16 Nov 2006 18:22:10 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres server crash"
},
{
"msg_contents": "OOM stands for \"Out Of Memory\" and it does indeed seem to be the same as \nwhat IRIX had. I believe you can turn the feature off and also configure \nits overcomitment by setting something in /proc/..... and unfortunately, I \ndon't remember more than that.\n\nOn Thu, 16 Nov 2006, Craig A. James wrote:\n\n> OOM? Can you give me a quick pointer to what this acronym stands for and how \n> I can reconfigure it? It sounds like a \"feature\" old UNIX systems like SGI \n> IRIX had, where the system would allocate virtual memory that it didn't \n> really have, then kill your process if you tried to use it. I.e. malloc() \n> would never return NULL even if swap space was over allocated. Is this what \n> you're talking about? Having this enabled on a server is deadly for \n> reliability.\n>\n> Thanks,\n> Craig\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n",
"msg_date": "Thu, 16 Nov 2006 13:00:23 -0800 (PST)",
"msg_from": "Ben <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres server crash"
},
{
"msg_contents": "\n\nOn Thu, 16 Nov 2006, Tom Lane wrote:\n>\n> \"Craig A. James\" <[email protected]> writes:\n> > OOM? Can you give me a quick pointer to what this acronym stands for\n> > and how I can reconfigure it?\n>\n> See \"Linux Memory Overcommit\" at\n> http://www.postgresql.org/docs/8.1/static/kernel-resources.html#AEN18128\n> or try googling for \"OOM kill\" for non-Postgres-specific coverage.\n\nI did that - spent about two f-ing hours looking for what I wanted. (Guess\nI entered poor choices for my searches. -frown- ) There are a LOT of\narticles that TALK ABOUT OOM, but prescious few actually tell you what you\ncan do about it.\n\nTrying to save you some time:\n\nOn linux you can use the sysctl utility to muck with vm.overcommit_memory;\nYou can disable the \"feature.\"\n\nGoogle _that_ for more info!\n\n>\n> > It sounds like a \"feature\" old UNIX\n> > systems like SGI IRIX had, where the system would allocate virtual\n> > memory that it didn't really have, then kill your process if you tried\n> > to use it. I.e. malloc() would never return NULL even if swap space\n> > was over allocated. Is this what you're talking about? Having this\n> > enabled on a server is deadly for reliability.\n>\n> No kidding :-(. The default behavior in Linux is extremely unfortunate.\n>\n> \t\t\tregards, tom lane\n\nThat's a major understatement.\n\nThe reason I spent a couple of hours looking for what I could learn on\nthis is that I've been absolutely beside myself on this \"extremely\nunfortunate\" \"feature.\" I had a badly behaving app (but didn't know which\napp it was), so Linux would kill lots of things, like, oh, say, inetd.\nGood luck sshing into the box. You just had to suffer with pushing the\ndamned reset button... It must have taken at least a week before figuring\nout what not to do. (What I couldn't/can't understand is why the system\nwouldn't just refuse the bad app the memory when it was short - no, you've\nhad enough!)\n\n<soapbox> ...I read a large number of articles on this subject and am\nabsolutely dumbfounded by the -ahem- idiots who think killing a random\nprocess is an appropriate action. I'm just taking their word for it that\nthere's some kind of impossibility of the existing Linux kernel not\ngetting itself into a potentially hung situation because it didn't save\nitself any memory. Frankly, if it takes a complete kernel rewrite to fix\nthe problem that the damned operating system can't manage its own needs,\nthen the kernel needs to be rewritten! </soapbox>\n\nThese kernel hackers could learn something from VAX/VMS.\n\nRichard\n\n-- \nRichard Troy, Chief Scientist\nScience Tools Corporation\n510-924-1363 or 202-747-1263\[email protected], http://ScienceTools.com/\n\n",
"msg_date": "Sat, 18 Nov 2006 17:28:46 -0800 (PST)",
"msg_from": "Richard Troy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres server crash "
},
{
"msg_contents": "Richard Troy wrote:\n> I did that - spent about two f-ing hours looking for what I wanted. (Guess\n> I entered poor choices for my searches. -frown- ) There are a LOT of\n> articles that TALK ABOUT OOM, but prescious few actually tell you what you\n> can do about it.\n> \n> Trying to save you some time:\n> \n> On linux you can use the sysctl utility to muck with vm.overcommit_memory;\n> You can disable the \"feature.\"\n> \n> Google _that_ for more info!\n\n\nHere's something I found googling for \"memory overcommitment\"+linux\n\n http://archives.neohapsis.com/archives/postfix/2000-04/0512.html\n\n From /usr/src/linux/Documentation/sysctl/vm.txt\n\n \"overcommit_memory:\n\n This value contains a flag that enables memory overcommitment.\n When this flag is 0, the kernel checks before each malloc()\n to see if there's enough memory left. If the flag is nonzero,\n the system pretends there's always enough memory.\"\n\n This flag is settable in /proc/sys/vm\n\nLo and behold, here it is on my system:\n\n $ cat /proc/sys/vm/overcommit_memory\n 0\n $ cat /proc/sys/vm/overcommit_ratio \n 50\n\nCraig\n",
"msg_date": "Sat, 18 Nov 2006 22:43:14 -0800",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres server crash"
},
{
"msg_contents": "\"Craig A. James\" <[email protected]> writes:\n> Here's something I found googling for \"memory overcommitment\"+linux\n> http://archives.neohapsis.com/archives/postfix/2000-04/0512.html\n\nThat might have been right when it was written (note the reference to a\n2.2 Linux kernel), but it's 100% wrong now. 0 is the default, not-safe\nsetting.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 19 Nov 2006 01:51:05 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres server crash "
},
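To make the overcommit discussion above concrete: on a 2.6-era Linux kernel the setting lives in /proc/sys/vm and can be switched to strict accounting, which keeps the OOM killer away from the postmaster. A minimal sketch, run as root; exact semantics vary by kernel version, and the paths shown are the usual defaults rather than anything verified against a specific distribution:

    # 0 = heuristic overcommit (the unsafe default), 2 = strict accounting
    cat /proc/sys/vm/overcommit_memory
    cat /proc/sys/vm/overcommit_ratio    # with mode 2, commit limit = swap + ratio% of RAM

    sysctl -w vm.overcommit_memory=2                       # change it immediately
    echo "vm.overcommit_memory = 2" >> /etc/sysctl.conf    # keep it after a reboot

With strict accounting in place, an over-large allocation fails with a malloc error (which PostgreSQL reports as "out of memory") instead of some random process being killed with signal 9.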
{
"msg_contents": "Tom Lane wrote:\n> \"Craig A. James\" <[email protected]> writes:\n>> Here's something I found googling for \"memory overcommitment\"+linux\n>> http://archives.neohapsis.com/archives/postfix/2000-04/0512.html\n> \n> That might have been right when it was written (note the reference to a\n> 2.2 Linux kernel), but it's 100% wrong now. 0 is the default, not-safe\n> setting.\n\nIs this something we could detect and show a warning for at initdb\ntime; or perhaps log a warning in the log files at postmaster startup?\nThis dangerous choice of settings seems to catch quite a few postgresql\nusers.\n",
"msg_date": "Sun, 19 Nov 2006 09:07:27 -0800",
"msg_from": "Ron Mayer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres server crash"
},
{
"msg_contents": "On Sat, Nov 18, 2006 at 05:28:46PM -0800, Richard Troy wrote:\n>On linux you can use the sysctl utility to muck with vm.overcommit_memory;\n>You can disable the \"feature.\"\n\nBe aware that there's are \"reasons\" the \"feature\" exists before you \n\"cast\" \"aspersions\" and \"quote marks\" all over the place, and the \n\"reasons\" are \"things\" like \"shared memory\" and \"copy on write\" \nmemory, which are \"generally\" condsidered \"good things\".\n\nAt one point someone complained about the ability to configure, e.g., \nIRIX to allow memory overcommit. I worked on some large IRIX \ninstallations where full memory accounting would have required on the \norder of 100s of gigabytes of swap, due to large shared memory \nallocations. If the swap had been configured rather than allowing \novercommit, sure it might have reduced the chance of an application \nallocating memory that it couldn't use. In practice, if you're 100s of \ngigs into swap the system isn't going to be useable anyway.\n\n>itself any memory. Frankly, if it takes a complete kernel rewrite to fix\n>the problem that the damned operating system can't manage its own needs,\n>then the kernel needs to be rewritten! </soapbox>\n>\n>These kernel hackers could learn something from VAX/VMS.\n\nLike how to make a slower system? You can configure a VM to operate \nthe way they did 25 years ago, but in general people have preferred to \nget better performance (*much* better performance) instead. It could be \nthat they're all idiots, or it could be that in practice the problem \nisn't as bad as you make it out to be. (Maybe other people are just \nbetter at configuring their memory usage?)\n\nMike Stone\n",
"msg_date": "Sun, 19 Nov 2006 13:41:07 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres server crash"
},
{
"msg_contents": "> (Maybe other people are just \n> better at configuring their memory usage?)\n\nI don't mean to hijack the thread, but I am interested in learning the science behind configuring\nmemory usage. A lot of the docs that I have found on this subject speak in terms of generalities\nand rules of thumb. Are there any resources that can show how to tune the kernels parameters to\nmake use of all of the available memory and at the same time allow all services to get along?\n\n From what I've read it seems that tuning kernel parameters is more of an art rather than a\nscience, since most recommendations boil down to experiences based on trial and error.\n\nRegards,\n\nRichard Broersma Jr.\n\n",
"msg_date": "Sun, 19 Nov 2006 12:42:45 -0800 (PST)",
"msg_from": "Richard Broersma Jr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres server crash"
},
{
"msg_contents": "On Sun, Nov 19, 2006 at 12:42:45PM -0800, Richard Broersma Jr wrote:\n>I don't mean to hijack the thread, but I am interested in learning the science behind configuring\n>memory usage. \n\nThere isn't one. You need experience, and an awareness of your \nparticular requirements. If it were easy, it would be autotuned. :)\n\nMike Stone\n",
"msg_date": "Sun, 19 Nov 2006 16:24:54 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres server crash"
},
{
"msg_contents": "Michael Stone wrote:\n> At one point someone complained about the ability to configure, e.g., \n> IRIX to allow memory overcommit. I worked on some large IRIX \n> installations where full memory accounting would have required on the \n> order of 100s of gigabytes of swap, due to large shared memory \n> allocations.\n\nThese were mostly scientific and graphical apps where reliability took a back seat to performance and to program complexity. They would allocate 100's of GB of swap space rather than taking the time to design proper data structures. If the program crashed every week or two, no big deal -- just run it again. Overallocating memory is a valuable technique for such applications.\n\nBut overallocating memory has no place in a server environment. When memory overcommittment is allowed, it is impossible to write a reliable application, because no matter how carefully and correctly you craft your code, someone else's program that leaks memory like Elmer Fudd's rowboat after his shotgun goes off, can kill your well-written application.\n\nInstalling Postgres on such a system makes Postgres unreliable.\n\nTom Lane wrote:\n> That might have been right when it was written (note the reference to a\n> 2.2 Linux kernel), but it's 100% wrong now. \n> [Setting /proc/sys/vm/overcommit_memory to] 0 is the default, not-safe\n> setting.\n\nI'm surprised that the Linux kernel people take such a uncritical view of reliability that they set, as *default*, a feature that makes Linux an unreliable platform for servers.\n\nAnd speaking of SGI, this very issue was among the things that sank the company. As the low-end graphics cards ate into their visualization market, they tried to become an Oracle Server platform. Their servers were *fast*. But they crashed -- a lot. And memory-overcommit was one of the reasons. IRIX admins would brag that their systems only crashed every couple of weeks. I had HP and Sun systems that would run for years.\n\nCraig\n",
"msg_date": "Sun, 19 Nov 2006 14:12:01 -0800",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres server crash"
},
{
"msg_contents": "On Sun, Nov 19, 2006 at 02:12:01PM -0800, Craig A. James wrote:\n>And speaking of SGI, this very issue was among the things that sank the \n>company. As the low-end graphics cards ate into their visualization \n>market, they tried to become an Oracle Server platform. Their servers were \n>*fast*. But they crashed -- a lot. And memory-overcommit was one of the \n>reasons. \n\nYou realize that it had to be turned on explicitly on IRIX, right? But \ndon't let facts get in the way of a good rant...\n\nMike Stone\n",
"msg_date": "Sun, 19 Nov 2006 17:32:38 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres server crash"
},
{
"msg_contents": "> You realize that it had to be turned on explicitly on IRIX, right? But \n> don't let facts get in the way of a good rant...\n\nOn the contrary, with Irix 4 and earlier it was the default, but it caused so many problems that SGI switched the default to OFF in IRIX 5. But because it had been available for so long, many important apps had come to rely on it, so most sites had to immediately re-enable virtual swap on every IRIX 5 server that came in. Admins just got used to doing it, so it became a \"default\" at most sites, and admins often couldn't be convinced to disable it for database server machines, because \"That's our standard for IRIX configuration.\"\n\nI worked at a big molecular modeling/visualization company; our visualization programs *required* virtual swap, and our server programs *prohibited* virtual swap. Imagine how our sales people felt about that, telling customers that they'd have to buy two $30,000 machines just because of one kernel parameter. Of course, they didn't, and our server apps took the heat as being \"unreliable.\"\n\nSGI called it \"virtual swap\" which I always thought was a hoot. You have virtual memory, which is really your swap space, and then virtual swap, which is some kind of dark hole...\n\nCraig\n",
"msg_date": "Sun, 19 Nov 2006 17:00:23 -0800",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres server crash"
},
{
"msg_contents": "On Sun, Nov 19, 2006 at 14:12:01 -0800,\n \"Craig A. James\" <[email protected]> wrote:\n> \n> These were mostly scientific and graphical apps where reliability took a \n> back seat to performance and to program complexity. They would allocate \n> 100's of GB of swap space rather than taking the time to design proper data \n> structures. If the program crashed every week or two, no big deal -- just \n> run it again. Overallocating memory is a valuable technique for such \n> applications.\n\nI don't think the above applies generally. Programmers need to be aware of\nthe working set of CPU bound apps. If the program is constantly paging,\nthe performance is going to be abysmal.\n",
"msg_date": "Sun, 19 Nov 2006 19:59:52 -0600",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres server crash"
},
{
"msg_contents": "We're getting off-topic for this group except that this is *why* we're plagued with this problem, so I'll make one more observation.\n\nBruno Wolff III wrote:\n>> They would allocate \n>> 100's of GB of swap space rather than taking the time to design proper data \n>> structures. If the program crashed every week or two, no big deal -- just \n>> run it again. Overallocating memory is a valuable technique for such \n>> applications.\n> \n> I don't think the above applies generally. Programmers need to be aware of\n> the working set of CPU bound apps. If the program is constantly paging,\n> the performance is going to be abysmal.\n\nYou're doing planetary number crunching, so you allocate an (x,y,z) space with 10^6 points on every axis, or 10^18 points, for roughly 2^60 64-bit floating-point numbers, or 2^68 bytes. But your space is mostly empty (just like real space). Where there's a planet or star, you're actually using memory, everywhere else, the memory is never referenced. So you're using just an ordinary amount of memory, but you can access your planet's data by simply referencing a three-dimensional array in FORTRAN. Just allow infinite overcommit, let the OS do the rest, and it actually works. A lot of molecular modeling works this way too.\n\nThis is how these applications were/are actually written. I'm not defending the method, just point out that it's real and it's one of the reasons virtual swap was invented.\n\nCraig\n",
"msg_date": "Sun, 19 Nov 2006 22:42:07 -0800",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres server crash"
},
{
"msg_contents": "if [ `cat /proc/sys/vm/overcommit_memory` == 0 ]; then\n echo \"WARNING: Watch out for the oom-killer! Consider echo 2 > \n/proc/sys/vm/overcommit_memory\"\nfi\n\n----- Original Message ----- \nFrom: \"Ron Mayer\" <[email protected]>\nTo: <[email protected]>\nSent: Sunday, November 19, 2006 6:07 PM\nSubject: Re: Postgres server crash\n\n\n> Tom Lane wrote:\n>> \"Craig A. James\" <[email protected]> writes:\n>>> Here's something I found googling for \"memory overcommitment\"+linux\n>>> http://archives.neohapsis.com/archives/postfix/2000-04/0512.html\n>>\n>> That might have been right when it was written (note the reference to a\n>> 2.2 Linux kernel), but it's 100% wrong now. 0 is the default, not-safe\n>> setting.\n>\n> Is this something we could detect and show a warning for at initdb\n> time; or perhaps log a warning in the log files at postmaster startup?\n> This dangerous choice of settings seems to catch quite a few postgresql\n> users.\n> \n\n",
"msg_date": "Mon, 20 Nov 2006 23:36:06 +0100",
"msg_from": "\"Mattias Kregert\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres server crash"
},
{
"msg_contents": "Hi, Richard,\n\nRichard Troy wrote:\n\n> The reason I spent a couple of hours looking for what I could learn on\n> this is that I've been absolutely beside myself on this \"extremely\n> unfortunate\" \"feature.\" I had a badly behaving app (but didn't know which\n> app it was), so Linux would kill lots of things, like, oh, say, inetd.\n\nActually, AFAICT, Linux tries to kill the process(es) that use the most\nmemory ressources first.\n\nWithout overcommitment, the OOM killer won't kick in, and as long as the\nhoggy applications don't actually exit when malloc fails, they will just\nstay around, sitting on their memory.\n\n> Good luck sshing into the box.\n\nSSH'ing into the box will even get worse without overcommitment. When\nthe machine is stuck, the sshd that tries to spawn its child will get\nthe out of memory signal, and you won't be able to log in.\n\nHTH,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in Europe! www.ffii.org\nwww.nosoftwarepatents.org\n",
"msg_date": "Tue, 21 Nov 2006 14:22:40 +0100",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres server crash"
},
{
"msg_contents": "On Sat, Nov 18, 2006 at 05:28:46PM -0800, Richard Troy wrote:\n> <soapbox> ...I read a large number of articles on this subject and am\n> absolutely dumbfounded by the -ahem- idiots who think killing a random\n> process is an appropriate action. I'm just taking their word for it that\n> there's some kind of impossibility of the existing Linux kernel not\n> getting itself into a potentially hung situation because it didn't save\n> itself any memory. Frankly, if it takes a complete kernel rewrite to fix\n> the problem that the damned operating system can't manage its own needs,\n> then the kernel needs to be rewritten! </soapbox>\n> \n> These kernel hackers could learn something from VAX/VMS.\n\nWhat's interesting is that apparently FreeBSD also has overcommit (and\nIIRC no way to disable it), yet I never hear people going off on OOM\nkills in FreeBSD. My theory is that FreeBSD admins are smart enough to\ndedicate a decent amount of swap space, so that by the time you got to\nan OOM kill situation you'd be so far into swapping that the box would\nbe nearly unusable. Many linux 'admins' think it's ok to save a few GB\nof disk space by allocating a small amount of swap (or none at all), and\n*kaboom*.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Sun, 26 Nov 2006 17:41:02 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres server crash"
},
{
"msg_contents": "On Sun, Nov 26, 2006 at 05:41:02PM -0600, Jim C. Nasby wrote:\n>What's interesting is that apparently FreeBSD also has overcommit (and\n>IIRC no way to disable it), yet I never hear people going off on OOM\n>kills in FreeBSD. \n\nCould just be that nobody is using FreeBSD. <ducks>\n\nSeriously, though, there are so many linux installations out there that \nyou're statistically more likely to see corner cases. FWIW, I know a \nwhole lotta linux machines that have never had OOM-killer problems. \nRemember, though, that anecodotal evidence isn't.\n\nMike Stone\n",
"msg_date": "Mon, 27 Nov 2006 08:20:22 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres server crash"
},
{
"msg_contents": "* Jim C. Nasby:\n\n> What's interesting is that apparently FreeBSD also has overcommit (and\n> IIRC no way to disable it), yet I never hear people going off on OOM\n> kills in FreeBSD. My theory is that FreeBSD admins are smart enough to\n> dedicate a decent amount of swap space, so that by the time you got to\n> an OOM kill situation you'd be so far into swapping that the box would\n> be nearly unusable.\n\nI've seen OOM situations with our regular \"use twice as much swap\nspace as there is RAM in the machine\" rule. Perhaps Linux is just\npaging too fast for this to work. 8-P\n\nBy the way, the sysctl setting breaks some applications and\nprogramming environments, such as SBCL and CMUCL.\n",
"msg_date": "Mon, 27 Nov 2006 19:11:57 +0100",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres server crash"
}
] |
[
{
"msg_contents": "Question: I have a big table with 120,000,000 records.\n\nLet's assume that I DELETE 4,000,000 records, VACUUM FULL, and REINDEX.\n\n\nNow I have the same table, but with 240,000,000 records.\n\nI DELETE 8,000,000 records, VACUUM FULL, and REINDEX.\n\n\nShould the second operation with twice the data take twice the time as the first?\n\n\n\n\n\n\nVACUUM Performance\n\n\n\nQuestion: I have a big table with 120,000,000 records.\n\nLet's assume that I DELETE 4,000,000 records, VACUUM FULL, and REINDEX.\n\n\nNow I have the same table, but with 240,000,000 records.\n\nI DELETE 8,000,000 records, VACUUM FULL, and REINDEX.\n\n\nShould the second operation with twice the data take twice the time as the first?",
"msg_date": "Fri, 20 Oct 2006 14:42:30 -0700",
"msg_from": "\"Steve Oualline\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "VACUUM Performance"
},
{
"msg_contents": "\"Steve Oualline\" <[email protected]> writes:\n> Question: I have a big table with 120,000,000 records.\n> Let's assume that I DELETE 4,000,000 records, VACUUM FULL, and REINDEX.\n> Now I have the same table, but with 240,000,000 records.\n> I DELETE 8,000,000 records, VACUUM FULL, and REINDEX.\n> Should the second operation with twice the data take twice the time as =\n> the first?\n\nAt least. If you intend to reindex all the indexes, consider instead\ndoing\n\tDROP INDEX(es)\n\tVACUUM FULL\n\tre-create indexes\nas this avoids the very large amount of effort that VACUUM FULL puts\ninto index maintenance --- effort that's utterly wasted if you then\nreindex.\n\nCLUSTER and some forms of ALTER TABLE can accomplish a table rewrite\nwith less hassle than the above, although strictly speaking they violate\nMVCC by discarding recently-dead tuples.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 21 Oct 2006 01:02:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM Performance "
}
] |
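To spell out the drop-indexes-first approach suggested above, a minimal sketch -- the table and index names here are made up, so adjust them to the real schema:

    DELETE FROM big_table WHERE <condition>;

    DROP INDEX big_table_idx;        -- drop each index so VACUUM FULL skips index maintenance
    VACUUM FULL big_table;           -- compact the heap
    CREATE INDEX big_table_idx ON big_table (some_column);   -- rebuild from scratch
    ANALYZE big_table;               -- refresh planner statistics

VACUUM FULL cannot run inside a transaction block, so the steps are issued as separate statements; as noted above, expect the total time to grow at least linearly with the amount of live data that has to be moved and re-indexed.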
[
{
"msg_contents": "Our Windows-based db server has to integrate with users that work regularily \nwith Access.When attempting to import user's data from Access MDB files to \nPostgreSQL, we try on eof two things: either import using EMS SQL Manager's \nData Import from Access utility, or export from Access to Postgresql via an \nodbc-based connectionin both cases, the performance is just awful. \nPerformance with Tcl's native postgres driver seems rather fine running from \nWindows a Windows client, BTW.\n\nODBC is often blamed for this sort of thing - I have the 8.01.02 release \ndated 2006.01.31. Everything appears to be at its default setting.\n\nIs this the reason for the rather depressing performance fromt/to access and \ncan anything be done about it?\n\nCarlo \n\n\n",
"msg_date": "Fri, 20 Oct 2006 18:19:42 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is ODBC that slow?"
},
{
"msg_contents": "On 10/21/06, Carlo Stonebanks <[email protected]> wrote:\n> Our Windows-based db server has to integrate with users that work regularily\n> with Access.When attempting to import user's data from Access MDB files to\n> PostgreSQL, we try on eof two things: either import using EMS SQL Manager's\n> Data Import from Access utility, or export from Access to Postgresql via an\n> odbc-based connectionin both cases, the performance is just awful.\n> Performance with Tcl's native postgres driver seems rather fine running from\n> Windows a Windows client, BTW.\n>\n> ODBC is often blamed for this sort of thing - I have the 8.01.02 release\n> dated 2006.01.31. Everything appears to be at its default setting.\n>\n> Is this the reason for the rather depressing performance fromt/to access and\n> can anything be done about it?\n\ni suspect the problem might be access...the odbc driver now uses libpq\nlibrary over postgresql. first thing to do is to monitor what hundred\nsql statements access decides to write when you want to, say, look up\na record. the results might suprise you! one gotcha that pops up now\nand then is that odbc clients somtimes experience wierd delays in\ncertain configurations. afaik this has never been solved.\n\n1. turn on full statement logging (log_statement='all'). i prefer to\nredirect everything to pg_log and rotate daily, with a month or so of\nlog files going back. turning on log_duration helps.\n\n2. tail the log, do random things in access and watch the fireworks.\nif nothing odd is really going on, then you may have an odbc issue.\nmore than likely though, access is generating wacky sql. solution in\nthis case is to code around that in access.\n\nmerlin\n",
"msg_date": "Sat, 21 Oct 2006 06:34:15 +0530",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is ODBC that slow?"
},
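A rough illustration of the logging setup described above, using 8.1-era postgresql.conf parameters; the file name pattern and rotation interval are only an assumption for the sketch:

    log_statement = 'all'      # log every statement the ODBC/Access client sends
    log_duration = on          # log how long each statement took
    redirect_stderr = on       # collect the log in files under the data directory
    log_directory = 'pg_log'
    log_filename = 'postgresql-%Y-%m-%d.log'
    log_rotation_age = 1440    # rotate daily (value is in minutes)

These settings take effect on a 'pg_ctl reload' (no restart needed); then tail the newest file in pg_log while exercising Access and look for floods of per-row queries or unexpected statement shapes.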
{
"msg_contents": "Carlo Stonebanks wrote:\n> Our Windows-based db server has to integrate with users that work regularily \n> with Access.When attempting to import user's data from Access MDB files to \n> PostgreSQL, we try on eof two things: either import using EMS SQL Manager's \n> Data Import from Access utility, or export from Access to Postgresql via an \n> odbc-based connectionin both cases, the performance is just awful. \n> Performance with Tcl's native postgres driver seems rather fine running from \n> Windows a Windows client, BTW.\n> \n> ODBC is often blamed for this sort of thing - I have the 8.01.02 release \n> dated 2006.01.31. Everything appears to be at its default setting.\n\nTry Command Prompt's ODBC driver. Lately it has been measured to be\nconsistently faster than psqlODBC.\n\nhttp://projects.commandprompt.com/public/odbcng\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Fri, 20 Oct 2006 23:32:11 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is ODBC that slow?"
},
{
"msg_contents": "On 10/21/06, Alvaro Herrera <[email protected]> wrote:\n> Carlo Stonebanks wrote:\n> > Our Windows-based db server has to integrate with users that work regularily\n> > with Access.When attempting to import user's data from Access MDB files to\n> > PostgreSQL, we try on eof two things: either import using EMS SQL Manager's\n> > Data Import from Access utility, or export from Access to Postgresql via an\n> > odbc-based connectionin both cases, the performance is just awful.\n> > Performance with Tcl's native postgres driver seems rather fine running from\n> > Windows a Windows client, BTW.\n> >\n> > ODBC is often blamed for this sort of thing - I have the 8.01.02 release\n> > dated 2006.01.31. Everything appears to be at its default setting.\n>\n> Try Command Prompt's ODBC driver. Lately it has been measured to be\n> consistently faster than psqlODBC.\n>\n> http://projects.commandprompt.com/public/odbcng\n\njust curious: what was the reasoning to reimplement the protocol stack\nin odbcng? the mainline odbc driver went in the other direction.\n\ncarlo: please, please, get your mail server to quit telling me your\nmailbox is full :)\n\nmerlin\n",
"msg_date": "Sat, 21 Oct 2006 08:20:01 +0530",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is ODBC that slow?"
},
{
"msg_contents": "\n>> > ODBC is often blamed for this sort of thing - I have the 8.01.02\n>> release\n>> > dated 2006.01.31. Everything appears to be at its default setting.\n>>\n>> Try Command Prompt's ODBC driver. Lately it has been measured to be\n>> consistently faster than psqlODBC.\n\nI should note that we need to get a build out for Windows for rev 80.\nRev 80 is the one the build showing the most promise on Linux32 and 64\nbit. It is also the one that reflects the performance metrics we have\nbeen seeing.\n\nWe should have that in a week or so.\n\nSincerely,\n\nJoshua D. Drake\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Fri, 20 Oct 2006 20:39:17 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is ODBC that slow?"
},
{
"msg_contents": "\n>> Try Command Prompt's ODBC driver. Lately it has been measured to be\n>> consistently faster than psqlODBC.\n>>\n>> http://projects.commandprompt.com/public/odbcng\n> \n> just curious: what was the reasoning to reimplement the protocol stack\n> in odbcng? the mainline odbc driver went in the other direction.\n\nWe wanted to be able to offer some options that we couldn't if based\naround libpq.\n\nSincerely,\n\nJoshua D. Drake\n\n> \n> carlo: please, please, get your mail server to quit telling me your\n> mailbox is full :)\n> \n> merlin\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Fri, 20 Oct 2006 20:40:12 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is ODBC that slow?"
},
{
"msg_contents": "> carlo: please, please, get your mail server to quit telling me your\n> mailbox is full :)\n\nMerlin, sorry about that. This is the first I've heard of it.\n\nCarlo \n\n\n",
"msg_date": "Sat, 21 Oct 2006 11:25:19 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Is ODBC that slow?"
},
{
"msg_contents": "Merlin Moncure wrote:\n> On 10/21/06, Alvaro Herrera <[email protected]> wrote:\n\n> >Try Command Prompt's ODBC driver. Lately it has been measured to be\n> >consistently faster than psqlODBC.\n> >\n> >http://projects.commandprompt.com/public/odbcng\n> \n> just curious: what was the reasoning to reimplement the protocol stack\n> in odbcng? the mainline odbc driver went in the other direction.\n\nYeah, but they had to back-off from that plan, and AFAIK it only uses\nlibpq for the auth stuff and then switch to dealing with the protocol\ndirectly.\n\nI don't know what the reasoning was though :-) I guess Joshua would\nknow. I'm not involved in that project. I only know that recently a\nuser posted some measurements showing that ODBCng was way slower that\npsqlODBC, and it was discovered that it was using v3 Prepare/Bind/\nExecute, which was problematic performance-wise due to the planner\nissues with that. So AFAIK it currently parses the statements\ninternally before passing them to the server.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Sat, 21 Oct 2006 13:07:10 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is ODBC that slow?"
},
{
"msg_contents": "\n> Yeah, but they had to back-off from that plan, and AFAIK it only uses\n> libpq for the auth stuff and then switch to dealing with the protocol\n> directly.\n> \n> I don't know what the reasoning was though :-) I guess Joshua would\n> know. I'm not involved in that project. I only know that recently a\n> user posted some measurements showing that ODBCng was way slower that\n> psqlODBC, and it was discovered that it was using v3 Prepare/Bind/\n> Execute, which was problematic performance-wise due to the planner\n> issues with that. So AFAIK it currently parses the statements\n> internally before passing them to the server.\n\nThat is correct, we were using PostgreSQL server side prepare which has\nshown to be ridiculously slow. So we moved to client side prepare and\nODBCng now moves very, very quickly.\n\nYou can see results here:\n\nhttp://projects.commandprompt.com/public/odbcng/wiki/Performance\n\nAs in, it moves quickly enough to compete with other bindings such as\nDBD::Pg.\n\nOne of the libpq features we wanted to avoid was the receiving of all\nresults on the server before sending to the client. With ODBCng we have\na buffering option that will receive all results over the wire directly.\n\nThis can increase performance quite a bit in specific circumstances but\nalso has the downside of using more memory on the ODBC client.\n\nWe also have a security through obscurity feature as described here:\n\nhttp://projects.commandprompt.com/public/odbcng/wiki/PatternMatch\n\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Sat, 21 Oct 2006 09:14:53 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is ODBC that slow?"
},
{
"msg_contents": "> Try Command Prompt's ODBC driver. Lately it has been measured to be\n> consistently faster than psqlODBC.\n>\n> http://projects.commandprompt.com/public/odbcng\n\nThanks,\n\nI tried this, but via Access it always reports a login (username/password) \nto db failure. However, this a an Alpha - is there an \"official\" release I \nshould be waiting for? It's not clear to me whether this is a commercial \nproduct or not.\n\nCarlo \n\n\n",
"msg_date": "Tue, 24 Oct 2006 17:49:22 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Is ODBC that slow?"
}
] |
[
{
"msg_contents": "Hello,\n\nI'm working out specs for a new database server to be\npurchased for our organization. The applications the\nserver will handle are mainly related to network\noperations (monitoring, logging, statistical/trend\nreports, etc.). Disk I/O will be especially high with\nrelation to processing network stats.\n\nYou can find a diagram of my initial\nspec here: \nhttp://img266.imageshack.us/img266/9171/dbserverdiagramuc3.jpg\n\nServer will be a HP ProLiant DL585 G2 with four\ndual-core 2.6GHz processors and 8GB of RAM.\n\nI can always throw in more RAM. I'm trying to find\nthe most effective way to maximize disk throughput, as\nthe list archives suggest that it is the choke point\nin most cases. I separated the storage into multiple\narrays on multiple controllers, and plan to have 512MB\nRAM on each controller using BBWC. The plan is to\nutilize multiple tablespaces as well as partitioned\ntables when necessary. (Note that the\nStorageWorks 4214R enclosure with Ultra3 disks is used\nbecause we already have it lying around.)\n\nI heard some say that the transaction log should be on\nit's own array, others say it doesn't hurt to have it\non the same array as the OS. Is it really worthwhile\nto put it on it's own array?\n\nCan you guys see any glaring bottlenecks in my layout?\n Any other suggestions to offer (throw in more\ncontrollers, different RAID layout, etc.)? Our budget\nlimit is $50k.\n\nThanks!\n\nP.S. I know there was a very similar thread started by\nBen Suffolk recently, I'd still like to have your\n\"eyes of experience\" look at my proposed layout :-)\n\n\n\n\n\n\n\n\n\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n",
"msg_date": "Sat, 21 Oct 2006 08:43:05 -0700 (PDT)",
"msg_from": "John Philips <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimizing disk throughput on quad Opteron"
},
{
"msg_contents": "\n> I heard some say that the transaction log should be on\n> it's own array, others say it doesn't hurt to have it\n> on the same array as the OS. Is it really worthwhile\n> to put it on it's own array?\n> \n> Can you guys see any glaring bottlenecks in my layout?\n> Any other suggestions to offer (throw in more\n> controllers, different RAID layout, etc.)? Our budget\n> limit is $50k.\n\nYou should easily be able to fit in 50k since you already have the\nstorage device. I would suggest the following:\n\n1. Throw in as much RAM as you can.\n2. Yes put the transaction logs on a separate array. There are a couple\nof reasons for this:\n\n 1. transaction logs are written sequentially so a RAID 1 is enough\n 2. You don't have to use a journaled fs for the transaction logs so it\nis really fast.\n\n3. IIRC the MSA 30 can take 14 drives. Make sure you put in all 14\ndrives and delegate two of them to hot spare duty.\n\nI actually wonder if you would be better off putting the indexes on\ntablespace A and use your core data set on the larger storage works array...\n\nSincerely,\n\nJoshua D. Drake\n\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n> Thanks!\n> \n> P.S. I know there was a very similar thread started by\n> Ben Suffolk recently, I'd still like to have your\n> \"eyes of experience\" look at my proposed layout :-)\n> \n> \n> \n> \n> \n> \n> \n> \n> \n> \n> __________________________________________________\n> Do You Yahoo!?\n> Tired of spam? Yahoo! Mail has the best spam protection around \n> http://mail.yahoo.com \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Sat, 21 Oct 2006 10:33:20 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing disk throughput on quad Opteron"
},
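One common way to carry out the "transaction logs on a separate array" advice above is to relocate pg_xlog onto the dedicated mirror and leave a symlink behind. A sketch only -- the data directory and mount point are assumptions, and the server must be stopped first:

    pg_ctl -D /var/lib/pgsql/data stop
    mv /var/lib/pgsql/data/pg_xlog /mnt/wal/pg_xlog     # /mnt/wal = the RAID 1 pair for WAL
    ln -s /mnt/wal/pg_xlog /var/lib/pgsql/data/pg_xlog
    pg_ctl -D /var/lib/pgsql/data start

Since WAL is written sequentially and fsync'd, the dedicated spindles (plus a non-journaled filesystem such as ext2 on that mount, as suggested above) are what buy the speedup.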
{
"msg_contents": "> You can find a diagram of my initial\n> spec here:\n> http://img266.imageshack.us/img266/9171/dbserverdiagramuc3.jpg\n>\n> Can you guys see any glaring bottlenecks in my layout?\n> Any other suggestions to offer (throw in more\n> controllers, different RAID layout, etc.)? Our budget\n> limit is $50k.\n\nThe thing I would ask is would you not be better with SAS drives?\n\nSince the comments on Dell, and the highlighted issues I have been \nlooking at HP and the the Smart Array P600 controller with 512 BBWC. \nAlthough I am looking to stick with the 8 internal disks, rather than \nuse external ones.\n\nThe HP Smart Array 50 is the external array for SAS drives. Not \nreally looked into it much though.\n\nRegards\n\nBen\n\n\n",
"msg_date": "Sat, 21 Oct 2006 19:17:37 +0100",
"msg_from": "Ben Suffolk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing disk throughput on quad Opteron"
},
{
"msg_contents": "> The thing I would ask is would you not be better\n> with SAS drives?\n> \n> Since the comments on Dell, and the highlighted\n> issues I have been \n> looking at HP and the the Smart Array P600\n> controller with 512 BBWC. \n> Although I am looking to stick with the 8 internal\n> disks, rather than \n> use external ones.\n> \n> The HP Smart Array 50 is the external array for SAS\n> drives. Not \n> really looked into it much though.\n\nBen,\n\nThe Smart Array 50 supports a maximum of 10 disks and\nhas a single I/O module, while the Smart Array 30\nsupports up to 14 disks and can be configured with a\ndual I/O module.\n\nI was under the assumption that SAS runs at the same\nspeed as Ultra320, in which case the Smart Array 30 is\na better bet...\n\nThanks for your feedback.\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n",
"msg_date": "Mon, 23 Oct 2006 05:16:37 -0700 (PDT)",
"msg_from": "John Philips <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizing disk throughput on quad Opteron"
},
{
"msg_contents": "On Oct 21, 2006, at 11:43 AM, John Philips wrote:\n\n> Can you guys see any glaring bottlenecks in my layout?\n> Any other suggestions to offer (throw in more\n> controllers, different RAID layout, etc.)? Our budget\n> limit is $50k.\n\nIf I had $50k budget, I'd be buying the SunFire X4500 and running \nSolaris + ZFS on it. However, you're limited to 2 dual core \nOpterons, it seems.",
"msg_date": "Mon, 23 Oct 2006 16:56:23 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing disk throughput on quad Opteron"
},
{
"msg_contents": "Vivek Khera wrote:\n> \n> On Oct 21, 2006, at 11:43 AM, John Philips wrote:\n> \n>> Can you guys see any glaring bottlenecks in my layout?\n>> Any other suggestions to offer (throw in more\n>> controllers, different RAID layout, etc.)? Our budget\n>> limit is $50k.\n> \n> If I had $50k budget, I'd be buying the SunFire X4500 and running\n> Solaris + ZFS on it. However, you're limited to 2 dual core Opterons,\n> it seems.\n\nThe HP 585 will give you quad dual core :)\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Mon, 23 Oct 2006 13:59:05 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing disk throughput on quad Opteron"
},
{
"msg_contents": "On Oct 23, 2006, at 4:59 PM, Joshua D. Drake wrote:\n\n>> If I had $50k budget, I'd be buying the SunFire X4500 and running\n>> Solaris + ZFS on it. However, you're limited to 2 dual core \n>> Opterons,\n>> it seems.\n>\n> The HP 585 will give you quad dual core :)\n\nbut can you sling the bits to and from the disk as fast as the \nX4500? the speed numbers on the i/o of the x4500 are mind-numbing.",
"msg_date": "Mon, 23 Oct 2006 17:08:54 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing disk throughput on quad Opteron"
},
{
"msg_contents": "Vivek Khera wrote:\n> \n> On Oct 23, 2006, at 4:59 PM, Joshua D. Drake wrote:\n> \n>>> If I had $50k budget, I'd be buying the SunFire X4500 and running\n>>> Solaris + ZFS on it. However, you're limited to 2 dual core Opterons,\n>>> it seems.\n>>\n>> The HP 585 will give you quad dual core :)\n> \n> but can you sling the bits to and from the disk as fast as the X4500? \n> the speed numbers on the i/o of the x4500 are mind-numbing.\n\nHonestly, I don't know. I can tell you that they perform VERY well for\nus (as do the 385s). I have deployed about several 385s and 585s with\nthe MSA30s over the last 6 months and have not been disappointed yet.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Mon, 23 Oct 2006 14:12:16 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing disk throughput on quad Opteron"
},
{
"msg_contents": "On Sat, Oct 21, 2006 at 08:43:05AM -0700, John Philips wrote:\n> I heard some say that the transaction log should be on\n> it's own array, others say it doesn't hurt to have it\n> on the same array as the OS. Is it really worthwhile\n> to put it on it's own array?\n\nIt all depends on the controller and how much non-WAL workload there is.\nTheoretically, with a good enough controller, you can leave WAL on the\nsame partition as your data.\n\nWith a complex setup like you're looking at, you really will want to do\nsome testing to see what makes the most sense. I can also point you at a\ncompany that does modeling of stuff like this; they could actually give\nyou some idea of how well that setup would perform before you buy the\nhardware.\n\nBTW, any test results you can provide back to the community would be\nmost appreciated!\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Mon, 23 Oct 2006 17:08:33 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing disk throughput on quad Opteron"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n[mailto:pgsql-performance-\n> [email protected]] On Behalf Of John Philips\n> Sent: Monday, October 23, 2006 8:17 AM\n> To: Ben Suffolk\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Optimizing disk throughput on quad Opteron\n> \n> > The thing I would ask is would you not be better\n> > with SAS drives?\n> >\n> > Since the comments on Dell, and the highlighted\n> > issues I have been\n> > looking at HP and the the Smart Array P600\n> > controller with 512 BBWC.\n> > Although I am looking to stick with the 8 internal\n> > disks, rather than\n> > use external ones.\n> >\n> > The HP Smart Array 50 is the external array for SAS\n> > drives. Not\n> > really looked into it much though.\n> \n> Ben,\n> \n> The Smart Array 50 supports a maximum of 10 disks and\n> has a single I/O module, while the Smart Array 30\n> supports up to 14 disks and can be configured with a\n> dual I/O module.\n> \n> I was under the assumption that SAS runs at the same\n> speed as Ultra320, in which case the Smart Array 30 is\n> a better bet...\n> \n> Thanks for your feedback.\n\nThe drives might be about the same speed, but SAS is a completely\ndifferent bus architecture from SCSI. U320 is a parallel interface\nlimited to 320 MB/s for the total bus (160 MB/s per channel, so be\ncareful here). SAS is a 3.0Gbps direct serial interface to the drive.\nSo, after 5-6 drives, SAS will definitely start to pay off. Take a look\nat Dell's MD1000 external enclosure vs the previous version. The MD1000\noffers much better performance (not saying to go with dell, just giving\nan example of SCSI vs. SAS from a vendor I'm familiar with). Oh, and if\nyou're not completely against dell, you can daisy chain 3 of the MD1000\nenclosures together off one of their new 6850 (Quad Woodcrest) or 6950\n(Quad Operton). \n\nAt the moment, the Woodcrests seem to be outperforming the Opteron in\nserver benchmarks, I have a quad core (dual cpu) 2950 I'd be happy to\nrun some pg_benches (or other preferred benchmark) if someone has a\nsimilar opteron so we can get some relevant comparisons on the list.\n\nAlso, here's a link that was posted a while back on opteron vs.\nwoodcrest:\nhttp://tweakers.net/reviews/646\n\nHTH,\n\nBucky\n",
"msg_date": "Tue, 24 Oct 2006 09:56:44 -0400",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing disk throughput on quad Opteron"
}
] |
[
{
"msg_contents": "Hello,\n\nI have a query with several join operations and applying the same\nfilter condition over each involved table. This condition is a complex\npredicate over an indexed timestamp field, depending on some\nparameters.\nTo factorize code, I wrote the filter into a plpgsql function, but\nthe resulting query is much more slower than the first one!\n\nThe explain command over the original query gives the following info\nfor the WHERE clause that uses the filter:\n\n...\n Index Cond: ((_timestamp >= '2006-02-23 03:00:00'::timestamp without\ntime zone) AND (_timestamp <= '2006-02-27 20:00:00.989999'::timestamp\nwithout time zone))\n...\n\nThe explain command for the WHERE clause using the filtering function is:\n\n...\nFilter: include_time_date('2006-02-23'::date, '2006-02-27'::date,\n'03:00:00'::time without time zone, '20:00:00'::time without time\nzone, (_timestamp)::timestamp without time zone)\n...\n\nIt seems to not be using the index, and I think this is the reason of\nthe performance gap between both solutions.\n\nHow can I explicitly use this index? which type of functions shall I\nuse (VOLATILE | INMUTABLE | STABLE)?\n\nThanks in advance\n\nMara\n",
"msg_date": "Mon, 23 Oct 2006 16:54:00 -0300",
"msg_from": "\"Mara Dalponte\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Problems using a function in a where clause"
},
{
"msg_contents": "On Mon, Oct 23, 2006 at 04:54:00PM -0300, Mara Dalponte wrote:\n> Hello,\n> \n> I have a query with several join operations and applying the same\n> filter condition over each involved table. This condition is a complex\n> predicate over an indexed timestamp field, depending on some\n> parameters.\n> To factorize code, I wrote the filter into a plpgsql function, but\n> the resulting query is much more slower than the first one!\n\nA view would probably be a better idea... or create some code that\ngenerates the code for you.\n\n> The explain command over the original query gives the following info\n> for the WHERE clause that uses the filter:\n> \n> ...\n> Index Cond: ((_timestamp >= '2006-02-23 03:00:00'::timestamp without\n> time zone) AND (_timestamp <= '2006-02-27 20:00:00.989999'::timestamp\n> without time zone))\n> ...\n> \n> The explain command for the WHERE clause using the filtering function is:\n> \n> ...\n> Filter: include_time_date('2006-02-23'::date, '2006-02-27'::date,\n> '03:00:00'::time without time zone, '20:00:00'::time without time\n> zone, (_timestamp)::timestamp without time zone)\n> ...\n> \n> It seems to not be using the index, and I think this is the reason of\n> the performance gap between both solutions.\n \nWell, it looks like include_time_date just returns a boolean, so how\ncould it use the index?\n\n> How can I explicitly use this index? which type of functions shall I\n> use (VOLATILE | INMUTABLE | STABLE)?\n\nThat depends on what exactly the function does. There's a pretty good\ndescription in the CREATE FUNCTION docs.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Tue, 24 Oct 2006 19:21:31 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problems using a function in a where clause"
},
{
"msg_contents": "On Wed, Oct 25, 2006 at 07:55:38AM -0300, Mara Dalponte wrote:\n> On 10/24/06, Jim C. Nasby <[email protected]> wrote:\n> >On Mon, Oct 23, 2006 at 04:54:00PM -0300, Mara Dalponte wrote:\n> >> Hello,\n> >>\n> >> I have a query with several join operations and applying the same\n> >> filter condition over each involved table. This condition is a complex\n> >> predicate over an indexed timestamp field, depending on some\n> >> parameters.\n> >> To factorize code, I wrote the filter into a plpgsql function, but\n> >> the resulting query is much more slower than the first one!\n> >\n> >A view would probably be a better idea... or create some code that\n> >generates the code for you.\n> \n> Thank, but the filter function needs some external parameters, so a\n> view wont be appropiate. Anyway, your second possibility could work!\n> \n> >> The explain command over the original query gives the following info\n> >> for the WHERE clause that uses the filter:\n> >>\n> >> ...\n> >> Index Cond: ((_timestamp >= '2006-02-23 03:00:00'::timestamp without\n> >> time zone) AND (_timestamp <= '2006-02-27 20:00:00.989999'::timestamp\n> >> without time zone))\n> >> ...\n> >>\n> >> The explain command for the WHERE clause using the filtering function is:\n> >>\n> >> ...\n> >> Filter: include_time_date('2006-02-23'::date, '2006-02-27'::date,\n> >> '03:00:00'::time without time zone, '20:00:00'::time without time\n> >> zone, (_timestamp)::timestamp without time zone)\n> >> ...\n> >>\n> >> It seems to not be using the index, and I think this is the reason of\n> >> the performance gap between both solutions.\n> >\n> >Well, it looks like include_time_date just returns a boolean, so how\n> >could it use the index?\n> \n> I mean that in the old query the index is used (because is a\n> comparative condition over an indexed timestamp field), but not in the\n> new one, where the function is used. Is there some kind of \"inline\"\n> function type?\n\nNo, unfortunately. Your best bet is to add the most important filter\ncriteria by hand, or write code that writes the code (which is what I'd\nprobably do).\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 25 Oct 2006 10:32:37 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problems using a function in a where clause"
}
] |
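As a footnote to the thread above: plpgsql functions are opaque to the planner, but a sufficiently simple SQL-language function can often be inlined, so the expanded timestamp comparison remains usable as an index condition. The body below is only a guessed reconstruction of include_time_date() for illustration; the real definition never appears in the thread.

```sql
-- Hypothetical sketch: the same filter written as a plain SQL (not plpgsql)
-- STABLE function.  When the planner inlines it, the comparison against the
-- indexed timestamp column is still visible, unlike the black-box plpgsql call.
CREATE OR REPLACE FUNCTION include_time_date(date, date, time, time, timestamp)
RETURNS boolean AS
  'SELECT $5 >= ($1 + $3) AND $5 <= ($2 + $4);'
LANGUAGE sql STABLE;
```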
[
{
"msg_contents": "Hello there;\n\nI've got an application that has to copy an existing database to a new \ndatabase on the same machine.\n\nI used to do this with a pg_dump command piped to psql to perform the \ncopy; however the database is 18 gigs large on disk and this takes a LONG \ntime to do.\n\nSo I read up, found some things in this list's archives, and learned that \nI can use createdb --template=old_database_name to do the copy in a much \nfaster way since people are not accessing the database while this copy \nhappens.\n\n\nThe problem is, it's still too slow. My question is, is there any way I \ncan use 'cp' or something similar to copy the data, and THEN after that's \ndone modify the database system files/system tables to recognize the \ncopied database?\n\nFor what it's worth, I've got fsync turned off, and I've read every tuning \nthing out there and my settings there are probably pretty good. It's a \nSolaris 10 machine (V440, 2 processor, 4 Ultra320 drives, 8 gig ram) and \nhere's some stats:\n\nshared_buffers = 300000\nwork_mem = 102400\nmaintenance_work_mem = 1024000\n\nbgwriter_lru_maxpages=0\nbgwriter_lru_percent=0\n\nfsync = off\nwal_buffers = 128\ncheckpoint_segments = 64\n\n\nThank you!\n\n\nSteve Conley\n",
"msg_date": "Mon, 23 Oct 2006 17:51:40 -0400 (EDT)",
"msg_from": "Steve <[email protected]>",
"msg_from_op": true,
"msg_subject": "Copy database performance issue"
},
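For reference, the template copy described in the question above can also be issued straight from SQL; a minimal sketch, assuming (as the post says) that nobody is connected to the source database while it runs, with placeholder database names:

```sql
-- SQL equivalent of "createdb --template=old_database_name new_database_name":
CREATE DATABASE new_database_name TEMPLATE old_database_name;
```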
{
"msg_contents": "Steve,\n\nAre you using the latest update release of Solaris 10 ?\n\nWhen you are doing the copy, did you check with prstat -amL to see if it \nis saturating on any CPU?\n\nIf it is saturating on a CPU then atleast it will narrow down that you \nneed to improve the CPU utilization of the copy process.\n\nBrendan Greg's \"hotuser\" script which uses DTrace and Pearl post \nprocessing will help you to figure out which functions is causing the \nhigh CPU utilization and then maybe somebody from the PostgreSQL team \ncan figure out what's happening that is causing the slow copy.\n\nIf none of the cores show up as near 100% then the next step is to \nfigure out if any disk is 100% utilized via iostat -xczmP .\n\nWith this information it might help to figure out the next steps in your \ncase.\n\nRegards,\nJignesh\n\n\nSteve wrote:\n> Hello there;\n>\n> I've got an application that has to copy an existing database to a new \n> database on the same machine.\n>\n> I used to do this with a pg_dump command piped to psql to perform the \n> copy; however the database is 18 gigs large on disk and this takes a \n> LONG time to do.\n>\n> So I read up, found some things in this list's archives, and learned \n> that I can use createdb --template=old_database_name to do the copy in \n> a much faster way since people are not accessing the database while \n> this copy happens.\n>\n>\n> The problem is, it's still too slow. My question is, is there any way \n> I can use 'cp' or something similar to copy the data, and THEN after \n> that's done modify the database system files/system tables to \n> recognize the copied database?\n>\n> For what it's worth, I've got fsync turned off, and I've read every \n> tuning thing out there and my settings there are probably pretty \n> good. It's a Solaris 10 machine (V440, 2 processor, 4 Ultra320 \n> drives, 8 gig ram) and here's some stats:\n>\n> shared_buffers = 300000\n> work_mem = 102400\n> maintenance_work_mem = 1024000\n>\n> bgwriter_lru_maxpages=0\n> bgwriter_lru_percent=0\n>\n> fsync = off\n> wal_buffers = 128\n> checkpoint_segments = 64\n>\n>\n> Thank you!\n>\n>\n> Steve Conley\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n",
"msg_date": "Tue, 24 Oct 2006 14:31:10 +0100",
"msg_from": "Jignesh Shah <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Copy database performance issue"
},
{
"msg_contents": "On Mon, Oct 23, 2006 at 05:51:40PM -0400, Steve wrote:\n> Hello there;\n> \n> I've got an application that has to copy an existing database to a new \n> database on the same machine.\n> \n> I used to do this with a pg_dump command piped to psql to perform the \n> copy; however the database is 18 gigs large on disk and this takes a LONG \n> time to do.\n> \n> So I read up, found some things in this list's archives, and learned that \n> I can use createdb --template=old_database_name to do the copy in a much \n> faster way since people are not accessing the database while this copy \n> happens.\n> \n> \n> The problem is, it's still too slow. My question is, is there any way I \n> can use 'cp' or something similar to copy the data, and THEN after that's \n> done modify the database system files/system tables to recognize the \n> copied database?\n \nAFAIK, that's what initdb already does... it copies the database,\nessentially doing what cp does.\n\n> For what it's worth, I've got fsync turned off, and I've read every tuning \n> thing out there and my settings there are probably pretty good. It's a \n> Solaris 10 machine (V440, 2 processor, 4 Ultra320 drives, 8 gig ram) and \n> here's some stats:\n\nI don't think any of the postgresql.conf settings will really come into\nplay when you're doing this...\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Tue, 24 Oct 2006 19:25:13 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Copy database performance issue"
}
] |
[
{
"msg_contents": "(I tried this question on the interface forum and got no result, but I don't \nknow how to tell if it's an interface issue or not)\n\nI have a TCL app which typically takes hours to complete. I found out that \nit is taking longer than it should because it occasionally stalls \ninexplicably (for tens of minute at a time) then usually continues.\n\nThere are a minimum of four apps running at the same time, all reading \ndifferent sections of the same table, all writing to the same db and the \nsame tables. The other apps seem unaffected by the one app that freezes.\n\nThis happens running \"pg_exec $conn \"commit\" from within a TCL script on a \nclient app.\n\n\nThe delays are so long that I used to think the app was hopelessly frozen. \nBy accident, I left the app alone in its frozen state and came back a good \ndeal later and seen that it was running again.\n\nSometimes I decide it *IS* frozen and have to restart. Because Ctrl-C will \nnot cause the script to break, it appears the app is stuck in non-TCL code \n(either waiting for postgres or stuck in the interface code?)\n\nThe application loops through an import file, reading one row at a time, and \nissues a bunch of inserts and updates to various tables. There's a simple \npg_exec $conn \"start transaction\" at the beginning of the loop and the \ncommit at the end. The commit actually appears to be going through.\n\nThere are no messages of any significance in the log. There do not appear to \nbe any outstanding locks or transactions.\n\nI am not doing any explicit locking, all transaction settings are set to \ndefault.\n\nAny thoughts on the cause and possible solutions would be appreciated.\n\nCarlo\n\n\n",
"msg_date": "Wed, 25 Oct 2006 11:46:41 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "commit so slow program looks frozen"
},
{
"msg_contents": "Carlo Stonebanks wrote:\n\n> The delays are so long that I used to think the app was hopelessly frozen. \n> By accident, I left the app alone in its frozen state and came back a good \n> deal later and seen that it was running again.\n> \n> Sometimes I decide it *IS* frozen and have to restart. Because Ctrl-C will \n> not cause the script to break, it appears the app is stuck in non-TCL code \n> (either waiting for postgres or stuck in the interface code?)\n\nYou may try to figure out what's the process doing (the backend\nobviously, not the frontend (Tcl) process) by attaching to it with\nstrace. Is it doing system calls? Maybe it's busy reading from or\nwriting to disk. Maybe it's swamped by a context switch storm (but in\nthat case, probably the other processes would be affected as well).\n\nOr you may want to attach to it with GDB and see what the backtrace\nlooks like. If nothing obvious pops up, do it several times and compare\nthem.\n\nI wouldn't expect it to be stuck on locks, because if it's only on\ncommit, then it probably has all the locks it needs. But try to see if\nyou can find something not granted in pg_locks that it may be stuck on.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Wed, 25 Oct 2006 15:19:10 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: commit so slow program looks frozen"
},
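A minimal sketch of the pg_locks check suggested in the reply above (column names assume an 8.x catalog); anything returned here is a lock request that is still waiting:

```sql
-- List ungranted lock requests and the backends waiting on them.
SELECT locktype, relation::regclass AS relation, pid, mode
FROM   pg_locks
WHERE  NOT granted;
```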
{
"msg_contents": "> You may try to figure out what's the process doing (the backend\n> obviously, not the frontend (Tcl) process) by attaching to it with\n> strace.\n\nIt's so sad when us poor Windows guys get helpful hints from people assume \nthat we're smart enough to run *NIX... ;-)\n\n> Maybe it's swamped by a context switch storm (but in that case, probably \n> the other processes would be affected as well).\n\nWhat is a context switch storm? (and what a great name for a heavy metal \nrock band!)\n\nInterestingly enough, last night (after the original post) I watched three \nof the processes slow down, one after the other - and then stall for so long \nthat I had assumed they had frozen. They were all stalled on a message that \nI had put in the script that indicated they had never returned from a \ncommit. I have looked into this, and I believe the commits are actually \ngoing through.\n\nThe remaining 4th process continued to run, and actually picked up speed as \nthe CPU gave its cycles over. The Windows task manager shows the postgresql \nprocesses that (I assume) are associated with the stalled processes as \nconsuming zero CPU time.\n\nSometimes I have seen all of the apps slow down and momentarrily freeze at \nthe same time... but then continue. I have autovacuum off, although \nstats_row_level and stats_start_collector remain on (I thought these were \nonly relevant if autovacuum was on).\n\nI have seen the apps slow down (and perhaps stall) when specifical tables \nhave vacuum/analyze running, and that makes sense. I did notice that on one \noccasion a \"frozen\" app came back to life after I shut down EMS PostgreSQL \nmanager in another session. Maybe a coincidence, or maybe an indication that \nthe apps are straining resources... on a box with two twin-core XEONs and \n4GB of memory? Mind you, the config file is confgiured for the database \nloading phase weare in now - with lots of resources devoted to a few \nconnections.\n\n> I wouldn't expect it to be stuck on locks, because if it's only on\n> commit, then it probably has all the locks it needs. But try to see if\n> you can find something not granted in pg_locks that it may be stuck on.\n\nLooking at the pgadmin server status pages, no locks or transactions are \npending when this happens.\n\nCarlo \n\n\n",
"msg_date": "Wed, 25 Oct 2006 16:07:38 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: commit so slow program looks frozen"
},
{
"msg_contents": "Carlo Stonebanks wrote:\n\n>>You may try to figure out what's the process doing (the backend\n>>obviously, not the frontend (Tcl) process) by attaching to it with\n>>strace.\n>> \n>>\n>\n>It's so sad when us poor Windows guys get helpful hints from people assume \n>that we're smart enough to run *NIX... ;-)\n>\n> \n>\n>>Maybe it's swamped by a context switch storm (but in that case, probably \n>>the other processes would be affected as well).\n>> \n>>\n>\n>What is a context switch storm? (and what a great name for a heavy metal \n>rock band!)\n>\n>Interestingly enough, last night (after the original post) I watched three \n>of the processes slow down, one after the other - and then stall for so long \n>that I had assumed they had frozen. They were all stalled on a message that \n>I had put in the script that indicated they had never returned from a \n>commit. I have looked into this, and I believe the commits are actually \n>going through.\n> \n>\nI have a question for you: did you have a long running query keeping \nopen a transaction? I've just noticed the same problem here, but things \ncleaned up immediately when I aborted the long-running transaction.\n\nNote that in my case the long-running transaction wasn't idle in \ntransaction, it was just doing a whole lot of work.\n\nBrian\n\n\n\n\n\n\n\n\nCarlo Stonebanks wrote:\n\n\nYou may try to figure out what's the process doing (the backend\nobviously, not the frontend (Tcl) process) by attaching to it with\nstrace.\n \n\n\nIt's so sad when us poor Windows guys get helpful hints from people assume \nthat we're smart enough to run *NIX... ;-)\n\n \n\nMaybe it's swamped by a context switch storm (but in that case, probably \nthe other processes would be affected as well).\n \n\n\nWhat is a context switch storm? (and what a great name for a heavy metal \nrock band!)\n\nInterestingly enough, last night (after the original post) I watched three \nof the processes slow down, one after the other - and then stall for so long \nthat I had assumed they had frozen. They were all stalled on a message that \nI had put in the script that indicated they had never returned from a \ncommit. I have looked into this, and I believe the commits are actually \ngoing through.\n \n\nI have a question for you: did you have a long running query keeping\nopen a transaction? I've just noticed the same problem here, but\nthings cleaned up immediately when I aborted the long-running\ntransaction.\n\nNote that in my case the long-running transaction wasn't idle in\ntransaction, it was just doing a whole lot of work.\n\nBrian",
"msg_date": "Wed, 25 Oct 2006 16:18:51 -0400",
"msg_from": "Brian Hurt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: commit so slow program looks frozen"
},
{
"msg_contents": ">> I have a question for you: did you have a long running query keeping open\na transaction? I've just noticed the same problem here, but things cleaned\nup immediately when I aborted the long-running transaction.\n\n\n\nNo, the only processes are from those in the import applications themselves:\nshort transactions never lasting more than a fraction of a second.\n\n \n\nCarlo\n\n\n\n\n\n\n\n\n\n\n\n>> I have a question\nfor you: did you have a long running query keeping open a transaction? \nI've just noticed the same problem here, but things cleaned up immediately when\nI aborted the long-running transaction.\n\n\nNo, the only processes are from those in\nthe import applications themselves: short transactions never lasting more than\na fraction of a second.\n \nCarlo",
"msg_date": "Wed, 25 Oct 2006 16:32:16 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: commit so slow program looks frozen"
},
{
"msg_contents": "On Wed, 2006-10-25 at 15:07, Carlo Stonebanks wrote:\n> > You may try to figure out what's the process doing (the backend\n> > obviously, not the frontend (Tcl) process) by attaching to it with\n> > strace.\n> \n> It's so sad when us poor Windows guys get helpful hints from people assume \n> that we're smart enough to run *NIX... ;-)\n\nYou should try a google search on strace and NT or windows or XP... I\nwas surprised how many various implementations of it I found.\n\n> \n> > Maybe it's swamped by a context switch storm (but in that case, probably \n> > the other processes would be affected as well).\n> \n> What is a context switch storm? (and what a great name for a heavy metal \n> rock band!)\n\nI can just see the postgresql group getting together at the next\nO'Reilley's conference and creating that band. And it will all be your\nfault.\n\nA context switch storm is when your machine spends more time trying to\nfigure out what to do than actually doing anything. The CPU spends most\nit's time switching between programs than running them.\n\n\n> I have seen the apps slow down (and perhaps stall) when specifical tables \n> have vacuum/analyze running, and that makes sense. I did notice that on one \n> occasion a \"frozen\" app came back to life after I shut down EMS PostgreSQL \n> manager in another session. Maybe a coincidence, or maybe an indication that \n> the apps are straining resources... on a box with two twin-core XEONs and \n> 4GB of memory? Mind you, the config file is confgiured for the database \n> loading phase weare in now - with lots of resources devoted to a few \n> connections.\n\nSeeing as PostgreSQL runs one thread / process per connection, it's\npretty unlikely that the problem here is one \"hungry\" thread. Do all\nfour CPUs show busy, or just one? Do you have a way of measuring how\nmuch time is spent waiting on I/O on a windows machine like top / vmstat\ndoes in unix?\n\nIs it possible your machine is going into a swap storm? i.e. you've\nused all physical memory somehow and it's swapping out? If your current\nconfiguration is too aggresive on sort / work mem then it can happen\nwith only a few connections. \n\nNote that if you have an import process that needs a big chunk of\nmemory, you can set just that one connection to use a large setting and\nleave the default smaller.\n",
"msg_date": "Wed, 25 Oct 2006 17:04:52 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: commit so slow program looks frozen"
},
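The per-connection override mentioned at the end of the reply above is just a session-level SET; the values below are illustrative only (the parameter is sort_mem on 7.x and work_mem from 8.0 onwards, measured in KB):

```sql
-- Run inside the import connection only; all other sessions keep the
-- smaller defaults from postgresql.conf.
SET work_mem = 262144;              -- roughly 256MB per sort for this session
SET maintenance_work_mem = 524288;  -- also helps index builds / VACUUM here
```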
{
"msg_contents": "> \n>>> Maybe it's swamped by a context switch storm (but in that case, probably \n>>> the other processes would be affected as well).\n>> What is a context switch storm? (and what a great name for a heavy metal \n>> rock band!)\n> \n> I can just see the postgresql group getting together at the next\n> O'Reilley's conference and creating that band. And it will all be your\n> fault.\n\nWell now you let the secret out!\n\nJoshua D. Drake\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Wed, 25 Oct 2006 15:12:00 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: commit so slow program looks frozen"
},
{
"msg_contents": "On Wed, Oct 25, 2006 at 04:32:16PM -0400, Carlo Stonebanks wrote:\n> >> I have a question for you: did you have a long running query keeping open\n> a transaction? I've just noticed the same problem here, but things cleaned\n> up immediately when I aborted the long-running transaction.\n> \n> No, the only processes are from those in the import applications themselves:\n> short transactions never lasting more than a fraction of a second.\n\nDo you have a linux/unix machine you could reproduce this on?\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Wed, 25 Oct 2006 23:21:50 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: commit so slow program looks frozen"
},
{
"msg_contents": "> > > You may try to figure out what's the process doing (the backend \n> > > obviously, not the frontend (Tcl) process) by attaching \n> to it with \n> > > strace.\n> > \n> > It's so sad when us poor Windows guys get helpful hints from people \n> > assume that we're smart enough to run *NIX... ;-)\n> \n> You should try a google search on strace and NT or windows or \n> XP... I was surprised how many various implementations of it I found.\n\nLet me know if you find one that's stable, I've been wanting that. I've\ntried one or two, but it's always been just a matter of time before the\ninevitable BSOD.\n\n> > > Maybe it's swamped by a context switch storm (but in that case, \n> > > probably the other processes would be affected as well).\n> > \n> > What is a context switch storm? (and what a great name for a heavy \n> > metal rock band!)\n> \n> I can just see the postgresql group getting together at the \n> next O'Reilley's conference and creating that band. And it \n> will all be your fault.\n\n*DO NOT LET DEVRIM SEE THIS THREAD*\n\n\n> A context switch storm is when your machine spends more time \n> trying to figure out what to do than actually doing anything. \n> The CPU spends most it's time switching between programs \n> than running them.\n\nI can see Windows benig more sucepitble to this than say Linux, because\nswitching between processes there is a lot more expensive than on Linux.\n\n> Seeing as PostgreSQL runs one thread / process per \n> connection, it's pretty unlikely that the problem here is one \n> \"hungry\" thread. Do all four CPUs show busy, or just one? \n> Do you have a way of measuring how much time is spent waiting \n> on I/O on a windows machine like top / vmstat does in unix?\n\nThere are plenty of counters in the Performance Monitor. Specificall,\nlook at \"disk queue counters\" - they indicate when the I/O subsystem is\nbacked up.\n\n\n//Magnus\n",
"msg_date": "Thu, 26 Oct 2006 13:49:27 +0200",
"msg_from": "\"Magnus Hagander\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: commit so slow program looks frozen"
},
{
"msg_contents": "> I can just see the postgresql group getting together at the next\n> O'Reilley's conference and creating that band. And it will all be your\n> fault.\n\nFinally, a chance for me to wear my black leather pants.\n\n> A context switch storm is when your machine spends more time trying to\n> figure out what to do than actually doing anything. The CPU spends most\n> it's time switching between programs than running them.\n\nIs thatl likely on a new 4 CPU server that has no clients connected and that \nis only running four (admittedly heavy) TCL data load scripts?\n\n> Seeing as PostgreSQL runs one thread / process per connection, it's\n> pretty unlikely that the problem here is one \"hungry\" thread. Do all\n> four CPUs show busy, or just one? Do you have a way of measuring how\n> much time is spent waiting on I/O on a windows machine like top / vmstat\n> does in unix?\n\nBefore optimising the queries, all four CPU's were pinned to max performance \n(that's why I only run four imports at a time). After opimisation, all four \nCPU's are busy, but usage is spikey (which looks more normal), but all are \nobviously busy. I have this feeling that when an import app freezes, one CPU \ngoes idle while the others stay busy - I will confirm that with the next \nimport operation.\n\nI suspect that the server has the Xeon processors that were of a generation \nwhich PostgreSQL had a problem with - should a postgresql process be able to \ndistrivute its processing load across CPU's? (i.e. When I see one CPU at \n100% while all others are idle?)\n\n> Note that if you have an import process that needs a big chunk of\n> memory, you can set just that one connection to use a large setting and\n> leave the default smaller.\n\nTotal memory usage is below the max available. Each postgresql process takes \nup 500MB, there are four running and I have 4GB of RAM.\n\nCarlo \n\n\n",
"msg_date": "Thu, 26 Oct 2006 10:43:50 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: commit so slow program looks frozen"
},
{
"msg_contents": "\n> A context switch storm is when your machine spends more time trying to\n> figure out what to do than actually doing anything. The CPU spends most\n> it's time switching between programs than running them.\n\nWell, we usually use the term \"thrashing\" as the generic for when your\nmachine is spending more time on overhead than doing user work - this\nwould include paging or context switching, along with whatever else. A\ncontext-switch storm would be a specific form of thrashing!\n\nRichard\n\n-- \nRichard Troy, Chief Scientist\nScience Tools Corporation\n510-924-1363 or 202-747-1263\[email protected], http://ScienceTools.com/\n\n",
"msg_date": "Thu, 26 Oct 2006 10:10:24 -0700 (PDT)",
"msg_from": "Richard Troy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: commit so slow program looks frozen"
},
{
"msg_contents": ">Ben Trewern\" <ben.trewern@_nospam_mowlem.com> wrote in message \n>news:[email protected]...\n> It might be worth turning off hyperthreading if your Xeons are using it. \n> There have been reports of this causing inconsistent behaviour with \n> PostgreSQL.\n\nYes, this issue comes up often - I wonder if the Woodcrest Xeons resolved \nthis? Have these problems been experienced on both Linux and Windows (we are \nrunning Windows 2003 x64)\n\nCarlo\n\n\n\n",
"msg_date": "Sat, 28 Oct 2006 11:45:37 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: commit so slow program looks frozen"
},
{
"msg_contents": "> \n> Yes, this issue comes up often - I wonder if the Woodcrest Xeons\nresolved\n> this? Have these problems been experienced on both Linux and Windows\n(we\n> are\n> running Windows 2003 x64)\n> \n> Carlo\n> \nIIRC Woodcrest doesn't have HT, just dual core with shared cache.\n\n- Bucky\n",
"msg_date": "Mon, 30 Oct 2006 11:39:22 -0500",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: commit so slow program looks frozen"
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Craig A. James\n> Sent: Wednesday, October 25, 2006 12:52 PM\n> To: Jim C. Nasby\n> Cc: Worky Workerson; Merlin Moncure; [email protected]\n> Subject: Re: [PERFORM] Best COPY Performance\n> \n> Jim C. Nasby wrote:\n> > Wait... so you're using perl to copy data between two tables? And \n> > using a cursor to boot? I can't think of any way that could be more \n> > inefficient...\n> > \n> > What's wrong with a plain old INSERT INTO ... SELECT? Or if \n> you really \n> > need to break it into multiple transaction blocks, at least don't \n> > shuffle the data from the database into perl and then back into the \n> > database; do an INSERT INTO ... SELECT with that same where clause.\n> \n> The data are on two different computers, and I do processing \n> of the data as it passes through the application. Otherwise, \n> the INSERT INTO ... SELECT is my first choice.\n\nWould dblink() help in any way?\n\nGreg\n\n\n",
"msg_date": "Wed, 25 Oct 2006 13:15:40 -0400",
"msg_from": "\"Spiegelberg, Greg\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Best COPY Performance"
},
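For completeness, a rough sketch of what the dblink() route raised above would look like (contrib/dblink installed on the local side; the connection string, table and column names are placeholders) — though, as the follow-up notes, it cannot replace the per-row processing being done in perl:

```sql
-- Pull rows from the remote machine and load them locally in one statement.
INSERT INTO local_table (id, payload)
SELECT id, payload
FROM   dblink('host=otherhost dbname=sourcedb user=copyuser',
              'SELECT id, payload FROM remote_table')
       AS t(id integer, payload text);
```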
{
"msg_contents": "Spiegelberg, Greg wrote:\n>> The data are on two different computers, and I do processing \n>> of the data as it passes through the application. Otherwise, \n>> the INSERT INTO ... SELECT is my first choice.\n> \n> Would dblink() help in any way?\n\nIt might if perl wasn't so damned good at this. ;-)\n\nCraig\n\n",
"msg_date": "Wed, 25 Oct 2006 10:28:05 -0700",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "Hi, Craig,\n\nCraig A. James wrote:\n\n>> Would dblink() help in any way?\n> \n> It might if perl wasn't so damned good at this. ;-)\n\nYou know that you can use Perl inside PostgreS via plperl?\n\nHTH,\nMarkus\n",
"msg_date": "Wed, 25 Oct 2006 20:58:50 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best COPY Performance"
}
] |
[
{
"msg_contents": "Hello All-\n \n We have a question about numbers of fields in the select clause of a\nquery and how that affects query speed.\n The following query simply selects the primary key field from a table\nwith 100,000 records:\n \n------------------------------------------------------------\nselect p.opid\nFROM \nott_op p\n\n------------------------------------------------------------\n \n It runs in about half a second (running in PgAdmin... the query run\ntime, not the data retrieval time)\n \n When we change it by adding fields to the select list, it slows down\ndrastically. This version takes about 3 seconds:\n \n------------------------------------------------------------\nselect p.opid, p.opid, p.opid, p.opid, p.opid, p.opid, p.opid, p.opid,\np.opid, p.opid, p.opid\nFROM \nott_op p\n\n------------------------------------------------------------\n \n The more fields we add, the slower it gets.\n \n My guess is that we are missing a configuration setting... any ideas?\n Any help much appreciated.\n \nThanks,\n-Tom\n\n\n\n\n\n\nHello \nAll-\n \n We have a \nquestion about numbers of fields in the select clause of a query and how that \naffects query speed.\n The following \nquery simply selects the primary key field from a table with 100,000 \nrecords:\n \n------------------------------------------------------------\nselect p.opidFROM ott_op p\n------------------------------------------------------------\n \n It runs in about half a second (running in \nPgAdmin... the query run time, not the data retrieval \ntime)\n \n When we change it by adding fields to the select \nlist, it slows down drastically. This version takes about 3 \nseconds:\n \n\n------------------------------------------------------------select \np.opid, p.opid, p.opid, p.opid, p.opid, p.opid, p.opid, p.opid, p.opid, p.opid, \np.opidFROM ott_op p\n------------------------------------------------------------\n \n The more fields we add, the slower it \ngets.\n \n My guess is that we are missing a \nconfiguration setting... any \nideas?\n Any help much \nappreciated.\n \nThanks,\n-Tom",
"msg_date": "Wed, 25 Oct 2006 13:20:42 -0400",
"msg_from": "\"Tom Darci\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "query slows down drastically with increased number of fields"
},
{
"msg_contents": "\"Tom Darci\" <[email protected]> writes:\n> It runs in about half a second (running in PgAdmin... the query run\n> time, not the data retrieval time)\n\nI don't have a lot of faith in PgAdmin's ability to distinguish the two.\nIn fact, for a query such as you have here that's just a bare seqscan,\nit's arguably *all* data retrieval time --- the backend will start\nemitting records almost instantly.\n\nFWIW, in attempting to duplicate your test I get\n\nregression=# explain analyze select f1 from foo;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------\n Seq Scan on foo (cost=0.00..1541.00 rows=100000 width=4) (actual time=0.161..487.192 rows=100000 loops=1)\n Total runtime: 865.454 ms\n(2 rows)\n\nregression=# explain analyze select f1,f1,f1,f1,f1,f1,f1,f1,f1,f1,f1 from foo;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------\n Seq Scan on foo (cost=0.00..1541.00 rows=100000 width=4) (actual time=0.169..603.795 rows=100000 loops=1)\n Total runtime: 984.124 ms\n(2 rows)\n\nNote that this test doesn't perform conversion of the field values to\ntext form, so it's an underestimate of the total time spent by the\nbackend for the real query. But I think almost certainly, your speed\ndifference is all about having to send more values to the client.\nThe costs not measured by the explain-analyze scenario would scale darn\nnear linearly with the number of repetitions of f1.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 26 Oct 2006 17:53:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query slows down drastically with increased number of fields "
},
{
"msg_contents": "i have wondered myself. i wouldn't do it through pgAdmin (not sure what\nthe best test it, but i thought psql from the same machine might be\nbetter--see below). anyway, the funny thing is that if you concatenate\nthem the time drops:\n\n~% time psql -dXXX -hYYY -UZZZ -c\"select consumer_id from consumer\" -o\n/dev/null\npsql -dXXX -hYYY -UZZZ -c\"select consumer_id from consumer\" -o 0.09s\nuser 0.01s system 29% cpu 0.341 total\n\n~% time psql -dXXX -hstgdb0 -p5432 -Umnp -c\"select\nconsumer_id,consumer_id,consumer_id,consumer_id,consumer_id,consumer_id,\nconsumer_id,consumer_id from consumer\" -o /dev/null\npsql -dXXX -hYYY -UZZZ -o /dev/null 0.76s user 0.06s system 45% cpu\n1.796 total\n\n~% time psql -dXXX -hYYY -UZZZ -c\"select\nconsumer_id||consumer_id||consumer_id||consumer_id||consumer_id||consume\nr_id||consumer_id||consumer_id from consumer\" -o /dev/null\npsql -dXXX -hYYY -UZZZ -o /dev/null 0.18s user 0.04s system 20% cpu\n1.061 total\n\n \n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of Tom Darci\n> Sent: Wednesday, October 25, 2006 10:21 AM\n> To: [email protected]\n> Subject: [PERFORM] query slows down drastically with \n> increased number of fields\n> \n> Hello All-\n> \n> We have a question about numbers of fields in the select \n> clause of a query and how that affects query speed.\n> The following query simply selects the primary key field \n> from a table with 100,000 records:\n> \n> ------------------------------------------------------------\n> select p.opid\n> FROM \n> ott_op p\n> \n> ------------------------------------------------------------\n> \n> It runs in about half a second (running in PgAdmin... the \n> query run time, not the data retrieval time)\n> \n> When we change it by adding fields to the select list, it \n> slows down drastically. This version takes about 3 seconds:\n> \n> ------------------------------------------------------------\n> select p.opid, p.opid, p.opid, p.opid, p.opid, p.opid, \n> p.opid, p.opid, p.opid, p.opid, p.opid\n> FROM \n> ott_op p\n> \n> ------------------------------------------------------------\n> \n> The more fields we add, the slower it gets.\n> \n> My guess is that we are missing a configuration setting... \n> any ideas?\n> Any help much appreciated.\n> \n> Thanks,\n> -Tom\n> \n",
"msg_date": "Thu, 26 Oct 2006 15:03:38 -0700",
"msg_from": "\"George Pavlov\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query slows down drastically with increased number of fields"
},
{
"msg_contents": "On Thu, Oct 26, 2006 at 03:03:38PM -0700, George Pavlov wrote:\n> i have wondered myself. i wouldn't do it through pgAdmin (not sure what\n> the best test it, but i thought psql from the same machine might be\n> better--see below). anyway, the funny thing is that if you concatenate\n> them the time drops:\n\nSure. Take a look at the output and you'll see there's less data to\nshove around.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Thu, 26 Oct 2006 17:36:50 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query slows down drastically with increased number of fields"
},
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> On Thu, Oct 26, 2006 at 03:03:38PM -0700, George Pavlov wrote:\n>> anyway, the funny thing is that if you concatenate\n>> them the time drops:\n\n> Sure. Take a look at the output and you'll see there's less data to\n> shove around.\n\nEven more to the point, psql's time to format its standard ASCII-art\noutput is proportional to the number of columns, because it has to\ndetermine how wide to make each one ... if you used one of the other\ndisplay formats such as \"expanded\" or \"unaligned\" mode, there's probably\nbe less difference.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 26 Oct 2006 18:50:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query slows down drastically with increased number of fields "
}
] |
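A quick way to check Tom's point from psql itself: switch off the aligned formatting and discard the rows, so that most of what remains is server time plus wire transfer. This is only a sketch using psql meta-commands (8.x psql assumed); the query is the one from the thread.

```sql
-- \timing reports elapsed time per query; \a toggles unaligned output,
-- skipping the per-column width calculation; \o /dev/null discards the rows.
\timing
\a
\o /dev/null
SELECT p.opid, p.opid, p.opid, p.opid, p.opid FROM ott_op p;
\o
\a
```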
[
{
"msg_contents": "Hi \n\n \n\nPlease help. I have got a postgres 7.3.4 database running on RedHat ES\n3, with 8GB of physical memory in it. The machine is shared with my\napplication which is pretty intensive in doing selects and updates\nagainst the database, but there are usually no more than 10 connections\nto the database at any time.\n\n \n\nDespite having 8GB of RAM on the machine, the machine is frequently\nrunning out of physical memory and swapping which is hurting\nperformance. Have read around on various of the message boards, and I\nsuspect that the SHARED_BUFFERS setting on this server is set way to\nhigh, and that this in fact may be hurting performance. My current\nconfiguration settings are as follows:\n\n \n\nshared_buffers = 393216 # min max_connections*2 or 16, 8KB each\n\nmax_fsm_relations = 10000 # min 10, fsm is free space map, ~40\nbytes\n\nmax_fsm_pages = 160001 # min 1000, fsm is free space map, ~6\n\nbytes\n\nsort_mem = 409600 # min 64, size in KB\n\nvacuum_mem = 81920 # min 1024, size in KB\n\n \n\n From what Ive read, Ive not seen anyone recommend a SHARED_BUFFERS\nsetting higher than 50,000. Is a setting of 393216 going to cause\nsignificant problems, or does this sound about right on an 8GB system,\nbearing in mind that Id like to reserve at least a couple of GB for my\napplication.\n\n \n\nAlso if you have any recommendations regarding effective_cache_size Id\nbe interested as reading around this sounds important as well\n\n \n\nThanks\n\n \n\nMark \n\n \n\n\n\n\n\n\n\n\n\n\n \nHi \n \nPlease help. I have got a postgres 7.3.4\ndatabase running on RedHat ES 3, with 8GB of physical memory in it.\n The machine is shared with my application which is pretty intensive\nin doing selects and updates against the database, but there are usually no\nmore than 10 connections to the database at any time.\n \nDespite having 8GB of RAM on the machine, the machine\nis frequently running out of physical memory and swapping which is hurting\nperformance. Have read around on various of the message boards, and\nI suspect that the SHARED_BUFFERS setting on this server is set way to high,\nand that this in fact may be hurting performance. My\ncurrent configuration settings are as follows:\n \nshared_buffers =\n393216 # min max_connections*2\nor 16, 8KB each\nmax_fsm_relations =\n10000 # min 10, fsm is free space map, ~40\nbytes\nmax_fsm_pages =\n160001 # min 1000, fsm is\nfree space map, ~6\nbytes\nsort_mem =\n409600 \n# min 64, size in KB\nvacuum_mem =\n81920 \n# min 1024, size in KB\n \nFrom what Ive read, Ive not seen anyone recommend a\nSHARED_BUFFERS setting higher than 50,000. Is a setting of 393216\ngoing to cause significant problems, or does this sound about right on an 8GB\nsystem, bearing in mind that Id like to reserve at least a couple of GB for my\napplication.\n \nAlso if you have any recommendations regarding effective_cache_size\nId be interested as reading around this sounds important as well\n \nThanks\n \nMark",
"msg_date": "Wed, 25 Oct 2006 16:47:50 -0400",
"msg_from": "\"Mark Lonsdale\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Configuration Issue ?"
},
{
"msg_contents": "Mark Lonsdale wrote:\n> \n> \n> Hi \n> \n> \n> \n> Please help. I have got a postgres 7.3.4 database running on RedHat ES\n> 3, with 8GB of physical memory in it. The machine is shared with my\n> application which is pretty intensive in doing selects and updates\n> against the database, but there are usually no more than 10 connections\n> to the database at any time.\n> \n> \n> shared_buffers = 393216 # min max_connections*2 or 16, 8KB each\n\nThe above is likely hurting you more than helping you with 7.3.\n\n> \n> max_fsm_relations = 10000 # min 10, fsm is free space map, ~40\n> bytes\n> \n> max_fsm_pages = 160001 # min 1000, fsm is free space map, ~6\n> \n> bytes\n> \n> sort_mem = 409600 # min 64, size in KB\n\nThe above will likely kill you :). Try 4096 or 8192, maybe 16384\ndepending on workload.\n\n> \n> vacuum_mem = 81920 # min 1024, size in KB\n\nThis is fine.\n\n> \n> Also if you have any recommendations regarding effective_cache_size Id\n> be interested as reading around this sounds important as well\n\nAbout 20-25% of available ram for 7.3.\n\n\nThe long and short is you need to upgrade to at least 7.4, preferrably 8.1.\n\nJoshua D. Drake\n\n\n\n> \n> \n> \n> Thanks\n> \n> \n> \n> Mark \n> \n> \n> \n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Wed, 25 Oct 2006 13:51:54 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Issue ?"
}
] |
[
{
"msg_contents": "\n\nHi Josh\n\nThanks for the feedback, that is most usefull. When you said one of the\nsettings was likely killing us, was it all of the settings for\nmax_fsm_relations, max_fsm_pages, and sort_mem or just the setting for\nsort_mem ?\n\nCan you explain why the setting would be killing me :-)\n\nThanks\n\nMark\n\n-----Original Message-----\nFrom: Joshua D. Drake [mailto:[email protected]] \nSent: 25 October 2006 21:52\nTo: Mark Lonsdale\nCc: [email protected]\nSubject: Re: [PERFORM] Configuration Issue ?\n\nMark Lonsdale wrote:\n> \n> \n> Hi \n> \n> \n> \n> Please help. I have got a postgres 7.3.4 database running on RedHat\nES\n> 3, with 8GB of physical memory in it. The machine is shared with my\n> application which is pretty intensive in doing selects and updates\n> against the database, but there are usually no more than 10\nconnections\n> to the database at any time.\n> \n> \n> shared_buffers = 393216 # min max_connections*2 or 16, 8KB\neach\n\nThe above is likely hurting you more than helping you with 7.3.\n\n> \n> max_fsm_relations = 10000 # min 10, fsm is free space map, ~40\n> bytes\n> \n> max_fsm_pages = 160001 # min 1000, fsm is free space map, ~6\n> \n> bytes\n> \n> sort_mem = 409600 # min 64, size in KB\n\nThe above will likely kill you :). Try 4096 or 8192, maybe 16384\ndepending on workload.\n\n> \n> vacuum_mem = 81920 # min 1024, size in KB\n\nThis is fine.\n\n> \n> Also if you have any recommendations regarding effective_cache_size Id\n> be interested as reading around this sounds important as well\n\nAbout 20-25% of available ram for 7.3.\n\n\nThe long and short is you need to upgrade to at least 7.4, preferrably\n8.1.\n\nJoshua D. Drake\n\n\n\n> \n> \n> \n> Thanks\n> \n> \n> \n> Mark \n> \n> \n> \n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Wed, 25 Oct 2006 16:57:09 -0400",
"msg_from": "\"Mark Lonsdale\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Configuration Issue ?"
},
{
"msg_contents": "Mark Lonsdale wrote:\n> \n> Hi Josh\n> \n> Thanks for the feedback, that is most usefull. When you said one of the\n> settings was likely killing us, was it all of the settings for\n> max_fsm_relations, max_fsm_pages, and sort_mem or just the setting for\n> sort_mem ?\n> \n> Can you explain why the setting would be killing me :-)\n\nThe sort_mem is crucial. It's memory *per sort*, which means one query \ncan use several times that amount.\n\n> The long and short is you need to upgrade to at least 7.4, preferrably\n> 8.1.\n\nJoshua means this too. Upgrade to 7.3.16 within the next few days, then \ntest out something more recent. You should see some useful performance \ngains from 8.1.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Wed, 25 Oct 2006 22:12:49 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Issue ?"
},
{
"msg_contents": "Richard Huxton wrote:\n> Mark Lonsdale wrote:\n>>\n>> Hi Josh\n>>\n>> Thanks for the feedback, that is most usefull. When you said one of the\n>> settings was likely killing us, was it all of the settings for\n>> max_fsm_relations, max_fsm_pages, and sort_mem or just the setting for\n>> sort_mem ?\n>>\n>> Can you explain why the setting would be killing me :-)\n> \n> The sort_mem is crucial. It's memory *per sort*, which means one query\n> can use several times that amount.\n\nWorse then that it is:\n\n((sort memory) * (number of sorts)) * (number of connections) = amount\nof ram possible to use.\n\nNow... take the following query:\n\nSELECT * FROM foo\n JOIN bar on (bar.id = foo.id)\n JOIN baz on (baz.id = foo_baz.id)\nORDER BY baz.name, foo.salary;\n\nOver 5 million rows... How much ram you think you just used?\n\n> \n>> The long and short is you need to upgrade to at least 7.4, preferrably\n>> 8.1.\n> \n> Joshua means this too. Upgrade to 7.3.16 within the next few days, then\n> test out something more recent. You should see some useful performance\n> gains from 8.1.\n\nRight. The reason I suggested 7.4 is that he gets VACUUM VERBOSE in a\nreasonable fashion but of course 8.1 is better.\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Wed, 25 Oct 2006 14:16:55 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Issue ?"
}
] |
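Putting numbers on the formula in that last reply, using the settings from the start of this thread (the per-query sort count is illustrative): sort_mem = 409600 KB is about 400MB per sort, so two sorts on each of ten connections could in the worst case reach roughly 8GB, which matches the swapping the original post describes. The setting can also be lowered for just the current session while testing:

```sql
-- Back-of-the-envelope: 400MB per sort x 2 sorts x 10 connections ~ 8GB.
-- Lower it for this session only (7.3 parameter name; work_mem in 8.0+):
SET sort_mem = 8192;   -- 8MB per sort
SHOW sort_mem;
```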
[
{
"msg_contents": "\n\nThanks guys, I think we'll certainly look to get the app certified with\n7.4 and 8.x but that may take a little while. In the interim, Im\nthinking of making the following changes then:-\n\nChange Shared_buffers from 393216 to 80,000 ( ~15% of 4GB of RAM.\nServer is 8GB but I want to leave space for App as well )\n\nSet my effective_cache_size to 125,000 ( ~25% of 4GB of RAM )\n\nSet my sort_mem to 8192\n\nDo those numbers look a bit better? Will probably see if we can make\nthese changes asap as the server is struggling a bit now, which doesn't\nreally make sense given how much memory is in it.\n\nReally appreciate your help and fast turnaround on this\n\nMark\n\n-----Original Message-----\nFrom: Joshua D. Drake [mailto:[email protected]] \nSent: 25 October 2006 22:17\nTo: Richard Huxton\nCc: Mark Lonsdale; [email protected]\nSubject: Re: [PERFORM] Configuration Issue ?\n\nRichard Huxton wrote:\n> Mark Lonsdale wrote:\n>>\n>> Hi Josh\n>>\n>> Thanks for the feedback, that is most usefull. When you said one of\nthe\n>> settings was likely killing us, was it all of the settings for\n>> max_fsm_relations, max_fsm_pages, and sort_mem or just the setting\nfor\n>> sort_mem ?\n>>\n>> Can you explain why the setting would be killing me :-)\n> \n> The sort_mem is crucial. It's memory *per sort*, which means one query\n> can use several times that amount.\n\nWorse then that it is:\n\n((sort memory) * (number of sorts)) * (number of connections) = amount\nof ram possible to use.\n\nNow... take the following query:\n\nSELECT * FROM foo\n JOIN bar on (bar.id = foo.id)\n JOIN baz on (baz.id = foo_baz.id)\nORDER BY baz.name, foo.salary;\n\nOver 5 million rows... How much ram you think you just used?\n\n> \n>> The long and short is you need to upgrade to at least 7.4,\npreferrably\n>> 8.1.\n> \n> Joshua means this too. Upgrade to 7.3.16 within the next few days,\nthen\n> test out something more recent. You should see some useful performance\n> gains from 8.1.\n\nRight. The reason I suggested 7.4 is that he gets VACUUM VERBOSE in a\nreasonable fashion but of course 8.1 is better.\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Wed, 25 Oct 2006 17:31:29 -0400",
"msg_from": "\"Mark Lonsdale\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Configuration Issue ?"
},
{
"msg_contents": "Mark Lonsdale wrote:\n> \n> Thanks guys, I think we'll certainly look to get the app certified with\n> 7.4 and 8.x but that may take a little while. In the interim, Im\n> thinking of making the following changes then:-\n> \n> Change Shared_buffers from 393216 to 80,000 ( ~15% of 4GB of RAM.\n> Server is 8GB but I want to leave space for App as well )\n\nYou likely run into issues with anything over 16384. I have never seen a\nbenefit from shared_buffers over 12k or so with 7.3.\n\n> \n> Set my effective_cache_size to 125,000 ( ~25% of 4GB of RAM )\n> \n> Set my sort_mem to 8192\n\n:)\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n> Do those numbers look a bit better? Will probably see if we can make\n> these changes asap as the server is struggling a bit now, which doesn't\n> really make sense given how much memory is in it.\n> \n> Really appreciate your help and fast turnaround on this\n> \n> Mark\n> \n> -----Original Message-----\n> From: Joshua D. Drake [mailto:[email protected]] \n> Sent: 25 October 2006 22:17\n> To: Richard Huxton\n> Cc: Mark Lonsdale; [email protected]\n> Subject: Re: [PERFORM] Configuration Issue ?\n> \n> Richard Huxton wrote:\n>> Mark Lonsdale wrote:\n>>> Hi Josh\n>>>\n>>> Thanks for the feedback, that is most usefull. When you said one of\n> the\n>>> settings was likely killing us, was it all of the settings for\n>>> max_fsm_relations, max_fsm_pages, and sort_mem or just the setting\n> for\n>>> sort_mem ?\n>>>\n>>> Can you explain why the setting would be killing me :-)\n>> The sort_mem is crucial. It's memory *per sort*, which means one query\n>> can use several times that amount.\n> \n> Worse then that it is:\n> \n> ((sort memory) * (number of sorts)) * (number of connections) = amount\n> of ram possible to use.\n> \n> Now... take the following query:\n> \n> SELECT * FROM foo\n> JOIN bar on (bar.id = foo.id)\n> JOIN baz on (baz.id = foo_baz.id)\n> ORDER BY baz.name, foo.salary;\n> \n> Over 5 million rows... How much ram you think you just used?\n> \n>>> The long and short is you need to upgrade to at least 7.4,\n> preferrably\n>>> 8.1.\n>> Joshua means this too. Upgrade to 7.3.16 within the next few days,\n> then\n>> test out something more recent. You should see some useful performance\n>> gains from 8.1.\n> \n> Right. The reason I suggested 7.4 is that he gets VACUUM VERBOSE in a\n> reasonable fashion but of course 8.1 is better.\n> \n> Sincerely,\n> \n> Joshua D. Drake\n> \n> \n> \n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Wed, 25 Oct 2006 14:42:25 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Issue ?"
},
{
"msg_contents": "On Wed, Oct 25, 2006 at 05:31:29PM -0400, Mark Lonsdale wrote:\n> Set my sort_mem to 8192\n\nYou really need to look at what your workload is before trying to tweak\nsort_mem. With 8G of memory, sort_mem=400000 (~400MB) with only 10\nactive connections might be a good setting. It's usually better to get a\nsort to fit into memory than spill to disk. Since you never mentioned\nwhat kind of workload you have or how many active connections there are,\nit's pretty much impossible to make a recommendation on that setting.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Thu, 26 Oct 2006 14:08:38 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration Issue ?"
}
] |
[
{
"msg_contents": "Hullo, here's one of those dreadful touchy-feely hand-waving problems.\nOur 5-node 8.1.3 Slony system has just started taking /much/ longer to\nVACUUM ANALYZE..\n\nThe data set has not increased more than usual (nightly backups stand\nat 1.3GB, growing by 10MB per day), and no configuration has changed on\nthe machines.\n\nNodes 2 and 3 take only the tables necessary to run our search (10 out\nof the full 130) and are much lighter (only 7GB on disk cf. 30GB for\nthe full master) , yet the nightly VACUUM FULL has jumped from 2 hours\nto 4 in the space of one day!\n\nLike I say, no config changes, no reboots / postmaster restarts, no extra processes, and every machine has a comfortable overhead of free page slots + relations.\n\n From a few days ago:\n2006-10-20 03:04:29 UTC INFO: \"Allocation\": found 786856 removable, 4933448 nonremovable row versions in 53461 pages\n2006-10-20 03:04:29 UTC DETAIL: 0 dead row versions cannot be removed yet.\n2006-10-20 03:07:32 UTC INFO: index \"allocation_pkey\" now contains 4933448 row versions in 93918 pages\n2006-10-20 03:07:32 UTC DETAIL: 786856 index row versions were removed.\n2006-10-20 03:14:21 UTC INFO: index \"ix_date\" now contains 4933448 row versions in 74455 pages\n2006-10-20 03:14:21 UTC DETAIL: 786856 index row versions were removed.\n2006-10-20 03:22:32 UTC INFO: index \"ix_dateprice\" now contains 4933448 row versions in 81313 pages\n2006-10-20 03:22:32 UTC DETAIL: 786856 index row versions were removed.\n2006-10-20 03:24:41 UTC INFO: index \"ix_dateroom\" now contains 4933448 row versions in 44610 pages\n2006-10-20 03:24:41 UTC DETAIL: 786856 index row versions were removed.\n2006-10-20 03:27:52 UTC INFO: index \"ix_room\" now contains 4933448 row versions in 35415 pages\n2006-10-20 03:27:52 UTC DETAIL: 786856 index row versions were removed.\n2006-10-20 03:31:43 UTC INFO: \"Allocation\": moved 348324 row versions, truncated 53461 to 46107 pages\n2006-10-20 03:31:43 UTC DETAIL: CPU 4.72s/17.63u sec elapsed 230.81 sec.\n\n From last night:\n2006-10-26 01:00:30 UTC INFO: vacuuming \"public.Allocation\"\n2006-10-26 01:00:36 UTC INFO: \"Allocation\": found 774057 removable, 4979938 nonremovable row versions in 53777 pages\n2006-10-26 01:00:36 UTC DETAIL: 0 dead row versions cannot be removed yet.\n2006-10-26 01:06:18 UTC INFO: index \"allocation_pkey\" now contains 4979938 row versions in 100800 pages\n2006-10-26 01:06:18 UTC DETAIL: 774057 index row versions were removed.\n2006-10-26 01:19:22 UTC INFO: index \"ix_date\" now contains 4979938 row versions in 81630 pages\n2006-10-26 01:19:22 UTC DETAIL: 774057 index row versions were removed.\n2006-10-26 01:35:17 UTC INFO: index \"ix_dateprice\" now contains 4979938 row versions in 87750 pages\n2006-10-26 01:35:17 UTC DETAIL: 774057 index row versions were removed.\n2006-10-26 01:41:27 UTC INFO: index \"ix_dateroom\" now contains 4979938 row versions in 46320 pages\n2006-10-26 01:41:27 UTC DETAIL: 774057 index row versions were removed.\n2006-10-26 01:48:18 UTC INFO: index \"ix_room\" now contains 4979938 row versions in 36513 pages\n2006-10-26 01:48:18 UTC DETAIL: 774057 index row versions were removed.\n2006-10-26 01:56:35 UTC INFO: \"Allocation\": moved 322744 row versions, truncated 53777 to 46542 pages\n2006-10-26 01:56:35 UTC DETAIL: CPU 4.21s/15.90u sec elapsed 496.30 sec.\n\nAs you can see, the amount of system + user time for these runs are comparable, but the amount of real time has more than doubled. 
\n\nThis isn't even a case for making the cost-based delay vacuum more aggressive because I already have vacuum_cost_delay = 0 on all machines to make the vacuum run as quickly as possible.\n\nAny ideas warmly received! :)\n\nCheers,\nGavin.\n\n",
"msg_date": "Thu, 26 Oct 2006 09:58:27 +0100",
"msg_from": "Gavin Hamill <[email protected]>",
"msg_from_op": true,
"msg_subject": "VACUUMs take twice as long across all nodes"
},
{
"msg_contents": "Gavin Hamill <[email protected]> writes:\n> Nodes 2 and 3 take only the tables necessary to run our search (10 out\n> of the full 130) and are much lighter (only 7GB on disk cf. 30GB for\n> the full master) , yet the nightly VACUUM FULL has jumped from 2 hours\n> to 4 in the space of one day!\n\nI guess the most useful question to ask is \"why are you doing VACUUM FULL?\"\nPlain VACUUM should be considerably faster, and for the level of row\nturnover shown by your log, there doesn't seem to be a reason to use FULL.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 26 Oct 2006 10:47:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: VACUUMs take twice as long across all nodes "
},
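For reference, the two commands being contrasted here, written against the table name from Gavin's log (running them on that table is purely illustrative):

    -- plain VACUUM: marks dead rows reusable and runs alongside normal
    -- traffic, since it takes no exclusive lock
    VACUUM VERBOSE ANALYZE "Allocation";

    -- VACUUM FULL: physically compacts the heap by moving rows, but holds an
    -- exclusive lock for the whole run and adds extra churn to the indexes
    VACUUM FULL VERBOSE ANALYZE "Allocation";

For the modest row turnover shown in the logs above, the first form is normally sufficient.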
{
"msg_contents": "On Thu, 26 Oct 2006 10:47:21 -0400\nTom Lane <[email protected]> wrote:\n\n> Gavin Hamill <[email protected]> writes:\n> > Nodes 2 and 3 take only the tables necessary to run our search (10\n> > out of the full 130) and are much lighter (only 7GB on disk cf.\n> > 30GB for the full master) , yet the nightly VACUUM FULL has jumped\n> > from 2 hours to 4 in the space of one day!\n> \n> I guess the most useful question to ask is \"why are you doing VACUUM\n> FULL?\" Plain VACUUM should be considerably faster, and for the level\n> of row turnover shown by your log, there doesn't seem to be a reason\n> to use FULL.\n\nI do FULL on the 'light' clients simply because 'I can'. The example\nposted was a poor choice - the other tables have a larger churn.\n\nAnyway, once it starts, the load balancer takes it out of rotation so\nno love is lost.\n\nThe same behaviour is shown on the 'heavy' clients (master + 2 slaves)\nwhich take all tables - although I cannot afford to VACUUM FULL on\nthere, the usual VACUUM ANALYZE has begun to take vastly more time\nsince yesterday than in the many previous months we've been using pg.\n\nCheers,\nGavin.\n",
"msg_date": "Thu, 26 Oct 2006 16:06:09 +0100",
"msg_from": "Gavin Hamill <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: VACUUMs take twice as long across all nodes"
},
{
"msg_contents": "On Thu, Oct 26, 2006 at 04:06:09PM +0100, Gavin Hamill wrote:\n> On Thu, 26 Oct 2006 10:47:21 -0400\n> Tom Lane <[email protected]> wrote:\n> \n> > Gavin Hamill <[email protected]> writes:\n> > > Nodes 2 and 3 take only the tables necessary to run our search (10\n> > > out of the full 130) and are much lighter (only 7GB on disk cf.\n> > > 30GB for the full master) , yet the nightly VACUUM FULL has jumped\n> > > from 2 hours to 4 in the space of one day!\n> > \n> > I guess the most useful question to ask is \"why are you doing VACUUM\n> > FULL?\" Plain VACUUM should be considerably faster, and for the level\n> > of row turnover shown by your log, there doesn't seem to be a reason\n> > to use FULL.\n> \n> I do FULL on the 'light' clients simply because 'I can'. The example\n> posted was a poor choice - the other tables have a larger churn.\n> \n> Anyway, once it starts, the load balancer takes it out of rotation so\n> no love is lost.\n> \n> The same behaviour is shown on the 'heavy' clients (master + 2 slaves)\n> which take all tables - although I cannot afford to VACUUM FULL on\n> there, the usual VACUUM ANALYZE has begun to take vastly more time\n> since yesterday than in the many previous months we've been using pg.\n\nAre you sure that there's nothing else happening on the machine that\ncould affect the vacuum times? Like, say a backup? Or perhaps updates\ncoming in from Slony that didn't used to be there?\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Thu, 26 Oct 2006 14:17:29 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: VACUUMs take twice as long across all nodes"
},
{
"msg_contents": "On Thu, 26 Oct 2006 14:17:29 -0500\n\"Jim C. Nasby\" <[email protected]> wrote:\n\n> Are you sure that there's nothing else happening on the machine that\n> could affect the vacuum times? Like, say a backup? Or perhaps updates\n> coming in from Slony that didn't used to be there?\n\nI'm absolutely certain. The backups run from only one slave, given that\nit is a full copy of node 1. Our overnight traffic has not increased\nany, and the nightly backups show that the overall size of the DB has\nnot increased more than usual growth.\n\nPlus, I have fairly verbose logging, and it's not showing anything out\nof the ordinary. \n\nLike I said, it's one of those awful hypothesis/hand-waving problems :)\n\nCheers,\nGavin.\n",
"msg_date": "Thu, 26 Oct 2006 21:35:56 +0100",
"msg_from": "Gavin Hamill <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: VACUUMs take twice as long across all nodes"
},
{
"msg_contents": "On Thu, Oct 26, 2006 at 09:35:56PM +0100, Gavin Hamill wrote:\n> On Thu, 26 Oct 2006 14:17:29 -0500\n> \"Jim C. Nasby\" <[email protected]> wrote:\n> \n> > Are you sure that there's nothing else happening on the machine that\n> > could affect the vacuum times? Like, say a backup? Or perhaps updates\n> > coming in from Slony that didn't used to be there?\n> \n> I'm absolutely certain. The backups run from only one slave, given that\n> it is a full copy of node 1. Our overnight traffic has not increased\n> any, and the nightly backups show that the overall size of the DB has\n> not increased more than usual growth.\n> \n> Plus, I have fairly verbose logging, and it's not showing anything out\n> of the ordinary. \n> \n> Like I said, it's one of those awful hypothesis/hand-waving problems :)\n\nWell, the fact that it's happening on all your nodes leads me to think\nSlony is somehow involved. Perhaps it suddenly decided to change how\noften it's issuing syncs? I know it issues vacuums as well, so maybe\nthat's got something to do with it... (though I'm guessing you've\nalready looked in pg_stat_activity/logs to see if anything\ncorrelates...) Still, it might be worth asking about this on the slony\nlist...\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Thu, 26 Oct 2006 16:59:36 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: VACUUMs take twice as long across all nodes"
},
{
"msg_contents": "On Thu, Oct 26, 2006 at 09:35:56PM +0100, Gavin Hamill wrote:\n> \n> I'm absolutely certain. The backups run from only one slave, given that\n> it is a full copy of node 1. Our overnight traffic has not increased\n> any, and the nightly backups show that the overall size of the DB has\n> not increased more than usual growth.\n\nA couple things from your posts:\n\n1.\tDon't do VACUUM FULL, please. It takes longer, and blocks\nother things while it's going on, which might mean you're having\ntable bloat in various slony-related tables.\n\n2.\tAre your slony logs showing increased time too? Are your\ntargets getting further behind?\n\n3.\tYour backups \"from the slave\" aren't done with pg_dump,\nright?\n\nBut I suspect Slony has a role here, too. I'd look carefully at the\nslony tables -- especially the sl_log and pg_listen things, which\nboth are implicated.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\n\"The year's penultimate month\" is not in truth a good way of saying\nNovember.\n\t\t--H.W. Fowler\n",
"msg_date": "Thu, 26 Oct 2006 18:09:37 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: VACUUMs take twice as long across all nodes"
},
{
"msg_contents": "On Thu, 26 Oct 2006 18:09:37 -0400\nAndrew Sullivan <[email protected]> wrote:\n\n> On Thu, Oct 26, 2006 at 09:35:56PM +0100, Gavin Hamill wrote:\n> > \n> > I'm absolutely certain. The backups run from only one slave, given\n> > that it is a full copy of node 1. Our overnight traffic has not\n> > increased any, and the nightly backups show that the overall size\n> > of the DB has not increased more than usual growth.\n> \n> A couple things from your posts:\n> \n> 1.\tDon't do VACUUM FULL, please. It takes longer, and blocks\n> other things while it's going on, which might mean you're having\n> table bloat in various slony-related tables.\n\nI know it takes longer, I know it blocks. It's never been a problem\n\n> 2.\tAre your slony logs showing increased time too? Are your\n> targets getting further behind?\n\nNope, the slaves are keeping up just great - once the vacuums are\nfinished, all machines are running at about 50%-75% of full load in\nduty.\n \n> 3.\tYour backups \"from the slave\" aren't done with pg_dump,\n> right?\n\nEm, they are indeed. I assumed that MVCC would ensure I got a\nconsistent snapshot from the instant when pg_dump began. Am I wrong?\n\n> But I suspect Slony has a role here, too. I'd look carefully at the\n> slony tables -- especially the sl_log and pg_listen things, which\n> both are implicated.\n\nSlony is an easy target to point the finger at, so I tried a\nlittle test. I took one of the 'light' slaves (only 10 tables..),\nstopped its slon daemon, removed it from the load-balancer, and\nrestarted postgres so there were no active connections.\n\nWith the removal of both replication overhead and normal queries from\nclients, the machine should be completely clear to run at full tilt.\n\nThen I launched a 'vacuum verbose' and I was able to see exactly the\nsame poor speeds as before, even with vacuum_cost_delay = 0 as it was\npreviously...\n\n2006-10-27 08:37:12 UTC INFO: vacuuming \"public.Allocation\"\n2006-10-27 08:37:21 UTC INFO: \"Allocation\": found 56449 removable, 4989360 nonremovable row versions in 47158 pages\n2006-10-27 08:37:21 UTC DETAIL: 0 dead row versions cannot be removed yet.\n Nonremovable row versions range from 64 to 72 bytes long.\n There were 1 unused item pointers.\n Total free space (including removable row versions) is 5960056 bytes.\n 13 pages are or will become empty, including 0 at the end of the table.\n 5258 pages containing 4282736 free bytes are potential move destinations.\n CPU 0.16s/0.07u sec elapsed 9.55 sec.\n2006-10-27 08:44:25 UTC INFO: index \"allocation_pkey\" now contains 4989360 row versions in 102198 pages\n2006-10-27 08:44:25 UTC DETAIL: 56449 index row versions were removed.\n 1371 index pages have been deleted, 1371 are currently reusable.\n CPU 1.02s/0.38u sec elapsed 423.22 sec.\n\nIf I've read this correctly, then on an otherwise idle system, it has taken seven minutes to perform 1.4 seconds-worth of actual work. Surely that's nonsense? \n\nThat would suggest that the issue is poor IO; \"vmstat 5\" output during this run wasn't ripping performance - maybe averaging 3MB/sec in and out. 
\n\nI know the peak IO on this machine is rather much better than that:\n\njoltpg2:/root# dd if=/dev/zero of=/tmp/t bs=1024k count=1000\n1000+0 records in\n1000+0 records out\n1048576000 bytes (1.0 GB) copied, 8.02106 seconds, 131 MB/s\n\nThe test \"system\" is one CPU's-worth (two cores) of a 4 x Opteron 880 machine split up by Xen, and I can confirm the IO on the other Xen partitions was minimal.\n\nI appreciate the time, help and advice people are offering, however I really don't think Slony is the culprit here.\n\nCheers,\nGavin.\n",
"msg_date": "Fri, 27 Oct 2006 10:20:25 +0100",
"msg_from": "Gavin Hamill <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: VACUUMs take twice as long across all nodes"
},
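One way to see where those 400-plus seconds are going is to compare heap size with index size; a sketch against the relation names in Gavin's output (relpages and reltuples are only estimates, refreshed by VACUUM/ANALYZE):

    SELECT relname, relkind, relpages, reltuples
    FROM pg_class
    WHERE relname IN ('Allocation', 'allocation_pkey', 'ix_date',
                      'ix_dateprice', 'ix_dateroom', 'ix_room')
    ORDER BY relpages DESC;

In the log above the heap is roughly 47,000 pages while allocation_pkey alone is over 100,000 pages, which already suggests the indexes, not the table, are where the time is spent.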
{
"msg_contents": "Gavin Hamill <[email protected]> writes:\n> 2006-10-27 08:37:12 UTC INFO: vacuuming \"public.Allocation\"\n> 2006-10-27 08:37:21 UTC INFO: \"Allocation\": found 56449 removable, 4989360 nonremovable row versions in 47158 pages\n> 2006-10-27 08:37:21 UTC DETAIL: 0 dead row versions cannot be removed yet.\n> Nonremovable row versions range from 64 to 72 bytes long.\n> There were 1 unused item pointers.\n> Total free space (including removable row versions) is 5960056 bytes.\n> 13 pages are or will become empty, including 0 at the end of the table.\n> 5258 pages containing 4282736 free bytes are potential move destinations.\n> CPU 0.16s/0.07u sec elapsed 9.55 sec.\n> 2006-10-27 08:44:25 UTC INFO: index \"allocation_pkey\" now contains 4989360 row versions in 102198 pages\n> 2006-10-27 08:44:25 UTC DETAIL: 56449 index row versions were removed.\n> 1371 index pages have been deleted, 1371 are currently reusable.\n> CPU 1.02s/0.38u sec elapsed 423.22 sec.\n\nSo the time is all in index vacuuming, eh? I think what's happening is\nthat the physical order of the index is degrading over time, and so the\nvacuum scan takes longer due to more seeking. Can you afford to do a\nREINDEX? If this theory is correct that should drive the time back\ndown.\n\n8.2 has rewritten btree index vacuuming code that scans the index in\nphysical not logical order, so this problem should largely go away in\n8.2, but in existing releases I can't see much you can do about it\nexcept REINDEX when things get slow.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 27 Oct 2006 14:07:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: VACUUMs take twice as long across all nodes "
},
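A sketch of the suggested fix, again using the names from the log; REINDEX locks out writes to the table while it runs, so it wants a quiet period or a node that is out of rotation:

    REINDEX INDEX allocation_pkey;    -- one index at a time
    REINDEX TABLE "Allocation";       -- or every index on the table in one go

    -- a subsequent plain VACUUM should show the index passes finishing far
    -- faster than the 400+ second runs above
    VACUUM VERBOSE "Allocation";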
{
"msg_contents": "On Fri, 27 Oct 2006 14:07:43 -0400\nTom Lane <[email protected]> wrote:\n\n> So the time is all in index vacuuming, eh? I think what's happening\n> is that the physical order of the index is degrading over time, and\n> so the vacuum scan takes longer due to more seeking. Can you afford\n> to do a REINDEX? If this theory is correct that should drive the\n> time back down.\n\nTom,\n\nYou wonderful, wonderful man.\n\nI tried a test reindex on \"Allocation\", and noticed a vacuum had\nturbo-charged... then reindexed the whole db, did a vacuum, and lo! The\nwhole db had turbo-charged :)\n\nWhen I say 'turbo-charged', I mean it. The vacuum times have dropped to\n20% of what we were seeing even before it 'got much slower a\ncouple of days ago.'\n\nIt sucks that the new reindex code is only in 8.2, but now that I know\nthis is an issue in 8.1 I can plan for it.\n\nThanks so much :)\n\nCheers,\nGavin.\n",
"msg_date": "Fri, 27 Oct 2006 23:19:20 +0100",
"msg_from": "Gavin Hamill <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: VACUUMs take twice as long across all nodes"
},
{
"msg_contents": "Ok, I see Tom has diagnosed your problem. Here are more hints\nanyway:\n\nOn Fri, Oct 27, 2006 at 10:20:25AM +0100, Gavin Hamill wrote:\n> > table bloat in various slony-related tables.\n> \n> I know it takes longer, I know it blocks. It's never been a problem\n\nThe problem from a VACUUM FULL is that its taking longer causes the\nvacuums on (especially) pg_listen and sl_log_[n] to be unable to\nrecover as many rows (because there's an older transaction around). \nThis is a significant area of vulnerability in Slony. You really\nhave to readjust your vacuum assumptions when using Slony.\n\n> > 3.\tYour backups \"from the slave\" aren't done with pg_dump,\n> > right?\n> \n> Em, they are indeed. I assumed that MVCC would ensure I got a\n> consistent snapshot from the instant when pg_dump began. Am I wrong?\n\nThat's not the problem. The problem is that when you restore the\ndump of the slave, you'll have garbage. Slony fools with the\ncatalogs on the replicas. This is documented in the Slony docs, but\nprobably not in sufficiently large-type bold italics in red with the\n<blink> tag set as would be appropriate for such a huge gotcha. \nAnyway, don't use pg_dump on a replica. There's a tool that comes\nwith slony that will allow you to take consistent, restorable dumps\nfrom replicas if you like. (And you might as well throw away the\ndumpfiles from the replicas that you have. They won't work when you\nrestore them.)\n\nA\n\n-- \nAndrew Sullivan | [email protected]\n\"The year's penultimate month\" is not in truth a good way of saying\nNovember.\n\t\t--H.W. Fowler\n",
"msg_date": "Sun, 29 Oct 2006 09:58:25 -0500",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: VACUUMs take twice as long across all nodes"
},
{
"msg_contents": "\nHi :)\n\n[pg_dump from a Slony replica]\n\n> That's not the problem. The problem is that when you restore the\n> dump of the slave, you'll have garbage. Slony fools with the\n> catalogs on the replicas. \n\n> (And you might as well throw away the\n> dumpfiles from the replicas that you have. They won't work when you\n> restore them.)\n\nThis is interesting, but I don't understand.. We've done a full restore\nfrom one of these pg_dump backups before now and it worked just great.\n\nSure I had to DROP SCHEMA _replication CASCADE to clear out all the\nslony-specific triggers etc., but the new-master ran fine, as did\nfiring up new replication to the other nodes :)\n\nWas I just lucky?\n\nCheers,\nGavin.\n",
"msg_date": "Sun, 29 Oct 2006 15:08:26 +0000",
"msg_from": "Gavin Hamill <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: VACUUMs take twice as long across all nodes"
},
{
"msg_contents": "On Sun, Oct 29, 2006 at 03:08:26PM +0000, Gavin Hamill wrote:\n> \n> This is interesting, but I don't understand.. We've done a full restore\n> from one of these pg_dump backups before now and it worked just great.\n> \n> Sure I had to DROP SCHEMA _replication CASCADE to clear out all the\n> slony-specific triggers etc., but the new-master ran fine, as did\n> firing up new replication to the other nodes :)\n> \n> Was I just lucky?\n\nYes. Slony alters data in the system catalog for a number of\ndatabase objects on the replicas. It does this in order to prevent,\nfor example, triggers from firing both on the origin and the replica. \n(That is the one that usually bites people hardest, but IIRC it's not\nthe only such hack in there.) This was a bit of a dirty hack that\nwas supposed to be cleaned up, but that hasn't been yet. In general,\nyou can't rely on a pg_dump of a replica giving you a dump that, when\nrestored, actually works.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nEverything that happens in the world happens at some place.\n\t\t--Jane Jacobs \n",
"msg_date": "Sun, 29 Oct 2006 10:34:04 -0500",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: VACUUMs take twice as long across all nodes"
},
{
"msg_contents": "Am Sonntag, den 29.10.2006, 10:34 -0500 schrieb Andrew Sullivan:\n> On Sun, Oct 29, 2006 at 03:08:26PM +0000, Gavin Hamill wrote:\n> > \n> > This is interesting, but I don't understand.. We've done a full restore\n> > from one of these pg_dump backups before now and it worked just great.\n> > \n> > Sure I had to DROP SCHEMA _replication CASCADE to clear out all the\n> > slony-specific triggers etc., but the new-master ran fine, as did\n> > firing up new replication to the other nodes :)\n> > \n> > Was I just lucky?\n> \n> Yes. Slony alters data in the system catalog for a number of\n> database objects on the replicas. It does this in order to prevent,\n> for example, triggers from firing both on the origin and the replica. \n> (That is the one that usually bites people hardest, but IIRC it's not\n> the only such hack in there.) This was a bit of a dirty hack that\n> was supposed to be cleaned up, but that hasn't been yet. In general,\n> you can't rely on a pg_dump of a replica giving you a dump that, when\n> restored, actually works.\n\nActually, you need to get the schema from the master node, and can take\nthe data from a slave. In mixing dumps like that, you must realize that\nthere are two seperate parts in the schema dump: \"table definitions\" and\n\"constraints\". Do get a restorable backup you need to put the table\ndefinitions stuff before your data, and the constraints after the data\ncopy.\n\nAndreas\n\n> \n> A\n>",
"msg_date": "Sun, 29 Oct 2006 17:24:33 +0100",
"msg_from": "Andreas Kostyrka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: VACUUMs take twice as long across all nodes"
},
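A rough sketch of the split Andreas describes; the host names are placeholders, and dividing schema.sql into "tables" and "constraints" pieces is left as a manual step, since 8.1-era pg_dump has no switch for that split:

    # table definitions and constraints, taken from the Slony origin:
    pg_dump -s -h origin-host -f schema.sql mydb
    # data only, taken from the replica:
    pg_dump -a -h replica-host -f data.sql mydb

    # restore order: table definitions, then data, then constraints/indexes
    psql -d newdb -f tables.sql
    psql -d newdb -f data.sql
    psql -d newdb -f constraints.sql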
{
"msg_contents": "On Sun, Oct 29, 2006 at 05:24:33PM +0100, Andreas Kostyrka wrote:\n> Actually, you need to get the schema from the master node, and can take\n> the data from a slave. In mixing dumps like that, you must realize that\n> there are two seperate parts in the schema dump: \"table definitions\" and\n> \"constraints\". Do get a restorable backup you need to put the table\n> definitions stuff before your data, and the constraints after the data\n> copy.\n\nThis will work, yes, but you don't get a real point-in-time dump this\nway. (In any case, we're off the -performance charter now, so if\nanyone wants to pursue this, I urge you to take it to the Slony\nlist.)\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nWindows is a platform without soap, where rats run around \nin open sewers.\n\t\t--Daniel Eran\n",
"msg_date": "Sun, 29 Oct 2006 11:43:19 -0500",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: VACUUMs take twice as long across all nodes"
},
{
"msg_contents": "Am Sonntag, den 29.10.2006, 11:43 -0500 schrieb Andrew Sullivan:\n> On Sun, Oct 29, 2006 at 05:24:33PM +0100, Andreas Kostyrka wrote:\n> > Actually, you need to get the schema from the master node, and can take\n> > the data from a slave. In mixing dumps like that, you must realize that\n> > there are two seperate parts in the schema dump: \"table definitions\" and\n> > \"constraints\". Do get a restorable backup you need to put the table\n> > definitions stuff before your data, and the constraints after the data\n> > copy.\n> \n> This will work, yes, but you don't get a real point-in-time dump this\nBut one does, because one can dump all data in one pg_dump call. And\nwith slony enabled, schema changes won't happen by mistake, they tend to\nbe a thing for the Slony High Priest, nothing for mere developers ;)\n\nAndreas",
"msg_date": "Sun, 29 Oct 2006 18:12:09 +0100",
"msg_from": "Andreas Kostyrka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: VACUUMs take twice as long across all nodes"
},
{
"msg_contents": "On Oct 27, 2006, at 2:07 PM, Tom Lane wrote:\n\n> 8.2, but in existing releases I can't see much you can do about it\n> except REINDEX when things get slow.\n\nThis will be so nice for me. I have one huge table with a massive \namount of churn and bulk deletes. I have to reindex it once every \nother month. It takes about 60 to 75 minutes per index (times two \nindexes) else I'd do it monthly.\n\nIt shaves nearly 1/3 of the relpages off of the index size.",
"msg_date": "Thu, 2 Nov 2006 16:41:35 -0500",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: VACUUMs take twice as long across all nodes "
}
] |
[
{
"msg_contents": "I seem to remember Oleg/Teodor recently reporting a problem with Windows\nhanging on a multi-processor machine, during a heavy load operation.\n\nIn their case it seemed like a vacuum would allow it to wake up. They\ndid commit a patch that did not make it into the last minor version for\nlack of testing.\n\nPerhaps you could see if that patch might work for you, which would also\nhelp ease the argument against the patches lack of testing.\n\n\t-rocco\n",
"msg_date": "Thu, 26 Oct 2006 07:59:24 -0400",
"msg_from": "\"Rocco Altier\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: commit so slow program looks frozen"
},
{
"msg_contents": "This is pretty interesting - where can I read more on this? Windows isn't \nactually hanging, one single command line window is - from its behaviour, it \nlooks like the TCL postgresql package is waiting for pg_exec to come back \nfrom the commit (I believe the commit has actually gone through).\n\nIt could even be that there's something wrong with the TCL package, but from \nmy understanding it is one of the most complete interfaces out there - which \nis weird, because TCL seems to be the most unpopular language in the \ncommunity.\n\nCaro\n\n\n\"\"Rocco Altier\"\" <[email protected]> wrote in message \nnews:[email protected]...\n>I seem to remember Oleg/Teodor recently reporting a problem with Windows\n> hanging on a multi-processor machine, during a heavy load operation.\n>\n> In their case it seemed like a vacuum would allow it to wake up. They\n> did commit a patch that did not make it into the last minor version for\n> lack of testing.\n>\n> Perhaps you could see if that patch might work for you, which would also\n> help ease the argument against the patches lack of testing.\n>\n> -rocco\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\n\n",
"msg_date": "Thu, 26 Oct 2006 10:48:00 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: commit so slow program looks frozen"
},
{
"msg_contents": "On 10/26/06, Carlo Stonebanks <[email protected]> wrote:\n> This is pretty interesting - where can I read more on this? Windows isn't\n> actually hanging, one single command line window is - from its behaviour, it\n> looks like the TCL postgresql package is waiting for pg_exec to come back\n> from the commit (I believe the commit has actually gone through).\n>\n> It could even be that there's something wrong with the TCL package, but from\n> my understanding it is one of the most complete interfaces out there - which\n> is weird, because TCL seems to be the most unpopular language in the\n> community.\n\nwhen it happens, make sure to query pg_locks and see what is going on\nthere lock issues are not supposed to manifest on a commit, which\nreleases locks, but you never know. There have been reports of\ninsonsistent lock ups on windows (espeically multi-processor) which\nyou might be experiencing. Make sure you have the very latest version\nof pg 8.1.x. Also consider checking out 8.2 and see if you can\nreproduce the behavior there...this will require compiling postgresql.\n\nmerlin\n",
"msg_date": "Thu, 26 Oct 2006 11:06:05 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: commit so slow program looks frozen"
},
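For reference, a query along the lines Merlin suggests, written with the 8.1 view columns (procpid and current_query are the pre-8.2 names, and current_query is only filled in when stats_command_string is on):

    SELECT l.pid, l.locktype, l.relation::regclass AS relation,
           l.mode, l.granted, a.current_query
    FROM pg_locks l
    LEFT JOIN pg_stat_activity a ON a.procpid = l.pid
    WHERE NOT l.granted;

An empty result during one of the stalls would point away from lock contention and towards something like I/O, checkpoint activity, or a client-side wait.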
{
"msg_contents": "> when it happens, make sure to query pg_locks and see what is going on\n> there lock issues are not supposed to manifest on a commit, which\n> releases locks, but you never know.\n\nThere aren't any pedning locks (assuming that pgAdmin is using pg_locks to \ndisplay pendin glocks).\n\n> There have been reports of\n> insonsistent lock ups on windows (espeically multi-processor) which\n> you might be experiencing. Make sure you have the very latest version\n> of pg 8.1.x. Also consider checking out 8.2 and see if you can\n> reproduce the behavior there...this will require compiling postgresql.\n\nAre these associated with any type of CPU?\n\nCarlo \n\n\n",
"msg_date": "Thu, 26 Oct 2006 13:49:57 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: commit so slow program looks frozen"
},
{
"msg_contents": "\n\nOn Thu, 26 Oct 2006, Carlo Stonebanks wrote:\n>\n> It could even be that there's something wrong with the TCL package, but from\n> my understanding it is one of the most complete interfaces out there - which\n> is weird, because TCL seems to be the most unpopular language in the\n> community.\n>\n\nNot that this matters much and it's slightly off the topic of performance,\nbut...\n\n...I would have to check my _ancient_ emails for the name of the guy and\nthe dates, but the integration was first done while I was a researcher at\nBerkeley, at the tail end of the Postgres team's funding. My team used\nPostgres with TCL internals to implement \"the query from hell\" inside the\nserver. That was about 1994 or '95, IIRC. At that time, most people who\nknew both said that they were roughly equivalent, with PERL being _vastly_\nless intelligible (to humans) and they hated it. What happened was PERL\ngot exposure that TCL didn't and people who didn't know better jumped on\nit.\n\nSo, it was one of the most complete interfaces because it was done first,\nor nearly first, by the original guys that created the original Postgres.\n\nRichard\n\n\n-- \nRichard Troy, Chief Scientist\nScience Tools Corporation\n510-924-1363 or 202-747-1263\[email protected], http://ScienceTools.com/\n\n",
"msg_date": "Thu, 26 Oct 2006 10:57:11 -0700 (PDT)",
"msg_from": "Richard Troy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: commit so slow program looks frozen"
},
{
"msg_contents": "Perl started out fast - TCL started out slow. Perl used syntax that,\nalthough it would drive some people crazy, followed a linguistic curve\nthat Larry Wall claimed was healthy. The English language is crazy,\nand yet, it has become standard world wide as well. Designed, regular\nlanguages like Esperanto have not received much support either.\n\nPerl is designed to be practical. TCL was designed to be minimalistic.\n\nPerl uses common idioms for UNIX programmers. // for regular expressions,\n$VAR for variables, Many of the statement are familiar for C programmers.\n++ for increment (compare against 'incr abc' for TCL). $a=5 for assignment,\ncompare against 'set abc 5' in TCL.\n\nTCL tries to have a reduced syntax, where 'everything is a string'\nwhich requires wierdness for people. For example, newline is\nend-of-line, so { must be positioned correctly. Code is a string, so\nin some cases you need to escape code, otherwise not.\n\nPerl has object oriented support built-in. It's ugly, but it works.\nTCL has a questionable '[incr tcl]' package.\n\nPerl has a wealth of modules on CPAN to do almost anything you need to.\nTCL has the beginning of one (not as rich), but comes built-in with things\nlike event loops, and graphicals (Tk).\n\nI could go on and on - but I won't, because this is the PostgreSQL\nmailing list. People either get Perl, or TCL, or they don't. More\npeople 'get' Perl, because it was marketted better, it's syntax is\ndeceivingly comparable to other well known languages, and for the\nlongest time, it was much faster than TCL to write (especially when\nusing regular expressions) and faster to run.\n\nDid TCL get treated unfairly as a result? It's a language. Who cares! :-)\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n",
"msg_date": "Thu, 26 Oct 2006 14:07:06 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "OT: TCL vs Perl Re: commit so slow program looks frozen"
},
{
"msg_contents": "\n> Perl has a wealth of modules on CPAN to do almost anything you need to.\n> TCL has the beginning of one (not as rich), but comes built-in with things\n> like event loops, and graphicals (Tk).\n> \n> I could go on and on - but I won't, because this is the PostgreSQL\n> mailing list. People either get Perl, or TCL, or they don't. More\n> people 'get' Perl, because it was marketted better, it's syntax is\n> deceivingly comparable to other well known languages, and for the\n> longest time, it was much faster than TCL to write (especially when\n> using regular expressions) and faster to run.\n> \n> Did TCL get treated unfairly as a result? It's a language. Who cares! :-)\n\nYou forgot the god of scripting languages, Python... (Yes perl is much\nbetter at system level scripting than Python).\n\nSincerely,\n\nJoshua D. Drake\n\n> \n> Cheers,\n> mark\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n",
"msg_date": "Thu, 26 Oct 2006 11:32:02 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: OT: TCL vs Perl Re: commit so slow program looks frozen"
},
{
"msg_contents": "On Thu, 2006-10-26 at 11:06 -0400, Merlin Moncure wrote:\n> On 10/26/06, Carlo Stonebanks <[email protected]> wrote:\n> > This is pretty interesting - where can I read more on this? Windows isn't\n> > actually hanging, one single command line window is - from its behaviour, it\n> > looks like the TCL postgresql package is waiting for pg_exec to come back\n> > from the commit (I believe the commit has actually gone through).\n> >\n> > It could even be that there's something wrong with the TCL package, but from\n> > my understanding it is one of the most complete interfaces out there - which\n> > is weird, because TCL seems to be the most unpopular language in the\n> > community.\n> \n> when it happens, make sure to query pg_locks and see what is going on\n> there lock issues are not supposed to manifest on a commit, which\n> releases locks, but you never know. There have been reports of\n> insonsistent lock ups on windows (espeically multi-processor) which\n> you might be experiencing. Make sure you have the very latest version\n> of pg 8.1.x. Also consider checking out 8.2 and see if you can\n> reproduce the behavior there...this will require compiling postgresql.\n\nMerlin,\n\nRumour has it you managed to get a BT from Windows. That sounds like it\nwould be very useful here.\n\nCarlo,\n\nMany things can happen at commit time. Temp tables dropped, TRUNCATEd\nold relations unlinked, init files removed, deferred foreign key checks\n(and subsequent cascading), dropped tables flushed. The assumption that\nCOMMIT is a short request may not be correct according to the wide range\nof tasks that could occur according to standard SQL:2003 behaviour. \n\nSome of those effects take longer on larger systems. Any and all of\nthose things have potential secondary effects, all of which can also\nconflict with other user tasks and especially with a CHECKPOINT. Then\nthere's various forms of contention caused by misconfiguration.\n\nI do think we need some better instrumentation for this kind of thing.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 28 Oct 2006 11:07:02 +0100",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: commit so slow program looks frozen"
},
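Short of better instrumentation, a few settings that already exist in 8.1 can at least show whether the stall is the COMMIT itself or a checkpoint landing on top of it; the thresholds below are examples only:

    log_min_duration_statement = 1000   # log any statement, COMMIT included,
                                        # that runs longer than one second
    checkpoint_warning = 30             # complain when checkpoints occur less
                                        # than 30 seconds apart
    stats_command_string = on           # let pg_stat_activity show what each
                                        # backend is currently executing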
{
"msg_contents": "> I do think we need some better instrumentation for this kind of thing.\n\nWell, one thing's for sure - I have little other information to offer. The \nproblem is that the lockups occur after hours of operation and thousands of \nrows being digested (which is the nature of the program). If \"better \ninstrumentation\" implies tools to inpsect the sate of the db server's \nprocess and to know what it's waiting for from the OS, I agree.\n\nThen again, I can't even tell you whether the postgres process is at fault \nor the TCL interface - which would be odd, because it's one fo the most \nmature interfaces postgres has. So, here's a thought: is there any way for \nme to inspect the state of a postgres process to see if it's responsive - \neven if it's serving another connection?\n\nCarlo \n\n\n",
"msg_date": "Sat, 28 Oct 2006 11:51:23 -0400",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: commit so slow program looks frozen"
},
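One answer to that last question, sketched for 8.1: from a second connection, pg_stat_activity shows whether a given backend is still running something and for how long (stats_command_string must be on for current_query to be populated):

    SELECT procpid, usename, current_query,
           query_start, now() - query_start AS running_for
    FROM pg_stat_activity
    ORDER BY query_start;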
{
"msg_contents": "On 10/28/06, Simon Riggs <[email protected]> wrote:\n> On Thu, 2006-10-26 at 11:06 -0400, Merlin Moncure wrote:\n> > On 10/26/06, Carlo Stonebanks <[email protected]> wrote:\n> > > This is pretty interesting - where can I read more on this? Windows isn't\n> > > actually hanging, one single command line window is - from its behaviour, it\n> > > looks like the TCL postgresql package is waiting for pg_exec to come back\n> > > from the commit (I believe the commit has actually gone through).\n> > >\n> > > It could even be that there's something wrong with the TCL package, but from\n> > > my understanding it is one of the most complete interfaces out there - which\n> > > is weird, because TCL seems to be the most unpopular language in the\n> > > community.\n> >\n> > when it happens, make sure to query pg_locks and see what is going on\n> > there lock issues are not supposed to manifest on a commit, which\n> > releases locks, but you never know. There have been reports of\n> > insonsistent lock ups on windows (espeically multi-processor) which\n> > you might be experiencing. Make sure you have the very latest version\n> > of pg 8.1.x. Also consider checking out 8.2 and see if you can\n> > reproduce the behavior there...this will require compiling postgresql.\n>\n> Merlin,\n>\n> Rumour has it you managed to get a BT from Windows. That sounds like it\n> would be very useful here.\n>\n> Carlo,\n>\n> Many things can happen at commit time. Temp tables dropped, TRUNCATEd\n> old relations unlinked, init files removed, deferred foreign key checks\n> (and subsequent cascading), dropped tables flushed. The assumption that\n> COMMIT is a short request may not be correct according to the wide range\n> of tasks that could occur according to standard SQL:2003 behaviour.\n>\n> Some of those effects take longer on larger systems. Any and all of\n> those things have potential secondary effects, all of which can also\n> conflict with other user tasks and especially with a CHECKPOINT. Then\n> there's various forms of contention caused by misconfiguration.\n>\n> I do think we need some better instrumentation for this kind of thing.\n>\n> --\n> Simon Riggs\n> EnterpriseDB http://www.enterprisedb.com\n\nstart here:\nhttp://beta.linuxports.com/pgsql-hackers-win32/2005-08/msg00051.php\n\nmerlin\n",
"msg_date": "Sat, 28 Oct 2006 23:09:58 +0530",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: commit so slow program looks frozen"
},
{
"msg_contents": "Merlin Moncure wrote:\n> On 10/28/06, Simon Riggs <[email protected]> wrote:\n>> On Thu, 2006-10-26 at 11:06 -0400, Merlin Moncure wrote:\n>> > On 10/26/06, Carlo Stonebanks <[email protected]> wrote:\n>> > > This is pretty interesting - where can I read more on this?\n>> Windows isn't\n>> > > actually hanging, one single command line window is - from its\n>> behaviour, it\n>> > > looks like the TCL postgresql package is waiting for pg_exec to\n>> come back\n>> > > from the commit (I believe the commit has actually gone through).\n>> > >\n>> > > It could even be that there's something wrong with the TCL\n>> package, but from\n>> > > my understanding it is one of the most complete interfaces out\n>> there - which\n>> > > is weird, because TCL seems to be the most unpopular language in the\n>> > > community.\n>> >\n>> > when it happens, make sure to query pg_locks and see what is going on\n>> > there lock issues are not supposed to manifest on a commit, which\n>> > releases locks, but you never know. There have been reports of\n>> > insonsistent lock ups on windows (espeically multi-processor) which\n>> > you might be experiencing. Make sure you have the very latest version\n>> > of pg 8.1.x. Also consider checking out 8.2 and see if you can\n>> > reproduce the behavior there...this will require compiling postgresql.\n>>\n>> Merlin,\n>>\n>> Rumour has it you managed to get a BT from Windows. That sounds like it\n>> would be very useful here.\n\nCould it be there is a hangup in communication with the backend via the\nlibpq library?\n\nI have a situation on Windows where psql seems to be hanging randomly\nAFTER completing (or almost completing) a vacuum full analyze verbose.\n\nI'm running the same databases on a single postgres instance on a Dell\n4gb RAM 2 processor xeon (hyper-threading turned off) running Debian\nGNU/Linux. The windows system is an IBM 24gb RAM, 4 processor xeon\n(hyperthreading turned off). No problems on the Dell, it runs pgbench\nfaster than the windows IBM system. The Dell Linux system zips through\nvacuumdb --all --analyze --full --verbose with no problems. The windows\nmachine is running 6 instances of postgresql because of problems trying\nto load all of the databases into one instance on windows.\n\nThe last output from psql is:\n\nINFO: free space map contains 474 pages in 163 relations\nDETAIL: A total of 2864 page slots are in use (including overhead).\n2864 page slots are required to track all free space.\nCurrent limits are: 420000 page slots, 25000 relations, using 4154 KB.\n\n(I've currently restarted postgresql with more reasonable fsm_page_slots\nand fsm_relations).\n\nIt appears that psql is hung in the call to WS2_32!select.\nThe psql stack trace looks like this:\n\nntdll!KiFastSystemCallRet\nntdll!NtWaitForSingleObject+0xc\nmswsock!SockWaitForSingleObject+0x19d\nmswsock!WSPSelect+0x380\nWS2_32!select+0xb9\nWARNING: Stack unwind information not available. 
Following frames may be\nwrong.\nlibpq!PQenv2encoding+0x1fb\nlibpq!PQenv2encoding+0x3a1\nlibpq!PQenv2encoding+0x408\nlibpq!PQgetResult+0x58\nlibpq!PQgetResult+0x188\npsql+0x4c0f\npsql+0x954d\npsql+0x11e7\npsql+0x1238\nkernel32!IsProcessorFeaturePresent+0x9e\n\nWith more detail:\n\n # ChildEBP RetAddr Args to Child\n00 0022f768 7c822124 71b23a09 000007a8 00000001\nntdll!KiFastSystemCallRet (FPO: [0,0,0])\n01 0022f76c 71b23a09 000007a8 00000001 0022f794\nntdll!NtWaitForSingleObject+0xc (FPO: [3,0,0])\n02 0022f7a8 71b23a52 000007a8 00000780 00000000\nmswsock!SockWaitForSingleObject+0x19d (FPO: [Non-Fpo])\n03 0022f898 71c0470c 00000781 0022fc40 0022fb30 mswsock!WSPSelect+0x380\n(FPO: [Non-Fpo])\n04 0022f8e8 6310830b 00000781 0022fc40 0022fb30 WS2_32!select+0xb9 (FPO:\n[Non-Fpo])\nWARNING: Stack unwind information not available. Following frames may be\nwrong.\n05 0022fd68 631084b1 00000000 ffffffff 0000001d libpq!PQenv2encoding+0x1fb\n06 0022fd88 63108518 00000001 00000000 00614e70 libpq!PQenv2encoding+0x3a1\n07 0022fda8 631060f8 00000001 00000000 00614e70 libpq!PQenv2encoding+0x408\n08 0022fdc8 63106228 00614e70 00613a71 00615188 libpq!PQgetResult+0x58\n09 0022fde8 00404c0f 00614e70 00613a71 0041ac7a libpq!PQgetResult+0x188\n0a 0022fe98 0040954d 00613a71 00423180 00423185 psql+0x4c0f\n0b 0022ff78 004011e7 00000006 00613b08 00612aa8 psql+0x954d\n0c 0022ffb0 00401238 00000001 00000009 0022fff0 psql+0x11e7\n0d 0022ffc0 77e523e5 00000000 00000000 7ffdc000 psql+0x1238\n0e 0022fff0 00000000 00401220 00000000 78746341\nkernel32!IsProcessorFeaturePresent+0x9e\n\nthe pg_locks table:\n-[ RECORD 1 ]-+----------------\nlocktype | relation\ndatabase | 19553\nrelation | 10342\npage |\ntuple |\ntransactionid |\nclassid |\nobjid |\nobjsubid |\ntransaction | 1998424\npid | 576\nmode | AccessShareLock\ngranted | t\n-[ RECORD 2 ]-+----------------\nlocktype | transactionid\ndatabase |\nrelation |\npage |\ntuple |\ntransactionid | 1998424\nclassid |\nobjid |\nobjsubid |\ntransaction | 1998424\npid | 576\nmode | ExclusiveLock\ngranted | t\n\n\nThe call stack on the postgres.exe process id 576:\n\nntdll!KiFastSystemCallRet\nntdll!NtWaitForMultipleObjects+0xc\nWARNING: Stack unwind information not available. Following frames may be\nwrong.\nkernel32!ResetEvent+0x45\npostgres!pgwin32_waitforsinglesocket+0x89\npostgres!pgwin32_recv+0x82\npostgres!secure_read+0x7b\npostgres!TouchSocketFile+0x93\npostgres!pq_getbyte+0x22\npostgres!PostgresMain+0x1056\npostgres!SubPostmasterMain+0x9ca\npostgres!main+0x33f\npostgres+0x11e7\npostgres+0x1238\nkernel32!IsProcessorFeaturePresent+0x9e\n\n\nThese are the parameters:\n\nlisten_addresses = '*'\nport = 5432\nmax_connections = 300\nshared_buffers = 30000\ntemp_buffers = 5000\nwork_mem = 4096\nmax_fsm_pages = 25000\nmax_fsm_relations = 500\nvacuum_cost_delay = 50\nwal_buffers = 32\ncheckpoint_segments = 16\neffective_cache_size = 50000\nrandom_page_cost = 3\ndefault_statistics_target = 300\nlog_destination = 'stderr'\nredirect_stderr = on\n\n(Since there's 24gb RAM on this thing, I've apparently gotten the system\ncache up to about 6 gb, according sysinternals.com \"System Information\"\napplet.)\n\n\"Cache Data Map Hits %\" runs at 100%\nPhysical Disk Queue Lengths are almost non-existent\nProcessors are not very busy at all\n\nSeems like once something is in memory, we never have to go back to the\ndisk again except to write. (hope so with that kind of RAM).\n\n\n>>\n>> Carlo,\n>>\n>> Many things can happen at commit time. 
Temp tables dropped, TRUNCATEd\n>> old relations unlinked, init files removed, deferred foreign key checks\n>> (and subsequent cascading), dropped tables flushed. The assumption that\n>> COMMIT is a short request may not be correct according to the wide range\n>> of tasks that could occur according to standard SQL:2003 behaviour.\n>>\n>> Some of those effects take longer on larger systems. Any and all of\n>> those things have potential secondary effects, all of which can also\n>> conflict with other user tasks and especially with a CHECKPOINT. Then\n>> there's various forms of contention caused by misconfiguration.\n>>\n>> I do think we need some better instrumentation for this kind of thing.\n>>\n>> -- \n>> Simon Riggs\n>> EnterpriseDB http://www.enterprisedb.com\n> \n> start here:\n> http://beta.linuxports.com/pgsql-hackers-win32/2005-08/msg00051.php\n> \n> merlin\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n> \n\n",
"msg_date": "Tue, 31 Oct 2006 14:58:20 -0600",
"msg_from": "Rob Lemley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: commit so slow program looks frozen"
}
] |
[
{
"msg_contents": "(Repost - did not appear to make it to the list the first time)\n\nI have written a stored procedure for 8.1 that wraps a single (albeit complex) query, and uses 2 IN parameters (BIGINT, INTEGER) in the FROM JOIN and WHERE clauses. The procedure is written in SQL (as opposed to plpgsql - although testing in plpgsql produces the same problem). The results are SETOF a custom type (SMALLINT, NUMERIC(38,2), NUMERIC(38,2)). The central query, when tested in psql and pgadmin III returns in 500 ms. As a stored procedure, it returns in 22000 ms! How can a stored procedure containing a single query not implement the same execution plan (assumption based on the dramatic performance difference) that an identical ad-hoc query generates? I ran a series of tests, and determined that if I replaced the parameters with hard-coded values, the execution time returned to 500ms. Can anyone shed some light on this for me - it seems counter-intuitive?\n\nHere are some particulars about the underlying query and tables:\n\nThe query selects a month number from a generate_series(1,12) left outer joined on a subquery, which produces three fields. The subquery is a UNION ALL of 5 tables. Each of the five tables has 100 inherited partitions. As you can see from the execution plan, the partitioning constraint is successfully restricting the query to the appropriate partition for each of the five tables. The constraint for each partition is a CHAR(2) field \"partition_key\" = '00' (where '00' is a two-digit CHAR(2) value that is returned from a function call ala table1.partition_key = partition_key($1) ) \n\nExecution Plan :\n\nSort (cost=10410.15..10410.65 rows=200 width=68) (actual time=273.050..273.071 rows=12 loops=1)\n Sort Key: mm.monthnumber\n -> HashAggregate (cost=10398.01..10402.51 rows=200 width=68) (actual time=272.970..273.001 rows=12 loops=1)\n -> Hash Ltable5 Join (cost=10370.01..10390.51 rows=1000 width=68) (actual time=272.817..272.902 rows=13 loops=1)\n Hash Cond: ((\"outer\".monthnumber)::double precision = \"inner\".monthnumber)\n -> Function Scan on generate_series mm (cost=0.00..12.50 rows=1000 width=4) (actual time=0.018..0.043 rows=12 loops=1)\n -> Hash (cost=10369.99..10369.99 rows=10 width=72) (actual time=272.769..272.769 rows=8 loops=1)\n -> Append (cost=1392.08..10369.89 rows=10 width=47) (actual time=39.581..272.734 rows=8 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=1392.08..1392.15 rows=2 width=47) (actual time=39.576..39.582 rows=1 loops=1)\n -> HashAggregate (cost=1392.08..1392.13 rows=2 width=47) (actual time=39.571..39.573 rows=1 loops=1)\n -> Result (cost=0.00..1392.05 rows=2 width=47) (actual time=25.240..39.538 rows=1 loops=1)\n -> Append (cost=0.00..1392.03 rows=2 width=47) (actual time=25.224..39.518 rows=1 loops=1)\n -> Seq Scan on table1 table1 (cost=0.00..14.50 rows=1 width=47) (actual time=0.003..0.003 rows=0 loops=1)\n Filter: ((partition_key = '12'::bpchar) AND (substr((indexed_field)::text, 2, 1) = '5'::text) AND (table_key = 10265512) AND (date_part('year'::text, (event_date)::timestamp without time zone) = 2005::double precision))\n -> Seq Scan on table1_p12 table1 (cost=0.00..1377.53 rows=1 width=28) (actual time=25.214..39.503 rows=1 loops=1)\n Filter: ((partition_key = '12'::bpchar) AND (substr((indexed_field)::text, 2, 1) = '5'::text) AND (table_key = 10265512) AND (date_part('year'::text, (event_date)::timestamp without time zone) = 2005::double precision))\n -> Subquery Scan \"*SELECT* 2\" (cost=2741.47..2741.56 rows=2 width=47) (actual 
time=78.140..78.140 rows=0 loops=1)\n -> HashAggregate (cost=2741.47..2741.54 rows=2 width=47) (actual time=78.134..78.134 rows=0 loops=1)\n -> Result (cost=0.00..2741.45 rows=2 width=47) (actual time=78.128..78.128 rows=0 loops=1)\n -> Append (cost=0.00..2741.43 rows=2 width=47) (actual time=78.122..78.122 rows=0 loops=1)\n -> Seq Scan on table2 table2 (cost=0.00..12.40 rows=1 width=47) (actual time=0.004..0.004 rows=0 loops=1)\n Filter: ((partition_key = '12'::bpchar) AND (substr((indexed_field)::text, 2, 1) = '5'::text) AND (table_key = 10265512) AND (date_part('year'::text, (event_date)::timestamp without time zone) = 2005::double precision))\n -> Seq Scan on table2_p12 table2 (cost=0.00..2729.03 rows=1 width=29) (actual time=78.109..78.109 rows=0 loops=1)\n Filter: ((partition_key = '12'::bpchar) AND (substr((indexed_field)::text, 2, 1) = '5'::text) AND (table_key = 10265512) AND (date_part('year'::text, (event_date)::timestamp without time zone) = 2005::double precision))\n -> Subquery Scan \"*SELECT* 3\" (cost=3173.33..3173.41 rows=2 width=47) (actual time=91.609..91.609 rows=0 loops=1)\n -> HashAggregate (cost=3173.33..3173.39 rows=2 width=47) (actual time=91.603..91.603 rows=0 loops=1)\n -> Result (cost=0.00..3173.30 rows=2 width=47) (actual time=91.598..91.598 rows=0 loops=1)\n -> Append (cost=0.00..3173.28 rows=2 width=47) (actual time=91.592..91.592 rows=0 loops=1)\n -> Seq Scan on table3 table3 (cost=0.00..10.90 rows=1 width=47) (actual time=0.003..0.003 rows=0 loops=1)\n Filter: ((partition_key = '12'::bpchar) AND (substr((indexed_field)::text, 2, 1) = '5'::text) AND (table_key = 10265512) AND (date_part('year'::text, (event_date)::timestamp without time zone) = 2005::double precision))\n -> Seq Scan on table3_p12 table3 (cost=0.00..3162.38 rows=1 width=29) (actual time=91.581..91.581 rows=0 loops=1)\n Filter: ((partition_key = '12'::bpchar) AND (substr((indexed_field)::text, 2, 1) = '5'::text) AND (table_key = 10265512) AND (date_part('year'::text, (event_date)::timestamp without time zone) = 2005::double precision))\n -> Subquery Scan \"*SELECT* 4\" (cost=961.72..961.80 rows=2 width=29) (actual time=11.647..11.694 rows=7 loops=1)\n -> HashAggregate (cost=961.72..961.78 rows=2 width=29) (actual time=11.640..11.659 rows=7 loops=1)\n -> Result (cost=0.00..961.69 rows=2 width=29) (actual time=7.537..11.567 rows=10 loops=1)\n -> Append (cost=0.00..961.67 rows=2 width=29) (actual time=7.520..11.499 rows=10 loops=1)\n -> Seq Scan on table4 table4 (cost=0.00..22.30 rows=1 width=29) (actual time=0.003..0.003 rows=0 loops=1)\n Filter: ((partition_key = '12'::bpchar) AND (substr((indexed_field)::text, 2, 1) = '5'::text) AND (table_key = 10265512) AND (date_part('year'::text, (event_date)::timestamp without time zone) = 2005::double precision))\n -> Index Scan using table4_p12_recency_date_type_xyz_idx on table4_p12 table4 (cost=0.00..939.37 rows=1 width=29) (actual time=7.510..11.452 rows=10 loops=1)\n Index Cond: (table_key = 10265512)\n Filter: ((partition_key = '12'::bpchar) AND (substr((indexed_field)::text, 2, 1) = '5'::text) AND (date_part('year'::text, (event_date)::timestamp without time zone) = 2005::double precision))\n -> Subquery Scan \"*SELECT* 5\" (cost=2100.89..2100.98 rows=2 width=47) (actual time=51.658..51.658 rows=0 loops=1)\n -> HashAggregate (cost=2100.89..2100.96 rows=2 width=47) (actual time=51.652..51.652 rows=0 loops=1)\n -> Result (cost=0.00..2100.87 rows=2 width=47) (actual time=51.646..51.646 rows=0 loops=1)\n -> Append (cost=0.00..2100.85 rows=2 width=47) 
(actual time=51.641..51.641 rows=0 loops=1)\n -> Seq Scan on table5 table5 (cost=0.00..10.90 rows=1 width=47) (actual time=0.004..0.004 rows=0 loops=1)\n Filter: ((partition_key = '12'::bpchar) AND (substr((indexed_field)::text, 2, 1) = '5'::text) AND (table_key = 10265512) AND (date_part('year'::text, (event_date)::timestamp without time zone) = 2005::double precision))\n -> Seq Scan on table5_p12 table5 (cost=0.00..2089.95 rows=1 width=29) (actual time=51.627..51.627 rows=0 loops=1)\n Filter: ((partition_key = '12'::bpchar) AND (substr((indexed_field)::text, 2, 1) = '5'::text) AND (table_key = 10265512) AND (date_part('year'::text, (event_date)::timestamp without time zone) = 2005::double precision))\nTotal runtime: 274.605 ms\n\n\nI appreciate your thoughts - this is a real mind-bender!\n\nMatthew Peters\n\n",
"msg_date": "Thu, 26 Oct 2006 08:49:02 -0700",
"msg_from": "\"Matthew Peters\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Stored procedure slower than sql?"
},
{
"msg_contents": "\"Matthew Peters\" <[email protected]> writes:\n> How can a stored procedure containing a single query not implement the\n> same execution plan (assumption based on the dramatic performance\n> difference) that an identical ad-hoc query generates?\n\nParameterized vs non parameterized query?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 26 Oct 2006 12:14:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stored procedure slower than sql? "
},
{
"msg_contents": "Parameterized.\n\nIE (very simplified)\n\nCREATE OR REPLACE FUNCTION my_function(IN param1 BIGINT, IN param2\nINTEGER)\nRETURNS my_type\nSECURITY DEFINER\nAS\n$$\n\t/* my_type = (a,b,c) */\n\tSelect a,b,c\n\tFROM my_table\n\tWHERE indexed_column = $1\n\tAND partition_constraint_column = $2;\n$$\nLANGUAGE SQL;\n\n\n\n\nMatthew A. Peters\nSr. Software Engineer, Haydrian Corp.\[email protected]\n(mobile) 425-941-6566\n Haydrian Corp.\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Thursday, October 26, 2006 9:15 AM\nTo: Matthew Peters\nCc: [email protected]\nSubject: Re: [PERFORM] Stored procedure slower than sql? \nImportance: High\n\n\"Matthew Peters\" <[email protected]> writes:\n> How can a stored procedure containing a single query not implement the\n> same execution plan (assumption based on the dramatic performance\n> difference) that an identical ad-hoc query generates?\n\nParameterized vs non parameterized query?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 26 Oct 2006 09:21:37 -0700",
"msg_from": "\"Matthew Peters\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Stored procedure slower than sql? "
},
{
"msg_contents": "The planner has no idea what $1 and $2 are when it plans the query, so\nthat could easily explain why the performance is different. You can\nprepare statements in psql (at least in 8.1), which would be a good way\nto verify that theory (compare EXPLAIN for prepared vs. non).\n\nOn Thu, Oct 26, 2006 at 09:21:37AM -0700, Matthew Peters wrote:\n> Parameterized.\n> \n> IE (very simplified)\n> \n> CREATE OR REPLACE FUNCTION my_function(IN param1 BIGINT, IN param2\n> INTEGER)\n> RETURNS my_type\n> SECURITY DEFINER\n> AS\n> $$\n> \t/* my_type = (a,b,c) */\n> \tSelect a,b,c\n> \tFROM my_table\n> \tWHERE indexed_column = $1\n> \tAND partition_constraint_column = $2;\n> $$\n> LANGUAGE SQL;\n> \n> \n> \n> \n> Matthew A. Peters\n> Sr. Software Engineer, Haydrian Corp.\n> [email protected]\n> (mobile) 425-941-6566\n> Haydrian Corp.\n> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]] \n> Sent: Thursday, October 26, 2006 9:15 AM\n> To: Matthew Peters\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Stored procedure slower than sql? \n> Importance: High\n> \n> \"Matthew Peters\" <[email protected]> writes:\n> > How can a stored procedure containing a single query not implement the\n> > same execution plan (assumption based on the dramatic performance\n> > difference) that an identical ad-hoc query generates?\n> \n> Parameterized vs non parameterized query?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Thu, 26 Oct 2006 14:19:18 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stored procedure slower than sql?"
}
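A minimal sketch of the comparison suggested above, using the simplified names from this thread (the literal parameter values are invented placeholders):

    PREPARE fn_query (bigint, integer) AS
        SELECT a, b, c
        FROM my_table
        WHERE indexed_column = $1
        AND partition_constraint_column = $2;

    EXPLAIN EXECUTE fn_query(12345, 1);

    EXPLAIN SELECT a, b, c
    FROM my_table
    WHERE indexed_column = 12345
    AND partition_constraint_column = 1;

If the prepared plan shows a sequential scan on the partitioned table while the literal query uses the index, that confirms the generic-parameter planning explanation.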
] |
[
{
"msg_contents": "Thanks for all the feedback, folks.\n\nRunning explain analyze (see below) I get results similar to Tom Lane,\nwhere the 2 queries run at the same speed.\nAnd running in psql (see below) we see the expected speed degradation\nfor multiple fields, although concatenation is not getting us any\nadvantage.\n\n\n----------------------------------------------------------------------\n=== RUNNING EXPLAIN ANALYZE ===\n----------------------------------------------------------------------\n\not6_tdarci=# explain analyze select p.opid from ott_op p;\n\n QUERY PLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------\n Seq Scan on ott_op p (cost=100000000.00..100002654.44 rows=114344\nwidth=4) (actual time=0.008..260.739 rows=114344 loops=1)\n Total runtime: 472.833 ms\n\nTime: 473.240 ms\n\n\not6_tdarci=# explain analyze select p.opid, p.opid, p.opid, p.opid,\np.opid, p.opid, p.opid, p.opid, p.opid, p.opid from ott_op p;\n\n QUERY PLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------\n Seq Scan on ott_op p (cost=100000000.00..100002654.44 rows=114344\nwidth=4) (actual time=0.006..260.795 rows=114344 loops=1)\n Total runtime: 472.980 ms\n\nTime: 473.439 ms\n\n----------------------------------------------------------------------\n=== RUNNING THE QUERIES ===\n----------------------------------------------------------------------\n\not6_tdarci=# \\o /dev/null\n\not6_tdarci=# select p.opid from ott_op p;\nTime: 157.419 ms\n\not6_tdarci=# select p.opid, p.opid, p.opid, p.opid, p.opid, p.opid,\np.opid, p.opid, p.opid, p.opid from ott_op p;\nTime: 659.505 ms\n\not6_tdarci=# select p.opid || p.opid || p.opid || p.opid || p.opid ||\np.opid || p.opid || p.opid || p.opid || p.opid from ott_op p;\nTime: 672.113 ms \n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Thursday, October 26, 2006 2:53 PM\nTo: Tom Darci\nCc: [email protected]\nSubject: Re: [PERFORM] query slows down drastically with increased\nnumber of fields\n\n\"Tom Darci\" <[email protected]> writes:\n> It runs in about half a second (running in PgAdmin... the query run \n> time, not the data retrieval time)\n\nI don't have a lot of faith in PgAdmin's ability to distinguish the two.\nIn fact, for a query such as you have here that's just a bare seqscan,\nit's arguably *all* data retrieval time --- the backend will start\nemitting records almost instantly.\n\nFWIW, in attempting to duplicate your test I get\n\nregression=# explain analyze select f1 from foo;\n QUERY PLAN\n------------------------------------------------------------------------\n------------------------------------\n Seq Scan on foo (cost=0.00..1541.00 rows=100000 width=4) (actual\ntime=0.161..487.192 rows=100000 loops=1) Total runtime: 865.454 ms\n(2 rows)\n\nregression=# explain analyze select f1,f1,f1,f1,f1,f1,f1,f1,f1,f1,f1\nfrom foo;\n QUERY PLAN\n------------------------------------------------------------------------\n------------------------------------\n Seq Scan on foo (cost=0.00..1541.00 rows=100000 width=4) (actual\ntime=0.169..603.795 rows=100000 loops=1) Total runtime: 984.124 ms\n(2 rows)\n\nNote that this test doesn't perform conversion of the field values to\ntext form, so it's an underestimate of the total time spent by the\nbackend for the real query. 
But I think almost certainly, your speed\ndifference is all about having to send more values to the client.\nThe costs not measured by the explain-analyze scenario would scale darn\nnear linearly with the number of repetitions of f1.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 26 Oct 2006 20:01:19 -0400",
"msg_from": "\"Tom Darci\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query slows down drastically with increased number of"
}
] |
[
{
"msg_contents": "Hi,\n\nI wanted to use \"exp1 is not distinct from exp2\" which I tough was syntaxic \nsugar for\nexp1 is not null and exp2 is not null and exp1 = exp2 or exp1 is null and \nexp2 is null\nbut my index is ignored with \"is not distinct from\".\n\nIs this the expected behavior ?\n\ncreate temporary table t as select * from generate_series(1,1000000) t(col);\ncreate unique index i on t(col);\nanalyze t;\n\n-- These queries don't use the index\nselect count(*) from t where col is not distinct from 123;\nselect count(*) from t where not col is distinct from 123;\n\n-- This query use the index\nselect count(*) from t where col is not null and 123 is not null and col = \n123 or col is null and 123 is null;\n\nexplain analyze select count(*) from t where col is not distinct from 123;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------\nAggregate (cost=19154.79..19154.80 rows=1 width=0) (actual \ntime=228.200..228.202 rows=1 loops=1)\n -> Seq Scan on t (cost=0.00..17904.90 rows=499956 width=0) (actual \ntime=0.042..228.133 rows=1 loops=1)\n Filter: (NOT (col IS DISTINCT FROM 123))\nTotal runtime: 228.290 ms\n(4 rows)\nTime: 219.000 ms\n\nexplain analyze select count(*) from t where not col is distinct from 123;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------\nAggregate (cost=19154.79..19154.80 rows=1 width=0) (actual \ntime=235.950..235.952 rows=1 loops=1)\n -> Seq Scan on t (cost=0.00..17904.90 rows=499956 width=0) (actual \ntime=0.040..235.909 rows=1 loops=1)\n Filter: (NOT (col IS DISTINCT FROM 123))\nTotal runtime: 236.065 ms\n(4 rows)\nTime: 250.000 ms\n\nexplain analyze select count(*) from t where col is not null and 123 is not \nnull and col = 123 or col is null and 123 is null;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------\nAggregate (cost=8.13..8.14 rows=1 width=0) (actual time=0.267..0.268 rows=1 \nloops=1)\n -> Index Scan using i on t (cost=0.00..8.13 rows=1 width=0) (actual \ntime=0.237..0.241 rows=1 loops=1)\n Index Cond: (col = 123)\nTotal runtime: 0.366 ms\n(4 rows)\nTime: 0.000 ms\n\nI am on Windows XP Service pack 2 with PostgreSQL 8.2 beta2\n\nThanks,\nJean-Pierre Pelletier\ne-djuster\n\n\n",
"msg_date": "Thu, 26 Oct 2006 22:19:20 -0400",
"msg_from": "\"JEAN-PIERRE PELLETIER\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index ignored with \"is not distinct from\", 8.2 beta2"
}
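A hedged aside on a workaround: when the right-hand side is a constant known to be non-null, "col is not distinct from 123" is logically equivalent to plain equality, which the planner can match to the unique index defined above:

    -- equivalent to "col is not distinct from 123" because the constant is never null
    explain analyze select count(*) from t where col = 123;

Only the general form, where either side may be null, needs the longer expression shown in the message above.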
] |
[
{
"msg_contents": "Hello,\n\nMy Server is crashed in PQfinish. Below is the core file details:\n\n=>[1] DLRemHead(0x2b7780, 0xfb6bc008, 0x319670, 0xfb6bc008, 0x21c40,\n0x3106f8), at 0xfded10e4\n [2] DLFreeList(0x2b7780, 0x0, 0x417b48, 0xfdec5aa4, 0x21c18, 0x0), at\n0xfded0c64\n [3] freePGconn(0x371ea0, 0x0, 0x289f48, 0xfbfb61b8, 0x21c18, 0x0), at\n0xfdec5ac0\n [4] PQfinish(0x371ea0, 0x289ce8, 0x289ce8, 0xf9a0b65c, 0x20fa0,\n0xfb0718dc), at 0xfdec5cc4\n [5] abc(0x289ce0, 0xfafec000, 0xfb5b1d88, 0x0, 0xf9a0ba8c, 0x7), at\n0xfb071aec\n\nServer is crashed at \"DLRemHead\". This crash is not easily reproducible.\n\nCan anybody please tell me whether above problem is related to postgres or\nnot?\n\nThanks,\nSonal\n\nHello,\n \nMy Server is crashed in PQfinish. Below is the core file details:\n \n=>[1] DLRemHead(0x2b7780, 0xfb6bc008, 0x319670, 0xfb6bc008, 0x21c40, 0x3106f8), at 0xfded10e4 [2] DLFreeList(0x2b7780, 0x0, 0x417b48, 0xfdec5aa4, 0x21c18, 0x0), at 0xfded0c64 [3] freePGconn(0x371ea0, 0x0, 0x289f48, 0xfbfb61b8, 0x21c18, 0x0), at 0xfdec5ac0\n [4] PQfinish(0x371ea0, 0x289ce8, 0x289ce8, 0xf9a0b65c, 0x20fa0, 0xfb0718dc), at 0xfdec5cc4 [5] abc(0x289ce0, 0xfafec000, 0xfb5b1d88, 0x0, 0xf9a0ba8c, 0x7), at 0xfb071aec\n \nServer is crashed at \"DLRemHead\". This crash is not easily reproducible.\n \nCan anybody please tell me whether above problem is related to postgres or not?\n \nThanks,\nSonal",
"msg_date": "Fri, 27 Oct 2006 18:38:56 +0530",
"msg_from": "\"soni de\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "client crashes in PQfinish"
},
{
"msg_contents": "Any response?\n\n\nOn 10/27/06, soni de <[email protected]> wrote:\n>\n> Hello,\n>\n> My Server is crashed in PQfinish. Below is the core file details:\n>\n> =>[1] DLRemHead(0x2b7780, 0xfb6bc008, 0x319670, 0xfb6bc008, 0x21c40,\n> 0x3106f8), at 0xfded10e4\n> [2] DLFreeList(0x2b7780, 0x0, 0x417b48, 0xfdec5aa4, 0x21c18, 0x0), at\n> 0xfded0c64\n> [3] freePGconn(0x371ea0, 0x0, 0x289f48, 0xfbfb61b8, 0x21c18, 0x0), at\n> 0xfdec5ac0\n> [4] PQfinish(0x371ea0, 0x289ce8, 0x289ce8, 0xf9a0b65c, 0x20fa0,\n> 0xfb0718dc), at 0xfdec5cc4\n> [5] abc(0x289ce0, 0xfafec000, 0xfb5b1d88, 0x0, 0xf9a0ba8c, 0x7), at\n> 0xfb071aec\n>\n> Server is crashed at \"DLRemHead\". This crash is not easily reproducible.\n>\n> Can anybody please tell me whether above problem is related to postgres or\n> not?\n>\n> Thanks,\n> Sonal\n>\n\nAny response?\n \nOn 10/27/06, soni de <[email protected]> wrote:\n\nHello,\n \nMy Server is crashed in PQfinish. Below is the core file details:\n \n=>[1] DLRemHead(0x2b7780, 0xfb6bc008, 0x319670, 0xfb6bc008, 0x21c40, 0x3106f8), at 0xfded10e4 [2] DLFreeList(0x2b7780, 0x0, 0x417b48, 0xfdec5aa4, 0x21c18, 0x0), at 0xfded0c64 [3] freePGconn(0x371ea0, 0x0, 0x289f48, 0xfbfb61b8, 0x21c18, 0x0), at 0xfdec5ac0 \n [4] PQfinish(0x371ea0, 0x289ce8, 0x289ce8, 0xf9a0b65c, 0x20fa0, 0xfb0718dc), at 0xfdec5cc4 [5] abc(0x289ce0, 0xfafec000, 0xfb5b1d88, 0x0, 0xf9a0ba8c, 0x7), at 0xfb071aec\n \nServer is crashed at \"DLRemHead\". This crash is not easily reproducible.\n \nCan anybody please tell me whether above problem is related to postgres or not?\n \nThanks,\nSonal",
"msg_date": "Tue, 31 Oct 2006 11:29:21 +0530",
"msg_from": "\"soni de\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: client crashes in PQfinish"
},
{
"msg_contents": "soni de wrote:\n> Any response?\n\nCouple of points:\n1. You're on the wrong list. This is for performance issues. I'd \nrecommend one of the bugs/hackers/general lists instead.\n\n2. You don't give details of any error message produced during the crash \n (or if there is one).\n\n3a. You don't give details of the version of PostgreSQL you're on\n b. what O.S.\n c. how installed\n d. what the database was doing at the time\n e. what the client was doing at the time\n\nNow, to my uneducated eye it looks like a linked-list problem when \nclosing a connection. Presumably a corrupted pointer or freeing \nsomething already released.\n\nTo make diagnosis even more interesting, although you say it is the \n\"server\" that is crashed, I think PQfinish is part of the libpq \nconnection library. That probably means the crash is in the client, not \nthe server. Or does it?\n\nSo - based on the fact that I can't tell what's happening, where it \nhappens or even if it's in the server or client I'd guess something in \nyour code is overwriting some of libpq's data structures. Possibly \nyou're using threads in a non-threading library? Bear in mind that I'm \nnot a C programmer.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 31 Oct 2006 11:47:17 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: client crashes in PQfinish"
}
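For reference, a minimal self-contained libpq lifecycle sketch (this is not the original poster's code; the connection string is a placeholder). Client-side misuse such as calling PQfinish() twice on the same PGconn, or touching the PGconn after PQfinish(), is the kind of bug that shows up as corruption inside libpq's internal lists:

    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=test");

        if (PQstatus(conn) != CONNECTION_OK)
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        else
        {
            /* ... run queries with PQexec() here ... */
        }

        PQfinish(conn);   /* frees the PGconn and closes the socket; call exactly once */
        conn = NULL;      /* guard against accidental reuse or a second PQfinish() */
        return 0;
    }

Each thread in a multi-threaded client also needs its own PGconn; sharing one connection between threads without locking can corrupt it in the same way.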
] |
[
{
"msg_contents": "Hi!\n\nI'm just wondering, I've got a table that is partitioned into monthly\ntables:\n\nmedia_downloads -> media_downloads_YYYYMM\n I\\- id (primary key)\n \\- created_on (timestamp criteria for the monthly table split)\n\nThere are constraints upon the created_on column, all needed insert\ninstead rules are defined too.\nOne additional hardship is that id are not monotone against created_on,\nid1 < id2 does not imply created_on1 <= created_on2 :(\nThe table contains basically almost 100M rows, and the number is\ngrowing. (the table will be about a 12GB pg_dump.)\nAll relevant indexes (primary key id, index on created_on) are defined\ntoo.\n\nThe good thing is, queries like all rows in the last 7 days work\nreasonable fast, the optimizer just checks the 1-2 last month tables.\n\nUsing postgres 8.1.4-0ubuntu1, I've got to implement the following\nqueries in a reasonable fast way:\n\n-- sequential reading of rows\nSELECT * FROM media_downloads WHERE id > 1000000 ORDER BY id LIMIT 100;\n\nAgainst the same monolithic table with about 16.5M rows, I'm getting a\ncost of 20.6 pages. (Index scan)\n\nAgainst the partitioned tables, I'm getting a cost of 5406822 pages.\nNow I understand, that without any additional conditions, postgresql\nneeds to do the query for all subtables first, but explain against the\nsubtables show costs of 4-5 pages.\nevents=# explain select * from media_downloads where id >90000000 order\nby id limit 100;\n\nQUERY\nPLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=5406822.39..5406822.64 rows=100 width=1764)\n -> Sort (cost=5406822.39..5413639.50 rows=2726843 width=1764)\n Sort Key: public.media_downloads.id\n -> Result (cost=0.00..115960.71 rows=2726843 width=1764)\n -> Append (cost=0.00..115960.71 rows=2726843\nwidth=1764)\n -> Seq Scan on media_downloads (cost=0.00..10.50\nrows=13 width=1764)\n Filter: (id > 90000000)\n -> Index Scan using media_downloads_200510_pkey on\nmedia_downloads_200510 media_downloads (cost=0.00..3.75 rows=14\nwidth=243)\n Index Cond: (id > 90000000)\n -> Index Scan using media_downloads_200511_pkey on\nmedia_downloads_200511 media_downloads (cost=0.00..72.19 rows=172\nwidth=239)\n Index Cond: (id > 90000000)\n -> Index Scan using media_downloads_200512_pkey on\nmedia_downloads_200512 media_downloads (cost=0.00..603.64 rows=172\nwidth=240)\n Index Cond: (id > 90000000)\n -> Index Scan using media_downloads_200601_pkey on\nmedia_downloads_200601 media_downloads (cost=0.00..19.33 rows=232\nwidth=239)\n Index Cond: (id > 90000000)\n -> Index Scan using media_downloads_200602_pkey on\nmedia_downloads_200602 media_downloads (cost=0.00..56.82 rows=316\nwidth=240)\n Index Cond: (id > 90000000)\n -> Index Scan using media_downloads_200603_pkey on\nmedia_downloads_200603 media_downloads (cost=0.00..18.88 rows=270\nwidth=243)\n Index Cond: (id > 90000000)\n -> Index Scan using media_downloads_200604_pkey on\nmedia_downloads_200604 media_downloads (cost=0.00..1194.16 rows=939\nwidth=298)\n Index Cond: (id > 90000000)\n -> Index Scan using media_downloads_200605_pkey on\nmedia_downloads_200605 media_downloads (cost=0.00..79.28 rows=672\nwidth=326)\n Index Cond: (id > 90000000)\n -> Index Scan using media_downloads_200606_pkey on\nmedia_downloads_200606 media_downloads (cost=0.00..75.26 rows=1190\nwidth=314)\n Index Cond: (id > 90000000)\n -> Index Scan using media_downloads_200607_pkey 
on\nmedia_downloads_200607 media_downloads (cost=0.00..55.29 rows=1238\nwidth=319)\n Index Cond: (id > 90000000)\n -> Index Scan using media_downloads_200608_pkey on\nmedia_downloads_200608 media_downloads (cost=0.00..73.95 rows=1305\nwidth=319)\n Index Cond: (id > 90000000)\n -> Index Scan using media_downloads_200609_pkey on\nmedia_downloads_200609 media_downloads (cost=0.00..144.10 rows=1575\nwidth=324)\n Index Cond: (id > 90000000)\n -> Index Scan using media_downloads_200610_pkey on\nmedia_downloads_200610 media_downloads (cost=0.00..113532.57\nrows=2718709 width=337)\n Index Cond: (id > 90000000)\n -> Seq Scan on media_downloads_200611\nmedia_downloads (cost=0.00..10.50 rows=13 width=1764)\n Filter: (id > 90000000)\n -> Seq Scan on media_downloads_200612\nmedia_downloads (cost=0.00..10.50 rows=13 width=1764)\n Filter: (id > 90000000)\n(37 rows)\n\nevents=# explain select * from media_downloads_200610 where id\n>90000000 order by id limit 100;\n QUERY\nPLAN \n-----------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..4.18 rows=100 width=337)\n -> Index Scan using media_downloads_200610_pkey on\nmedia_downloads_200610 (cost=0.00..113582.70 rows=2719904 width=337)\n Index Cond: (id > 90000000)\n(3 rows)\n\nInterestingly, if one reformulates the query like that:\n\nSELECT * FROM media_downloads WHERE id > 90000000 AND id < 90001000\nORDER BY id LIMIT 100;\n\nresults in a reasonable cost of 161.5 pages.\n\nNow the above query is basically acceptable, as one iterate all rows\nthis way, but now I need to know max(id) to know when to stop my loop:\n\nevents=# explain select max(id) from media_downloads;\n QUERY\nPLAN \n--------------------------------------------------------------------------------------------------------------\n Aggregate (cost=3676914.56..3676914.58 rows=1 width=4)\n -> Append (cost=0.00..3444211.85 rows=93081085 width=4)\n -> Seq Scan on media_downloads (cost=0.00..10.40 rows=40\nwidth=4)\n -> Seq Scan on media_downloads_200510 media_downloads\n(cost=0.00..5615.84 rows=139884 width=4)\n -> Seq Scan on media_downloads_200511 media_downloads\n(cost=0.00..67446.56 rows=1724356 width=4)\n -> Seq Scan on media_downloads_200512 media_downloads\n(cost=0.00..66727.02 rows=1718302 width=4)\n -> Seq Scan on media_downloads_200601 media_downloads\n(cost=0.00..88799.91 rows=2321991 width=4)\n -> Seq Scan on media_downloads_200602 media_downloads\n(cost=0.00..121525.71 rows=3159571 width=4)\n -> Seq Scan on media_downloads_200603 media_downloads\n(cost=0.00..104205.40 rows=2701240 width=4)\n -> Seq Scan on media_downloads_200604 media_downloads\n(cost=0.00..342511.42 rows=9391242 width=4)\n -> Seq Scan on media_downloads_200605 media_downloads\n(cost=0.00..245167.39 rows=6724039 width=4)\n -> Seq Scan on media_downloads_200606 media_downloads\n(cost=0.00..430186.99 rows=11901499 width=4)\n -> Seq Scan on media_downloads_200607 media_downloads\n(cost=0.00..451313.72 rows=12380172 width=4)\n -> Seq Scan on media_downloads_200608 media_downloads\n(cost=0.00..474743.72 rows=13048372 width=4)\n -> Seq Scan on media_downloads_200609 media_downloads\n(cost=0.00..619711.52 rows=15754452 width=4)\n -> Seq Scan on media_downloads_200610 media_downloads\n(cost=0.00..426225.45 rows=12115845 width=4)\n -> Seq Scan on media_downloads_200611 media_downloads\n(cost=0.00..10.40 rows=40 width=4)\n -> Seq Scan on media_downloads_200612 media_downloads\n(cost=0.00..10.40 rows=40 width=4)\n(18 rows)\n\nevents=# explain 
select max(id) from media_downloads_200610;\n QUERY\nPLAN \n---------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=0.04..0.05 rows=1 width=0)\n InitPlan\n -> Limit (cost=0.00..0.04 rows=1 width=4)\n -> Index Scan Backward using media_downloads_200610_pkey on\nmedia_downloads_200610 (cost=0.00..475660.29 rows=12115845 width=4)\n Filter: (id IS NOT NULL)\n(5 rows)\n\nFor me as a human, it's obvious, that max(media_downloads) ==\nmax(media_downloads_200612..media_downloads_200510).\n\nAny ideas how to make the optimizer handle partitioned tables more\nsensible?\n\nAndreas",
"msg_date": "Sun, 29 Oct 2006 00:28:05 +0200",
"msg_from": "Andreas Kostyrka <[email protected]>",
"msg_from_op": true,
"msg_subject": "partitioned table performance"
},
{
"msg_contents": "On Sun, 2006-10-29 at 00:28 +0200, Andreas Kostyrka wrote:\n\n> Any ideas how to make the optimizer handle partitioned tables more\n> sensible? \n\nYes, those are known inefficiencies in the current implementation which\nwe expect to address for 8.3.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 30 Oct 2006 08:18:21 +0000",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: partitioned table performance"
},
{
"msg_contents": "Am Montag, den 30.10.2006, 08:18 +0000 schrieb Simon Riggs:\n> On Sun, 2006-10-29 at 00:28 +0200, Andreas Kostyrka wrote:\n> \n> > Any ideas how to make the optimizer handle partitioned tables more\n> > sensible? \n> \n> Yes, those are known inefficiencies in the current implementation which\n> we expect to address for 8.3.\n\nAny ideas to force the current optimizer to do something sensible?\n\nAndreas\n\n>",
"msg_date": "Mon, 30 Oct 2006 22:58:15 +0100",
"msg_from": "Andreas Kostyrka <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: partitioned table performance"
},
{
"msg_contents": "On Mon, 2006-10-30 at 22:58 +0100, Andreas Kostyrka wrote:\n> Am Montag, den 30.10.2006, 08:18 +0000 schrieb Simon Riggs:\n> > On Sun, 2006-10-29 at 00:28 +0200, Andreas Kostyrka wrote:\n> > \n> > > Any ideas how to make the optimizer handle partitioned tables more\n> > > sensible? \n> > \n> > Yes, those are known inefficiencies in the current implementation which\n> > we expect to address for 8.3.\n> \n> Any ideas to force the current optimizer to do something sensible?\n\nBrute force & ignorance: PL/pgSQL\n\nPerhaps some other domain knowledge might help you shorten the search?\n\nThats all for now. It's not a minor fixup and nobody had time to fix\nthat for 8.2 since other fish were bigger.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 30 Oct 2006 22:46:43 +0000",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: partitioned table performance"
}
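One possible reading of the PL/pgSQL suggestion, sketched for the max(id) case from this thread: loop over the child tables (found through pg_inherits) and take the greatest per-child maximum, so each child can use its own backward index scan. The function name is invented, the sketch assumes the parent table itself holds no rows, and EXECUTE ... INTO needs 8.1 (which matches the version in this thread); treat it as an illustration rather than a tested solution.

    CREATE OR REPLACE FUNCTION media_downloads_max_id() RETURNS integer AS $$
    DECLARE
        child  text;
        m      integer;
        result integer := NULL;
    BEGIN
        FOR child IN
            SELECT c.relname
            FROM pg_inherits i
            JOIN pg_class c ON c.oid = i.inhrelid
            JOIN pg_class p ON p.oid = i.inhparent
            WHERE p.relname = 'media_downloads'
        LOOP
            -- each child resolves max(id) with its own index
            EXECUTE 'SELECT max(id) FROM ' || quote_ident(child) INTO m;
            IF m IS NOT NULL AND (result IS NULL OR m > result) THEN
                result := m;
            END IF;
        END LOOP;
        RETURN result;
    END;
    $$ LANGUAGE plpgsql;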
] |
[
{
"msg_contents": "Look at this insane plan:\n\nlucas=# explain analyse select huvudklass,sum(summa) from kor_tjanster left outer join prislist on prislista=listid and tjanst=tjanstid where kor_id in (select id from kor where lista=10484) group by 1;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=212892.07..212892.10 rows=2 width=23) (actual time=4056.165..4056.167 rows=2 loops=1)\n -> Hash IN Join (cost=102.84..212889.04 rows=607 width=23) (actual time=4032.931..4056.017 rows=31 loops=1)\n Hash Cond: (\"outer\".kor_id = \"inner\".id)\n -> Hash Left Join (cost=59.66..206763.11 rows=1215336 width=27) (actual time=4.959..3228.550 rows=1216434 loops=1)\n Hash Cond: ((\"outer\".prislista = (\"inner\".listid)::text) AND (\"outer\".tjanst = (\"inner\".tjanstid)::text))\n -> Seq Scan on kor_tjanster (cost=0.00..23802.36 rows=1215336 width=26) (actual time=0.032..1257.241 rows=1216434 loops=1)\n -> Hash (cost=51.77..51.77 rows=1577 width=29) (actual time=4.898..4.898 rows=1577 loops=1)\n -> Seq Scan on prislist (cost=0.00..51.77 rows=1577 width=29) (actual time=0.034..2.445 rows=1577 loops=1)\n -> Hash (cost=41.79..41.79 rows=557 width=4) (actual time=0.185..0.185 rows=29 loops=1)\n -> Index Scan using kor_lista on kor (cost=0.00..41.79 rows=557 width=4) (actual time=0.070..0.150 rows=29 loops=1)\n Index Cond: (lista = 10484)\n Total runtime: 4056.333 ms\n\nI have an index on kor_tjanster(kor_id), an index on prislist(prislist_id), did ANALYZE and all that stuff... but those indexes are not used.\n\nWhy does it come up with this strange plan? It does a seqscan of 1.2 million rows and then a join!? Using the index would be much faster...\n\nI expected something like this:\n 1. Index Scan using kor_lista on kor (use lista_id 10484 to get a list of kor_id's - 29 rows (expected 557 rows))\n 2. Index Scan using kor_id on kor_tjanster (use the kor_id's to get a list of kor_tjanster - 31 rows)\n 3. Index Scan using prislist_listid on prislist (use the 31 kor_tjanster rows to find the corresponding 'huvudklass' for each row)\n29+31+31=91 index lookups... which is MUCH faster than seq-scanning millions of rows...\n\nI need to speed up this query. How can i make it use the correct index? Any hints?\n\nI have pg 8.1.0, default settings.\n\n/* m */\n",
"msg_date": "Mon, 30 Oct 2006 13:05:07 +0200",
"msg_from": "Mattias Kregert <[email protected]>",
"msg_from_op": true,
"msg_subject": "Strange plan in pg 8.1.0"
},
{
"msg_contents": "On Mon, Oct 30, 2006 at 01:05:07PM +0200, Mattias Kregert wrote:\n> -> Hash Left Join (cost=59.66..206763.11 rows=1215336 width=27) (actual time=4.959..3228.550 rows=1216434 loops=1)\n> Hash Cond: ((\"outer\".prislista = (\"inner\".listid)::text) AND (\"outer\".tjanst = (\"inner\".tjanstid)::text))\n\nNote the conversion to text here. Are you sure the types are matching on both\nsides of the join?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Mon, 30 Oct 2006 13:27:33 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange plan in pg 8.1.0"
},
{
"msg_contents": ">>> From: \"Steinar H. Gunderson\" <[email protected]>\n> On Mon, Oct 30, 2006 at 01:05:07PM +0200, Mattias Kregert wrote:\n>> -> Hash Left Join (cost=59.66..206763.11 rows=1215336 \n>> width=27) (actual time=4.959..3228.550 rows=1216434 loops=1)\n>> Hash Cond: ((\"outer\".prislista = (\"inner\".listid)::text) \n>> AND (\"outer\".tjanst = (\"inner\".tjanstid)::text))\n>\n> Note the conversion to text here. Are you sure the types are matching on \n> both\n> sides of the join?\n>\n> /* Steinar */\n\nOn the left side it is text, and on the right side it is varchar(10).\nCasting left side to varchar(10) does not help, in fact it makes things even \nworse: The cast to ::text vanishes in a puff of logic, but the plan gets \nbigger and even slower (20-25 seconds).\n\nA RIGHT join takes only 20 milliseconds, but i want the left join because \nthere could be missing rows in the \"prislist\" table...\n\n/* m */\n",
"msg_date": "Mon, 30 Oct 2006 15:26:09 +0100",
"msg_from": "\"Mattias Kregert\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange plan in pg 8.1.0"
},
{
"msg_contents": "Mattias Kregert <[email protected]> writes:\n> Why does it come up with this strange plan?\n\nBecause 8.1 can't reorder outer joins. To devise the plan you want,\nthe planner has to be able to prove that it's OK to perform the IN join\nbefore the LEFT join, something that isn't always the case. 8.2 can\nprove this, but no existing release can.\n\nThe only workaround I can think of is to do the IN in a sub-select.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 30 Oct 2006 10:09:29 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange plan in pg 8.1.0 "
},
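One way to read that suggestion, with the tables from this thread (an untested sketch): wrap the IN-restricted scan of kor_tjanster in a sub-select so it is evaluated before the LEFT JOIN; the OFFSET 0 is only there to keep the planner from flattening the sub-select away.

    select huvudklass, sum(summa)
    from (select *
          from kor_tjanster
          where kor_id in (select id from kor where lista = 10484)
          offset 0) as kt
    left outer join prislist on kt.prislista = listid and kt.tjanst = tjanstid
    group by 1;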
{
"msg_contents": "From: \"Tom Lane\" <[email protected]>\n> Mattias Kregert <[email protected]> writes:\n>> Why does it come up with this strange plan?\n>\n> Because 8.1 can't reorder outer joins. To devise the plan you want,\n> the planner has to be able to prove that it's OK to perform the IN join\n> before the LEFT join, something that isn't always the case. 8.2 can\n> prove this, but no existing release can.\n>\n> The only workaround I can think of is to do the IN in a sub-select.\n>\n> regards, tom lane\n>\n\nThanks!\nI'll try some subselect solution for now, and make a note to change it when \n8.2 is out.\n\n/* m */\n",
"msg_date": "Mon, 30 Oct 2006 16:38:13 +0100",
"msg_from": "\"Mattias Kregert\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange plan in pg 8.1.0 "
},
{
"msg_contents": "On Mon, Oct 30, 2006 at 03:26:09PM +0100, Mattias Kregert wrote:\n> On the left side it is text, and on the right side it is varchar(10).\n> Casting left side to varchar(10) does not help, in fact it makes things \n> even worse: The cast to ::text vanishes in a puff of logic, but the plan \n> gets bigger and even slower (20-25 seconds).\n\nCasting definitely won't help it any; it was more a question of having the\ntypes in the _tables_ be the same.\n\nAnyhow, this might be a red herring; others might have something more\nintelligent to say in this matter.\n\nBy the way, does it use an index scan if you turn off sequential scans\n(set enable_seqscan = false)?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Mon, 30 Oct 2006 16:49:57 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange plan in pg 8.1.0"
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Luke Lonergan\n> Sent: Saturday, October 28, 2006 12:07 AM\n> To: Worky Workerson; Merlin Moncure\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Best COPY Performance\n> \n> Worky,\n> \n> On 10/27/06 8:47 PM, \"Worky Workerson\" \n> <[email protected]> wrote:\n> \n> > Are you saying that I should be able to issue multiple COPY \n> commands \n> > because my I/O wait is low? I was under the impression \n> that I am I/O \n> > bound, so multiple simeoultaneous loads would have a detrimental \n> > effect ...\n> \n> ... \n> I agree with Merlin that you can speed things up by breaking \n> the file up.\n> Alternately you can use the OSS Bizgres java loader, which \n> lets you specify the number of I/O threads with the \"-n\" \n> option on a single file.\n\nAs a result of this thread, and b/c I've tried this in the past but\nnever had much success at speeding the process up, I attempted just that\nhere except via 2 psql CLI's with access to the local file. 1.1M rows\nof data varying in width from 40 to 200 characters COPY'd to a table\nwith only one text column, no keys, indexes, &c took about 15 seconds to\nload. ~73K rows/second.\n\nI broke that file into 2 files each of 550K rows and performed 2\nsimultaneous COPY's after dropping the table, recreating, issuing a sync\non the system to be sure, &c and nearly every time both COPY's finish in\n12 seconds. About a 20% gain to ~91K rows/second.\n\nAdmittedly, this was a pretty rough test but a 20% savings, if it can be\nput into production, is worth exploring for us.\n\nB/c I'll be asked, I did this on an idle, dual 3.06GHz Xeon with 6GB of\nmemory, U320 SCSI internal drives and PostgreSQL 8.1.4.\n\nGreg\n",
"msg_date": "Mon, 30 Oct 2006 09:09:32 -0500",
"msg_from": "\"Spiegelberg, Greg\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Best COPY Performance"
},
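A rough sketch of that kind of test (the file, database and table names are invented); each psql session runs its COPY in its own backend:

    # split the input into two roughly equal halves
    split -l 550000 data.txt chunk_

    # load both halves concurrently
    psql -d mydb -c "\copy loadtest from 'chunk_aa'" &
    psql -d mydb -c "\copy loadtest from 'chunk_ab'" &
    wait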
{
"msg_contents": "Greg,\n\nOn 10/30/06 7:09 AM, \"Spiegelberg, Greg\" <[email protected]> wrote:\n\n> I broke that file into 2 files each of 550K rows and performed 2\n> simultaneous COPY's after dropping the table, recreating, issuing a sync\n> on the system to be sure, &c and nearly every time both COPY's finish in\n> 12 seconds. About a 20% gain to ~91K rows/second.\n> \n> Admittedly, this was a pretty rough test but a 20% savings, if it can be\n> put into production, is worth exploring for us.\n\nDid you see whether you were I/O or CPU bound in your single threaded COPY?\nA 10 second \"vmstat 1\" snapshot would tell you/us.\n\nWith Mr. Workerson (:-) I'm thinking his benefit might be a lot better\nbecause the bottleneck is the CPU and it *may* be the time spent in the\nindex building bits.\n\nWe've found that there is an ultimate bottleneck at about 12-14MB/s despite\nhaving sequential write to disk speeds of 100s of MB/s. I forget what the\nlatest bottleneck was.\n\n- Luke \n\n\n",
"msg_date": "Mon, 30 Oct 2006 07:23:07 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best COPY Performance"
},
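For anyone reproducing this, the snapshot being asked for is simply the following, run while the single-threaded COPY is active; the us/sy columns versus wa and id show whether the load is CPU-bound or I/O-bound, and bi/bo give the block I/O rate:

    vmstat 1 10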
{
"msg_contents": "Stefan,\n\nOn 10/30/06 8:57 AM, \"Stefan Kaltenbrunner\" <[email protected]> wrote:\n\n>> We've found that there is an ultimate bottleneck at about 12-14MB/s despite\n>> having sequential write to disk speeds of 100s of MB/s. I forget what the\n>> latest bottleneck was.\n> \n> I have personally managed to load a bit less then 400k/s (5 int columns\n> no indexes) - on very fast disk hardware - at that point postgresql is\n> completely CPU bottlenecked (2,6Ghz Opteron).\n\n400,000 rows/s x 4 bytes/column x 5 columns/row = 8MB/s\n\n> Using multiple processes to load the data will help to scale up to about\n> 900k/s (4 processes on 4 cores).\n\n18MB/s? Have you done this? I've not seen this much of an improvement\nbefore by using multiple COPY processes to the same table.\n\nAnother question: how to measure MB/s - based on the input text file? On\nthe DBMS storage size? We usually consider the input text file in the\ncalculation of COPY rate.\n\n- Luke\n\n\n",
"msg_date": "Mon, 30 Oct 2006 08:03:41 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "Luke Lonergan wrote:\n> Greg,\n> \n> On 10/30/06 7:09 AM, \"Spiegelberg, Greg\" <[email protected]> wrote:\n> \n>> I broke that file into 2 files each of 550K rows and performed 2\n>> simultaneous COPY's after dropping the table, recreating, issuing a sync\n>> on the system to be sure, &c and nearly every time both COPY's finish in\n>> 12 seconds. About a 20% gain to ~91K rows/second.\n>>\n>> Admittedly, this was a pretty rough test but a 20% savings, if it can be\n>> put into production, is worth exploring for us.\n> \n> Did you see whether you were I/O or CPU bound in your single threaded COPY?\n> A 10 second \"vmstat 1\" snapshot would tell you/us.\n> \n> With Mr. Workerson (:-) I'm thinking his benefit might be a lot better\n> because the bottleneck is the CPU and it *may* be the time spent in the\n> index building bits.\n> \n> We've found that there is an ultimate bottleneck at about 12-14MB/s despite\n> having sequential write to disk speeds of 100s of MB/s. I forget what the\n> latest bottleneck was.\n\nI have personally managed to load a bit less then 400k/s (5 int columns \nno indexes) - on very fast disk hardware - at that point postgresql is \ncompletely CPU bottlenecked (2,6Ghz Opteron).\nUsing multiple processes to load the data will help to scale up to about \n 900k/s (4 processes on 4 cores).\n\n\nStefan\n",
"msg_date": "Mon, 30 Oct 2006 16:57:19 +0100",
"msg_from": "Stefan Kaltenbrunner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best COPY Performance"
},
{
"msg_contents": "Luke Lonergan wrote:\n> Stefan,\n> \n> On 10/30/06 8:57 AM, \"Stefan Kaltenbrunner\" <[email protected]> wrote:\n> \n>>> We've found that there is an ultimate bottleneck at about 12-14MB/s despite\n>>> having sequential write to disk speeds of 100s of MB/s. I forget what the\n>>> latest bottleneck was.\n>> I have personally managed to load a bit less then 400k/s (5 int columns\n>> no indexes) - on very fast disk hardware - at that point postgresql is\n>> completely CPU bottlenecked (2,6Ghz Opteron).\n> \n> 400,000 rows/s x 4 bytes/column x 5 columns/row = 8MB/s\n> \n>> Using multiple processes to load the data will help to scale up to about\n>> 900k/s (4 processes on 4 cores).\n\nyes I did that about half a year ago as part of the CREATE INDEX on a \n1,8B row table thread on -hackers that resulted in some some the sorting \nimprovements in 8.2.\nI don't think there is much more possible in terms of import speed by \nusing more cores (at least not when importing to the same table) - iirc \nI was at nearly 700k/s with two cores and 850k/s with 3 cores or such ...\n\n> \n> 18MB/s? Have you done this? I've not seen this much of an improvement\n> before by using multiple COPY processes to the same table.\n> \n> Another question: how to measure MB/s - based on the input text file? On\n> the DBMS storage size? We usually consider the input text file in the\n> calculation of COPY rate.\n\n\nyeah that is a good questions (and part of the reason why I cited the \nrows/sec number btw.)\n\n\nStefan\n",
"msg_date": "Mon, 30 Oct 2006 17:23:44 +0100",
"msg_from": "Stefan Kaltenbrunner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best COPY Performance"
}
] |
[
{
"msg_contents": "Hello,\nI have been having a problem with the following query ignoring an index\non the foos.bar column.\n\nSELECT c.id\nFROM foos c, bars r\nWHERE r.id != 0\nAND r.modified_time > '2006-10-20 10:00:00.000'\nAND r.modified_time <= '2006-10-30 15:20:00.000'\nAND c.bar = r.id\n\nThe bars table contains 597 rows, while the foos table contains 5031203\nrows.\n\nAfter much research I figured out that the problem is being caused by the\nPG planner deciding that my foos.bar index is not useful. The data in the\nfoos.bar column contains 5028698 0 values and 2505 that are ids in the bars\ntable.\n\nBoth tables have just been analyzed.\n\nWhen I EXPLAIN ANALYZE the above query, I get the following:\n\n\"Hash Join (cost=3.06..201642.49 rows=25288 width=8) (actual\ntime=0.234..40025.514 rows=11 loops=1)\"\n\" Hash Cond: (\"outer\".bar = \"inner\".id)\"\n\" -> Seq Scan on foos c (cost=0.00..176225.03 rows=5032303 width=16)\n(actual time=0.007..30838.623 rows=5031203 loops=1)\"\n\" -> Hash (cost=3.06..3.06 rows=3 width=8) (actual time=0.117..0.117\nrows=20 loops=1)\"\n\" -> Index Scan using bars_index_modified_time on bars r\n(cost=0.00..3.06 rows=3 width=8) (actual time=0.016..0.066 rows=20 loops=1)\"\n\" Index Cond: ((modified_time > '2006-10-20 10:00:00'::timestamp without\ntime zone) AND (modified_time <= '2006-10-30 15:20:00'::timestamp\nwithout time zone))\"\n\" Filter: (id <> 0)\"\n\"Total runtime: 40025.629 ms\"\n\nThe solution I found was to change the statistics on my foos.bar column from\nthe default -1 to 1000. When I do this, reanalyze the table, and rerun\nthe above\nquery, I get the following expected result.\n\n\"Nested Loop (cost=0.00..25194.66 rows=25282 width=8) (actual\ntime=13.035..23.338 rows=11 loops=1)\"\n\" -> Index Scan using bars_index_modified_time on bars r\n(cost=0.00..3.06 rows=3 width=8) (actual time=0.063..0.115 rows=20 loops=1)\"\n\" Index Cond: ((modified_time > '2006-10-20 10:00:00'::timestamp without\ntime zone) AND (modified_time <= '2006-10-30 15:20:00'::timestamp\nwithout time zone))\"\n\" Filter: (id <> 0)\"\n\" -> Index Scan using foos_index_bar on foos c (cost=0.00..6824.95\nrows=125780 width=16) (actual time=1.141..1.152 rows=1 loops=20)\"\n\" Index Cond: (c.bar = \"outer\".id)\"\n\"Total runtime: 23.446 ms\"\n\nHaving to do this concerns me as I am not sure what a good statistics value\nshould be. Also we expect this table to grow much larger and I am concerned\nthat it may not continue to function correctly. I tried a value of 100\nand that\nworks when the number of bars records is small, but as soon as I increase\nthem, the query starts ignoring the index again.\n\nIs increasing the statistics value the best way to resolve this problem? How\ncan I best decide on a good statistics value?\n\nHaving a column containing large numbers of null or 0 values seems fairly\ncommon. Is there way to tell Postgres to create an index of all values with\nmeaning. Ie all non-0 values? None that I could find.\n\nThanks in advance,\nLeif\n\n\n",
"msg_date": "Tue, 31 Oct 2006 13:04:12 +0900",
"msg_from": "Leif Mortenson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index ignored on column containing mostly 0 values"
},
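For reference, the statistics-target change described above is a per-column setting; a sketch with the names from this thread:

    ALTER TABLE foos ALTER COLUMN bar SET STATISTICS 1000;
    ANALYZE foos;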
{
"msg_contents": "\nOn Oct 31, 2006, at 13:04 , Leif Mortenson wrote:\n\n> Hello,\n> I have been having a problem with the following query ignoring an \n> index\n> on the foos.bar column.\n>\n> SELECT c.id\n> FROM foos c, bars r\n> WHERE r.id != 0\n> AND r.modified_time > '2006-10-20 10:00:00.000'\n> AND r.modified_time <= '2006-10-30 15:20:00.000'\n> AND c.bar = r.id\n\n<snip />\n\n> Having a column containing large numbers of null or 0 values seems \n> fairly\n> common. Is there way to tell Postgres to create an index of all \n> values with\n> meaning. Ie all non-0 values? None that I could find.\n\nTry\n\ncreate index foo_non_zero_bar_index on foos(bar) where bar <> 0;\n\nTake a look on the docs on partial indexes for more information.\n\nhttp://www.postgresql.org/docs/current/interactive/indexes-partial.html\n\nHope this helps.\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n",
"msg_date": "Tue, 31 Oct 2006 13:31:35 +0900",
"msg_from": "Michael Glaesemann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index ignored on column containing mostly 0 values"
},
{
"msg_contents": "Leif Mortenson <[email protected]> writes:\n> Having a column containing large numbers of null or 0 values seems fairly\n> common.\n\nYou would likely be better off to use NULL as a no-value placeholder,\ninstead of an arbitrarily chosen regular value (which the planner cannot\nbe certain does not match any entries in the other table...)\n\n> Is there way to tell Postgres to create an index of all values with\n> meaning. Ie all non-0 values? None that I could find.\n\nPartial index. Though I'm not sure that would help here. The problem\nis that the nestloop join you want would be spectacularly awful if there\nhappened to be any zeroes in bars.id, and the planner's statistical\nestimates allow some probability of that happening.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 30 Oct 2006 23:40:29 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index ignored on column containing mostly 0 values "
},
{
"msg_contents": "Am Dienstag, den 31.10.2006, 13:04 +0900 schrieb Leif Mortenson:\n> Hello,\n> I have been having a problem with the following query ignoring an index\n> on the foos.bar column.\n> \n> SELECT c.id\n> FROM foos c, bars r\n> WHERE r.id != 0\n> AND r.modified_time > '2006-10-20 10:00:00.000'\n> AND r.modified_time <= '2006-10-30 15:20:00.000'\n> AND c.bar = r.id\n> \n> The bars table contains 597 rows, while the foos table contains 5031203\n> rows.\n> \n> After much research I figured out that the problem is being caused by the\n> PG planner deciding that my foos.bar index is not useful. The data in the\n> foos.bar column contains 5028698 0 values and 2505 that are ids in the bars\n> table.\n> \n> Both tables have just been analyzed.\n> \n> When I EXPLAIN ANALYZE the above query, I get the following:\n> \n> \"Hash Join (cost=3.06..201642.49 rows=25288 width=8) (actual\n> time=0.234..40025.514 rows=11 loops=1)\"\n> \" Hash Cond: (\"outer\".bar = \"inner\".id)\"\n> \" -> Seq Scan on foos c (cost=0.00..176225.03 rows=5032303 width=16)\n> (actual time=0.007..30838.623 rows=5031203 loops=1)\"\n> \" -> Hash (cost=3.06..3.06 rows=3 width=8) (actual time=0.117..0.117\n> rows=20 loops=1)\"\n> \" -> Index Scan using bars_index_modified_time on bars r\n> (cost=0.00..3.06 rows=3 width=8) (actual time=0.016..0.066 rows=20 loops=1)\"\n> \" Index Cond: ((modified_time > '2006-10-20 10:00:00'::timestamp without\n> time zone) AND (modified_time <= '2006-10-30 15:20:00'::timestamp\n> without time zone))\"\n> \" Filter: (id <> 0)\"\n> \"Total runtime: 40025.629 ms\"\n> \n> The solution I found was to change the statistics on my foos.bar column from\n> the default -1 to 1000. When I do this, reanalyze the table, and rerun\n> the above\n> query, I get the following expected result.\n> \n> \"Nested Loop (cost=0.00..25194.66 rows=25282 width=8) (actual\n> time=13.035..23.338 rows=11 loops=1)\"\n> \" -> Index Scan using bars_index_modified_time on bars r\n> (cost=0.00..3.06 rows=3 width=8) (actual time=0.063..0.115 rows=20 loops=1)\"\n> \" Index Cond: ((modified_time > '2006-10-20 10:00:00'::timestamp without\n> time zone) AND (modified_time <= '2006-10-30 15:20:00'::timestamp\n> without time zone))\"\n> \" Filter: (id <> 0)\"\n> \" -> Index Scan using foos_index_bar on foos c (cost=0.00..6824.95\n> rows=125780 width=16) (actual time=1.141..1.152 rows=1 loops=20)\"\n> \" Index Cond: (c.bar = \"outer\".id)\"\n> \"Total runtime: 23.446 ms\"\n> \n> Having to do this concerns me as I am not sure what a good statistics value\n> should be. Also we expect this table to grow much larger and I am concerned\n> that it may not continue to function correctly. I tried a value of 100\n> and that\n> works when the number of bars records is small, but as soon as I increase\n> them, the query starts ignoring the index again.\n> \n> Is increasing the statistics value the best way to resolve this problem? How\n> can I best decide on a good statistics value?\n> \n> Having a column containing large numbers of null or 0 values seems fairly\n> common. Is there way to tell Postgres to create an index of all values with\n> meaning. Ie all non-0 values? None that I could find.\nHave you tried\n\nCREATE INDEX partial ON foos (bar) WHERE bar IS NOT NULL;\n\nAndreas",
"msg_date": "Tue, 31 Oct 2006 16:03:55 +0100",
"msg_from": "Andreas Kostyrka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index ignored on column containing mostly 0 values"
}
] |
[
{
"msg_contents": "Checkpoints are not an issue here, the vmstat you included was on a 5 second interval, so the 'bursts' were bursting at a rate of 60MB/s.\n\n\n- Luke\n\nMsg is shrt cuz m on ma treo\n\n",
"msg_date": "Tue, 31 Oct 2006 15:56:00 -0500",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Best COPY Performance"
}
] |
[
{
"msg_contents": "Ok, so MVCC is the best thing since a guy put a round stone on a stick\nand called it \"the wheel\", but I've seen several references on this list\nabout \"indexes not being under MVCC\" - at least that's how I read it,\nthe original posts were explaining why indexes can't be used for solving\nMIN()/MAX()/COUNT() aggregates. Is this correct?\n\nIn particular, I'm trying to find out is there (b)locking involved when\nconcurrently updating and/or inserting records in an indexed table. My\nguess is that, since PG does copy+delete on updating, even updating a\nnon-indexed field will require fixups in the index tree (to point to the\nnew record) and thus (b)locking.\n\n\n",
"msg_date": "Tue, 31 Oct 2006 22:55:40 +0100",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": true,
"msg_subject": "MVCC & indexes?"
},
{
"msg_contents": "Ivan Voras wrote:\n> Ok, so MVCC is the best thing since a guy put a round stone on a stick\n> and called it \"the wheel\", but I've seen several references on this list\n> about \"indexes not being under MVCC\" - at least that's how I read it,\n> the original posts were explaining why indexes can't be used for solving\n> MIN()/MAX()/COUNT() aggregates. Is this correct?\n> \n> In particular, I'm trying to find out is there (b)locking involved when\n> concurrently updating and/or inserting records in an indexed table. My\n> guess is that, since PG does copy+delete on updating, even updating a\n> non-indexed field will require fixups in the index tree (to point to the\n> new record) and thus (b)locking.\n\nWell, there certainly is locking involved in inserting index entries,\nbut it's more fine-grained than you seem to think. Only one page of the\nindex is locked at any time, resulting in that typically there's very\nlittle blocking involved. Two processes can be inserting into the same\nindex concurrently (btree and GiST indexes at least; GiST only gained\nconcurrency in a recent release, I don't remember if it was 8.0 or 8.1).\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Tue, 31 Oct 2006 19:36:32 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MVCC & indexes?"
},
{
"msg_contents": "On Tue, Oct 31, 2006 at 10:55:40PM +0100, Ivan Voras wrote:\n> Ok, so MVCC is the best thing since a guy put a round stone on a stick\n> and called it \"the wheel\", but I've seen several references on this list\n> about \"indexes not being under MVCC\" - at least that's how I read it,\n> the original posts were explaining why indexes can't be used for solving\n> MIN()/MAX()/COUNT() aggregates. Is this correct?\n\n> In particular, I'm trying to find out is there (b)locking involved when\n> concurrently updating and/or inserting records in an indexed table. My\n> guess is that, since PG does copy+delete on updating, even updating a\n> non-indexed field will require fixups in the index tree (to point to the\n> new record) and thus (b)locking.\n\nShort bits of blocking. The PostgreSQL index 'problem', is that indexes\nare conservative. They only guarantee to return at least as much data as\nyou should see. They cannot be used to limit what you see to only as much\nas you should see.\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n",
"msg_date": "Tue, 31 Oct 2006 19:20:26 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: MVCC & indexes?"
},
{
"msg_contents": "Alvaro Herrera wrote:\n\n> Well, there certainly is locking involved in inserting index entries,\n> but it's more fine-grained than you seem to think. Only one page of the\n> index is locked at any time, resulting in that typically there's very\n> little blocking involved. Two processes can be inserting into the same\n> index concurrently (btree and GiST indexes at least; GiST only gained\n> concurrency in a recent release, I don't remember if it was 8.0 or 8.1).\n\nThank you, this was the bit I was missing. In retrospect, I don't really\nknow how I came to conclusion the whole index was being locked :(\n\n",
"msg_date": "Wed, 01 Nov 2006 13:06:09 +0100",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: MVCC & indexes?"
}
] |
[
{
"msg_contents": "Hello-\n\n#I am a biologist, and work with large datasets (tables with millions of \nrows are common).\n#These datasets often can be simplified as features with a name, and a \nstart and end position (ie: a range along a number line. GeneX is on \nsome chromosome from position 10->40)\n\nI store these features in tables that generally have the form:\n\nSIMPLE_TABLE:\nFeatureID(PrimaryKey) -- FeatureName(varchar) -- \nFeatureChromosomeName(varchar) -- StartPosition(int) -- EndPosition(int)\n\nMy problem is, I often need to execute searches of tables like these \nwhich find \"All features within a range\". \nIe: select FeatureID from SIMPLE_TABLE where FeatureChromosomeName like \n'chrX' and StartPosition > 1000500 and EndPosition < 2000000;\n\nThis kind of query is VERY slow, and I've tried tinkering with indexes \nto speed it up, but with little success.\nIndexes on Chromosome help a little, but it I can't think of a way to \navoid full table scans for each of the position range queries.\n\nAny advice on how I might be able to improve this situation would be \nvery helpful.\n\nThanks!\nJohn\n",
"msg_date": "Tue, 31 Oct 2006 18:18:38 -0500",
"msg_from": "John Major <[email protected]>",
"msg_from_op": true,
"msg_subject": "Help w/speeding up range queries?"
},
{
"msg_contents": "John,\n\nOn 10/31/06 3:18 PM, \"John Major\" <[email protected]> wrote:\n\n> Any advice on how I might be able to improve this situation would be\n> very helpful.\n\nI think table partitioning is exactly what you need.\n\nThere's a basic capability in current Postgres to divide tables into parent\n+ children, each of which have a constraint for the rows inside (in your\ncase chromosome). When you query the parent, the planner will exclude child\ntables outside of the predicate range.\n\n- Luke\n\n\n",
"msg_date": "Tue, 31 Oct 2006 15:54:50 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help w/speeding up range queries?"
},
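A minimal sketch of the parent/child layout being described, using the column names from this thread (the child table name is invented; one child per chromosome). Constraint exclusion works against the CHECK constraint, so queries need an equality predicate on the chromosome column and constraint_exclusion enabled:

    CREATE TABLE simple_table_chrx (
        CHECK (featurechromosomename = 'chrX')
    ) INHERITS (simple_table);

    -- repeat per chromosome, route inserts to the matching child,
    -- and set constraint_exclusion = on in postgresql.conf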
{
"msg_contents": "John Major wrote:\n> Hello-\n> \n> #I am a biologist, and work with large datasets (tables with millions of\n> rows are common).\n> #These datasets often can be simplified as features with a name, and a\n> start and end position (ie: a range along a number line. GeneX is on\n> some chromosome from position 10->40)\n> \n> I store these features in tables that generally have the form:\n> \n> SIMPLE_TABLE:\n> FeatureID(PrimaryKey) -- FeatureName(varchar) --\n> FeatureChromosomeName(varchar) -- StartPosition(int) -- EndPosition(int)\n> \n> My problem is, I often need to execute searches of tables like these\n> which find \"All features within a range\". Ie: select FeatureID from\n> SIMPLE_TABLE where FeatureChromosomeName like 'chrX' and StartPosition >\n> 1000500 and EndPosition < 2000000;\n> \n> This kind of query is VERY slow, and I've tried tinkering with indexes\n> to speed it up, but with little success.\n> Indexes on Chromosome help a little, but it I can't think of a way to\n> avoid full table scans for each of the position range queries.\n> \n> Any advice on how I might be able to improve this situation would be\n> very helpful.\n\nBasic question - What version, and what indexes do you have?\n\nHave an EXPLAIN?\n\nSomething like -\n\nCREATE INDEX index_name ON SIMPLE_TABLE ( FeatureChromosomeName\nvarchar_pattern_ops, StartPosition, EndPosition );\n\nThe varchar_pattern_ops being the \"key\" so LIKE can use an index.\nProvided of course its LIKE 'something%' and not LIKE '%something'\n\n\n> \n> Thanks!\n> John\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n> \n> http://www.postgresql.org/about/donate\n> \n\n\nWeslee\n",
"msg_date": "Tue, 31 Oct 2006 15:57:04 -0800",
"msg_from": "Weslee Bilodeau <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help w/speeding up range queries?"
},
{
"msg_contents": "Weslee,\n\nOn 10/31/06 3:57 PM, \"Weslee Bilodeau\"\n<[email protected]> wrote:\n\n> Basic question - What version, and what indexes do you have?\n\nI'd expect the problem with this is that unless the indexed column is\ncorrelated with the loading order of the rows over time, then the index will\nrefer to rows distributed non-sequentially on disk, in which case the index\ncan be worse than a sequential scan.\n\nYou can cluster the table on the index (don't use the \"CLUSTER\" command! Do\na CREATE TABLE AS SELECT .. ORDER BY instead!), but the index won't refer to\nsequential table data when there's more data added. What this does is\nanalogous to the partitioning option though, and you don't have the problem\nof the table being de-clustered on the constraint column.\n\nThe problem with the current support for partitioning is that you have to\nimplement rules for inserts/updates/deletes so that you can do them to the\nparent and they will be implemented on the children. As a result,\npartitioning is not transparent. OTOH, it achieves great performance gains.\n\nBTW - If you have a date column and your data is loaded in date order, then\nan index is all that's necessary, you will get sequential access.\n \n- Luke\n\n\n",
"msg_date": "Tue, 31 Oct 2006 16:10:54 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help w/speeding up range queries?"
},
{
"msg_contents": "John Major <[email protected]> writes:\n> My problem is, I often need to execute searches of tables like these \n> which find \"All features within a range\". \n> Ie: select FeatureID from SIMPLE_TABLE where FeatureChromosomeName like \n> 'chrX' and StartPosition > 1000500 and EndPosition < 2000000;\n\nA standard btree index is just going to suck for these types of queries;\nyou need something that's actually designed for spatial range queries.\nYou might look at the contrib/seg module --- if you can store your\nranges as \"seg\" datatype then the seg overlap operator expresses what\nyou need to do, and searches on an overlap operator can be handled well\nby a GIST index.\n\nAlso, there's the PostGIS stuff, though it might be overkill for what\nyou want.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 31 Oct 2006 23:29:09 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help w/speeding up range queries? "
},
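A sketch of the contrib/seg idea (the column and index names are invented; the seg module from contrib must be installed). Each start/end pair is stored as a seg value, indexed with GiST, and searched with the overlap operator; note that seg stores single-precision floats, so very large coordinates can lose precision:

    ALTER TABLE simple_table ADD COLUMN span seg;
    UPDATE simple_table
       SET span = (startposition::text || ' .. ' || endposition::text)::seg;
    CREATE INDEX simple_table_span_gist ON simple_table USING gist (span);

    SELECT featureid
      FROM simple_table
     WHERE featurechromosomename = 'chrX'
       AND span && '1000500 .. 2000000'::seg;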
{
"msg_contents": "John,\n\nOn 10/31/06 8:29 PM, \"Tom Lane\" <[email protected]> wrote:\n\n>> 'chrX' and StartPosition > 1000500 and EndPosition < 2000000;\n> \n> Also, there's the PostGIS stuff, though it might be overkill for what\n> you want.\n\nOops - I missed the point earlier. Start and End are separate attributes so\nthis is like an unbounded window in a Start,End space. PostGis provides\nquadtree indexing would provide a terse TID list but you still have the\nproblem of how to ensure that the heap tuples being scanned are efficiently\nretrieved, which would only happen if they are grouped similarly to the\nretrieval pattern, right?\n\n- Luke\n\n\n",
"msg_date": "Tue, 31 Oct 2006 21:26:04 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help w/speeding up range queries?"
},
{
"msg_contents": "\"Luke Lonergan\" <[email protected]> writes:\n> Oops - I missed the point earlier. Start and End are separate attributes so\n> this is like an unbounded window in a Start,End space. PostGis provides\n> quadtree indexing would provide a terse TID list but you still have the\n> problem of how to ensure that the heap tuples being scanned are efficiently\n> retrieved, which would only happen if they are grouped similarly to the\n> retrieval pattern, right?\n\nYeah, but I think that's a second-order problem compared to having an\nindex that's reasonably well matched to the query ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 01 Nov 2006 00:58:24 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help w/speeding up range queries? "
},
{
"msg_contents": "> Ie: select FeatureID from SIMPLE_TABLE where FeatureChromosomeName like\n> 'chrX' and StartPosition > 1000500 and EndPosition < 2000000;\n\nHow about ( this assumes that StartPosition <= EndPosition ):\n\nselect FeatureID\nfrom SIMPLE_TABLE\nwhere FeatureChromosomeName llike 'chrX'\nand StartPosition > 1000500\nand StartPosition < 2000000\nand EndPosition > 1000500\nand EndPosition < 2000000;\n\n\nThis at least should help the planner with estimating number of rows.\n\nAlso think twice when You assume that a query with ILIKE will use an index.\nRead about varchar_pattern_ops.\nMake an index on (FeatureChromosomeName,StartPosition) , and all should be\nfine.\n\nGreetings\nMarcin\n\n",
"msg_date": "Thu, 2 Nov 2006 11:54:57 +0100",
"msg_from": "\"Marcin Mank\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help w/speeding up range queries?"
},
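A sketch of the index Marcin recommends above, shown together with the rewritten query. Whether varchar_pattern_ops is actually needed depends on the locale (it matters for LIKE prefix matching under a non-C locale); the table and column names are taken from the thread.

CREATE INDEX simple_table_chrom_start
    ON simple_table (featurechromosomename varchar_pattern_ops, startposition);

SELECT featureid
FROM simple_table
WHERE featurechromosomename LIKE 'chrX'
  AND startposition > 1000500 AND startposition < 2000000
  AND endposition > 1000500 AND endposition < 2000000;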
{
"msg_contents": "On Tue, 2006-10-31 at 18:18 -0500, John Major wrote:\n\n> #I am a biologist, and work with large datasets (tables with millions of \n> rows are common).\n> #These datasets often can be simplified as features with a name, and a \n> start and end position (ie: a range along a number line. GeneX is on \n> some chromosome from position 10->40)\n\nDo you know about www.biopostgres.org ?\n\nI believe they provide some additional indexing mechanisms for just this\ntype of data.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 02 Nov 2006 10:59:47 +0000",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help w/speeding up range queries?"
},
{
"msg_contents": "On Oct 31, 2006, at 8:29 PM, Tom Lane wrote:\n> John Major <[email protected]> writes:\n>> My problem is, I often need to execute searches of tables like these\n>> which find \"All features within a range\".\n>> Ie: select FeatureID from SIMPLE_TABLE where \n>> FeatureChromosomeName like\n>> 'chrX' and StartPosition > 1000500 and EndPosition < 2000000;\n>\n> A standard btree index is just going to suck for these types of \n> queries;\n> you need something that's actually designed for spatial range queries.\n> You might look at the contrib/seg module --- if you can store your\n> ranges as \"seg\" datatype then the seg overlap operator expresses what\n> you need to do, and searches on an overlap operator can be handled \n> well\n> by a GIST index.\n>\n> Also, there's the PostGIS stuff, though it might be overkill for what\n> you want.\n\nAnother possibility (think Tom has suggested in the past) is to \ndefine Start and End as a box, and then use the geometric functions \nbuilt into plain PostgreSQL (though perhaps that's what he meant by \n\"PostGIS stuff\").\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n",
"msg_date": "Mon, 6 Nov 2006 17:37:13 -0800",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help w/speeding up range queries? "
}
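Spelling out the geometric variant mentioned in the last message, under the same table and column assumptions as the rest of the thread: the (start, end) pair is flattened onto a degenerate box so the built-in GiST support for the geometric types can index it, with && as the overlap test.

CREATE INDEX simple_table_box_gist
    ON simple_table
    USING gist (box(point(startposition, 0), point(endposition, 0)));

-- Features on chrX whose interval overlaps 1000500 .. 2000000:
SELECT featureid
FROM simple_table
WHERE featurechromosomename = 'chrX'
  AND box(point(startposition, 0), point(endposition, 0))
      && box(point(1000500, 0), point(2000000, 0));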
] |
[
{
"msg_contents": "Hi,\n\nWe've migrated one of our servers from pg 7.4 to 8.1 and from times to times\n(4 hours) the server start doing a lot of context switching and all\ntransactions become very slow.\n\nThe average context switching for this server as vmstat shows is 1 but when\nthe problem occurs it goes to 250000.\n\nCPU and memory usage are ok.\n\nWhat can produce this context switching storms?\n\nIt is a box with 12GB RAM and 4 processors running RedHat Enterprise Linux\nAS.\n\nThank you in advance!\nReimer\[email protected]\nOpenDB Servi�os e Treinamentos PostgreSQL e DB2\nFone: 47 3327-0878 Cel: 47 9602-0151\nwww.opendb.com.br\n\n\n\n\n\n\nHi,\n \nWe've migrated one \nof our servers from pg 7.4 to 8.1 and from times to times (4 hours) the server \nstart doing a lot of context switching and all transactions become very \nslow.\n \nThe average context \nswitching for this server as vmstat shows is 1 but when the problem occurs it \ngoes to 250000.\n \nCPU and memory \nusage are ok.\n \nWhat can produce \nthis context switching storms?\n \nIt is a box with \n12GB RAM and 4 processors running RedHat Enterprise Linux \nAS.\n \nThank you in \nadvance!\[email protected] \nServiços e Treinamentos PostgreSQL e DB2Fone: 47 3327-0878 Cel: 47 \n9602-0151www.opendb.com.br",
"msg_date": "Wed, 1 Nov 2006 03:23:17 -0200",
"msg_from": "\"Carlos H. Reimer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Context switching"
},
{
"msg_contents": "Hi,\n\nSorry, but this message was already post some days before!\n\nThank you!\n\nCarlos\n\n-----Mensagem original-----\nDe: [email protected]\n[mailto:[email protected]]Em nome de Carlos H. Reimer\nEnviada em: quarta-feira, 1 de novembro de 2006 03:23\nPara: [email protected]\nAssunto: [PERFORM] Context switching\n\n\n Hi,\n\n We've migrated one of our servers from pg 7.4 to 8.1 and from times to\ntimes (4 hours) the server start doing a lot of context switching and all\ntransactions become very slow.\n\n The average context switching for this server as vmstat shows is 1 but\nwhen the problem occurs it goes to 250000.\n\n CPU and memory usage are ok.\n\n What can produce this context switching storms?\n\n It is a box with 12GB RAM and 4 processors running RedHat Enterprise Linux\nAS.\n\n Thank you in advance!\n Reimer\n [email protected]\n OpenDB Servi�os e Treinamentos PostgreSQL e DB2\n Fone: 47 3327-0878 Cel: 47 9602-0151\n www.opendb.com.br\n\n\n\n\n\n\nHi,\n \nSorry, but this \nmessage was already post some days before!\n \nThank \nyou!\n \nCarlos\n \n-----Mensagem original-----De: \[email protected] \n[mailto:[email protected]]Em nome de Carlos H. \nReimerEnviada em: quarta-feira, 1 de novembro de 2006 \n03:23Para: [email protected]: \n[PERFORM] Context switching\n\nHi,\n \nWe've migrated \n one of our servers from pg 7.4 to 8.1 and from times to times (4 hours) the \n server start doing a lot of context switching and all transactions become very \n slow.\n \nThe average \n context switching for this server as vmstat shows is 1 but when the problem \n occurs it goes to 250000.\n \nCPU and memory \n usage are ok.\n \nWhat can produce \n this context switching storms?\n \nIt is a box with \n 12GB RAM and 4 processors running RedHat Enterprise Linux \n AS.\n \nThank you in \n advance!\[email protected] \n Serviços e Treinamentos PostgreSQL e DB2Fone: 47 3327-0878 Cel: 47 \n 9602-0151www.opendb.com.br",
"msg_date": "Mon, 6 Nov 2006 16:29:13 -0200",
"msg_from": "\"Carlos H. Reimer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RES: Context switching"
},
{
"msg_contents": "Carlos,\n\n> We've migrated one of our servers from pg 7.4 to 8.1 and from times to\n> times (4 hours) the server start doing a lot of context switching and all\n> transactions become very slow.\n>\n> The average context switching for this server as vmstat shows is 1 but when\n> the problem occurs it goes to 250000.\n\nContext Switching is a symptom rather than a cause. What's most likely \nhappening is that you have a combined heavy-CPU and heavy-IO workload, so you \nhave bursts of CPU activity stalled by iowaits.\n\nCan you check the rate of iowaits during the \"storm\" periods?\n\nAlso, is this Xeon? And are you saying that you *didn't* have this issue \nunder 7.4?\n\n-- \nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n",
"msg_date": "Tue, 7 Nov 2006 09:42:26 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Context switching"
},
{
"msg_contents": "Hi,\n\nI have not seen the iowaits numbers and yes, it is a xeon, four processors.\n\nWith the 7.4 version the problem did not exist but I think Tom Lane cleared why this behavior with 8.0 and 8.1 versions in http://archives.postgresql.org/pgsql-performance/2006-11/msg00050.php\n\nReimer\n\n\n> -----Mensagem original-----\n> De: Josh Berkus [mailto:[email protected]]\n> Enviada em: terça-feira, 7 de novembro de 2006 15:42\n> Para: [email protected]; [email protected]\n> Assunto: Re: [PERFORM] Context switching\n> \n> \n> Carlos,\n> \n> > We've migrated one of our servers from pg 7.4 to 8.1 and from times to\n> > times (4 hours) the server start doing a lot of context \n> switching and all\n> > transactions become very slow.\n> >\n> > The average context switching for this server as vmstat shows \n> is 1 but when\n> > the problem occurs it goes to 250000.\n> \n> Context Switching is a symptom rather than a cause. What's most likely \n> happening is that you have a combined heavy-CPU and heavy-IO \n> workload, so you \n> have bursts of CPU activity stalled by iowaits.\n> \n> Can you check the rate of iowaits during the \"storm\" periods?\n> \n> Also, is this Xeon? And are you saying that you *didn't* have \n> this issue \n> under 7.4?\n> \n> -- \n> Josh Berkus\n> PostgreSQL @ Sun\n> San Francisco\n> \n> \n\n",
"msg_date": "Tue, 21 Nov 2006 10:44:23 -0200",
"msg_from": "\"Carlos H. Reimer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RES: Context switching"
}
] |
[
{
"msg_contents": "I've got a long-running, update-heavy transaction that increasingly slows \ndown the longer it runs. I would expect that behavior, if there was some \ntemp file creation going on. But monitoring vmstat over the life of the \ntransaction shows virtually zero disk activity. Instead, the system has \nits CPU pegged the whole time.\n\nSo.... why the slowdown? Is it a MVCC thing? A side effect of calling \nstored proceedures a couple hundred thousand times in a single \ntransaction? Or am I just doing something wrong?\n",
"msg_date": "Tue, 31 Oct 2006 21:58:36 -0800 (PST)",
"msg_from": "Ben <[email protected]>",
"msg_from_op": true,
"msg_subject": "big transaction slows down over time - but disk seems almost unused"
},
{
"msg_contents": "Am Dienstag, den 31.10.2006, 21:58 -0800 schrieb Ben:\n> I've got a long-running, update-heavy transaction that increasingly slows \n> down the longer it runs. I would expect that behavior, if there was some \n> temp file creation going on. But monitoring vmstat over the life of the \n> transaction shows virtually zero disk activity. Instead, the system has \n> its CPU pegged the whole time.\n> \n> So.... why the slowdown? Is it a MVCC thing? A side effect of calling \n> stored proceedures a couple hundred thousand times in a single \n\nMemory usage? Have you tried to checkpoint your transaction from time to\ntime?\n\nAndreas\n\n> transaction? Or am I just doing something wrong?\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings",
"msg_date": "Wed, 01 Nov 2006 10:21:01 +0100",
"msg_from": "Andreas Kostyrka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: big transaction slows down over time - but disk"
},
{
"msg_contents": "Ben wrote:\n> I've got a long-running, update-heavy transaction that increasingly \n> slows down the longer it runs. I would expect that behavior, if there \n> was some temp file creation going on. But monitoring vmstat over the \n> life of the transaction shows virtually zero disk activity. Instead, the \n> system has its CPU pegged the whole time.\n> \n> So.... why the slowdown? Is it a MVCC thing? A side effect of calling \n> stored proceedures a couple hundred thousand times in a single \n> transaction? Or am I just doing something wrong?\n\nMy guess is that the updates are creating a lot of old row versions, and \na command within the transaction is doing a seq scan that has to scan \nthrough all of them. Or something like that. It's hard to tell without \nmore details.\n\nCalling stored procedures repeatedly shouldn't cause a slowdown over time.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Wed, 01 Nov 2006 09:39:59 +0000",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: big transaction slows down over time - but disk seems"
},
{
"msg_contents": "Memory usage remains consistent, which is to say that postgres is \nusing most available system memory all the time, as I configured it \nto. There is no swapping going on.\n\nIt's not clear to me why forcing a WAL checkpoint would help \nanything.... but it doesn't matter, as only superusers can do it, so \nit's not an option for me. Unless there's a whole other meaning you \nwere implying....?\n\nOn Nov 1, 2006, at 1:21 AM, Andreas Kostyrka wrote:\n\n> Am Dienstag, den 31.10.2006, 21:58 -0800 schrieb Ben:\n>> I've got a long-running, update-heavy transaction that \n>> increasingly slows\n>> down the longer it runs. I would expect that behavior, if there \n>> was some\n>> temp file creation going on. But monitoring vmstat over the life \n>> of the\n>> transaction shows virtually zero disk activity. Instead, the \n>> system has\n>> its CPU pegged the whole time.\n>>\n>> So.... why the slowdown? Is it a MVCC thing? A side effect of calling\n>> stored proceedures a couple hundred thousand times in a single\n>\n> Memory usage? Have you tried to checkpoint your transaction from \n> time to\n> time?\n>\n> Andreas\n>\n>> transaction? Or am I just doing something wrong?\n>>\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 5: don't forget to increase your free space map settings\n\n",
"msg_date": "Wed, 1 Nov 2006 07:49:15 -0800",
"msg_from": "Ben <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: big transaction slows down over time - but disk seems almost\n\tunused"
},
{
"msg_contents": "My transaction calls the same stored procedure many times over. Over \nthe lifetime of the transaction, that stored procedure slows down by \nroughly 2 orders of magnitude. The procedure itself tries to look up \nseveral strings in dictionary tables, and if the strings aren't there \n(most of them will be) it inserts them. All those dictionary tables \nhave indexes. After it has converted most of the strings into ids, it \ndoes another lookup on a table and if it finds a matching row (should \nbe the common case) it updates a timestamp column of that row; \notherwise, it inserts a new row.\n\nSo.... there isn't much table size changing, but there are a lot of \nupdates. Based on pg_stat_user_tables I suspect that the procedure is \nusing indexes more than table scans. Is there a better way to know?\n\nOn Nov 1, 2006, at 1:31 AM, Richard Huxton wrote:\n\n> Ben wrote:\n>> I've got a long-running, update-heavy transaction that \n>> increasingly slows down the longer it runs. I would expect that \n>> behavior, if there was some temp file creation going on. But \n>> monitoring vmstat over the life of the transaction shows virtually \n>> zero disk activity. Instead, the system has its CPU pegged the \n>> whole time.\n>> So.... why the slowdown? Is it a MVCC thing? A side effect of \n>> calling stored proceedures a couple hundred thousand times in a \n>> single transaction? Or am I just doing something wrong?\n>\n> You'll need to provide some more information before anyone can come \n> up with something conclusive. What queries slow down, by how much \n> and after what updates (for example). It could be an update/vacuum- \n> related problem, or it could be that your stored procedures aren't \n> coping with changes in table size (if table(s) are changing size).\n>\n> -- \n> Richard Huxton\n> Archonet Ltd\n\n",
"msg_date": "Wed, 1 Nov 2006 07:56:54 -0800",
"msg_from": "Ben <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: big transaction slows down over time - but disk seems almost\n\tunused"
},
{
"msg_contents": "Ben wrote:\n> My transaction calls the same stored procedure many times over. Over the \n> lifetime of the transaction, that stored procedure slows down by roughly \n> 2 orders of magnitude. The procedure itself tries to look up several \n> strings in dictionary tables, and if the strings aren't there (most of \n> them will be) it inserts them. All those dictionary tables have indexes. \n> After it has converted most of the strings into ids, it does another \n> lookup on a table and if it finds a matching row (should be the common \n> case) it updates a timestamp column of that row; otherwise, it inserts a \n> new row.\n\nWhich would suggest Heikki's guess was pretty much right and it's dead \nrows that are causing the problem.\n\nAssuming most updates are to this timestamp, could you try a test case \nthat does everything *except* update the timestamp. If that runs \nblazingly fast then we've found the problem.\n\nIf that is the problem, there's two areas to look at:\n1. Avoid updating the same timestamp more than once (if that's happening)\n2. Update timestamps in one go at the end of the transaction (perhaps by \nloading updates into a temp table).\n3. Split the transaction in smaller chunks of activity.\n\n> So.... there isn't much table size changing, but there are a lot of \n> updates. Based on pg_stat_user_tables I suspect that the procedure is \n> using indexes more than table scans. Is there a better way to know?\n\nNot really. You can check the plans of queries within the function, but \nthere's no way to capture query plans of running functions.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Wed, 01 Nov 2006 16:34:12 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: big transaction slows down over time - but disk seems"
},
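A rough sketch of option 2 above: collect the keys of rows whose timestamp needs bumping into a temporary table while the job runs, then apply a single set-based UPDATE at the end of the transaction. The names work_items, item_id and last_seen are invented for illustration; a rollback still undoes everything, temp table included.

BEGIN;

-- Filled incrementally by the procedure instead of issuing one UPDATE per row.
CREATE TEMP TABLE pending_touch (item_id integer PRIMARY KEY) ON COMMIT DROP;

-- Done inside the loop, once per processed row (42 stands in for the real key):
INSERT INTO pending_touch (item_id) VALUES (42);

-- At the very end of the transaction, one set-based update:
UPDATE work_items
   SET last_seen = now()
 WHERE item_id IN (SELECT item_id FROM pending_touch);

COMMIT;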
{
"msg_contents": "\n\nOn Wed, 1 Nov 2006, Richard Huxton wrote:\n\n> 1. Avoid updating the same timestamp more than once (if that's happening)\n\nEach row is updated at most once, and not all rows are updated.\n\n> 2. Update timestamps in one go at the end of the transaction (perhaps by \n> loading updates into a temp table).\n\nHey, that's not a bad idea. I'll give that a shot. Thanks!\n\n> 3. Split the transaction in smaller chunks of activity.\n\nI'd be happy to do this too, except that I need a simple way to rollback \neverything, and I don't see how I can get that with this.\n\n\n\n",
"msg_date": "Wed, 1 Nov 2006 08:51:46 -0800 (PST)",
"msg_from": "Ben <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: big transaction slows down over time - but disk seems"
},
{
"msg_contents": "Ben wrote:\n> \n> \n> On Wed, 1 Nov 2006, Richard Huxton wrote:\n> \n>> 1. Avoid updating the same timestamp more than once (if that's happening)\n> \n> Each row is updated at most once, and not all rows are updated.\n> \n>> 2. Update timestamps in one go at the end of the transaction (perhaps \n>> by loading updates into a temp table).\n> \n> Hey, that's not a bad idea. I'll give that a shot. Thanks!\n> \n>> 3. Split the transaction in smaller chunks of activity.\n> \n> I'd be happy to do this too, except that I need a simple way to rollback \n> everything, and I don't see how I can get that with this.\n\nWell, you could with a temp-table, but it probably won't be necessary if \nyou have one. You might wan to issue a vacuum on the updated table after \nthe transaction completes.\n\nNote that this idea is built on a set of assumptions that might not be \ntrue, so do test.\n\nOh - if you're processing rows one at a time with your stored procedure, \nsee if there's not a way to process the whole set. That can make a huge \ndifference.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Wed, 01 Nov 2006 17:15:16 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: big transaction slows down over time - but disk seems"
}
] |
[
{
"msg_contents": "Here is a potential problem with the auto-vacuum daemon, and I'm wondering\nif anyone has considered this. To avoid transaction ID wraparound, the\nauto-vacuum daemon will periodically determine that it needs to do a DB-wide\nvacuum, which takes a long time. On our system, it is on the order of a\ncouple of weeks. (The system is very busy and there is a lot of I/O going\non pretty much 24/7). During this period of time, there is nothing to\nautomatically analyze any of the tables, leading to further performance\nproblems. What are your thoughts on having the DB-wide vacuum running on a\nseparate thread so that the daemon can concurrently wake up and take care of\nanalyzing tables?\n\nHere is a potential problem with the auto-vacuum daemon, and I'm wondering if anyone has considered this. To avoid transaction ID wraparound, the auto-vacuum daemon will periodically determine that it needs to do a DB-wide vacuum, which takes a long time. On our system, it is on the order of a couple of weeks. (The system is very busy and there is a lot of I/O going on pretty much 24/7). During this period of time, there is nothing to automatically analyze any of the tables, leading to further performance problems. What are your thoughts on having the DB-wide vacuum running on a separate thread so that the daemon can concurrently wake up and take care of analyzing tables?",
"msg_date": "Wed, 1 Nov 2006 14:15:29 -0500",
"msg_from": "\"Steven Flatt\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Database-wide vacuum can take a long time,\n\tduring which tables are not being analyzed"
},
{
"msg_contents": "Steven Flatt wrote:\n> Here is a potential problem with the auto-vacuum daemon, and I'm \n> wondering if anyone has considered this. To avoid transaction ID \n> wraparound, the auto-vacuum daemon will periodically determine that it \n> needs to do a DB-wide vacuum, which takes a long time. On our system, \n> it is on the order of a couple of weeks. (The system is very busy and \n> there is a lot of I/O going on pretty much 24/7). During this period of \n> time, there is nothing to automatically analyze any of the tables, \n> leading to further performance problems. What are your thoughts on \n> having the DB-wide vacuum running on a separate thread so that the \n> daemon can concurrently wake up and take care of analyzing tables?\n\nTwo issues here:\n1)XID Wraparound: There has been work done on this already, and in 8.2 \nI believe there will no longer be a requirement that a database wide \nvacuum be issued, rather, XID wraparound will be managed on a per table \nbasis rather than per database, so that will solve this problem.\n\n2)Concurrent Vacuuming: There has been a lot of talk about \nmultiple-concurrent vacuums and I believe that this is required in the \nlong run, but it's not here yet, and won't be in 8.2, hopefully it will \nget done for 8.3.\n\n\nMatt\n\n",
"msg_date": "Wed, 01 Nov 2006 16:56:07 -0500",
"msg_from": "Matthew O'Connor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database-wide vacuum can take a long time, during which"
},
{
"msg_contents": "On Wed, 2006-11-01 at 14:15 -0500, Steven Flatt wrote:\n> Here is a potential problem with the auto-vacuum daemon, and I'm\n> wondering if anyone has considered this. To avoid transaction ID\n> wraparound, the auto-vacuum daemon will periodically determine that it\n> needs to do a DB-wide vacuum, which takes a long time. On our system,\n> it is on the order of a couple of weeks. (The system is very busy and\n> there is a lot of I/O going on pretty much 24/7). During this period\n> of time, there is nothing to automatically analyze any of the tables,\n> leading to further performance problems. What are your thoughts on\n> having the DB-wide vacuum running on a separate thread so that the\n> daemon can concurrently wake up and take care of analyzing tables?\n\nYes, do it.\n\nEvery couple of weeks implies a transaction rate of ~~500tps, so I'd be\ninterested to hear more about your system. \n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 02 Nov 2006 10:49:18 +0000",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database-wide vacuum can take a long time,\n\tduringwhich tables are not being analyzed"
},
{
"msg_contents": "Sorry, I think there's a misunderstanding here. Our system is not doing\nnear that number of transactions per second. I meant that the duration of a\nsingle DB-wide vacuum takes on the order of a couple of weeks. The time\nbetween DB-wide vacuums is a little over a year, I believe.\n\n\n\n> Every couple of weeks implies a transaction rate of ~~500tps, so I'd be\n> interested to hear more about your system.\n>\n> --\n> Simon Riggs\n> EnterpriseDB http://www.enterprisedb.com\n>\n>\n>\n\nSorry, I think there's a misunderstanding here. Our system is not doing near that number of transactions per second. I meant that the duration of a single DB-wide vacuum takes on the order of a couple of weeks. The time between DB-wide vacuums is a little over a year, I believe.\n\n \n \n\nEvery couple of weeks implies a transaction rate of ~~500tps, so I'd beinterested to hear more about your system.\n--Simon RiggsEnterpriseDB http://www.enterprisedb.com",
"msg_date": "Thu, 2 Nov 2006 10:15:30 -0500",
"msg_from": "\"Steven Flatt\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Database-wide vacuum can take a long time,\n\tduringwhich tables are not being analyzed"
},
{
"msg_contents": "Steven Flatt wrote:\n> Sorry, I think there's a misunderstanding here. Our system is not doing\n> near that number of transactions per second. I meant that the duration of a\n> single DB-wide vacuum takes on the order of a couple of weeks. The time\n> between DB-wide vacuums is a little over a year, I believe.\n\nI wonder if this is using some vacuum delay setting?\n\nIf that's the case, I think you could manually run a database-wide\nvacuum with a zero vacuum delay setting, so that said vacuum takes less\ntime to finish (say, once every 8 months).\n\n(8.2 pretty much solves this issue BTW, by not requiring database-wide\nvacuums).\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Thu, 2 Nov 2006 12:53:59 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database-wide vacuum can take a long time,\n\tduringwhich tables are not being analyzed"
}
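To make the previous suggestion concrete, this is roughly what a manually scheduled, full-speed database-wide vacuum could look like. The setting name assumes the cost-based vacuum delay parameters present in 8.0 and later, and SET only affects the session it runs in.

-- Run from a dedicated session, e.g. a cron-driven psql job:
SET vacuum_cost_delay = 0;   -- disable the cost-based delay for this session
VACUUM ANALYZE;              -- database-wide vacuum of the connected database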
] |
[
{
"msg_contents": "Hello,\n\nI do not understand, why Postgres very ofter starts execution from\nsub-select instead of doing main select and then proceeding to \"lite\"\nsub-selects. For example:\n\n(example is quite weird, but it demonstrates the problem)\n\n1. explain analyze select * from pg_proc offset 1500 limit 1;\n\"Limit (cost=116.91..116.99 rows=1 width=365) (actual\ntime=2.111..2.112 rows=1 loops=1)\"\n\" -> Seq Scan on pg_proc (cost=0.00..175.52 rows=2252 width=365)\n(actual time=0.034..1.490 rows=1501 loops=1)\"\n\"Total runtime: 2.156 ms\"\n\n3. explain analyze select oid,* from pg_type where oid=2277 limit 1;\n\"Limit (cost=0.00..5.91 rows=1 width=816) (actual time=0.021..0.022\nrows=1 loops=1)\"\n\" -> Index Scan using pg_type_oid_index on pg_type (cost=0.00..5.91\nrows=1 width=816) (actual time=0.018..0.018 rows=1 loops=1)\"\n\" Index Cond: (oid = 2277::oid)\"\n\"Total runtime: 0.079 ms\"\n\n2. explain analyze select\n *,\n (select typname from pg_type where pg_type.oid=pg_proc.prorettype limit 1)\nfrom pg_proc offset 1500 limit 1;\n\"Limit (cost=8983.31..8989.30 rows=1 width=365) (actual\ntime=17.648..17.649 rows=1 loops=1)\"\n\" -> Seq Scan on pg_proc (cost=0.00..13486.95 rows=2252 width=365)\n(actual time=0.100..16.851 rows=1501 loops=1)\"\n\" SubPlan\"\n\" -> Limit (cost=0.00..5.91 rows=1 width=64) (actual\ntime=0.006..0.007 rows=1 loops=1501)\"\n\" -> Index Scan using pg_type_oid_index on pg_type\n(cost=0.00..5.91 rows=1 width=64) (actual time=0.004..0.004 rows=1\nloops=1501)\"\n\" Index Cond: (oid = $0)\"\n\"Total runtime: 17.784 ms\"\n\nWe see that in the 2nd example Postgres starts with \"Index Scan using\npg_type_oid_index\" (1501 iterations!). My understanding of SQL says me\nthat the simplest (and, in this case - and probably in *most* cases -\nfastest) way to perform such queries is to start from main SELECT and\nthen, when we already have rows from \"main\" table, perform \"lite\"\nsub-selects. So, I expected smth near 2.156 ms + 0.079 ms, but obtain\n17.784 ms... For large table this is killing behaviour.\n\nWhat should I do to make Postgres work properly in such cases (I have\na lot of similar queries; surely, they are executed w/o seqscans, but\noverall picture is the same - I see that starting from sub-selects\ndramatically decrease performance)?\n\n-- \nBest regards,\nNikolay\n",
"msg_date": "Thu, 2 Nov 2006 14:07:19 +0300",
"msg_from": "\"Nikolay Samokhvalov\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query plan for \"heavy\" SELECT with \"lite\" sub-SELECTs"
},
{
"msg_contents": "Nikolay Samokhvalov wrote:\n> 2. explain analyze select\n> *,\n> (select typname from pg_type where pg_type.oid=pg_proc.prorettype limit 1)\n> from pg_proc offset 1500 limit 1;\n> \"Limit (cost=8983.31..8989.30 rows=1 width=365) (actual\n> time=17.648..17.649 rows=1 loops=1)\"\n> \" -> Seq Scan on pg_proc (cost=0.00..13486.95 rows=2252 width=365)\n> (actual time=0.100..16.851 rows=1501 loops=1)\"\n> \" SubPlan\"\n> \" -> Limit (cost=0.00..5.91 rows=1 width=64) (actual\n> time=0.006..0.007 rows=1 loops=1501)\"\n> \" -> Index Scan using pg_type_oid_index on pg_type\n> (cost=0.00..5.91 rows=1 width=64) (actual time=0.004..0.004 rows=1\n> loops=1501)\"\n> \" Index Cond: (oid = $0)\"\n> \"Total runtime: 17.784 ms\"\n> \n> We see that in the 2nd example Postgres starts with \"Index Scan using\n> pg_type_oid_index\" (1501 iterations!).\n\nNo, what you see here is that the inner loop is the index-scan over \npg_type_oid. It's running a sequential scan on pg_proc and then runs \n1501 index scans against pg_type.\n\n> My understanding of SQL says me\n> that the simplest (and, in this case - and probably in *most* cases -\n> fastest) way to perform such queries is to start from main SELECT and\n> then, when we already have rows from \"main\" table, perform \"lite\"\n> sub-selects. So, I expected smth near 2.156 ms + 0.079 ms, but obtain\n> 17.784 ms... For large table this is killing behaviour.\n\nYou've forgotten about the cost of matching up the two sets of rows. \nNow, if the first part of the query outputs only one row then you might \nbe right, but I'm not sure that the SQL standard allows the subquery to \nbe delayed to that stage without explicitly organising the query that \nway. From memory, the OFFSET/LIMIT takes place at the very end of the \nquery processing.\n\n> What should I do to make Postgres work properly in such cases (I have\n> a lot of similar queries; surely, they are executed w/o seqscans, but\n> overall picture is the same - I see that starting from sub-selects\n> dramatically decrease performance)?\n\nDo you have a real example? That might be more practical.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Thu, 02 Nov 2006 12:40:09 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query plan for \"heavy\" SELECT with \"lite\" sub-SELECTs"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] \n> Nikolay Samokhvalov\n> \n> What should I do to make Postgres work properly in such cases (I have\n> a lot of similar queries; surely, they are executed w/o seqscans, but\n> overall picture is the same - I see that starting from sub-selects\n> dramatically decrease performance)?\n\nHow about this:\n\nexplain analyze \nselect (select typname from pg_type where pg_type.oid=mainq.prorettype limit\n1)\nfrom (select * from pg_proc offset 1500 limit 1) mainq;\n\n QUERY PLAN\n----------------------------------------------------------------------------\n-------------------------------------------------\n Subquery Scan mainq (cost=50.99..56.85 rows=1 width=4) (actual\ntime=13.646..13.659 rows=1 loops=1)\n -> Limit (cost=50.99..51.02 rows=1 width=310) (actual\ntime=13.575..13.579 rows=1 loops=1)\n -> Seq Scan on pg_proc (cost=0.00..62.34 rows=1834 width=310)\n(actual time=0.014..7.297 rows=1501 loops=1)\n SubPlan\n -> Limit (cost=0.00..5.82 rows=1 width=64) (actual time=0.038..0.043\nrows=1 loops=1)\n -> Index Scan using pg_type_oid_index on pg_type\n(cost=0.00..5.82 rows=1 width=64) (actual time=0.028..0.028 rows=1 loops=1)\n Index Cond: (oid = $0)\n Total runtime: 13.785 ms\n\nI would expect you to get closer to 2 ms on that query. My machine takes 13\nms to do just the seq scan of pg_proc.\n\nDave\n\n\n\n",
"msg_date": "Thu, 2 Nov 2006 08:25:27 -0600",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query plan for \"heavy\" SELECT with \"lite\" sub-SELECTs"
},
{
"msg_contents": "Dave Dutcher wrote:\n> > -----Original Message-----\n> > From: [email protected] \n> > Nikolay Samokhvalov\n> > \n> > What should I do to make Postgres work properly in such cases (I have\n> > a lot of similar queries; surely, they are executed w/o seqscans, but\n> > overall picture is the same - I see that starting from sub-selects\n> > dramatically decrease performance)?\n> \n> How about this:\n> \n> explain analyze \n> select (select typname from pg_type where pg_type.oid=mainq.prorettype limit\n> 1)\n> from (select * from pg_proc offset 1500 limit 1) mainq;\n\nWhat's the use of such a query? One would think that in the real world,\nyou'd at least have an ORDER BY somewhere in the subqueries.\n\nPerformance analysis of strange queries is useful, but the input queries\nhave to be meaningful as well. Otherwise you end up optimizing bizarre\nand useless cases.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Thu, 2 Nov 2006 16:15:55 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query plan for \"heavy\" SELECT with \"lite\" sub-SELECTs"
},
{
"msg_contents": "Alvaro Herrera wrote:\n> Performance analysis of strange queries is useful, but the input queries\n> have to be meaningful as well. Otherwise you end up optimizing bizarre\n> and useless cases.\n> \n\nI had a similar one a few weeks ago. I did some batch-processing over a \nbunch of documents and discovered postgresql was faster if I let it \nprocess just 1000 documents, in stead of all 45000 at the same time. But \nwith 1000 it was faster than 1000x one document.\n\nSo I started with a query like:\nSELECT docid, (SELECT work to be done for each document)\nFROM documents\nORDER BY docid\nLIMIT 1000\nOFFSET ?\n\nAnd I noticed the 44th iteration was much slower than the first.\n\nRewriting it to something like this made the last iteration about as \nfast as the first:\nSELECT docid, (SELECT work to be done for each document)\nFROM documents\nWHERE docid IN (SELECT docid FROM documents\n\tORDER BY docid\n\tLIMIT 1000\n\tOFFSET ?\n)\n\nI know something like that isn't very set-based thinking, but then again \nthe query's structure did come from a iterative algoritm, but turned out \nto be faster (less query-overhead) and easier to scale in PostgreSQL. \nI've tried a few more set-like structures, but those were all slower \nthan this aproach probably because they would be were a little more \ncomplex. Some of them took more than 10x the amount of time...\n\nAnother real-life example would be to display the amount of replies to a \ntopic in a topic listing of a forum or the name of the author of the \nlast message. You probably don't want to count all the replies for each \ntopic if you're only going to display headings 100 - 200.\nAnd there are a few more examples to think of where a join+group by \nisn't going to work, but a subquery in the selectlist just does what you \nwant.\nOf course most of the time you won't be using a OFFSET then.\n\nBest regards,\n\nArjen\n",
"msg_date": "Fri, 03 Nov 2006 10:39:08 +0100",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query plan for \"heavy\" SELECT with \"lite\" sub-SELECTs"
},
{
"msg_contents": "Arjen van der Meijden <[email protected]> writes:\n> ... Rewriting it to something like this made the last iteration about as \n> fast as the first:\n\n> SELECT docid, (SELECT work to be done for each document)\n> FROM documents\n> WHERE docid IN (SELECT docid FROM documents\n> \tORDER BY docid\n> \tLIMIT 1000\n> \tOFFSET ?\n> )\n\nThe reason for this, of course, is that the LIMIT/OFFSET filter is the\nlast step in a query plan --- it comes *after* computation of the SELECT\noutput list. (So does ORDER BY, if an explicit sort step is needed.)\nSo if you have an expensive-to-compute output list, a trick like Arjen's\nwill help. I don't think you can use an \"IN\" though, at least not if\nyou want to preserve the sort ordering in the final result.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Nov 2006 09:50:53 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query plan for \"heavy\" SELECT with \"lite\" sub-SELECTs "
}
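Following up on the ordering caveat above, the same trick can be written with the ORDER BY / LIMIT / OFFSET pushed into a subquery in FROM, so the expensive select-list work still runs only for the chosen slice while the ordering survives. The documents table is the one from Arjen's example; the replies table and the counting subquery are made-up stand-ins for the per-row work.

SELECT d.docid,
       (SELECT count(*) FROM replies r WHERE r.docid = d.docid) AS reply_count
FROM (SELECT docid
      FROM documents
      ORDER BY docid
      LIMIT 1000 OFFSET 43000) AS d
ORDER BY d.docid;   -- restate the ordering at the top level to be explicit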
] |
[
{
"msg_contents": "Hi,\n\nThe documentation says that function blocks with exceptions are far costlier\nthan without one.\n\nSo if I need to implement an INSTEAD OF trigger (after checking for unique\nconstraint violations) which way should I go ?\n\n1. Get a table lock\n2. Use 'Select ... For Update' (which could be used to lock only the desired\nrecordsets)\n3. Use Exceptions\n\nAny advice / experiences or even pointers would be helpful.\n\nThanks\nRobins Tharakan\n\nHi,The documentation says that function blocks with exceptions are far costlier than without one.So if I need to implement an INSTEAD OF trigger (after checking for unique constraint violations) which way should I go ?\n1. Get a table lock2. Use 'Select ... For Update' (which could be used to lock only the desired recordsets)3. Use ExceptionsAny advice / experiences or even pointers would be helpful.\nThanksRobins Tharakan",
"msg_date": "Thu, 2 Nov 2006 18:15:53 +0530",
"msg_from": "Robins <[email protected]>",
"msg_from_op": true,
"msg_subject": "Locking vs. Exceptions"
},
{
"msg_contents": "Robins wrote:\n> Hi,\n> \n> The documentation says that function blocks with exceptions are far \n> costlier than without one.\n> \n\nI recommend against using exceptions. There is a memory leak in the \nexception handler that will cause headaches if it is called many times \nin the transaction.\n\nIn plpgsql, I would use:\n\nSELECT ... FOR UPDATE;\nIF FOUND THEN\n\tUPDATE ...;\nELSE\n\tINSERT ...;\nEND IF;\n\n\nIf you have multiple transactions doing this process at the same time, \nyou'll need explicit locking of the table to avoid a race condition.\n\n-- \nBenjamin Minshall <[email protected]>\nSenior Developer -- Intellicon, Inc.\nhttp://www.intellicon.biz\n",
"msg_date": "Thu, 02 Nov 2006 18:17:47 -0500",
"msg_from": "Benjamin Minshall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Locking vs. Exceptions"
}
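A slightly fuller sketch of the pattern above, written as a BEFORE INSERT trigger so that an existing row gets updated and a missing one inserted. It uses the UPDATE-first variant, letting FOUND do the existence check; all object names (target_table, id, payload) are invented for illustration, and, as noted above, concurrent writers still need an explicit table lock (or a retry) to avoid duplicate-key races.

CREATE OR REPLACE FUNCTION target_table_upsert() RETURNS trigger AS $$
BEGIN
    -- Try the update first; FOUND says whether the row already existed.
    UPDATE target_table SET payload = NEW.payload WHERE id = NEW.id;
    IF FOUND THEN
        RETURN NULL;   -- row existed, suppress the incoming INSERT
    END IF;
    RETURN NEW;        -- no row yet, let the INSERT proceed
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER target_table_upsert_trg
    BEFORE INSERT ON target_table
    FOR EACH ROW EXECUTE PROCEDURE target_table_upsert();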
] |
[
{
"msg_contents": "Hi all,\n\n I've got a script (perl, in case it matters) that I need to run once\na month to prepare statements. This script queries and updates the\ndatabase a *lot*. I am not concerned with the performance of the SQL\ncalls so much as I am about the impact it has on the server's load.\n\n Is there a way to limit queries speed (ie: set a low 'nice' value on\na query)? This might be an odd question, or I could be asking the\nquestion the wrong way, but hopefully you the idea. :)\n\nThanks!\n\nMadi\n\n",
"msg_date": "Thu, 02 Nov 2006 10:14:49 -0500",
"msg_from": "Madison Kelly <[email protected]>",
"msg_from_op": true,
"msg_subject": "Setting \"nice\" values"
},
{
"msg_contents": "On Thu, 2006-11-02 at 09:14, Madison Kelly wrote:\n> Hi all,\n> \n> I've got a script (perl, in case it matters) that I need to run once\n> a month to prepare statements. This script queries and updates the\n> database a *lot*. I am not concerned with the performance of the SQL\n> calls so much as I am about the impact it has on the server's load.\n> \n> Is there a way to limit queries speed (ie: set a low 'nice' value on\n> a query)? This might be an odd question, or I could be asking the\n> question the wrong way, but hopefully you the idea. :)\n\nWhile you can safely set the priority lower on the calling perl script,\nsetting db backend priorities lower can result in problems caused by\n\"priority inversion\" Look up that phrase on the pgsql admin, perform,\ngeneral, or hackers lists for an explanation, or go here:\n\nhttp://en.wikipedia.org/wiki/Priority_inversion\n\nI have a simple script that grabs raw data from an oracle db and shoves\nit into a postgresql database for reporting purposes. Every 100 rows I\nput into postgresql, I usleep 10 or so and the load caused by that\nscript on both systems is minimal. You might try something like that.\n",
"msg_date": "Thu, 02 Nov 2006 09:20:36 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Setting \"nice\" values"
},
{
"msg_contents": "Scott Marlowe wrote:\n> On Thu, 2006-11-02 at 09:14, Madison Kelly wrote:\n>> Hi all,\n>>\n>> I've got a script (perl, in case it matters) that I need to run once\n>> a month to prepare statements. This script queries and updates the\n>> database a *lot*. I am not concerned with the performance of the SQL\n>> calls so much as I am about the impact it has on the server's load.\n>>\n>> Is there a way to limit queries speed (ie: set a low 'nice' value on\n>> a query)? This might be an odd question, or I could be asking the\n>> question the wrong way, but hopefully you the idea. :)\n> \n> While you can safely set the priority lower on the calling perl script,\n> setting db backend priorities lower can result in problems caused by\n> \"priority inversion\" Look up that phrase on the pgsql admin, perform,\n> general, or hackers lists for an explanation, or go here:\n> \n> http://en.wikipedia.org/wiki/Priority_inversion\n> \n> I have a simple script that grabs raw data from an oracle db and shoves\n> it into a postgresql database for reporting purposes. Every 100 rows I\n> put into postgresql, I usleep 10 or so and the load caused by that\n> script on both systems is minimal. You might try something like that.\n\nWill the priority of the script pass down to the pgsql queries it calls? \nI figured (likely incorrectly) that because the queries were executed by \nthe psql server the queries ran with the server's priority. If this \nisn't the case, then perfect. :)\n\nThanks for the tip, too, it's something I will try.\n\nMadi\n",
"msg_date": "Thu, 02 Nov 2006 10:25:07 -0500",
"msg_from": "Madison Kelly <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Setting \"nice\" values"
},
{
"msg_contents": "On Thu, 2006-11-02 at 09:25, Madison Kelly wrote:\n> Scott Marlowe wrote:\n> > On Thu, 2006-11-02 at 09:14, Madison Kelly wrote:\n> >> Hi all,\n> >>\n> >> I've got a script (perl, in case it matters) that I need to run once\n> >> a month to prepare statements. This script queries and updates the\n> >> database a *lot*. I am not concerned with the performance of the SQL\n> >> calls so much as I am about the impact it has on the server's load.\n> >>\n> >> Is there a way to limit queries speed (ie: set a low 'nice' value on\n> >> a query)? This might be an odd question, or I could be asking the\n> >> question the wrong way, but hopefully you the idea. :)\n> > \n> > While you can safely set the priority lower on the calling perl script,\n> > setting db backend priorities lower can result in problems caused by\n> > \"priority inversion\" Look up that phrase on the pgsql admin, perform,\n> > general, or hackers lists for an explanation, or go here:\n> > \n> > http://en.wikipedia.org/wiki/Priority_inversion\n> > \n> > I have a simple script that grabs raw data from an oracle db and shoves\n> > it into a postgresql database for reporting purposes. Every 100 rows I\n> > put into postgresql, I usleep 10 or so and the load caused by that\n> > script on both systems is minimal. You might try something like that.\n> \n> Will the priority of the script pass down to the pgsql queries it calls? \n> I figured (likely incorrectly) that because the queries were executed by \n> the psql server the queries ran with the server's priority. If this \n> isn't the case, then perfect. :)\n\nnope, the priorities don't pass down. you connect via a client lib to\nthe server, which spawns a backend process that does the work for you. \nThe backend process inherits its priority from the postmaster that\nspawns it, and they all run at the same priority.\n\n> Thanks for the tip, too, it's something I will try.\n\nSometimes it's the simple solutions that work best. :) Welcome to the\nworld of pgsql, btw...\n",
"msg_date": "Thu, 02 Nov 2006 09:41:11 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Setting \"nice\" values"
},
{
"msg_contents": "Am Donnerstag, den 02.11.2006, 09:41 -0600 schrieb Scott Marlowe:\n> Sometimes it's the simple solutions that work best. :) Welcome to the\n> world of pgsql, btw...\n\nOTOH, there are also non-simple solutions to this, which might make\nsense anyway: Install slony, and run your queries against a readonly\nreplica of your data.\n\nAndreas",
"msg_date": "Fri, 03 Nov 2006 14:34:24 +0100",
"msg_from": "Andreas Kostyrka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Setting \"nice\" values"
},
{
"msg_contents": "On Nov 2, 2006, at 9:14 AM, Madison Kelly wrote:\n> I've got a script (perl, in case it matters) that I need to run once\n> a month to prepare statements. This script queries and updates the\n> database a *lot*. I am not concerned with the performance of the SQL\n> calls so much as I am about the impact it has on the server's load.\n>\n> Is there a way to limit queries speed (ie: set a low 'nice' value on\n> a query)? This might be an odd question, or I could be asking the\n> question the wrong way, but hopefully you the idea. :)\n\nThe BizGres folks have been working on resource queuing, which will \neventually do what you want. Take a look at the BizGres mailing list \narchives for more info.\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n",
"msg_date": "Sun, 5 Nov 2006 22:27:48 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Setting \"nice\" values"
},
{
"msg_contents": "Scott Marlowe wrote:\n> nope, the priorities don't pass down. you connect via a client lib to\n> the server, which spawns a backend process that does the work for you. \n> The backend process inherits its priority from the postmaster that\n> spawns it, and they all run at the same priority.\n\nShoot, but figured. :)\n\n>> Thanks for the tip, too, it's something I will try.\n> \n> Sometimes it's the simple solutions that work best. :) Welcome to the\n> world of pgsql, btw...\n\nHeh, if only I was new to pgsql I wouldn't feel silly for asking so many \nquestions :P. In the same right though, I enjoy PgSQL/Linux/FOSS in \ngeneral *because* there seems to never be a shortage of things to learn.\n\nThanks!\n\nMadi\n",
"msg_date": "Mon, 06 Nov 2006 08:12:48 -0500",
"msg_from": "Madison Kelly <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Setting \"nice\" values"
},
{
"msg_contents": "Andreas Kostyrka wrote:\n> Am Donnerstag, den 02.11.2006, 09:41 -0600 schrieb Scott Marlowe:\n>> Sometimes it's the simple solutions that work best. :) Welcome to the\n>> world of pgsql, btw...\n> \n> OTOH, there are also non-simple solutions to this, which might make\n> sense anyway: Install slony, and run your queries against a readonly\n> replica of your data.\n\nBingo! This seems like exactly what we can/should do, and it will likely \nhelp with other jobs we run, too.\n\nI feel a little silly for not having thought of this myself... Guess I \nwas too focused on niceness :). Thanks!\n\nMadi\n",
"msg_date": "Mon, 06 Nov 2006 08:13:52 -0500",
"msg_from": "Madison Kelly <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Setting \"nice\" values"
}
] |
[
{
"msg_contents": "[Madison Kelly - Thu at 10:25:07AM -0500]\n> Will the priority of the script pass down to the pgsql queries it calls? \n> I figured (likely incorrectly) that because the queries were executed by \n> the psql server the queries ran with the server's priority. \n\nI think you are right, and in any case, I don't think the niceness\nvalue won't help much if the bottleneck is iowait.\n\nIn our application, I've made a special function for doing\nlow-priority transactions which I believe is quite smart - though maybe\nnot always. Before introducing this logic, we observed we had a tipping\npoint, too many queries, and the database wouldn't swallow them fast\nenough, and the database server just jammed up, trying to work at too\nmany queries at once, yielding the results far too slow.\n\nIn the config file, I now have those two flags set:\n\n stats_start_collector = on\n stats_command_string = on\n\nThis will unfortunately cause some CPU-load, but the benefit is great\n- one can actually check what the server is working with at any time:\n\n select * from pg_stat_activity\n\nwith those, it is possible to check a special view pg_stat_activity -\nit will contain all the queries the database is working on right now.\nMy idea is to peek into this table - if there is no active queries,\nthe database is idle, and it's safe to start our low-priority\ntransaction. If this view is full of stuff, one should certainly not\nrun any low-priority transactions, rather sleep a bit and try again\nlater.\n\n select count(*) from pg_stat_activity where not current_query like\n '<IDLE>%' and query_start+?<now()\n\nThe algorithm takes four parameters, the time value to put in above,\nthe maximum number of queries allowed to run, the sleep time between\neach attempt, and the amount of attempts to try before giving up.\n\n\nSo here are the cons and drawbacks:\n\n con: Given small queries and small transactions, one can tune this in\n such a way that the low priority queries (almost) never causes\n significant delay for the higher priority queries.\n\n con: can be used to block users of an interactive query\n application to cause disturbances on the production database.\n\n con: can be used for pausing low-priority batch jobs to execute only\n when the server is idle.\n\n drawback: unsuitable for long-running queries and transactions \n\n drawback: with fixed values in the parameters above, one risks that\n the queries never gets run if the server is sufficiently stressed.\n\n drawback: the stats collection requires some CPU\n\n drawback: the \"select * from pg_stats_activity\" query requires some CPU\n\n drawback: the pg_stats_activity-view is constant within the\n transaction, so one has to roll back if there is activity\n (this is however not a really bad thing, because one\n certainly shouldn't live an idle transaction around if the\n database is stressed).\n",
"msg_date": "Thu, 2 Nov 2006 17:00:57 +0100",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Setting \"nice\" values"
},
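A concrete form of the check described above, with the placeholders filled in purely as an example: treat the server as busy when several non-idle backends have been working on their current query for more than a couple of seconds. The thresholds (3 backends, 2 seconds) are arbitrary illustrations rather than recommendations, and stats_command_string must be on for current_query to be populated.

-- True when it looks safe to start a low-priority transaction.
SELECT count(*) < 3 AS ok_to_run_low_priority
FROM pg_stat_activity
WHERE current_query NOT LIKE '<IDLE>%'
  AND query_start + interval '2 seconds' < now();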
{
"msg_contents": "Tobias Brox wrote:\n> [Madison Kelly - Thu at 10:25:07AM -0500]\n>> Will the priority of the script pass down to the pgsql queries it calls? \n>> I figured (likely incorrectly) that because the queries were executed by \n>> the psql server the queries ran with the server's priority. \n> \n> I think you are right, and in any case, I don't think the niceness\n> value won't help much if the bottleneck is iowait.\n> \n> In our application, I've made a special function for doing\n> low-priority transactions which I believe is quite smart - though maybe\n> not always. Before introducing this logic, we observed we had a tipping\n> point, too many queries, and the database wouldn't swallow them fast\n> enough, and the database server just jammed up, trying to work at too\n> many queries at once, yielding the results far too slow.\n> \n> In the config file, I now have those two flags set:\n> \n> stats_start_collector = on\n> stats_command_string = on\n> \n> This will unfortunately cause some CPU-load, but the benefit is great\n> - one can actually check what the server is working with at any time:\n> \n> select * from pg_stat_activity\n> \n> with those, it is possible to check a special view pg_stat_activity -\n> it will contain all the queries the database is working on right now.\n> My idea is to peek into this table - if there is no active queries,\n> the database is idle, and it's safe to start our low-priority\n> transaction. If this view is full of stuff, one should certainly not\n> run any low-priority transactions, rather sleep a bit and try again\n> later.\n> \n> select count(*) from pg_stat_activity where not current_query like\n> '<IDLE>%' and query_start+?<now()\n> \n> The algorithm takes four parameters, the time value to put in above,\n> the maximum number of queries allowed to run, the sleep time between\n> each attempt, and the amount of attempts to try before giving up.\n> \n> \n> So here are the cons and drawbacks:\n> \n> con: Given small queries and small transactions, one can tune this in\n> such a way that the low priority queries (almost) never causes\n> significant delay for the higher priority queries.\n> \n> con: can be used to block users of an interactive query\n> application to cause disturbances on the production database.\n> \n> con: can be used for pausing low-priority batch jobs to execute only\n> when the server is idle.\n> \n> drawback: unsuitable for long-running queries and transactions \n> \n> drawback: with fixed values in the parameters above, one risks that\n> the queries never gets run if the server is sufficiently stressed.\n> \n> drawback: the stats collection requires some CPU\n> \n> drawback: the \"select * from pg_stats_activity\" query requires some CPU\n> \n> drawback: the pg_stats_activity-view is constant within the\n> transaction, so one has to roll back if there is activity\n> (this is however not a really bad thing, because one\n> certainly shouldn't live an idle transaction around if the\n> database is stressed).\n\nI can see how this would be very useful (and may make use of it later!). \nFor the current job at hand though, at full tilt it can take a few hours \nto run, which puts it into your \"drawback\" section. The server in \nquestion is also almost under load of some sort, too.\n\nA great tip and one I am sure to make use of later, thanks!\n\nMadi\n",
"msg_date": "Mon, 06 Nov 2006 08:10:12 -0500",
"msg_from": "Madison Kelly <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Setting \"nice\" values"
},
{
"msg_contents": "[Madison Kelly - Mon at 08:10:12AM -0500]\n> to run, which puts it into your \"drawback\" section. The server in \n> question is also almost under load of some sort, too.\n> \n> A great tip and one I am sure to make use of later, thanks!\n\nI must have been sleepy, listing up \"cons\" vs \"drawbacks\" ;-)\n\nAnyway, the central question is not the size of the job, but the size of\nthe transactions within the job - if the job consists of many\ntransactions, \"my\" test can be run before every transaction. Having\ntransactions lasting for hours is a very bad thing to do, anyway.\n",
"msg_date": "Mon, 6 Nov 2006 14:33:53 +0100",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Setting \"nice\" values"
},
{
"msg_contents": "Tobias Brox wrote:\n> [Madison Kelly - Mon at 08:10:12AM -0500]\n>> to run, which puts it into your \"drawback\" section. The server in \n>> question is also almost under load of some sort, too.\n>>\n>> A great tip and one I am sure to make use of later, thanks!\n> \n> I must have been sleepy, listing up \"cons\" vs \"drawbacks\" ;-)\n\n:) I noticed but figured what you meant (I certainly do similar flubs!).\n\n> Anyway, the central question is not the size of the job, but the size of\n> the transactions within the job - if the job consists of many\n> transactions, \"my\" test can be run before every transaction. Having\n> transactions lasting for hours is a very bad thing to do, anyway.\n\nAh, sorry, long single queries is what you meant. I have inherited this \ncode so I am not sure how long a given query takes, though they do use a \nlot of joins and such, so I suspect it isn't quick; indexes aside. When \nI get some time (and get the backup server running) I plan to play with \nthis. Currently the DB is on a production server so I am hesitant to \npoke around just now. Once I get the backup server though, I will play \nwith your suggestions. I am quite curious to see how it will work out.\n\nThanks again!\n\nMadi\n\n",
"msg_date": "Mon, 06 Nov 2006 08:48:19 -0500",
"msg_from": "Madison Kelly <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Setting \"nice\" values"
},
{
"msg_contents": "[Madison Kelly - Mon at 08:48:19AM -0500]\n> Ah, sorry, long single queries is what you meant. \n\nNo - long running single transactions :-) If it's only read-only\nqueries, one will probably benefit by having one transaction for every\nquery.\n\n",
"msg_date": "Mon, 6 Nov 2006 15:11:49 +0100",
"msg_from": "Tobias Brox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Setting \"nice\" values"
},
{
"msg_contents": "Tobias Brox wrote:\n> [Madison Kelly - Mon at 08:48:19AM -0500]\n>> Ah, sorry, long single queries is what you meant. \n> \n> No - long running single transactions :-) If it's only read-only\n> queries, one will probably benefit by having one transaction for every\n> query.\n> \n\nIn this case, what happens is one kinda ugly big transaction is read \ninto a hash, and then looped through (usually ~10,000 rows). On each \nloop another, slightly less ugly query is performed based on the first \nquery's values now in the hash (these queries being where throttling \nmight help). Then after the second query is parsed a PDF file is created \n(also a big source of slowness). It isn't entirely read-only though \nbecause as the PDFs are created a flag is updated in the given record's \nrow. So yeah, need to experiment some. :)\n\nMadi\n",
"msg_date": "Mon, 06 Nov 2006 09:18:03 -0500",
"msg_from": "Madison Kelly <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Setting \"nice\" values"
},
{
"msg_contents": "I'm having a spot of problem with out storage device vendor. Read \nperformance (as measured by both bonnie++ and hdparm -t) is abysmal \n(~14Mbyte/sec), and we're trying to get them to fix it. Unfortunately, \nthey're using the fact that bonnie++ is an open source benchmark to \nweasle out of doing anything- they can't fix it unless I can show an \nimpact in Postgresql.\n\nSo the question is: is there an easy to install and run, read-heavy \nbenchmark out there that I can wave at them to get them to fix the \nproblem? I have a second database running on a single SATA drive, so I \ncan use that as a comparison point- \"look, we're getting 1/3rd the read \nspeed of a single SATA drive- this sucks!\"\n\nAny advice?\n\nBrian\n\n",
"msg_date": "Mon, 06 Nov 2006 15:47:30 -0500",
"msg_from": "Brian Hurt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Easy read-heavy benchmark kicking around?"
},
{
"msg_contents": "On 11/6/06, Brian Hurt <[email protected]> wrote:\n> I'm having a spot of problem with out storage device vendor. Read\n> performance (as measured by both bonnie++ and hdparm -t) is abysmal\n> (~14Mbyte/sec), and we're trying to get them to fix it. Unfortunately,\n> they're using the fact that bonnie++ is an open source benchmark to\n> weasle out of doing anything- they can't fix it unless I can show an\n> impact in Postgresql.\n>\n> So the question is: is there an easy to install and run, read-heavy\n> benchmark out there that I can wave at them to get them to fix the\n> problem? I have a second database running on a single SATA drive, so I\n> can use that as a comparison point- \"look, we're getting 1/3rd the read\n> speed of a single SATA drive- this sucks!\"\n\nhitachi?\n\nmy experience with storage vendors is when they say things like that\nthey know full well their device completely sucks and are just\nstalling so that you give up.\n\nmerlin\n",
"msg_date": "Mon, 6 Nov 2006 16:09:59 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Easy read-heavy benchmark kicking around?"
},
{
"msg_contents": "On Mon, 2006-11-06 at 15:09, Merlin Moncure wrote:\n> On 11/6/06, Brian Hurt <[email protected]> wrote:\n> > I'm having a spot of problem with out storage device vendor. Read\n> > performance (as measured by both bonnie++ and hdparm -t) is abysmal\n> > (~14Mbyte/sec), and we're trying to get them to fix it. Unfortunately,\n> > they're using the fact that bonnie++ is an open source benchmark to\n> > weasle out of doing anything- they can't fix it unless I can show an\n> > impact in Postgresql.\n> >\n> > So the question is: is there an easy to install and run, read-heavy\n> > benchmark out there that I can wave at them to get them to fix the\n> > problem? I have a second database running on a single SATA drive, so I\n> > can use that as a comparison point- \"look, we're getting 1/3rd the read\n> > speed of a single SATA drive- this sucks!\"\n> \n> hitachi?\n> \n> my experience with storage vendors is when they say things like that\n> they know full well their device completely sucks and are just\n> stalling so that you give up.\n\nMan, if I were the OP I'd be naming names, and letting the idiots at\nINSERT MAJOR VENDOR HERE know that I was naming names to the whole of\nthe postgresql community and open source as well to make the point that\nif they look down on open source so much, then open source should look\ndown on them.\n\nPostgreSQL is open source software, BSD and Linux are open source / free\nsoftware. bonnie++'s licensing shouldn't matter one nit, and I'd let\neveryone know how shittily I was being treated by this vendor until\ntheir fixed their crap or took it back.\n\nNote that if you're using fibre channel etc... the problem might well be\nin your own hardware / device drivers. There are a lot of real crap FC\nand relative cards out there.\n",
"msg_date": "Mon, 06 Nov 2006 15:26:40 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Easy read-heavy benchmark kicking around?"
},
{
"msg_contents": "Brian Hurt wrote:\n> I'm having a spot of problem with out storage device vendor. Read \n> performance (as measured by both bonnie++ and hdparm -t) is abysmal \n> (~14Mbyte/sec), and we're trying to get them to fix it. Unfortunately, \n> they're using the fact that bonnie++ is an open source benchmark to \n> weasle out of doing anything- they can't fix it unless I can show an \n> impact in Postgresql.\n> \n> So the question is: is there an easy to install and run, read-heavy \n> benchmark out there that I can wave at them to get them to fix the \n> problem? I have a second database running on a single SATA drive, so I \n> can use that as a comparison point- \"look, we're getting 1/3rd the read \n> speed of a single SATA drive- this sucks!\"\n> \n\nYou could use the lineitem table from the TPC-H dataset \n(http://www.tpc.org/tpch/default.asp).\n\nGenerate the dataset for a scale factor that makes lineitem about 2x \nyour ram, load the table and do:\n\nSELECT count(*) FROM lineitem\n\nvmstat or iostat while this is happening should display your meager \nthroughput well enough to get your vendors attention (I'm checking this \non a fairly old 4 disk system of mine as I type this - I'm seeing about \n90Mb/s...)\n\nbest wishes\n\nMark\n",
"msg_date": "Tue, 07 Nov 2006 14:52:09 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Easy read-heavy benchmark kicking around?"
},
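If pulling down the TPC-H kit is more trouble than it is worth, a throwaway table built with generate_series can stand in for lineitem. This is only a rough sketch run from psql - the table name is arbitrary, and the row count should be scaled so the table ends up around 2x RAM (at roughly 200 bytes per row, 50 million rows is on the order of 10GB):

\timing
CREATE TABLE read_bench AS
    SELECT g AS id, repeat('x', 180) AS pad
    FROM generate_series(1, 50000000) AS g;

-- the read-heavy part: watch vmstat/iostat in another terminal while it runs
SELECT count(*) FROM read_bench;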
{
"msg_contents": "Hi,\n\nLe lundi 6 novembre 2006 21:47, Brian Hurt a écrit :\n> So the question is: is there an easy to install and run, read-heavy\n> benchmark out there that I can wave at them to get them to fix the\n> problem? I have a second database running on a single SATA drive, so I\n> can use that as a comparison point- \"look, we're getting 1/3rd the read\n> speed of a single SATA drive- this sucks!\"\n>\n> Any advice?\n\nTsung is an easy to use open-source multi-protocol distributed load testing \ntool. The more simple way to use it to stress test your machine would be \nusing pgfouine to setup a test from postgresql logs:\n http://tsung.erlang-projects.org/\n http://pgfouine.projects.postgresql.org/tsung.html\n\nTsung will easily simulate a lot (more than 1000) of concurrent users playing \nyour custom defined load scenario. Then it will give you some plots to \nanalyze results, for disk io though you'll have to use some other tool(s) \nwhile tests are running.\n\nRegards,\n-- \nDimitri Fontaine\nhttp://www.dalibo.com/",
"msg_date": "Tue, 7 Nov 2006 11:49:05 +0100",
"msg_from": "Dimitri Fontaine <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Easy read-heavy benchmark kicking around?"
},
{
"msg_contents": "Hi, Brian,\n\nBrian Hurt wrote:\n\n> So the question is: is there an easy to install and run, read-heavy\n> benchmark out there that I can wave at them to get them to fix the\n> problem?\n\nFor sequential read performance, use dd. Most variants of dd I've seen\noutput some timing information, and if not, do a \"time dd\nif=/your/device of=/dev/null bs=1M\" on the partition.\n\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in Europe! www.ffii.org\nwww.nosoftwarepatents.org\n",
"msg_date": "Wed, 08 Nov 2006 17:18:00 +0100",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Easy read-heavy benchmark kicking around?"
},
{
"msg_contents": "On 11/8/06, Markus Schaber <[email protected]> wrote:\n> Hi, Brian,\n>\n> Brian Hurt wrote:\n>\n> > So the question is: is there an easy to install and run, read-heavy\n> > benchmark out there that I can wave at them to get them to fix the\n> > problem?\n>\n> For sequential read performance, use dd. Most variants of dd I've seen\n> output some timing information, and if not, do a \"time dd\n> if=/your/device of=/dev/null bs=1M\" on the partition.\n\nwe had a similar problem with a hitachi san, the ams200. Their\nperformance group refused to admit the fact that 50mb/sec dd test was\na valid performance benchmark and needed to be addressed. Yes, that\nwas a HITACHI SAN, the AMS200, which hitachi's performance group\nclaimed was 'acceptable performance'. This was the advice we got\nafter swapping out all the hardware and buying an entitlement to\nredhat enterprise which we had to do to get them to talk to us.\n\noh, the unit also lost a controller after about a week of\noperation...the unit being a HITACHI SAN, the AMS200.\n\nany questions?\n\nmerlin\n\np.s. we have had good experiences with the adtx.\n",
"msg_date": "Wed, 8 Nov 2006 11:34:02 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Easy read-heavy benchmark kicking around?"
},
{
"msg_contents": "Merlin Moncure wrote:\n\n> On 11/8/06, Markus Schaber <[email protected]> wrote:\n>> Hi, Brian,\n>>\n>> Brian Hurt wrote:\n>>\n>> > So the question is: is there an easy to install and run, read-heavy\n>> > benchmark out there that I can wave at them to get them to fix the\n>> > problem?\n>>\n>> For sequential read performance, use dd. Most variants of dd I've seen\n>> output some timing information, and if not, do a \"time dd\n>> if=/your/device of=/dev/null bs=1M\" on the partition.\n> \n> we had a similar problem with a hitachi san, the ams200. Their\n> performance group refused to admit the fact that 50mb/sec dd test was\n> a valid performance benchmark and needed to be addressed.\n >\n > [...]\n >\n> oh, the unit also lost a controller after about a week of\n> operation...the unit being a HITACHI SAN, the AMS200.\n> \n> any questions?\n\nYes, one.\nWhat was that unit?\n\n;-)\n\n-- \nCosimo\n",
"msg_date": "Wed, 08 Nov 2006 17:57:00 +0100",
"msg_from": "Cosimo Streppone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Easy read-heavy benchmark kicking around?"
},
{
"msg_contents": "Similar experiences with HP and their SmartArray 5i controller on Linux.\nThe answer was: \"this controller has won awards for performance! It can't be\nslow!\", so we made them test it in their own labs an prove just how awfully\nslow it was. In the case of the 5i, it became apparent that HP had no\ninternal expertise on Linux and their controllers, the driver was built by a\nthird party that they didn't support and their performance people didn't\ndeal with the 5i at all.\n\nIn the end, all manner of benchmarks after you've purchased aren't a good\nsubstitute for the up front question: do you have documentation of the\nperformance of your RAID controller on [Linux, Solaris, ...]?\n\nI would like everyone who purchases IBM, Dell, HP or Sun to demand that\ndocumentation - then perhaps we'd see higher quality drivers and hardware\nresult.\n\n- Luke\n\n\nOn 11/8/06 8:34 AM, \"Merlin Moncure\" <[email protected]> wrote:\n\n> On 11/8/06, Markus Schaber <[email protected]> wrote:\n>> Hi, Brian,\n>> \n>> Brian Hurt wrote:\n>> \n>>> So the question is: is there an easy to install and run, read-heavy\n>>> benchmark out there that I can wave at them to get them to fix the\n>>> problem?\n>> \n>> For sequential read performance, use dd. Most variants of dd I've seen\n>> output some timing information, and if not, do a \"time dd\n>> if=/your/device of=/dev/null bs=1M\" on the partition.\n> \n> we had a similar problem with a hitachi san, the ams200. Their\n> performance group refused to admit the fact that 50mb/sec dd test was\n> a valid performance benchmark and needed to be addressed. Yes, that\n> was a HITACHI SAN, the AMS200, which hitachi's performance group\n> claimed was 'acceptable performance'. This was the advice we got\n> after swapping out all the hardware and buying an entitlement to\n> redhat enterprise which we had to do to get them to talk to us.\n> \n> oh, the unit also lost a controller after about a week of\n> operation...the unit being a HITACHI SAN, the AMS200.\n> \n> any questions?\n> \n> merlin\n> \n> p.s. we have had good experiences with the adtx.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n\n",
"msg_date": "Wed, 08 Nov 2006 11:32:50 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Easy read-heavy benchmark kicking around?"
}
] |
[
{
"msg_contents": "Hi,\n \nWe've migrated one of our servers from pg 7.4 to 8.1 and from times to times (4 hours) the server start doing a lot of context switching and all transactions become very slow.\n \nThe average context switching for this server as vmstat shows is 1 but when the problem occurs it goes to 250000.\n \nCPU and memory usage are ok.\n \nWhat is producing this context switching storms?\n \nIt is a box with 16GB RAM and 4 XEON processors running RedHat Enterprise Linux AS.\n \nShould I disable Hyperthreading?\n \nThank you in advance!\n \nReimer\n\n\n\nHi,\n \nWe've migrated one of our servers from pg 7.4 to 8.1 and from times to times (4 hours) the server start doing a lot of context switching and all transactions become very slow.\n \nThe average context switching for this server as vmstat shows is 1 but when the problem occurs it goes to 250000.\n \nCPU and memory usage are ok.\n \nWhat is producing this context switching storms?\n \nIt is a box with 16GB RAM and 4 XEON processors running RedHat Enterprise Linux AS.\n \nShould I disable Hyperthreading?\n \nThank you in advance!\n \nReimer",
"msg_date": "Fri, 03 Nov 2006 08:32:13 -0200",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Context switch storm"
},
{
"msg_contents": "Based on what other people have posted, hyperthreading seems not to be beneficial for postgres -- try searching through the archives of this list. (And then turn it off and see if it helps.)\n\nYou might also post a few details:\n\nconfig settings (shared_buffers, work_mem, maintenance_work_mem, wal and checkpoint settings, etc.)\n\nare you using autovacuum ?\n\nall tables are vacuumed and analyzed regularly ? How big are they ? Do they and indexes fit in RAM ?\n\nany particular queries that running and might be related (explain analyze results of them would be useful)\n\ndisk configuration\n\nOther processes on this box ?\n\n# of connections to it (I've seen this alone push servers over the edge)\n\nHTH,\n\nGreg Williamson\nDBA\nGlobeXplorer LLC\n\n-----Original Message-----\nFrom:\[email protected] on behalf of [email protected]\nSent:\tFri 11/3/2006 2:32 AM\nTo:\[email protected]\nCc:\t\nSubject:\t[PERFORM] Context switch storm\n\nHi,\n \nWe've migrated one of our servers from pg 7.4 to 8.1 and from times to times (4 hours) the server start doing a lot of context switching and all transactions become very slow.\n \nThe average context switching for this server as vmstat shows is 1 but when the problem occurs it goes to 250000.\n \nCPU and memory usage are ok.\n \nWhat is producing this context switching storms?\n \nIt is a box with 16GB RAM and 4 XEON processors running RedHat Enterprise Linux AS.\n \nShould I disable Hyperthreading?\n \nThank you in advance!\n \nReimer\n\n\n\n-------------------------------------------------------\nClick link below if it is SPAM [email protected]\n\"https://mailscanner.globexplorer.com/dspam/dspam.cgi?signatureID=454b34ac206028992556831&[email protected]&retrain=spam&template=history&history_page=1\"\n!DSPAM:454b34ac206028992556831!\n-------------------------------------------------------\n\n\n\n",
"msg_date": "Fri, 3 Nov 2006 04:50:37 -0800",
"msg_from": "\"Gregory S. Williamson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Context switch storm"
},
{
"msg_contents": "[email protected] wrote:\n> Hi,\n> \n> We've migrated one of our servers from pg 7.4 to 8.1 and from times\n> to times (4 hours) the server start doing a lot of context switching\n> and all transactions become very slow.\n> \n> The average context switching for this server as vmstat shows is 1\n> but when the problem occurs it goes to 250000.\n> \n> CPU and memory usage are ok.\n> \n> What is producing this context switching storms?\n >\n> It is a box with 16GB RAM and 4 XEON processors running RedHat\n> Enterprise Linux AS.\n\nIt's memory bandwidth issues on the older Xeons. If you search the \narchives you'll see a lot of discussion of this. I'd have thought 8.1 \nwould be better than 7.4 though.\n\nYou'll tend to see it when you have multiple clients and most queries \ncan use RAM rather than disk I/O. My understanding of what happens is \nthat PG requests data from RAM - it's not in cache so the process gets \nsuspended to wait. The next process does the same, with the same result. \n You end up with lots of processes all fighting over what data is in \nthe cache and no-one gets much work done.\n\n> Should I disable Hyperthreading?\n\nI seem to remember that helps, but do check the mailing list archives \nfor discussion on this.\n\nIf you can keep your numbers of clients down below the critical level, \nyou should find the overall workload is fine. The problem is of course \nthat as the context-switching increases, each query takes longer which \nmeans more clients connect, which increases the context-swtching, which \nmeans...\n\nHTH\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 03 Nov 2006 12:52:05 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Context switch storm"
},
{
"msg_contents": "Richard Huxton wrote:\n\n> [email protected] wrote:\n>> Hi,\n>>\n>> We've migrated one of our servers from pg 7.4 to 8.1 and from times\n>> to times (4 hours) the server start doing a lot of context switching\n>> and all transactions become very slow.\n>>\n>> The average context switching for this server as vmstat shows is 1\n>> but when the problem occurs it goes to 250000.\n>\n> You'll tend to see it when you have multiple clients and most queries \n> can use RAM rather than disk I/O. My understanding of what happens is \n> that PG requests data from RAM - it's not in cache so the process gets \n> suspended to wait. The next process does the same, with the same result. \n> You end up with lots of processes all fighting over what data is in \n> the cache and no-one gets much work done.\n\nDoes this happen also with 8.0, or is specific to 8.1 ?\nI seem to have the same exact behaviour for an OLTP-loaded 8.0.1 server\nwhen I raise `shared_buffers' from 8192 to 40000.\nI would expect an increase in tps/concurrent clients, but I see an average\nperformance below a certain threshold of users, and when concurrent users\nget above that level, performance starts to drop, no matter what I do.\n\nServer logs and io/vm statistics seem to indicate that there is little\nor no disk activity but machine loads increases to 7.0/8.0.\nAfter some minutes, the problem goes away, and performance returns\nto acceptable levels.\n\nWhen the load increases, *random* database queries show this \"slowness\",\neven if they are perfectly planned and indexed.\n\nIs there anything we can do?\n\n-- \nCosimo\n\n",
"msg_date": "Fri, 03 Nov 2006 14:06:36 +0100",
"msg_from": "Cosimo Streppone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Context switch storm"
},
{
"msg_contents": "> If you can keep your numbers of clients down below the critical \n> level, \n> you should find the overall workload is fine. \n\nWe have at about 600 connections. Is this a case to use a connection pool (pg_pool) system?\n \nAnd why this happens only with 8.0 and 8.1 and not with the 7.4?\n\n> If you can keep your numbers of clients down below the critical > level, > you should find the overall workload is fine. \n \n\nWe have at about 600 connections. Is this a case to use a connection pool (pg_pool) system?\n \nAnd why this happens only with 8.0 and 8.1 and not with the 7.4?",
"msg_date": "Fri, 03 Nov 2006 11:28:36 -0200",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: Context switch storm"
},
{
"msg_contents": "Cosimo Streppone wrote:\n> Richard Huxton wrote:\n> \n>> [email protected] wrote:\n>>>\n>>> The average context switching for this server as vmstat shows is 1\n>>> but when the problem occurs it goes to 250000.\n>>\n>> You'll tend to see it when you have multiple clients and most queries \n>> can use RAM rather than disk I/O. My understanding of what happens is \n>> that PG requests data from RAM - it's not in cache so the process gets \n>> suspended to wait. The next process does the same, with the same \n>> result. You end up with lots of processes all fighting over what \n>> data is in the cache and no-one gets much work done.\n> \n> Does this happen also with 8.0, or is specific to 8.1 ?\n\nAll versions suffer to a degree - they just push the old Xeon in the \nwrong way. However, more recent versions *should* be better than older \nversions. I believe some work was put in to prevent contention on \nvarious locks which should reduce context-switching across the board.\n\n> I seem to have the same exact behaviour for an OLTP-loaded 8.0.1 server\n\nupgrade from 8.0.1 - the most recent is 8.0.9 iirc\n\n> when I raise `shared_buffers' from 8192 to 40000.\n> I would expect an increase in tps/concurrent clients, but I see an average\n> performance below a certain threshold of users, and when concurrent users\n> get above that level, performance starts to drop, no matter what I do.\n\nAre you seeing a jump in context-switching in top? You'll know when you \ndo - it's a *large* jump. That's the key diagnosis. Otherwise it might \nsimply be your configuration settings aren't ideal for that workload.\n\n> Server logs and io/vm statistics seem to indicate that there is little\n> or no disk activity but machine loads increases to 7.0/8.0.\n> After some minutes, the problem goes away, and performance returns\n> to acceptable levels.\n\nThat sounds like it. Query time increases across the board as all the \nclients fail to get any data back.\n\n> When the load increases, *random* database queries show this \"slowness\",\n> even if they are perfectly planned and indexed.\n> \n> Is there anything we can do?\n\nWell, the client I saw it with just bought a dual-opteron server and \nused their quad-Xeon for something else. However, I do remember that 8.1 \nseemed better than 7.4 before they switched. Part of that might just \nhave been better query-planning and other efficiences though.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 03 Nov 2006 13:29:28 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Context switch storm"
},
{
"msg_contents": "The solution for us has been twofold:\n\nupgrade to the newest PG version available at the time while we waited\nfor our new Opteron-based DB hardware to arrive.\n\nAndreas\n\n\nAm Freitag, den 03.11.2006, 13:29 +0000 schrieb Richard Huxton:\n> Cosimo Streppone wrote:\n> > Richard Huxton wrote:\n> > \n> >> [email protected] wrote:\n> >>>\n> >>> The average context switching for this server as vmstat shows is 1\n> >>> but when the problem occurs it goes to 250000.\n> >>\n> >> You'll tend to see it when you have multiple clients and most queries \n> >> can use RAM rather than disk I/O. My understanding of what happens is \n> >> that PG requests data from RAM - it's not in cache so the process gets \n> >> suspended to wait. The next process does the same, with the same \n> >> result. You end up with lots of processes all fighting over what \n> >> data is in the cache and no-one gets much work done.\n> > \n> > Does this happen also with 8.0, or is specific to 8.1 ?\n> \n> All versions suffer to a degree - they just push the old Xeon in the \n> wrong way. However, more recent versions *should* be better than older \n> versions. I believe some work was put in to prevent contention on \n> various locks which should reduce context-switching across the board.\n> \n> > I seem to have the same exact behaviour for an OLTP-loaded 8.0.1 server\n> \n> upgrade from 8.0.1 - the most recent is 8.0.9 iirc\n> \n> > when I raise `shared_buffers' from 8192 to 40000.\n> > I would expect an increase in tps/concurrent clients, but I see an average\n> > performance below a certain threshold of users, and when concurrent users\n> > get above that level, performance starts to drop, no matter what I do.\n> \n> Are you seeing a jump in context-switching in top? You'll know when you \n> do - it's a *large* jump. That's the key diagnosis. Otherwise it might \n> simply be your configuration settings aren't ideal for that workload.\n> \n> > Server logs and io/vm statistics seem to indicate that there is little\n> > or no disk activity but machine loads increases to 7.0/8.0.\n> > After some minutes, the problem goes away, and performance returns\n> > to acceptable levels.\n> \n> That sounds like it. Query time increases across the board as all the \n> clients fail to get any data back.\n> \n> > When the load increases, *random* database queries show this \"slowness\",\n> > even if they are perfectly planned and indexed.\n> > \n> > Is there anything we can do?\n> \n> Well, the client I saw it with just bought a dual-opteron server and \n> used their quad-Xeon for something else. However, I do remember that 8.1 \n> seemed better than 7.4 before they switched. Part of that might just \n> have been better query-planning and other efficiences though.\n>",
"msg_date": "Fri, 03 Nov 2006 14:53:02 +0100",
"msg_from": "Andreas Kostyrka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Context switch storm"
},
{
"msg_contents": "Richard Troy wrote:\n> On Fri, 3 Nov 2006, Richard Huxton wrote:\n>> It's memory bandwidth issues on the older Xeons. If you search the\n>> archives you'll see a lot of discussion of this. I'd have thought 8.1\n>> would be better than 7.4 though.\n> \n> Hmmm... I just checked; one of our production systems is a multi-cpu Xeon\n> based system of uncertain age (nobody remember 'zactly). While we haven't\n> seen this problem yet, it's scheduled to take over demo-duty shortly and\n> it would be an embarassment if we had this trouble during a demo... Is\n> there any easy way to tell if you're at risk?\n\nTry:\n- multiple clients\n- query doing sorts that fit into work_mem\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 03 Nov 2006 14:05:49 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Context switch storm"
},
{
"msg_contents": "\nOn Fri, 3 Nov 2006, Richard Huxton wrote:\n>\n> It's memory bandwidth issues on the older Xeons. If you search the\n> archives you'll see a lot of discussion of this. I'd have thought 8.1\n> would be better than 7.4 though.\n\nHmmm... I just checked; one of our production systems is a multi-cpu Xeon\nbased system of uncertain age (nobody remember 'zactly). While we haven't\nseen this problem yet, it's scheduled to take over demo-duty shortly and\nit would be an embarassment if we had this trouble during a demo... Is\nthere any easy way to tell if you're at risk?\n\nThanks,\nRichard\n\n\n-- \nRichard Troy, Chief Scientist\nScience Tools Corporation\n510-924-1363 or 202-747-1263\[email protected], http://ScienceTools.com/\n\n",
"msg_date": "Fri, 3 Nov 2006 06:10:21 -0800 (PST)",
"msg_from": "Richard Troy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Context switch storm"
},
{
"msg_contents": "[email protected] wrote:\n>> If you can keep your numbers of clients down below the critical \n>> level, you should find the overall workload is fine.\n> \n> We have at about 600 connections. Is this a case to use a connection\n> pool (pg_pool) system?\n\nPossibly - that should help. I'm assuming that most of your queries are \nvery short, so you could probably get that figure down a lot lower. \nYou'll keep the same amount of queries running through the system, just \nqueue them up.\n\n> And why this happens only with 8.0 and 8.1 and not with the 7.4?\n\nNot sure. Maybe 8.x is making more intensive use of your memory, \npossibly with a change in your plans.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 03 Nov 2006 14:38:25 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Context switch storm"
},
{
"msg_contents": "[email protected] writes:\n> And why this happens only with 8.0 and 8.1 and not with the 7.4?\n\n8.0 and 8.1 are vulnerable to this behavior because of conflicts for\naccess to pg_subtrans (which didn't exist in 7.4). The problem occurs\nwhen you have old open transactions, causing the window over which\npg_subtrans must be accessed to become much wider than normal.\n8.2 should eliminate or at least alleviate the issue, but in the\nmeantime see if you can get your applications to not sit on open\ntransactions.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 03 Nov 2006 10:25:47 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Context switch storm "
},
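One low-tech way to spot the sessions Tom is describing - connections sitting idle inside an open transaction - is to look at pg_stat_activity (this assumes stats_command_string is on; the column names below are the 8.1-era ones):

SELECT procpid, usename, backend_start, query_start, current_query
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
ORDER BY query_start;

-- anything that has sat in '<IDLE> in transaction' for a long time is a
-- candidate to fix in the application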
{
"msg_contents": "Am Freitag, den 03.11.2006, 14:38 +0000 schrieb Richard Huxton:\n> [email protected] wrote:\n> >> If you can keep your numbers of clients down below the critical \n> >> level, you should find the overall workload is fine.\n> > \n> > We have at about 600 connections. Is this a case to use a connection\n> > pool (pg_pool) system?\n> \n> Possibly - that should help. I'm assuming that most of your queries are \n> very short, so you could probably get that figure down a lot lower. \n> You'll keep the same amount of queries running through the system, just \n> queue them up.\nthat have \nAh, yes, now that you mention, avoid running many queries with a\nsimiliar timing behaviour, PG8 seems to have a lock design that's very\nbad for the memory architecture of the Xeons.\n\nSo running SELECT * FROM table WHERE id=1234567890; from 600 clients in\nparallel can be quite bad than say a complicated 6-way join :(\n\nAndreas\n\n> \n> > And why this happens only with 8.0 and 8.1 and not with the 7.4?\n> \n> Not sure. Maybe 8.x is making more intensive use of your memory, \n> possibly with a change in your plans.\n>",
"msg_date": "Fri, 03 Nov 2006 17:16:31 +0100",
"msg_from": "Andreas Kostyrka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Context switch storm"
},
{
"msg_contents": "Andreas Kostyrka wrote:\n\n> The solution for us has been twofold:\n> \n> upgrade to the newest PG version available at the time while we waited\n> for our new Opteron-based DB hardware to arrive.\n\nDo you remember the exact Pg version?\n\n-- \nCosimo\n\n",
"msg_date": "Mon, 06 Nov 2006 22:33:16 +0100",
"msg_from": "Cosimo Streppone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Context switch storm"
},
{
"msg_contents": "Richard Huxton wrote:\n> Cosimo Streppone wrote:\n>> Richard Huxton wrote:\n>>\n>>>> The average context switching for this server as vmstat shows is 1\n>>>> but when the problem occurs it goes to 250000.\n>>>\n>> I seem to have the same exact behaviour for an OLTP-loaded 8.0.1 server\n> \n> upgrade from 8.0.1 - the most recent is 8.0.9 iirc\n> [...]\n> Are you seeing a jump in context-switching in top? You'll know when you \n> do - it's a *large* jump. That's the key diagnosis. Otherwise it might \n> simply be your configuration settings aren't ideal for that workload.\n> \n\nSorry for the delay.\n\nI have logged vmstat results for the last 3 days.\nMax context switches figure is 20500.\n\nIf I understand correctly, this does not mean a \"storm\",\nbut only that the 2 Xeons are overloaded.\nProbably, I can do a good thing switching off the HyperThreading.\nI get something like 12/15 *real* concurrent processes hitting\nthe server.\n\nI must say I lowered \"shared_buffers\" to 8192, as it was before.\nI tried raising it to 16384, but I can't seem to find a relationship\nbetween shared_buffers and performance level for this server.\n\n> Well, the client I saw it with just bought a dual-opteron server and \n> used their quad-Xeon for something else. However, I do remember that 8.1 \n> seemed better than 7.4 before they switched. Part of that might just \n> have been better query-planning and other efficiences though.\n\nAn upgrade to 8.1 is definitely the way to go.\nAny 8.0 - 8.1 migration advice?\n\nThanks.\n\n-- \nCosimo\n\n",
"msg_date": "Tue, 14 Nov 2006 10:51:44 +0100",
"msg_from": "Cosimo Streppone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Context switch storm"
},
{
"msg_contents": "* Cosimo Streppone <[email protected]> [061114 10:52]:\n> Richard Huxton wrote:\n> >Cosimo Streppone wrote:\n> >>Richard Huxton wrote:\n> >>\n> >>>>The average context switching for this server as vmstat shows is 1\n> >>>>but when the problem occurs it goes to 250000.\n> >>>\n> >>I seem to have the same exact behaviour for an OLTP-loaded 8.0.1 server\n> >upgrade from 8.0.1 - the most recent is 8.0.9 iirc\n> >[...]\n> >Are you seeing a jump in context-switching in top? You'll know when you do - it's a *large* jump. That's the key diagnosis. Otherwise it might simply be your configuration settings \n> >aren't ideal for that workload.\n> \n> Sorry for the delay.\n> \n> I have logged vmstat results for the last 3 days.\n> Max context switches figure is 20500.\n> \n> If I understand correctly, this does not mean a \"storm\",\nNope, 20500 is a magnitude to low to the storms we were experiencing.\n\n> but only that the 2 Xeons are overloaded.\n> Probably, I can do a good thing switching off the HyperThreading.\n> I get something like 12/15 *real* concurrent processes hitting\n> the server.\n\nActually, for the storms we had, the number of concurrent processes\nAND the workload is important:\n\nmany processes that do all different things => overloaded server\nmany processes that do all the same queries => storm.\n\nBasically, it seems that postgresql implementation of locking is on\nquite unfriendly standings with the Xeon memory subsystems. googling\naround might provide more details. \n\n> \n> I must say I lowered \"shared_buffers\" to 8192, as it was before.\n> I tried raising it to 16384, but I can't seem to find a relationship\n> between shared_buffers and performance level for this server.\n> \n> >Well, the client I saw it with just bought a dual-opteron server and used their quad-Xeon for something else. However, I do remember that 8.1 seemed better than 7.4 before they \n> >switched. Part of that might just have been better query-planning and other efficiences though.\n> \n> An upgrade to 8.1 is definitely the way to go.\n> Any 8.0 - 8.1 migration advice?\nSimple, there are basically two ways:\na) you can take downtime: pg_dump + restore\nb) you cannot take downtime: install slony, install your new 8.1\nserver, replicate into it, switchover to the new server.\n\nIf you can get new hardware for the 8.1 box, you have two benefits:\na) order Opterons. That doesn't solve the overload problem as such,\nbut these pesky cs storms seems to have gone away this way.\n(that was basically the \"free\" advice from an external consultant,\nwhich luckily matched with my ideas what the problem could be. Cheap\nsolution at $3k :) )\nb) you can use the older box still as readonly replica.\nc) you've got a hot backup of your db.\n\nAndreas\n",
"msg_date": "Tue, 14 Nov 2006 11:13:11 +0100",
"msg_from": "Andreas Kostyrka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Context switch storm"
},
{
"msg_contents": "On 11/14/06, Cosimo Streppone <[email protected]> wrote:\n> I must say I lowered \"shared_buffers\" to 8192, as it was before.\n> I tried raising it to 16384, but I can't seem to find a relationship\n> between shared_buffers and performance level for this server.\n\nMy findings are pretty much the same here. I don't see any link\nbetween shared buffers and performance. I'm still looking for hard\nevidence to rebut this point. Lower shared buffers leaves more\nmemory for what really matters, which is sorting.\n\n> > Well, the client I saw it with just bought a dual-opteron server and\n> > used their quad-Xeon for something else. However, I do remember that 8.1\n> > seemed better than 7.4 before they switched. Part of that might just\n> > have been better query-planning and other efficiences though.\n>\n> An upgrade to 8.1 is definitely the way to go.\n> Any 8.0 - 8.1 migration advice?\n\nIf you are getting ready to stage an upgrade, you definately will want\nto test on 8.2 and 8.1. 8.2 might give you better results in the lab,\nand has some nice features.\n\nmerlin\n",
"msg_date": "Tue, 14 Nov 2006 09:17:08 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Context switch storm"
},
{
"msg_contents": "On Tue, Nov 14, 2006 at 09:17:08AM -0500, Merlin Moncure wrote:\n> On 11/14/06, Cosimo Streppone <[email protected]> wrote:\n> >I must say I lowered \"shared_buffers\" to 8192, as it was before.\n> >I tried raising it to 16384, but I can't seem to find a relationship\n> >between shared_buffers and performance level for this server.\n> \n> My findings are pretty much the same here. I don't see any link\n> between shared buffers and performance. I'm still looking for hard\n> evidence to rebut this point. Lower shared buffers leaves more\n> memory for what really matters, which is sorting.\n\nIt depends on your workload. If you're really sort-heavy, then having\nmemory available for that will be hard to beat. Otherwise, having a\nlarge shared_buffers setting can really help cut down on switching back\nand forth between the kernel and PostgreSQL.\n\nBTW, shared_buffers of 16384 is pretty low by today's standards, so that\ncould be why you're not seeing much difference between that and 8192.\nTry upping it to 1/4 - 1/2 of memory and see if that changes things.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Tue, 14 Nov 2006 10:50:22 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Context switch storm"
},
{
"msg_contents": "> a) order Opterons. That doesn't solve the overload problem as such,\n> but these pesky cs storms seems to have gone away this way.\n\nI haven't run into context switch storms or similar issues with the new\nIntel Woodcrests (yet.. they're still pretty new and not yet under real\nproduction load), has anyone else had any more experience with these\n(good/bad)? From what I understand, the memory architecture is quite a\nbit different than the Xeon, and they got rid of Hyperthreading in favor\nof the dual core with shared cache.\n\nIf/when I run into the issue, I'll be sure to post, but I was wondering\nif anyone had gotten there first. \n\nThanks,\n- Bucky\n",
"msg_date": "Tue, 14 Nov 2006 12:53:16 -0500",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Context switch storm"
},
{
"msg_contents": "On 11/14/06, Jim C. Nasby <[email protected]> wrote:\n> On Tue, Nov 14, 2006 at 09:17:08AM -0500, Merlin Moncure wrote:\n> > On 11/14/06, Cosimo Streppone <[email protected]> wrote:\n> > >I must say I lowered \"shared_buffers\" to 8192, as it was before.\n> > >I tried raising it to 16384, but I can't seem to find a relationship\n> > >between shared_buffers and performance level for this server.\n> >\n> > My findings are pretty much the same here. I don't see any link\n> > between shared buffers and performance. I'm still looking for hard\n> > evidence to rebut this point. Lower shared buffers leaves more\n> > memory for what really matters, which is sorting.\n>\n> It depends on your workload. If you're really sort-heavy, then having\n> memory available for that will be hard to beat. Otherwise, having a\n> large shared_buffers setting can really help cut down on switching back\n> and forth between the kernel and PostgreSQL.\n>\n> BTW, shared_buffers of 16384 is pretty low by today's standards, so that\n> could be why you're not seeing much difference between that and 8192.\n> Try upping it to 1/4 - 1/2 of memory and see if that changes things.\n\nCan you think of a good way to construct a test case that would\ndemonstrate the difference? What would be the 'best case' where a\nhigh shared buffers would be favored over a low setting?\n\nmerlin\n",
"msg_date": "Tue, 14 Nov 2006 15:11:40 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Context switch storm"
},
{
"msg_contents": "Merlin wrote:\n\n> On 11/14/06, Jim C. Nasby <[email protected]> wrote:\n> \n>> On Tue, Nov 14, 2006 at 09:17:08AM -0500, Merlin Moncure wrote:\n>> > On 11/14/06, Cosimo Streppone <[email protected]> wrote:\n>> > >I must say I lowered \"shared_buffers\" to 8192, as it was before.\n>> > >I tried raising it to 16384, but I can't seem to find a relationship\n>> > >between shared_buffers and performance level for this server.\n>> >\n>> > My findings are pretty much the same here.\n>> > [...]\n>>\n>> BTW, shared_buffers of 16384 is pretty low by today's standards\n> \n> Can you think of a good way to construct a test case that would\n> demonstrate the difference?\n\nNot sure of actual relevance, but some time ago I performed\n(with 8.0) several pg_bench tests with 1,5,10,20 concurrent\nclients with same pg configuration except one parameter for\nevery run.\n\nIn one of these tests I run pgbench with shared_buffers starting\nat 1024 and doubling it to 2048, ..., until 16384.\nI found the best performance in terms of transactions per second\naround 4096/8192.\n\nThat said, I don't know if pgbench stresses the database\nlike my typical oltp application does.\n\nAnd also, I suspect that shared_buffers should not be\nevaluated as an absolute number, but rather as a number relative to\nmaximum main memory (say 1/2 the total ram, 1/3, 2/3, ...).\n\n-- \nCosimo\n\n",
"msg_date": "Tue, 14 Nov 2006 22:43:20 +0100",
"msg_from": "Cosimo Streppone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Context switch storm"
},
{
"msg_contents": "On Tue, 2006-11-14 at 09:17 -0500, Merlin Moncure wrote:\n> On 11/14/06, Cosimo Streppone <[email protected]> wrote:\n> > I must say I lowered \"shared_buffers\" to 8192, as it was before.\n> > I tried raising it to 16384, but I can't seem to find a relationship\n> > between shared_buffers and performance level for this server.\n> \n> My findings are pretty much the same here. I don't see any link\n> between shared buffers and performance. I'm still looking for hard\n> evidence to rebut this point. Lower shared buffers leaves more\n> memory for what really matters, which is sorting.\n\nIn 8.0 there is a performance issue such that bgwriter will cause a\nperformance problem with large shared_buffers setting. That in itself\ncould lead to some fairly poor measurements of the value of\nshared_buffers.\n\nIn 7.4 and prior releases setting shared_buffers higher was counter\nproductive in many ways, so isn't highly recommended.\n\nIn general, setting shared_buffers higher works for some workloads and\ndoesn't for others. So any measurements anybody makes depend upon the\nworkload and the size of the database. The more uniformly/randomly you\naccess a large database, the more benefit you'll see from large\nshared_buffers. 8.1 benefits from having a higher shared_buffers in some\ncases because it reduces contention on the buffer lwlocks; 8.2 solves\nthis issue.\n\nEven in 8.2 ISTM that a higher shared_buffers setting wastes memory with\nmany connected users since the PrivRefCount array uses memory that could\nhave been used as filesystem cache.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 15 Nov 2006 09:07:16 +0000",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Context switch storm"
},
{
"msg_contents": "On Nov 14, 2006, at 1:11 PM, Merlin Moncure wrote:\n> On 11/14/06, Jim C. Nasby <[email protected]> wrote:\n>> On Tue, Nov 14, 2006 at 09:17:08AM -0500, Merlin Moncure wrote:\n>> > On 11/14/06, Cosimo Streppone <[email protected]> wrote:\n>> > >I must say I lowered \"shared_buffers\" to 8192, as it was before.\n>> > >I tried raising it to 16384, but I can't seem to find a \n>> relationship\n>> > >between shared_buffers and performance level for this server.\n>> >\n>> > My findings are pretty much the same here. I don't see any link\n>> > between shared buffers and performance. I'm still looking for hard\n>> > evidence to rebut this point. Lower shared buffers leaves more\n>> > memory for what really matters, which is sorting.\n>>\n>> It depends on your workload. If you're really sort-heavy, then having\n>> memory available for that will be hard to beat. Otherwise, having a\n>> large shared_buffers setting can really help cut down on switching \n>> back\n>> and forth between the kernel and PostgreSQL.\n>>\n>> BTW, shared_buffers of 16384 is pretty low by today's standards, \n>> so that\n>> could be why you're not seeing much difference between that and 8192.\n>> Try upping it to 1/4 - 1/2 of memory and see if that changes things.\n>\n> Can you think of a good way to construct a test case that would\n> demonstrate the difference? What would be the 'best case' where a\n> high shared buffers would be favored over a low setting?\n\nSomething that's read-heavy will benefit the most from a large \nshared_buffers setting, since it means less trips to the kernel. \nWrite-heavy apps won't benefit that much because you'll end up double- \nbuffering written data.\n--\nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n",
"msg_date": "Thu, 16 Nov 2006 14:01:28 -0700",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Context switch storm"
}
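One way to construct the kind of read-mostly test case being asked about earlier in the thread (all names and sizes here are arbitrary): build an indexed table whose working set fits in RAM but is much larger than the smaller shared_buffers setting, then hammer it with random index lookups from several concurrent sessions and compare throughput as shared_buffers is varied.

-- one-time setup; scale the row count to taste
CREATE TABLE sb_test AS
    SELECT g AS id, repeat('x', 200) AS pad
    FROM generate_series(1, 2000000) AS g;
CREATE INDEX sb_test_id ON sb_test (id);
ANALYZE sb_test;

-- run this repeatedly from a handful of sessions and compare transactions
-- per second under, say, shared_buffers = 8192 versus 50000
SELECT pad FROM sb_test WHERE id = (1 + random() * 1999999)::int;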
] |
[
{
"msg_contents": "I have 700 lines of non-performant pgSQL code that I'd like to \nprofile to see what's going on.\n\nWhat's the best way to profile stored procedures?\n\nThanks,\n\nDrew\n",
"msg_date": "Fri, 3 Nov 2006 03:12:14 -0800",
"msg_from": "Drew Wilson <[email protected]>",
"msg_from_op": true,
"msg_subject": "profiling PL/pgSQL?"
},
{
"msg_contents": "am Fri, dem 03.11.2006, um 3:12:14 -0800 mailte Drew Wilson folgendes:\n> I have 700 lines of non-performant pgSQL code that I'd like to \n> profile to see what's going on.\n> \n> What's the best way to profile stored procedures?\n\nRAISE NOTICE, you can raise the aktual time within a transaction with\ntimeofday()\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47215, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n",
"msg_date": "Fri, 3 Nov 2006 12:21:37 +0100",
"msg_from": "\"A. Kretschmer\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: profiling PL/pgSQL?"
},
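To make the timeofday() suggestion concrete, here is a crude sketch of hand-instrumenting a function (the function, table and column names are made up; the casts are needed because timeofday() returns text, and unlike now() it keeps advancing inside a transaction):

CREATE OR REPLACE FUNCTION timed_steps() RETURNS void AS $$
DECLARE
    t0      timestamptz;
    elapsed interval;
BEGIN
    t0 := timeofday()::timestamptz;
    PERFORM count(*) FROM some_big_table;        -- stand-in for an expensive step
    elapsed := timeofday()::timestamptz - t0;
    RAISE NOTICE 'step 1 took %', elapsed;

    t0 := timeofday()::timestamptz;
    UPDATE some_big_table SET processed = true;  -- stand-in for another step
    elapsed := timeofday()::timestamptz - t0;
    RAISE NOTICE 'step 2 took %', elapsed;
END;
$$ LANGUAGE plpgsql;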
{
"msg_contents": "A. Kretschmer wrote:\n> am Fri, dem 03.11.2006, um 3:12:14 -0800 mailte Drew Wilson folgendes:\n>> I have 700 lines of non-performant pgSQL code that I'd like to \n>> profile to see what's going on.\n>>\n>> What's the best way to profile stored procedures?\n> \n> RAISE NOTICE, you can raise the aktual time within a transaction with\n> timeofday()\n\nOf course you only have very small values of \"best\" available with \nplpgsql debugging.\n\nThere's a GUI debugger from EnterpriseDB I believe, but I've no idea how \ngood it is. Any users/company bods care to let us know?\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 03 Nov 2006 12:07:58 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: profiling PL/pgSQL?"
},
{
"msg_contents": "On 11/3/06, Richard Huxton <[email protected]> wrote:\n> There's a GUI debugger from EnterpriseDB I believe, but I've no idea how\n> good it is. Any users/company bods care to let us know?\n\nIf you visit:\nhttp://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/edb-debugger/#dirlist\n\nWe have both a PL/pgSQL profiler and tracer available.\n\n-- \nJonah H. Harris, Software Architect | phone: 732.331.1300\nEnterpriseDB Corporation | fax: 732.331.1301\n33 Wood Ave S, 2nd Floor | [email protected]\nIselin, New Jersey 08830 | http://www.enterprisedb.com/\n",
"msg_date": "Fri, 3 Nov 2006 10:27:28 -0500",
"msg_from": "\"Jonah H. Harris\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: profiling PL/pgSQL?"
},
{
"msg_contents": "> > am Fri, dem 03.11.2006, um 3:12:14 -0800 mailte Drew Wilson folgendes:\n> >> I have 700 lines of non-performant pgSQL code that I'd like to \n> >> profile to see what's going on.\n> >>\n> >> What's the best way to profile stored procedures?\n> > \n> > RAISE NOTICE, you can raise the aktual time within a transaction with\n> > timeofday()\n> \n> Of course you only have very small values of \"best\" available with \n> plpgsql debugging.\n> \n> There's a GUI debugger from EnterpriseDB I believe, but I've no idea how \n> good it is. Any users/company bods care to let us know?\n\n\n\nIt's an excellent debugger (of course, I'm a bit biased). \n\nWe are working on open-sourcing it now - we needed some of the plugin\nfeatures in 8.2.\n\nAs Jonah pointed out, we also have a PL/pgSQL profiler (already\nopen-sourced but a bit tricky to build). The profiler tells you how\nmuch CPU time you spent at each line of PL/pgSQL code, how many times\nyou executed each line of code, and how much I/O was caused by each line\n(number of scans, blocks fetched, blocks hit, tuples returned, tuples\nfetched, tuples inserted, tuples updated, tuples deleted).\n\nIt's been a while since I looked at it, but I seem to remember that it\nspits out an XML report that you can coax into a nice HTML page via the\nXSLT.\n\nThe plugin_profiler needs to be converted over to the plugin\narchitecture in 8.2, but that's not a lot of work.\n\n -- Korry\n\n\n--\n Korry Douglas [email protected]\n EnterpriseDB http://www.enterprisedb.com\n\n\n\n\n\n\n\n\n\n> am Fri, dem 03.11.2006, um 3:12:14 -0800 mailte Drew Wilson folgendes:\n>> I have 700 lines of non-performant pgSQL code that I'd like to \n>> profile to see what's going on.\n>>\n>> What's the best way to profile stored procedures?\n> \n> RAISE NOTICE, you can raise the aktual time within a transaction with\n> timeofday()\n\nOf course you only have very small values of \"best\" available with \nplpgsql debugging.\n\nThere's a GUI debugger from EnterpriseDB I believe, but I've no idea how \ngood it is. Any users/company bods care to let us know?\n\n\n\n\n\nIt's an excellent debugger (of course, I'm a bit biased). \n\nWe are working on open-sourcing it now - we needed some of the plugin features in 8.2.\n\nAs Jonah pointed out, we also have a PL/pgSQL profiler (already open-sourced but a bit tricky to build). The profiler tells you how much CPU time you spent at each line of PL/pgSQL code, how many times you executed each line of code, and how much I/O was caused by each line (number of scans, blocks fetched, blocks hit, tuples returned, tuples fetched, tuples inserted, tuples updated, tuples deleted).\n\nIt's been a while since I looked at it, but I seem to remember that it spits out an XML report that you can coax into a nice HTML page via the XSLT.\n\nThe plugin_profiler needs to be converted over to the plugin architecture in 8.2, but that's not a lot of work.\n\n -- Korry\n\n\n\n\n\n--\n Korry Douglas [email protected]\n EnterpriseDB http://www.enterprisedb.com",
"msg_date": "Fri, 03 Nov 2006 14:40:25 -0500",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: profiling PL/pgSQL?"
}
] |
[
{
"msg_contents": "To support migration of existing queries, it would be nice not to have\nto rewrite EXISTS clauses as IN clauses. Here is one example of a query\nwhich optimizes poorly:\n \n DELETE FROM \"CaseDispo\"\n WHERE EXISTS\n (\n SELECT * FROM \"Consolidation\" \"C\"\n WHERE \"C\".\"caseNo\" = '2006CM000123'\n AND \"C\".\"xrefOrConsol\" = 'C'\n AND \"C\".\"countyNo\" = 30\n AND \"CaseDispo\".\"caseNo\" = \"C\".\"crossRefCase\"\n AND \"CaseDispo\".\"countyNo\" = \"C\".\"countyNo\"\n AND \"CaseDispo\".\"dispoDate\" = DATE '2005-10-31'\n );\n \n Seq Scan on \"CaseDispo\" (cost=0.00..1227660.52 rows=176084 width=6)\n(actual time=501.557..501.557 rows=0 loops=1)\n Filter: (subplan)\n SubPlan\n -> Result (cost=0.00..3.46 rows=1 width=48) (actual\ntime=0.000..0.000 rows=0 loops=352167)\n One-Time Filter: (($2)::date = '2005-10-31'::date)\n -> Index Scan using \"Consolidation_pkey\" on \"Consolidation\"\n\"C\" (cost=0.00..3.46 rows=1 width=48) (actual time=0.008..0.008 rows=0\nloops=84)\n Index Cond: (((\"caseNo\")::bpchar =\n'2006CM000123'::bpchar) AND (($0)::bpchar = (\"crossRefCase\")::bpchar)\nAND ((\"countyNo\")::smallint = 30) AND (($1)::smallint =\n(\"countyNo\")::smallint))\n Filter: (\"xrefOrConsol\" = 'C'::bpchar)\n Total runtime: 501.631 ms\n(9 rows)\n \nTo most programmers, it would be obvious that this is an exact logical\nequivalent to:\n \n DELETE FROM \"CaseDispo\"\n WHERE \"countyNo\" = 30\n AND \"dispoDate\" = DATE '2005-10-31'\n AND \"caseNo\" IN\n (\n SELECT \"crossRefCase\" FROM \"Consolidation\" \"C\"\n WHERE \"C\".\"caseNo\" = '2006CM000123'\n AND \"C\".\"xrefOrConsol\" = 'C'\n AND \"C\".\"countyNo\" = 30\n );\n \n Nested Loop (cost=7.02..10.50 rows=1 width=6) (actual\ntime=0.036..0.036 rows=0 loops=1)\n -> HashAggregate (cost=7.02..7.03 rows=1 width=18) (actual\ntime=0.034..0.034 rows=0 loops=1)\n -> Index Scan using \"Consolidation_pkey\" on \"Consolidation\"\n\"C\" (cost=0.00..7.02 rows=1 width=18) (actual time=0.032..0.032 rows=0\nloops=1)\n Index Cond: (((\"caseNo\")::bpchar =\n'2006CM000123'::bpchar) AND ((\"countyNo\")::smallint = 30))\n Filter: (\"xrefOrConsol\" = 'C'::bpchar)\n -> Index Scan using \"CaseDispo_pkey\" on \"CaseDispo\" \n(cost=0.00..3.46 rows=1 width=24) (never executed)\n Index Cond: (((\"CaseDispo\".\"caseNo\")::bpchar =\n(\"outer\".\"crossRefCase\")::bpchar) AND ((\"CaseDispo\".\"dispoDate\")::date =\n'2005-10-31'::date) AND ((\"CaseDispo\".\"countyNo\")::smallint = 30))\n Total runtime: 0.109 ms\n(8 rows)\n \nOn this particular query, three orders of magnitude only gets you up to\nhalf a second, but the same thing happens on longer running queries. \nAnd even that half a second is significant when a user has to sit there\nand wait for the hourglass to clear on a regular basis. Clearly, the\nproblem is not in the costing -- it recognizes the high cost of the\nEXISTS form. The problem is that it doesn't recognize that these are\nlogically equivalent.\n \nIs there any work in progress to expand the set of plans examined for\nan EXISTS clause? If not, can we add such an enhancement to the TODO\nlist? Do we need a good write-up on what optimizations are legal for\nEXISTS?\n \n-Kevin\n \n\n\n",
"msg_date": "Fri, 03 Nov 2006 13:04:34 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "EXISTS optimization"
}
] |
[
{
"msg_contents": "\nThe following bug has been logged online:\n\nBug reference: 2737\nLogged by: Balazs Nagy\nEmail address: [email protected]\nPostgreSQL version: 8.1.5\nOperating system: RHEL4\nDescription: hash indexing large table fails, while btree of same\nindex works\nDetails: \n\nPostgres: 8.1.5\nDatabase table size: ~60 million rows\nField to index: varchar 127\n\nCREATE INDEX ... USING hash ...\n\nfails with a file not found error (psql in verbose mode):\n\nERROR: 58P01: could not open segment 3 of relation 1663/16439/16509 (target\nblock 528283): No such file or directory\nLOCATION: _mdfd_getseg, md.c:954\n\nVACUUM, VACUUM FULL doesn't help, full dump and reload doesn't help either\n\nCREATE INDEX ... USING btree ...\n\nworks fine. Could there be a bug in the hash algorithm's implementation?\n\nSystem is x86_64 SMP 8 CPU core, 16GB RAM, Fiber channel SAN, kernel\n2.6.9-42.0.3.ELsmp\n\nI haven't tried the 8.2beta2 yet, but would be happy to try, as the hash\nmethod is better suited for the kind of index I need...\n",
"msg_date": "Sun, 5 Nov 2006 13:47:31 GMT",
"msg_from": "\"Balazs Nagy\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "BUG #2737: hash indexing large table fails,\n\twhile btree of same index works"
},
{
"msg_contents": "[ cc'ing to pgsql-performance because of performance issue for hash indexes ]\n\n\"Balazs Nagy\" <[email protected]> writes:\n> Database table size: ~60 million rows\n> Field to index: varchar 127\n\n> CREATE INDEX ... USING hash ...\n\n> fails with a file not found error (psql in verbose mode):\n\n> ERROR: 58P01: could not open segment 3 of relation 1663/16439/16509 (target\n> block 528283): No such file or directory\n> LOCATION: _mdfd_getseg, md.c:954\n\nWow, you're trying to build an 8GB hash index? Considering that hash\nindexes still don't have WAL support, it hardly seems like a good idea\nto be using one that large.\n\nThe immediate problem here seems to be that the hash code is trying to\ntouch a page in segment 4 when it hasn't yet touched anything in segment\n3. The low-level md.c code assumes (not unreasonably) that this\nprobably represents a bug in the calling code, and errors out instead of\nallowing the segment to be created.\n\nWe ought to think about rejiggering the smgr.c interface to support\nhash's behavior more reasonably. There's already one really bad kluge\nin mdread() for hash support :-(\n\nOne thought that comes to mind is to require hash to do an smgrextend()\naddressing the last block it intends to use whenever it allocates a new\nbatch of blocks, whereupon md.c could adopt a saner API: allow\nsmgrextend but not other calls to address blocks beyond the current EOF.\nI had once wanted to require hash to explicitly fill all the blocks in\nsequence, but that's probably too radical compared to what it does now\n--- especially seeing that it seems the extension has to be done while\nholding the page-zero lock (see _hash_expandtable). Writing just the\nlogically last block in a batch would have the effect that hash indexes\ncould contain holes (unallocated extents) on filesystems that support\nthat. Normally I would think that probably a Bad Thing, but since hash\nindexes are never scanned sequentially, it might not matter whether they\nend up badly fragmented because of after-the-fact filling in of a hole.\nThoughts?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Nov 2006 18:55:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #2737: hash indexing large table fails,\n\twhile btree of same index works"
},
{
"msg_contents": "On Fri, 2006-11-10 at 18:55 -0500, Tom Lane wrote:\n> [ cc'ing to pgsql-performance because of performance issue for hash indexes ]\n> \n> \"Balazs Nagy\" <[email protected]> writes:\n> > Database table size: ~60 million rows\n> > Field to index: varchar 127\n> \n> > CREATE INDEX ... USING hash ...\n\nI'd be interested in a performance test that shows this is the best way\nto index a table though, especially for such a large column. No wonder\nthere is an 8GB index.\n\n> One thought that comes to mind is to require hash to do an smgrextend()\n> addressing the last block it intends to use whenever it allocates a new\n> batch of blocks, whereupon md.c could adopt a saner API: allow\n> smgrextend but not other calls to address blocks beyond the current EOF.\n\n> Thoughts?\n\nYes, do it. \n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 11 Nov 2006 08:17:54 +0000",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] BUG #2737: hash indexing large table fails,\n\twhile btree of same index works"
},
{
"msg_contents": "\"Simon Riggs\" <[email protected]> writes:\n> On Fri, 2006-11-10 at 18:55 -0500, Tom Lane wrote:\n>> One thought that comes to mind is to require hash to do an smgrextend()\n>> addressing the last block it intends to use whenever it allocates a new\n>> batch of blocks, whereupon md.c could adopt a saner API: allow\n>> smgrextend but not other calls to address blocks beyond the current EOF.\n\n> Yes, do it. \n\nI found out that it's easy to reproduce this failure in the regression\ntests, just by building with RELSEG_SIZE set to 128K instead of 1G:\n\n*** ./expected/create_index.out Sun Sep 10 13:44:25 2006\n--- ./results/create_index.out Thu Nov 16 17:33:29 2006\n***************\n*** 323,328 ****\n--- 323,329 ----\n --\n CREATE INDEX hash_i4_index ON hash_i4_heap USING hash (random int4_ops);\n CREATE INDEX hash_name_index ON hash_name_heap USING hash (random name_ops);\n+ ERROR: could not open segment 7 of relation 1663/16384/26989 (target block 145): No such file or directory\n CREATE INDEX hash_txt_index ON hash_txt_heap USING hash (random text_ops);\n CREATE INDEX hash_f8_index ON hash_f8_heap USING hash (random float8_ops);\n -- CREATE INDEX hash_ovfl_index ON hash_ovfl_heap USING hash (x int4_ops);\n\nAFAICS, any hash index exceeding a single segment is at serious risk.\nThe fact that we've not heard gripes before suggests that no one is\nusing gigabyte-sized hash indexes.\n\nBut it seems mighty late in the beta cycle to be making subtle changes\nin the smgr API. What I'm inclined to do for now is to hack\n_hash_expandtable() to write a page of zeroes at the end of each file\nsegment when an increment in hashm_ovflpoint causes the logical EOF to\ncross segment boundary(s). This is pretty ugly and nonmodular, but it\nwill fix the bug without risking breakage of any non-hash code.\nI'll revisit the cleaner solution once 8.3 devel begins. Comments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Nov 2006 17:48:16 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] BUG #2737: hash indexing large table fails,\n\twhile btree of same index works"
},
{
"msg_contents": "On Thu, 2006-11-16 at 17:48 -0500, Tom Lane wrote:\n\n> AFAICS, any hash index exceeding a single segment is at serious risk.\n> The fact that we've not heard gripes before suggests that no one is\n> using gigabyte-sized hash indexes.\n\nSeems so.\n\n> But it seems mighty late in the beta cycle to be making subtle changes\n> in the smgr API. What I'm inclined to do for now is to hack\n> _hash_expandtable() to write a page of zeroes at the end of each file\n> segment when an increment in hashm_ovflpoint causes the logical EOF to\n> cross segment boundary(s). This is pretty ugly and nonmodular, but it\n> will fix the bug without risking breakage of any non-hash code.\n> I'll revisit the cleaner solution once 8.3 devel begins. Comments?\n\nDo we think there is hope of improving hash indexes? If not, I'm\ninclined to remove them rather than to spend time bolstering them. We\ncan remap the keyword as was done with RTREE. It's somewhat embarrassing\nhaving an index without clear benefit that can't cope across crashes. We\nwouldn't accept that in other parts of the software...\n\nIf there is hope, is there a specific place to look? It would be good to\nbrain dump some starting places for an investigation.\n\nDoes anybody have a perf test that shows hash indexes beating btrees by\nany significant margin? (Not saying there isn't one...)\n\nI can see the argument that fixed hash indexes would be faster, but\nthere are obviously major downsides to that approach.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 17 Nov 2006 10:59:11 +0000",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] BUG #2737: hash indexing large tablefails,\n\twhile btree of same index works"
},
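A minimal sketch of the head-to-head test asked about above, for anyone who wants to try it. The table and column names here are hypothetical placeholders, the data load is left to the reader, and which plan the planner picks will depend on that data; on 8.1 or later, pg_relation_size() can report the index sizes:

    -- hypothetical comparison of hash vs. btree equality lookups on one column
    CREATE TABLE hash_test (id serial PRIMARY KEY, val varchar(127));
    -- ... load a representative volume of data into hash_test ...
    CREATE INDEX hash_test_val_btree ON hash_test USING btree (val);
    CREATE INDEX hash_test_val_hash  ON hash_test USING hash  (val);
    -- compare on-disk sizes
    SELECT relname, pg_relation_size(oid) AS bytes
      FROM pg_class WHERE relname LIKE 'hash_test_val_%';
    -- time lookups against each index in turn (drop the other one so the
    -- planner has only a single choice), for example:
    DROP INDEX hash_test_val_hash;
    EXPLAIN ANALYZE SELECT * FROM hash_test WHERE val = 'some value';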
{
"msg_contents": "On Fri, Nov 17, 2006 at 10:59:11AM +0000, Simon Riggs wrote:\n> On Thu, 2006-11-16 at 17:48 -0500, Tom Lane wrote:\n> \n> > AFAICS, any hash index exceeding a single segment is at serious risk.\n> > The fact that we've not heard gripes before suggests that no one is\n> > using gigabyte-sized hash indexes.\n> \n> Seems so.\n> \n> > But it seems mighty late in the beta cycle to be making subtle changes\n> > in the smgr API. What I'm inclined to do for now is to hack\n> > _hash_expandtable() to write a page of zeroes at the end of each file\n> > segment when an increment in hashm_ovflpoint causes the logical EOF to\n> > cross segment boundary(s). This is pretty ugly and nonmodular, but it\n> > will fix the bug without risking breakage of any non-hash code.\n> > I'll revisit the cleaner solution once 8.3 devel begins. Comments?\n> \n> Do we think there is hope of improving hash indexes? If not, I'm\n> inclined to remove them rather than to spend time bolstering them. We\n> can remap the keyword as was done with RTREE. It's somewhat embarrassing\n> having an index without clear benefit that can't cope across crashes. We\n> wouldn't accept that in other parts of the software...\n> \n\nWhile I understand that there are currently serious problems in terms of\nrecovery with the hash index as it stands currently, it is theoretically\npossible to get a result back with a single I/O probe from a hash and\nthat is not the case with btree indexes. At the top end of the performance\ncurve, every little bit that can be done to minimize actual I/Os is needed.\nI certainly hold out some hope that they can improved. I would like to see\nthem still included. Once they are gone, it will be much harder to ever\nadd them back.\n\nKen Marshall\n\n",
"msg_date": "Fri, 17 Nov 2006 08:26:57 -0600",
"msg_from": "Kenneth Marshall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] BUG #2737: hash indexing large tablefails,\n\twhile btree of same index works"
},
{
"msg_contents": "On Fri, 2006-11-17 at 08:26 -0600, Kenneth Marshall wrote:\n\n> I certainly hold out some hope that they can improved. I would like to see\n> them still included. Once they are gone, it will be much harder to ever\n> add them back.\n\nOK, you got it - keep hash indexes then.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 17 Nov 2006 14:38:12 +0000",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] BUG #2737: hash indexing largetablefails,\n\twhile btree of same index works"
},
{
"msg_contents": "\"Simon Riggs\" <[email protected]> writes:\n> Do we think there is hope of improving hash indexes?\n\nSure. They lack WAL support, which is just a small matter of\nprogramming. And no one has ever spent any time on performance\noptimization for them, but it certainly seems like there ought to be\nscope for improvement. I don't we should toss them unless it's been\nproven that their theoretical performance advantages can't be realized\nfor some reason. (This is unlike the situation for rtree, because with\nrtree there was no reason to expect that rtree could dominate gist along\nany axis.)\n\n> If there is hope, is there a specific place to look?\n\nI recall some speculation that using bucket size == page size might\nbe a bad choice, because it leads to mostly-empty buckets for typical\nkey sizes and fill factors. Using a configurable bucket size could\nhelp a lot.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 17 Nov 2006 10:08:47 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] BUG #2737: hash indexing large tablefails,\n\twhile btree of same index works"
},
{
"msg_contents": "Simon Riggs wrote:\n> Do we think there is hope of improving hash indexes?\nI thought about this a bit. I have an idea that the hash index might \nhave the fixed number of buckets specified in create index statement and \nthe tuples in each of these buckets should be stored in a b-tree. This \nshould give a constant performance improvement (but based on the number \nof buckets) for each fetch of a tuple from index compared to a fetch \nfrom b-tree index.\n\ncheers\n\nJulo\n\n\n",
"msg_date": "Fri, 17 Nov 2006 16:36:18 +0100",
"msg_from": "\"Julius.Stroffek\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] BUG #2737: hash indexing large tablefails,while"
}
] |
[
{
"msg_contents": "Though I've read recent threads, I'm unsure if any matches my case.\n\nWe have 2 tables: revisions and revisions_active. revisions contains \n117707 rows, revisions_active 17827 rows.\n\nDDL: http://hannes.imos.net/ddl.sql.txt\n\nJoining the 2 tables without an additional condition seems ok for me \n(given our outdated hardware): http://hannes.imos.net/query_1.sql.txt\n\nWhat worries me is the performance when limiting the recordset:\nhttp://hannes.imos.net/query_2.sql.txt\n\nThough it should only have to join a few rows it seems to scan all rows. \n From experience I thought that adding an ORDER BY on the index columns \nshould speed it up. But no effect: http://hannes.imos.net/query_3.sql.txt\n\nI'm on 8.1.5, statistics (ANALYZE) are up to date, the tables have each \nbeen CLUSTERed by PK, statistic target for the join columns has been set \nto 100 (without any effect).\n\n\nThanks in advance!\n\n\n-- \nRegards,\nHannes Dorbath\n",
"msg_date": "Mon, 06 Nov 2006 15:13:10 +0100",
"msg_from": "Hannes Dorbath <[email protected]>",
"msg_from_op": true,
"msg_subject": "Yet another question on LIMIT performance :/"
},
{
"msg_contents": "Hannes Dorbath wrote:\n> Though it should only have to join a few rows it seems to scan all rows. \n\nWhat makes you think that's the case?\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Mon, 06 Nov 2006 14:13:19 +0000",
"msg_from": "\"Heikki Linnakangas\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another question on LIMIT performance :/"
},
{
"msg_contents": "On 06.11.2006 15:13, Heikki Linnakangas wrote:\n> Hannes Dorbath wrote:\n>> Though it should only have to join a few rows it seems to scan all rows. \n> \n> What makes you think that's the case?\n\nSorry, not all rows, but 80753. It's not clear to me why this number is \nso high with LIMIT 10.\n\n\n-- \nRegards,\nHannes Dorbath\n",
"msg_date": "Mon, 06 Nov 2006 15:21:35 +0100",
"msg_from": "Hannes Dorbath <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Yet another question on LIMIT performance :/"
},
{
"msg_contents": "\"Heikki Linnakangas\" <[email protected]> writes:\n> Hannes Dorbath wrote:\n>> Though it should only have to join a few rows it seems to scan all rows. \n\n> What makes you think that's the case?\n\nWhat it looks like to me is that the range of keys present in\npk_revisions_active corresponds to just the upper end of the range of\nkeys present in pk_revisions (somehow not too surprising). So the\nmergejoin isn't the most effective plan possible for this case --- it\nhas to scan through much of pk_revisions before it starts getting\nmatches. The planner doesn't have any model for that though, and is\ncosting the plan on the assumption of uniformly-distributed matches.\n\nA nestloop plan would be faster for this specific case, but much\nslower if a large number of rows were requested.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 06 Nov 2006 09:33:02 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another question on LIMIT performance :/ "
}
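Given Tom's note that a nestloop would win for this specific case, one way to confirm that on the query from the original post is to disable the merge join for a single transaction and compare the plans. A sketch only: the join column name is hypothetical, since the real DDL is only in the linked files, and SET LOCAL reverts automatically at COMMIT/ROLLBACK:

    BEGIN;
    SET LOCAL enable_mergejoin = off;   -- affects only this transaction
    EXPLAIN ANALYZE
      SELECT r.*
        FROM revisions r
        JOIN revisions_active ra USING (revision_id)   -- hypothetical key column
       ORDER BY revision_id
       LIMIT 10;
    ROLLBACK;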
] |
[
{
"msg_contents": "Select count(*) from table-twice-size-of-ram\n\nDivide the query time by the number of pages in the table times the pagesize (normally 8KB) and you have your net disk rate.\n\n- Luke\n\nMsg is shrt cuz m on ma treo\n\n -----Original Message-----\nFrom: \tBrian Hurt [mailto:[email protected]]\nSent:\tMonday, November 06, 2006 03:49 PM Eastern Standard Time\nTo:\[email protected]\nSubject:\t[PERFORM] Easy read-heavy benchmark kicking around?\n\nI'm having a spot of problem with out storage device vendor. Read \nperformance (as measured by both bonnie++ and hdparm -t) is abysmal \n(~14Mbyte/sec), and we're trying to get them to fix it. Unfortunately, \nthey're using the fact that bonnie++ is an open source benchmark to \nweasle out of doing anything- they can't fix it unless I can show an \nimpact in Postgresql.\n\nSo the question is: is there an easy to install and run, read-heavy \nbenchmark out there that I can wave at them to get them to fix the \nproblem? I have a second database running on a single SATA drive, so I \ncan use that as a comparison point- \"look, we're getting 1/3rd the read \nspeed of a single SATA drive- this sucks!\"\n\nAny advice?\n\nBrian\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n\n\n",
"msg_date": "Mon, 6 Nov 2006 15:53:14 -0500",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Easy read-heavy benchmark kicking around?"
}
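A sketch of that measurement from psql, with big_table standing in for any table at least twice the size of RAM; note that relpages is only as accurate as the last VACUUM or ANALYZE:

    \timing
    SELECT count(*) FROM big_table;        -- note the elapsed time psql reports
    SELECT relpages::bigint * 8192 / (1024 * 1024) AS table_mb
      FROM pg_class
     WHERE relname = 'big_table';
    -- net read rate in MB/s is roughly table_mb divided by the elapsed seconds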
] |
[
{
"msg_contents": "Jean-David Beyer wrote:\n> \n> Sure, some even read the entire cylinder. But unless the data are stored\n> contiguously, this does little good. The Linux ext2 and ext3 file systems\n> try to get more contiguity by allocating (IIRC) 8 blocks each time a write\n> needs space\n\n From where do you recall this?\n\nIt looks to me like managing the block reservation window\nseems like a pretty involved process - at first glance way\nmore sophisticated than a hardcoded 8 blocks.\n http://www.gelato.unsw.edu.au/lxr/source/fs/ext3/balloc.c\n\n\n> (and gives the unused ones back when the file is closed for\n> writing). But for a dbms that uses much larger page and extent sizes, this\n> makes little difference. This is one of the reasons a modern dbms does its\n> own file system and uses only the drivers to run the disk.\n\nI'd have thought the opposite. The fact that old filesystems\nhad pretty poor block reservation algorithms and even poorer\nreadahead algorithms is one of the reasons historical dbms\nwriters wrote their own filesystems in the past. If you're\non a '90's VMS or Win9X/FAT - you have a lot to win by having\nyour own filesystem. With more modern OS's, less so.\n\n> That way, the\n> DBMS can allocate the whole partition in a contiguous lump, if need be.\n\nThere's nothing that special about a database file in that\nregard. You may win by having the database executable program\nbe a continuous lump too - especially if lesser used pages of\nthe executable get swapped out (which they should - if they're\naccessed less frequently than a database table that could use\nthe RAM).\n",
"msg_date": "Wed, 08 Nov 2006 09:14:49 -0800",
"msg_from": "Ron Mayer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Which OS provides the _fastest_ PostgreSQL performance?"
},
{
"msg_contents": "toby wrote:\n> \n> That's not quite what I meant by \"trust\". Some drives lie about the\n> flush. \n\nIs that really true, or a misdiagnosed software bug?\n\nI know many _drivers_ lie about flushing - for example EXT3\non Linux before early 2005 \"did not have write barrier support\nthat issues the FLUSH CACHE (IDE) or SYNCHRONIZE CACHE (SCSI)\ncommands even on fsync\" according to the writer of\nthe Linux SATA driver.[1]\n\nThis has the same effect of having a lying disk drive to\nany application code (including those designed to test for\nlying drives), but is instead merely a software bug.\n\n\nDoes anyone have an example of an current (on the market so\nI can get one) drive that lies about sync? I'd be interested\nin getting my hands on one to see if it's a OS-software or\na drive-hw/firmware issue.\n\n\n[1] http://hardware.slashdot.org/comments.pl?sid=149349&cid=12519114\n",
"msg_date": "Fri, 10 Nov 2006 10:54:01 -0800",
"msg_from": "Ron Mayer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Lying drives [Was: Re: Which OS provides the _fastest_ PostgreSQL\n\tperformance?]"
},
{
"msg_contents": "> > That's not quite what I meant by \"trust\". Some drives lie about the\n> > flush. \n> \n> Is that really true, or a misdiagnosed software bug?\n\nI've yet to find a drive that lies about write completion. (*)\n\nThe problem is that the drives boot-up default is write-caching enabled (or\nperhaps the system BIOS sets it that way).\n\nIf you turn an IDE disks write cache off explicity, using hdparm or similar,\nthey behave.\n\nThe problem, I think, is a bug in hdparm or the linux kernel: if you use the\nlittle-'i' option, the output indicates the WC is disabled. However, if you\nuse big-'I' to actually interrogate the drive, you get the correct setting.\n\nI tested this a while ago by writing a program that did fsync() to test\nwrite latency and random-reads to test read latency, and then comparing\nthem.\n\n- Guy\n\n* I did experience a too-close-to-call case, where after write-cache was\n disabled, the write latency was the same as the read latency. For every\n other drive the write latency much, MUCH higher. However, before I\n disabled the WC, the write latency was a fraction of the read latency.\n",
"msg_date": "Mon, 13 Nov 2006 21:32:05 +1300",
"msg_from": "Guy Thornley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Lying drives [Was: Re: Which OS provides the _fastest_ PostgreSQL\n\tperformance?]"
},
{
"msg_contents": "On Mon, 13 Nov 2006, Guy Thornley wrote:\n\n> I've yet to find a drive that lies about write completion. The problem \n> is that the drives boot-up default is write-caching enabled (or perhaps \n> the system BIOS sets it that way). If you turn an IDE disks write cache \n> off explicity, using hdparm or similar, they behave.\n\nI found a rather ominous warning from SGI on this subject at \nhttp://oss.sgi.com/projects/xfs/faq.html#wcache_query\n\n\"[Disabling the write cache] is kept persistent for a SCSI disk. However, \nfor a SATA/PATA disk this needs to be done after every reset as it will \nreset back to the default of the write cache enabled. And a reset can \nhappen after reboot or on error recovery of the drive. This makes it \nrather difficult to guarantee that the write cache is maintained as \ndisabled.\"\n\nAs I've been learning more about this subject recently, I've become \nincreasingly queasy about using IDE drives for databases unless they're \nhooked up to a high-end (S|P)ATA controller. As far as I know the BIOS \ndoesn't mess with the write caches, it's strictly that the drives default \nto having them on. Some manufacturers lets you adjust the default, which \nshould prevent the behavior SGI warns about from happening; Hitachi's \n\"Feature Tool\" at http://www.hitachigst.com/hdd/support/download.htm is \none example I've used successfully before.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Thu, 23 Nov 2006 02:31:22 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Lying drives [Was: Re: Which OS provides the _fastest_"
},
{
"msg_contents": "Greg Smith wrote:\n> On Mon, 13 Nov 2006, Guy Thornley wrote:\n> \n> > I've yet to find a drive that lies about write completion. The problem \n> > is that the drives boot-up default is write-caching enabled (or perhaps \n> > the system BIOS sets it that way). If you turn an IDE disks write cache \n> > off explicity, using hdparm or similar, they behave.\n> \n> I found a rather ominous warning from SGI on this subject at \n> http://oss.sgi.com/projects/xfs/faq.html#wcache_query\n> \n> \"[Disabling the write cache] is kept persistent for a SCSI disk. However, \n> for a SATA/PATA disk this needs to be done after every reset as it will \n> reset back to the default of the write cache enabled. And a reset can \n> happen after reboot or on error recovery of the drive. This makes it \n> rather difficult to guarantee that the write cache is maintained as \n> disabled.\"\n> \n> As I've been learning more about this subject recently, I've become \n> increasingly queasy about using IDE drives for databases unless they're \n> hooked up to a high-end (S|P)ATA controller. As far as I know the BIOS \n\nYes, avoiding IDE for serious database servers is a conclusion I made\nlong ago.\n\n-- \n Bruce Momjian [email protected]\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Thu, 23 Nov 2006 11:44:25 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Lying drives [Was: Re: Which OS provides the"
}
] |
[
{
"msg_contents": "\nDear All\nLooking at the processes running on our server, it appears that each time\na web server program makes a call to the database server, we start a new\nprocess on the database server which obviously has a start up cost. In\nApache, for example, you can say at start up time,that you want the\nmachine to reserve eg 8 processes and keep them open at all times so you\ndon't have this overhead until you exceed this minimum number. Is there a\nway that we can achieve this in Postgres? We have a situation whereby we\nhave lots of web based users doing short quick queries and obviously the\nstart up time for a process must add to their perceived response\ntime.\nThanks\nHilary\n\n\nHilary Forbes\nDMR Limited (UK registration 01134804) \nA DMR Information and Technology Group company\n(www.dmr.co.uk)\n\nDirect tel 01689 889950 Fax 01689 860330 \nDMR is a UK registered trade mark of DMR Limited\n**********************************************************\n",
"msg_date": "Thu, 09 Nov 2006 12:35:00 +0000",
"msg_from": "Hilary Forbes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Keeping processes open for re-use"
},
{
"msg_contents": "On Thu, 2006-11-09 at 13:35, Hilary Forbes wrote:\n> [snip] Is there a way that we can achieve this in Postgres? We have a\n> situation whereby we have lots of web based users doing short quick\n> queries and obviously the start up time for a process must add to\n> their perceived response time.\n\nYes: google for \"connection pooling\". Note that different solutions\nexist for different programming languages, so you should look for\nconnection pooling for the language you're using.\n\nHTH,\nCsaba.\n\n\n",
"msg_date": "Thu, 09 Nov 2006 13:56:04 +0100",
"msg_from": "Csaba Nagy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Keeping processes open for re-use"
},
{
"msg_contents": "Yes. This is connection pooling. You can find a lot of examples from\nthe internet on connection pooling, rather source codes. Also keep in\nmind that connection pools can be maintained on the application as\nwell as the database server side. Check which one suits you.\n\n\n--Imad\nwww.EnterpriseDB.com\n\n\nOn 11/9/06, Hilary Forbes <[email protected]> wrote:\n> Dear All\n>\n> Looking at the processes running on our server, it appears that each time a\n> web server program makes a call to the database server, we start a new\n> process on the database server which obviously has a start up cost. In\n> Apache, for example, you can say at start up time,that you want the machine\n> to reserve eg 8 processes and keep them open at all times so you don't have\n> this overhead until you exceed this minimum number. Is there a way that we\n> can achieve this in Postgres? We have a situation whereby we have lots of\n> web based users doing short quick queries and obviously the start up time\n> for a process must add to their perceived response time.\n>\n> Thanks\n> Hilary\n>\n>\n>\n>\n>\n> Hilary Forbes\n> DMR Limited (UK registration 01134804)\n> A DMR Information and Technology Group company (www.dmr.co.uk)\n> Direct tel 01689 889950 Fax 01689 860330\n> DMR is a UK registered trade mark of DMR Limited\n> **********************************************************\n",
"msg_date": "Thu, 9 Nov 2006 22:55:41 +0500",
"msg_from": "imad <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Keeping processes open for re-use"
},
{
"msg_contents": "Csaba Nagy wrote:\n> On Thu, 2006-11-09 at 13:35, Hilary Forbes wrote:\n>> [snip] Is there a way that we can achieve this in Postgres? We have a\n>> situation whereby we have lots of web based users doing short quick\n>> queries and obviously the start up time for a process must add to\n>> their perceived response time.\n> \n> Yes: google for \"connection pooling\". Note that different solutions\n> exist for different programming languages, so you should look for\n> connection pooling for the language you're using.\n> \n\nIf you are using PHP then persistent connections may be a simpler way if \nit is enough for your needs.\n\nBasically replace pg_connect with pg_pconnect\n\nOther languages may have a similar option.\n\nhttp://php.net/manual/en/features.persistent-connections.php\n\n\n-- \n\nShane Ambler\[email protected]\n\nGet Sheeky @ http://Sheeky.Biz\n",
"msg_date": "Fri, 10 Nov 2006 12:39:59 +1030",
"msg_from": "Shane Ambler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Keeping processes open for re-use"
},
{
"msg_contents": "On Fri, 2006-11-10 at 12:39 +1030, Shane Ambler wrote:\n> Csaba Nagy wrote:\n> > On Thu, 2006-11-09 at 13:35, Hilary Forbes wrote:\n> >> [snip] Is there a way that we can achieve this in Postgres? We have a\n> >> situation whereby we have lots of web based users doing short quick\n> >> queries and obviously the start up time for a process must add to\n> >> their perceived response time.\n> > \n> > Yes: google for \"connection pooling\". Note that different solutions\n> > exist for different programming languages, so you should look for\n> > connection pooling for the language you're using.\n> > \n> \n> If you are using PHP then persistent connections may be a simpler way if \n> it is enough for your needs.\n\nI would actually suggest pg_pool over pg_pconnect.\n\nJoshua D. Drake\n\n\n> \n> Basically replace pg_connect with pg_pconnect\n> \n> Other languages may have a similar option.\n> \n> http://php.net/manual/en/features.persistent-connections.php\n> \n> \n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n\n\n",
"msg_date": "Thu, 09 Nov 2006 18:19:11 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Keeping processes open for re-use"
},
{
"msg_contents": "2006/11/10, Joshua D. Drake <[email protected]>:\n>\n> I would actually suggest pg_pool over pg_pconnect.\n>\n\nPlease, can you explain advantages of pg_pool over pg_connect ?\n\n-- \nJean-Max Reymond\nCKR Solutions Open Source\nNice France\nhttp://www.ckr-solutions.com\n",
"msg_date": "Thu, 16 Nov 2006 18:39:28 +0100",
"msg_from": "\"Jean-Max Reymond\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Keeping processes open for re-use"
},
{
"msg_contents": "Jean-Max Reymond wrote:\n> 2006/11/10, Joshua D. Drake <[email protected]>:\n>>\n>> I would actually suggest pg_pool over pg_pconnect.\n>>\n> \n> Please, can you explain advantages of pg_pool over pg_connect ?\n\nHe said pg_pconnect (note the extra \"p\"). This provides permanent \nconnections to the database from PHP. However, you will end up with one \nconnection per Apache process running (without running multiple Apache \nsetups to serve different content types). Since many processes will be \nserving images or static content you have a lot of wasted, idle, \nconnections.\n\nNow, pg_pool allows you to maintain a small number of active connections \n(say 20 or 50) and have PHP connect to your pgpool server. It allocates \nthe next free connection it holds and so lets you minimise the number of \nconnections to PostgreSQL.\n\nHTH\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 17 Nov 2006 11:20:52 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Keeping processes open for re-use"
}
] |
[
{
"msg_contents": "On 11/8/06, Spiegelberg, Greg <[email protected]> wrote:\n> Merlin,\n>\n> I'm kinda shocked you had such a bad exp. with the AMS200. We have a\n> unit here hooked up to a 4-node Linux cluster with 4 databases banging\n> on it and we get good, consistent perfomance out of it. All 4 nodes can\n> throw 25 to 75 MB/s simultaneously without a hiccup.\n>\n> I'm curious, what was your AMS, server and SAN config?\n\nwe had quad opteron 870 in a sun v40z. two trays of 400g sata drives\nand the 4 15k fc drives they make you buy. o/s was originally gentoo\nand emulex but we switched to redhat as4/qlogic to get support from\nthem.\n\nthe highest performance we ever got was around 120mb/sec writing to\nthe 4 fc drives in raid 10. however, the sata's could not even do 100\nand for some reason when we added a second raid group the performance\ndropped 40% for a reason that their performance group could not\nexplain. compounding the problem was that our assigned tech did not\nknow linux and there was a one week turnaround to get support emails\nanswered. Their sales and support staff were snotty and completely\nunhelpful. Also we had to do a complex migration process which\ninvolved physically moving the unit to multiple racks for data\ntransfer which we were going to have to coordinate with hitachi\nsupport because they do not allow you to rack/unrack your own unit.\n\nultimately, we returned the unit and bought a adtx san. for less than\nhalf the price of the hitachi, we got a dual 4gb controller mixed\nsata/sas that supports 750g sata drives. It also has sas ports in the\nback for direct attachment to sas hba. in an active/active\nconfiguration, the unit can sustain 500mb/sec, and has 50% more\nstorage in 1/3 the rack space.\n\nmelrin\n",
"msg_date": "Thu, 9 Nov 2006 12:28:50 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Easy read-heavy benchmark kicking around?"
}
] |
[
{
"msg_contents": "I'm executing the following query:\n\n select\n hf.mailbox,hf.uid,hf.position,hf.part,hf.field,hf.value,\n af.address,a.name,a.localpart,a.domain\n from\n header_fields hf\n left join address_fields af\n using ( mailbox, uid, position, part, field )\n left join addresses a\n on (af.address=a.id)\n where\n hf.field<=12 and (hf.part!='' or hf.value ilike '%,%') ;\n\nThe header_fields table contains 13.5M rows, of which only ~250K match\nthe where condition. I created an index like this:\n\n create index hffpv on header_fields(field)\n where field<=12 and (part!='' or value ilike '%,%')\n\nBy default, explain analyse shows me a plan like this:\n\n Hash Left Join (cost=1225503.02..1506125.88 rows=2077771 width=143) (actual time=106467.431..117902.397 rows=1113355 loops=1)\n Hash Cond: (\"outer\".address = \"inner\".id)\n -> Merge Left Join (cost=1211354.65..1288896.97 rows=2077771 width=91) (actual time=104792.505..110648.253 rows=1113355 loops=1)\n Merge Cond: ((\"outer\".field = \"inner\".field) AND (\"outer\".part = \"inner\".part) AND (\"outer\".\"position\" = \"inner\".\"position\") AND (\"outer\".uid = \"inner\".uid) AND (\"outer\".mailbox = \"inner\".mailbox))\n -> Sort (cost=665399.78..670594.21 rows=2077771 width=87) (actual time=39463.784..39724.772 rows=264180 loops=1)\n Sort Key: hf.field, hf.part, hf.\"position\", hf.uid, hf.mailbox\n -> Bitmap Heap Scan on header_fields hf (cost=1505.63..325237.46 rows=2077771 width=87) (actual time=3495.308..33767.229 rows=264180 loops=1)\n Recheck Cond: ((field <= 12) AND ((part <> ''::text) OR (value ~~* '%,%'::text)))\n -> Bitmap Index Scan on hffpv (cost=0.00..1505.63 rows=2077771 width=0) (actual time=3410.069..3410.069 rows=264180 loops=1)\n Index Cond: (field <= 12)\n -> Sort (cost=545954.87..553141.07 rows=2874480 width=24) (actual time=65328.437..67437.846 rows=2874230 loops=1)\n Sort Key: af.field, af.part, af.\"position\", af.uid, af.mailbox\n -> Seq Scan on address_fields af (cost=0.00..163548.00 rows=2874480 width=24) (actual time=12.434..4076.694 rows=2874230 loops=1)\n -> Hash (cost=11714.35..11714.35 rows=190807 width=56) (actual time=1670.629..1670.629 rows=190807 loops=1)\n -> Seq Scan on addresses a (cost=0.00..11714.35 rows=190807 width=56) (actual time=39.944..1398.897 rows=190807 loops=1)\n Total runtime: 118381.608 ms\n\nNote the 2M estimated rowcount in the bitmap index scan on header_fields\nvs. the actual number (264180). That mis-estimation also causes it to do\na sequential scan of address_fields, though there's a usable index. 
If I\nset both enable_mergejoin and enable_seqscan to false, I get a plan like\nthe following:\n\n Hash Left Join (cost=8796.82..72064677.06 rows=2077771 width=143) (actual time=4400.706..58110.697 rows=1113355 loops=1)\n Hash Cond: (\"outer\".address = \"inner\".id)\n -> Nested Loop Left Join (cost=1505.63..71937416.17 rows=2077771 width=91) (actual time=3486.238..52351.567 rows=1113355 loops=1)\n Join Filter: ((\"outer\".\"position\" = \"inner\".\"position\") AND (\"outer\".part = \"inner\".part) AND (\"outer\".field = \"inner\".field))\n -> Bitmap Heap Scan on header_fields hf (cost=1505.63..242126.62 rows=2077771 width=87) (actual time=3478.202..39181.477 rows=264180 loops=1)\n Recheck Cond: ((field <= 12) AND ((part <> ''::text) OR (value ~~* '%,%'::text)))\n -> Bitmap Index Scan on hffpv (cost=0.00..1505.63 rows=2077771 width=0) (actual time=3393.949..3393.949 rows=264180 loops=1)\n Index Cond: (field <= 12)\n -> Index Scan using af_mu on address_fields af (cost=0.00..34.26 rows=11 width=24) (actual time=0.028..0.040 rows=7 loops=264180)\n Index Cond: ((\"outer\".mailbox = af.mailbox) AND (\"outer\".uid = af.uid))\n -> Hash (cost=4857.17..4857.17 rows=190807 width=56) (actual time=764.337..764.337 rows=190807 loops=1)\n -> Index Scan using addresses_pkey on addresses a (cost=0.00..4857.17 rows=190807 width=56) (actual time=29.381..484.826 rows=190807 loops=1)\n Total runtime: 58459.624 ms\n\nWhich looks like a considerably nicer plan (but still features the wild\nmis-estimation, though the index has approximately the right rowcount).\nI tried increasing the statistics target on header_fields.field, part,\nand value to 100, but the estimate always hovers around the 2M mark.\n\nDoes anyone have any ideas about what's wrong, and how to fix it?\n\nThanks.\n\n-- ams\n",
"msg_date": "Fri, 10 Nov 2006 10:42:45 +0530",
"msg_from": "Abhijit Menon-Sen <[email protected]>",
"msg_from_op": true,
"msg_subject": "10x rowcount mis-estimation favouring merge over nestloop"
},
{
"msg_contents": "Abhijit Menon-Sen <[email protected]> writes:\n> The header_fields table contains 13.5M rows, of which only ~250K match\n> the where condition. I created an index like this:\n> create index hffpv on header_fields(field)\n> where field<=12 and (part!='' or value ilike '%,%')\n\n> Note the 2M estimated rowcount in the bitmap index scan on header_fields\n> vs. the actual number (264180).\n\nI think this is basically a lack-of-column-correlation-stats problem.\nThe planner is estimating this on the basis of the overall selectivity\nof the \"field<=12\" condition, but it seems that \"field<=12\" is true for\na much smaller fraction of the rows satisfying (part!='' or value ilike '%,%')\nthan for the general population of rows in the header_fields table.\n\nThere's been some speculation about obtaining stats on partial indexes\nas a substitute for solving the general problem of correlation stats,\nbut I for one don't have a very clear understanding of how it'd work.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Nov 2006 01:15:24 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 10x rowcount mis-estimation favouring merge over nestloop "
},
{
"msg_contents": "At 2006-11-10 01:15:24 -0500, [email protected] wrote:\n>\n> it seems that \"field<=12\" is true for a much smaller fraction of the\n> rows satisfying (part!='' or value ilike '%,%') than for the general\n> population of rows in the header_fields table.\n\nIndeed. One-sixth of the rows in the entire table match field<=12, but\nonly one-fifteenth of the rows matching the part/value condition also\nmatch field<=12.\n\n> There's been some speculation about obtaining stats on partial indexes\n> as a substitute for solving the general problem of correlation stats,\n\nOh. So my partial index's rowcount isn't being considered at all? That\nexplains a lot. Ok, I'll just run the query with mergejoin and seqscan\ndisabled. (I can't think of much else to do to speed it up, anyway.)\n\nThanks.\n\n-- ams\n",
"msg_date": "Fri, 10 Nov 2006 12:37:00 +0530",
"msg_from": "Abhijit Menon-Sen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 10x rowcount mis-estimation favouring merge over nestloop"
}
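Since those two planner settings are per-session, one way to avoid disabling them globally is to scope them to just this query with SET LOCAL, which reverts at the end of the transaction. A sketch using the query from the first post in this thread:

    BEGIN;
    SET LOCAL enable_mergejoin = off;
    SET LOCAL enable_seqscan = off;
    SELECT hf.mailbox, hf.uid, hf.position, hf.part, hf.field, hf.value,
           af.address, a.name, a.localpart, a.domain
      FROM header_fields hf
      LEFT JOIN address_fields af USING (mailbox, uid, position, part, field)
      LEFT JOIN addresses a ON (af.address = a.id)
     WHERE hf.field <= 12 AND (hf.part != '' OR hf.value ILIKE '%,%');
    COMMIT;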
] |
[
{
"msg_contents": "Hi!\n\nIm new to PostgreSQL.\n\nMy current project uses PostgreSQL 7.3.4.\n\nthe problem is like this:\n\nI have a table with 94 fields and a select with only one resultset in only\none client consumes about 0.86 seconds.\nThe client executes three 'select' statements to perform the task which\nconsumes 2.58 seconds.\nWith only one client this is acceptable, but the real problem is as i add\nmore clients, it goes more and more slower.\n\nfor a single select with one field in one resultset, is 0.86 seconds normal?\n\nI tried vacuuming and reindexing but to no avail.\nthe total record count in that particular table is 456,541.\n\nThanks in advance.\n\nHi!\n\nIm new to PostgreSQL.\n\nMy current project uses PostgreSQL 7.3.4.\n\nthe problem is like this:\n\nI have a table with 94 fields and a select with only one resultset in only one client consumes about 0.86 seconds.\nThe client executes three 'select' statements to perform the task which consumes 2.58 seconds.\nWith only one client this is acceptable, but the real problem is as i add more clients, it goes more and more slower.\n\nfor a single select with one field in one resultset, is 0.86 seconds normal?\n\nI tried vacuuming and reindexing but to no avail.\nthe total record count in that particular table is 456,541.\n\nThanks in advance.",
"msg_date": "Wed, 15 Nov 2006 19:37:56 +0800",
"msg_from": "\"AMIR FRANCO D. JOVEN\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow SELECT on three or more clients"
},
{
"msg_contents": "* AMIR FRANCO D. JOVEN <[email protected]> [061115 12:44]:\n> Hi!\n> \n> Im new to PostgreSQL.\n> \n> My current project uses PostgreSQL 7.3.4.\nAncient. Upgrade it, especially if it's a new database.\n\n> \n> the problem is like this:\n> \n> I have a table with 94 fields and a select with only one resultset in only\n> one client consumes about 0.86 seconds.\n> The client executes three 'select' statements to perform the task which\n> consumes 2.58 seconds.\n> With only one client this is acceptable, but the real problem is as i add\n> more clients, it goes more and more slower.\nThat depends upon:\na) your table schema.\nb) the data in the tables. E.g. how big are rows, how many rows.\nc) the size of the result sets.\nd) your indexes?\n\nAndreas\n",
"msg_date": "Wed, 15 Nov 2006 14:29:09 +0100",
"msg_from": "Andreas Kostyrka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow SELECT on three or more clients"
},
{
"msg_contents": "AMIR FRANCO D. JOVEN wrote:\n> Hi!\n>\n> Im new to PostgreSQL.\n>\n> My current project uses PostgreSQL 7.3.4.\nUpgrading your version of PostgreSQL to 8.1 will give you significant \nbenefits to performance.\n>\n> the problem is like this:\n>\n> I have a table with 94 fields and a select with only one resultset in \n> only one client consumes about 0.86 seconds.\n> The client executes three 'select' statements to perform the task \n> which consumes 2.58 seconds.\n> With only one client this is acceptable, but the real problem is as i \n> add more clients, it goes more and more slower.\n>\n> for a single select with one field in one resultset, is 0.86 seconds \n> normal?\nYou will need to attach the query.\nEXPLAIN ANALYZE SELECT ...\n\nwhere SELECT ... is your query. That will help us work out what the \nproblem is. \n\n0.86 seconds might be slow for a query that returns 1 row, it might be \nfast for a query that returns a large set with complex joins and where \nconditions. Fast and slow are not objective terms. They are very \ndependent on the query.\n\n>\n> I tried vacuuming and reindexing but to no avail.\n> the total record count in that particular table is 456,541.\n>\n456,541 is not all that many records. But again you will need to post \nmore information for us to be able to assist.\n> Thanks in advance.\n>\n\n",
"msg_date": "Thu, 16 Nov 2006 00:31:59 +1100",
"msg_from": "Russell Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow SELECT on three or more clients"
},
{
"msg_contents": "Operating system and some of the basic PostreSQL config settings would be helpful, plus any info you have on your disks, the size of the relevant tables, their structure and indexes & vacuum/analyze status ... plus what others have said:\n\nUpgrade!\n\nThere are considerable improvements in, well, *everything* !, since 7.3 (we havew some database atb 7.4.x and I consider them out-of-date). Hopefully this list can provide help to get you through whatever your immediate crisis is, but do consider planning for this as soon as time and resource permit.\n\nData integrity is a _good_ thing!\n\nGreg Williamson\nDBA\nGlobeXplorer LLC\n\n\n-----Original Message-----\nFrom:\[email protected] on behalf of Russell Smith\nSent:\tWed 11/15/2006 5:31 AM\nTo:\tAMIR FRANCO D. JOVEN\nCc:\[email protected]\nSubject:\tRe: [PERFORM] Slow SELECT on three or more clients\n\nAMIR FRANCO D. JOVEN wrote:\n> Hi!\n>\n> Im new to PostgreSQL.\n>\n> My current project uses PostgreSQL 7.3.4.\nUpgrading your version of PostgreSQL to 8.1 will give you significant \nbenefits to performance.\n>\n> the problem is like this:\n>\n> I have a table with 94 fields and a select with only one resultset in \n> only one client consumes about 0.86 seconds.\n> The client executes three 'select' statements to perform the task \n> which consumes 2.58 seconds.\n> With only one client this is acceptable, but the real problem is as i \n> add more clients, it goes more and more slower.\n>\n> for a single select with one field in one resultset, is 0.86 seconds \n> normal?\nYou will need to attach the query.\nEXPLAIN ANALYZE SELECT ...\n\nwhere SELECT ... is your query. That will help us work out what the \nproblem is. \n\n0.86 seconds might be slow for a query that returns 1 row, it might be \nfast for a query that returns a large set with complex joins and where \nconditions. Fast and slow are not objective terms. They are very \ndependent on the query.\n\n>\n> I tried vacuuming and reindexing but to no avail.\n> the total record count in that particular table is 456,541.\n>\n456,541 is not all that many records. But again you will need to post \nmore information for us to be able to assist.\n> Thanks in advance.\n>\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings\n\n\n-------------------------------------------------------\nClick link below if it is SPAM [email protected]\n\"https://mailscanner.globexplorer.com/dspam/dspam.cgi?signatureID=455b17b2223071076418835&[email protected]&retrain=spam&template=history&history_page=1\"\n!DSPAM:455b17b2223071076418835!\n-------------------------------------------------------\n\n\n\n\n\n",
"msg_date": "Wed, 15 Nov 2006 06:01:11 -0800",
"msg_from": "\"Gregory S. Williamson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow SELECT on three or more clients"
},
{
"msg_contents": "On 11/15/06, AMIR FRANCO D. JOVEN <[email protected]> wrote:\n> Hi!\n>\n> Im new to PostgreSQL.\n>\n> My current project uses PostgreSQL 7.3.4.\n>\n> the problem is like this:\n>\n> I have a table with 94 fields and a select with only one resultset in only\n> one client consumes about 0.86 seconds.\n> The client executes three 'select' statements to perform the task which\n> consumes 2.58 seconds.\n> With only one client this is acceptable, but the real problem is as i add\n> more clients, it goes more and more slower.\n>\n> for a single select with one field in one resultset, is 0.86 seconds\n> normal?\n>\n> I tried vacuuming and reindexing but to no avail.\n> the total record count in that particular table is 456,541.\n\nreturning 450k rows in around 1 second is about right for a result set\nwith one field. imo, your best bet is to try and break up your table\nand reorganize it so you dont have to query the whole thing every\ntime. why do you need to return all the rows over and over?\n\nmerlin\n",
"msg_date": "Wed, 15 Nov 2006 09:17:42 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow SELECT on three or more clients"
},
{
"msg_contents": "Hi, Amir,\n\nAMIR FRANCO D. JOVEN wrote:\n\n> My current project uses PostgreSQL 7.3.4.\n\nBy all means, please upgrade.\n\nThe newest 7.3 series version is 7.3.16, which fixes lots of critical\nbugs, and can be used as a drop-in replacement for 7.3.4 (see Release\nNotes at http://www.postgresql.org/docs/7.3/interactive/release.html )\n\nThe newest stable release is 8.1.5, and 8.2 is just on the roads...\n\n> I have a table with 94 fields and a select with only one resultset in\n> only one client consumes about 0.86 seconds.\n\n\"with only on resultset\"?\n\nYou mean \"with only one returned row\", I presume.\n\nEach SELECT has exactly one resultset, which can contain zero to many rows.\n\nPlease check the following:\n\n- Did you create the appropriate indices?\n\n- Version 7.3.X may suffer from index bloat, so REINDEX might help.\n\n- Did you VACUUM and ANALYZE the table properly?\n\n- Is your free space map setting, the statistics targets, and other\nconfig options tuned to fit your environment?\n\n- Maybe a VACUUM FULL or a CLUSTER command may help you.\n\n> for a single select with one field in one resultset, is 0.86 seconds normal?\n\nThat depends on the circumstances.\n\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in Europe! www.ffii.org\nwww.nosoftwarepatents.org\n",
"msg_date": "Wed, 15 Nov 2006 15:47:04 +0100",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow SELECT on three or more clients"
},
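A sketch of the first two checklist items above; the table and column names are placeholders, since the real schema was not posted, and the value being searched for is likewise hypothetical:

    -- index the columns used in the WHERE clauses, refresh stats, re-check the plan
    CREATE INDEX big_table_search_col_idx ON big_table (search_col);
    VACUUM ANALYZE big_table;
    EXPLAIN ANALYZE SELECT * FROM big_table WHERE search_col = 'some value';

    -- on 7.3.x, periodically rebuild indexes that may have bloated
    REINDEX TABLE big_table;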
{
"msg_contents": "Hi Markus,\n\nThank you very much for the information.\n\nI was able to make it fast by correcting indices, i created index on\nfrequently filtered fields.\nnow it runs at 0.05 seconds average, much faster than before 0.86.\n\nI will also upgrade to 8.1.5.\n\nOnce again, thank you very much. it helped me a lot.\n\nAmir\n\nOn 11/15/06, Markus Schaber <[email protected]> wrote:\n>\n> Hi, Amir,\n>\n> AMIR FRANCO D. JOVEN wrote:\n>\n> > My current project uses PostgreSQL 7.3.4.\n>\n> By all means, please upgrade.\n>\n> The newest 7.3 series version is 7.3.16, which fixes lots of critical\n> bugs, and can be used as a drop-in replacement for 7.3.4 (see Release\n> Notes at http://www.postgresql.org/docs/7.3/interactive/release.html )\n>\n> The newest stable release is 8.1.5, and 8.2 is just on the roads...\n>\n> > I have a table with 94 fields and a select with only one resultset in\n> > only one client consumes about 0.86 seconds.\n>\n> \"with only on resultset\"?\n>\n> You mean \"with only one returned row\", I presume.\n>\n> Each SELECT has exactly one resultset, which can contain zero to many\n> rows.\n>\n> Please check the following:\n>\n> - Did you create the appropriate indices?\n>\n> - Version 7.3.X may suffer from index bloat, so REINDEX might help.\n>\n> - Did you VACUUM and ANALYZE the table properly?\n>\n> - Is your free space map setting, the statistics targets, and other\n> config options tuned to fit your environment?\n>\n> - Maybe a VACUUM FULL or a CLUSTER command may help you.\n>\n> > for a single select with one field in one resultset, is 0.86 seconds\n> normal?\n>\n> That depends on the circumstances.\n>\n> Markus\n>\n> --\n> Markus Schaber | Logical Tracking&Tracing International AG\n> Dipl. Inf. | Software Development GIS\n>\n> Fight against software patents in Europe! www.ffii.org\n> www.nosoftwarepatents.org\n>\n\n\n\n-- \nAMIR FRANCO D. JOVEN\nSoftware Engineer\nDIGI Software (PHILS.) Inc.\n\nHi Markus,\n\nThank you very much for the information.\n\nI was able to make it fast by correcting indices, i created index on frequently filtered fields.\nnow it runs at 0.05 seconds average, much faster than before 0.86.\n\nI will also upgrade to 8.1.5.\n\nOnce again, thank you very much. it helped me a lot.\n\nAmirOn 11/15/06, Markus Schaber <[email protected]> wrote:\nHi, Amir,AMIR FRANCO D. JOVEN wrote:> My current project uses PostgreSQL 7.3.4.By all means, please upgrade.The newest 7.3 series version is 7.3.16, which fixes lots of criticalbugs, and can be used as a drop-in replacement for \n7.3.4 (see ReleaseNotes at http://www.postgresql.org/docs/7.3/interactive/release.html )The newest stable release is 8.1.5, and 8.2 is just on the roads...\n> I have a table with 94 fields and a select with only one resultset in> only one client consumes about 0.86 seconds.\"with only on resultset\"?You mean \"with only one returned row\", I presume.\nEach SELECT has exactly one resultset, which can contain zero to many rows.Please check the following:- Did you create the appropriate indices?- Version 7.3.X may suffer from index bloat, so REINDEX might help.\n- Did you VACUUM and ANALYZE the table properly?- Is your free space map setting, the statistics targets, and otherconfig options tuned to fit your environment?- Maybe a VACUUM FULL or a CLUSTER command may help you.\n> for a single select with one field in one resultset, is 0.86 seconds normal?That depends on the circumstances.Markus--Markus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. 
| Software Development GISFight against software patents in Europe! www.ffii.orgwww.nosoftwarepatents.org\n-- AMIR FRANCO D. JOVENSoftware EngineerDIGI Software (PHILS.) Inc.",
"msg_date": "Thu, 16 Nov 2006 16:47:23 +0800",
"msg_from": "\"AMIR FRANCO D. JOVEN\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow SELECT on three or more clients"
}
] |
[
{
"msg_contents": "A few months ago a couple guys got \"bragging rights\" for having the most separate databases. A couple guys claimed several hundred databases and one said he had several thousand databases. The concensus was that Postgres has no problem handling many separate databases.\n\nI took that to heart and redesigned our system; we now have about 150 \"primary data sources\" that are used to build couple of \"warehouses\" that our customers actually search. Each database has about 20 tables. The total size (all databases and all tables together) is not huge, about 40 million rows. Eventually the warehouse (customer accessible) databases will be moved to separate servers, configured and indexed specifically for the task.\n\nThe only problem I've encountered is messages in the log:\n\n NOTICE: number of page slots needed (131904) exceeds max_fsm_pages (100000)\n HINT: Consider increasing the configuration parameter \"max_fsm_pages\" to a value over 131904.\n\nSo I dutifully followed this advice:\n\n max_fsm_pages = 320000\n max_fsm_relations = 20000\n\nThis is based on our current 150 databases times 20 tables, or 3000 tables total. But I wasn't sure if sequences count as \"relations\", which would double the number. So I set it at 20K relations to allow for growth.\n\nIs there anything else I need to worry about? What happens if I go to, say, 500 databases (aside from increasing the FSM numbers even more)? 1000 databases?\n\nThe servers are 4 GB, dual Xeon, Postgres 8.1.4 on Linux FC4.\n\nThanks,\nCraig\n\n",
"msg_date": "Wed, 15 Nov 2006 08:31:42 -0800",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hundreds of database and FSM"
},
{
"msg_contents": "Craig A. James wrote:\n\n> This is based on our current 150 databases times 20 tables, or 3000 tables \n> total. But I wasn't sure if sequences count as \"relations\", which would \n> double the number.\n\nThey don't because they don't have free space.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Wed, 15 Nov 2006 14:31:45 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hundreds of database and FSM"
},
{
"msg_contents": "On Wed, Nov 15, 2006 at 02:31:45PM -0300, Alvaro Herrera wrote:\n>> This is based on our current 150 databases times 20 tables, or 3000 tables \n>> total. But I wasn't sure if sequences count as \"relations\", which would \n>> double the number.\n> They don't because they don't have free space.\n\nOTOH, indexes do.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Wed, 15 Nov 2006 18:42:33 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hundreds of database and FSM"
}
] |
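A small sketch for sizing the FSM settings discussed above. Ordinary tables, indexes and toast tables all occupy max_fsm_relations slots (sequences do not, as Alvaro notes), and a database-wide VACUUM VERBOSE reports how many page slots are actually needed. Note that pg_class only covers the current database, so with 150 databases the counts have to be summed across all of them.

-- Relations that need free-space tracking in the current database:
-- ordinary tables ('r'), indexes ('i') and toast tables ('t').
SELECT relkind, count(*)
FROM pg_class
WHERE relkind IN ('r', 'i', 't')
GROUP BY relkind;

-- The tail of a database-wide VACUUM VERBOSE prints a summary like
-- "free space map contains N pages in M relations", which is the demand
-- that max_fsm_pages and max_fsm_relations have to cover cluster-wide.
VACUUM VERBOSE;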
[
{
"msg_contents": "I'm trying to optimize a PostgreSQL 8.1.5 database running on an \nApple G5 Xserve (dual G5 2.3 GHz w/ 8GB of RAM), running Mac OS X \n10.4.8 Server.\n\nThe queries on the database are mostly reads, and I know a larger \nshared memory allocation will help performance (also by comparing it \nto the performance of the same database running on a SUSE Linux box, \nwhich has a higher shared_buffers setting).\n\nWhen I set shared_buffers above 284263 (~ 2.17 GB) in the \npostgresql.conf file, I get the standard error message when trying to \nstart the db:\n\nFATAL: could not create shared memory segment: Cannot allocate memory\nDETAIL: Failed system call was shmget(key=5432001, size=3289776128, \n03600).\n\nshmmax and shmall are set to 4GB, as can be seen by the output from \nsysctl:\nhw.physmem = 2147483648\nhw.usermem = 1885794304\nhw.memsize = 8589934592\nkern.sysv.shmmax: 4294967296\nkern.sysv.shmmin: 1\nkern.sysv.shmmni: 32\nkern.sysv.shmseg: 8\nkern.sysv.shmall: 1048576\n\nHas anyone else noticed this limitation on OS X? Any ideas on how I \nmight get shared_buffers higher than 284263?\n\nBrian Wipf\n<[email protected]>\n\n",
"msg_date": "Thu, 16 Nov 2006 17:03:21 -0700",
"msg_from": "Brian Wipf <[email protected]>",
"msg_from_op": true,
"msg_subject": "shared_buffers > 284263 on OS X"
},
{
"msg_contents": "Brian,\nOn 16-Nov-06, at 7:03 PM, Brian Wipf wrote:\n\n> I'm trying to optimize a PostgreSQL 8.1.5 database running on an \n> Apple G5 Xserve (dual G5 2.3 GHz w/ 8GB of RAM), running Mac OS X \n> 10.4.8 Server.\n>\n> The queries on the database are mostly reads, and I know a larger \n> shared memory allocation will help performance (also by comparing \n> it to the performance of the same database running on a SUSE Linux \n> box, which has a higher shared_buffers setting).\n>\n> When I set shared_buffers above 284263 (~ 2.17 GB) in the \n> postgresql.conf file, I get the standard error message when trying \n> to start the db:\n>\n> FATAL: could not create shared memory segment: Cannot allocate memory\n> DETAIL: Failed system call was shmget(key=5432001, \n> size=3289776128, 03600).\n>\n> shmmax and shmall are set to 4GB, as can be seen by the output from \n> sysctl:\n> hw.physmem = 2147483648\n> hw.usermem = 1885794304\n> hw.memsize = 8589934592\n> kern.sysv.shmmax: 4294967296\n> kern.sysv.shmmin: 1\n> kern.sysv.shmmni: 32\n> kern.sysv.shmseg: 8\n> kern.sysv.shmall: 1048576\n>\n> Has anyone else noticed this limitation on OS X? Any ideas on how I \n> might get shared_buffers higher than 284263?\n\nMy guess is something else has taken shared memory ahead of you. OS X \nseems to be somewhat strange in how it deals with shared memory. Try \nallocating more to shmmax ?\n\nDave\n>\n> Brian Wipf\n> <[email protected]>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n\n",
"msg_date": "Sat, 18 Nov 2006 11:17:01 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers > 284263 on OS X"
},
{
"msg_contents": "Brian Wipf <[email protected]> wrote:\n\n> I'm trying to optimize a PostgreSQL 8.1.5 database running on an \n> Apple G5 Xserve (dual G5 2.3 GHz w/ 8GB of RAM), running Mac OS X \n> 10.4.8 Server.\n> \n> The queries on the database are mostly reads, and I know a larger \n> shared memory allocation will help performance (also by comparing it\n> to the performance of the same database running on a SUSE Linux box,\n> which has a higher shared_buffers setting).\n> \n> When I set shared_buffers above 284263 (~ 2.17 GB) in the \n> postgresql.conf file, I get the standard error message when trying to\n> start the db:\n\nIt might be, that you hit an upper limit in Mac OS X:\n\n[galadriel: memtext ] cug $ ./test\ntest(291) malloc: *** vm_allocate(size=2363490304) failed (error code=3)\ntest(291) malloc: *** error: can't allocate region\ntest(291) malloc: *** set a breakpoint in szone_error to debug\nmax alloc = 2253 M\n\nThat seems near the size you found to work. \n\nI don't really know much about that, but it seems you just can't alloc\nmore memory than a bit over 2GB. So, be careful with my non-existing\nknowledge about that ... ;-)\n\ncug\n",
"msg_date": "Sat, 18 Nov 2006 18:48:07 +0100",
"msg_from": "[email protected] (Guido Neitzer)",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers > 284263 on OS X"
},
{
"msg_contents": "Dave Cramer <[email protected]> writes:\n> On 16-Nov-06, at 7:03 PM, Brian Wipf wrote:\n>> Has anyone else noticed this limitation on OS X? Any ideas on how I \n>> might get shared_buffers higher than 284263?\n\n> My guess is something else has taken shared memory ahead of you. OS X \n> seems to be somewhat strange in how it deals with shared memory. Try \n> allocating more to shmmax ?\n\nLook in \"ipcs -m -a\" output to check this theory. (I am glad to see\nthat ipcs and ipcrm are finally there in recent OS X releases --- awhile\nback they were not, leaving people to fly entirely blind while dealing\nwith issues like this :-()\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 18 Nov 2006 13:30:33 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers > 284263 on OS X "
},
{
"msg_contents": "Hi.\n\nI've sent this out once, but I think it didn't make it through the \nmail server ... don't know why. If it is a double post - sorry for it.\n\nBrian Wipf <[email protected]> wrote:\n\n > I'm trying to optimize a PostgreSQL 8.1.5 database running on an\n > Apple G5 Xserve (dual G5 2.3 GHz w/ 8GB of RAM), running Mac OS X\n > 10.4.8 Server.\n >\n > The queries on the database are mostly reads, and I know a larger\n > shared memory allocation will help performance (also by comparing it\n > to the performance of the same database running on a SUSE Linux box,\n > which has a higher shared_buffers setting).\n >\n > When I set shared_buffers above 284263 (~ 2.17 GB) in the\n > postgresql.conf file, I get the standard error message when trying to\n > start the db:\n\nIt might be, that you hit an upper limit in Mac OS X:\n\n[galadriel: memtext ] cug $ ./test\ntest(291) malloc: *** vm_allocate(size=2363490304) failed (error code=3)\ntest(291) malloc: *** error: can't allocate region\ntest(291) malloc: *** set a breakpoint in szone_error to debug\nmax alloc = 2253 M\n\nThat seems near the size you found to work.\n\nI don't really know much about that, but it seems you just can't alloc\nmore memory than a bit over 2GB. So, be careful with my non-existing\nknowledge about that ... ;-)\n\ncug\n\n",
"msg_date": "Sat, 18 Nov 2006 19:44:39 +0100",
"msg_from": "Guido Neitzer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers > 284263 on OS X"
},
{
"msg_contents": "On 18-Nov-06, at 11:30 AM, Tom Lane wrote:\n> Dave Cramer <[email protected]> writes:\n>> On 16-Nov-06, at 7:03 PM, Brian Wipf wrote:\n>>> Has anyone else noticed this limitation on OS X? Any ideas on how I\n>>> might get shared_buffers higher than 284263?\n>\n>> My guess is something else has taken shared memory ahead of you. OS X\n>> seems to be somewhat strange in how it deals with shared memory. Try\n>> allocating more to shmmax ?\n>\n> Look in \"ipcs -m -a\" output to check this theory. (I am glad to see\n> that ipcs and ipcrm are finally there in recent OS X releases --- \n> awhile\n> back they were not, leaving people to fly entirely blind while dealing\n> with issues like this :-()\n\nipcs -m -a\nShared Memory:\nT ID KEY MODE OWNER GROUP CREATOR CGROUP \nNATTCH SEGSZ CPID LPID ATIME DTIME CTIME\nm 196607 5432001 --rw------- postgres postgres postgres \npostgres 8 -2100436992 223 223 23:00:07 2:49:44 23:00:07\n\n(I also bumped shmmax and shmall to 6GB with the same shared_buffers \nlimit.)\n\nIt certainly is unfortunate if Guido's right and this is an upper \nlimit for OS X. The performance benefit of having high shared_buffers \non our mostly read database is remarkable.\n\n",
"msg_date": "Sat, 18 Nov 2006 20:13:26 -0700",
"msg_from": "Brian Wipf <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers > 284263 on OS X "
},
{
"msg_contents": "Am 19.11.2006 um 04:13 schrieb Brian Wipf:\n\n> It certainly is unfortunate if Guido's right and this is an upper \n> limit for OS X. The performance benefit of having high \n> shared_buffers on our mostly read database is remarkable.\n\nI hate to say that, but if you want best performance out of \nPostgreSQL, Mac OS X (Server) isn't the best OS to achieve this. This \nmight change in the future (who knows), but currently you get more \nout of Linux. Brendan might have some of my old benchmarks. We wrote \na couple of mails about that a couple of months ago.\n\nIf you're interested, I can run a pgbench benchmark on my desktop \nmachine in the company comparing Mac OS X Tiger to Yellow Dog Linux \nwith 8.1.5 and 8.2beta3. If I remember correctly I have YDL installed \non a second hard drive and should be about a couple of minutes to \ninstall the latest PostgreSQL release.\n\nSo, there is no need for you to do the testing of YDL on your Xserves \nwithout knowing pretty much for sure, that it will bring you some \nbenefit.\n\nAs far as I remember I got around 50% to 80% better performance with \nLinux on the same machine with same settings but that was in times \nwhen I hardly new anything about optimizing the OS and PostgreSQL for \nOLTP performance.\n\nSome hints from what I have learned in the past about PostgreSQL on \nMac OS X / Apple machines:\n\n- Turn off Spotlight on all harddrives on the server (in /etc/ \nhostconfig)\n\n- Use the latest compilers (gcc) and PostgreSQL versions (I'm sure, \nyou do ... ;-)).\n\n- If you need the highest possible performance, use Linux instead of \nMac OS X for the DB server. :-/\n\nI know that some of the tips don't help with your current setup. \nPerhaps the switch to Linux on the DB machines might help. But I \ndon't know whether they work good with the XserveRAID you have. Might \nbring you some headache - I don't know, perhaps you can find opinions \non the net.\n\nRegarding the memory test I also tried it on Leopard and it seems \nthat the problem persists. Perhaps someone from Apple can say \nsomething about that. We might ask on the Darwin list.\n\nI'll post some results tomorrow.\n\ncug\n",
"msg_date": "Sun, 19 Nov 2006 16:20:46 +0100",
"msg_from": "Guido Neitzer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers > 284263 on OS X "
},
{
"msg_contents": "Am 18.11.2006 um 19:44 schrieb Guido Neitzer:\n\n> It might be, that you hit an upper limit in Mac OS X:\n>\n> [galadriel: memtext ] cug $ ./test\n> test(291) malloc: *** vm_allocate(size=2363490304) failed (error \n> code=3)\n> test(291) malloc: *** error: can't allocate region\n> test(291) malloc: *** set a breakpoint in szone_error to debug\n> max alloc = 2253 M\n\nCompiled with 64 Bit support the test program doesn't bring an error.\n\nI have now tried to compile PostgreSQL as a 64 Bit binary on Mac OS X \nbut wasn't able to do so. I'm running against the wall with my \nattempts but I must admit that I'm not an expert on that low level C \nstuff.\n\nI tried with setting the CFLAGS env variable to '-mpowerpc64 - \nmcpu=970 -m64' but with that, I'm not able to compile PostgreSQL on \nmy G5.\n\nHas someone hints for that?\n\ncug\n",
"msg_date": "Sun, 19 Nov 2006 23:22:18 +0100",
"msg_from": "Guido Neitzer <[email protected]>",
"msg_from_op": false,
"msg_subject": "PostgreSQL with 64 bit was: Re: shared_buffers > 284263 on OS X"
},
{
"msg_contents": "On Sat, Nov 18, 2006 at 08:13:26PM -0700, Brian Wipf wrote:\n> It certainly is unfortunate if Guido's right and this is an upper \n> limit for OS X. The performance benefit of having high shared_buffers \n> on our mostly read database is remarkable.\n\nGot any data about that you can share? People have been wondering about\ncases where drastically increasing shared_buffers makes a difference.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Sun, 26 Nov 2006 17:25:23 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers > 284263 on OS X"
},
{
"msg_contents": "I think the main issue is that we can't seem to get PostgreSQL \ncompiled for 64 bit on OS X on an Xserve G5. Has anyone done that?\n\nWe have 8 GB of RAM on that server, but we can't seem to utilize it \nall. At least not for the shared_buffers setting.\n\nThanks,\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com\n\nOn Nov 26, 2006, at 4:25 PM, Jim C. Nasby wrote:\n\n> On Sat, Nov 18, 2006 at 08:13:26PM -0700, Brian Wipf wrote:\n>> It certainly is unfortunate if Guido's right and this is an upper\n>> limit for OS X. The performance benefit of having high shared_buffers\n>> on our mostly read database is remarkable.\n>\n> Got any data about that you can share? People have been wondering \n> about\n> cases where drastically increasing shared_buffers makes a difference.\n> -- \n> Jim Nasby [email protected]\n> EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n\n\n",
"msg_date": "Sun, 26 Nov 2006 20:20:52 -0700",
"msg_from": "Brendan Duddridge <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers > 284263 on OS X"
},
{
"msg_contents": "Brendan Duddridge <[email protected]> writes:\n> I think the main issue is that we can't seem to get PostgreSQL \n> compiled for 64 bit on OS X on an Xserve G5. Has anyone done that?\n\nThere is no obvious reason why it would not work, and if anyone has\ntried and failed, they've not bothered to provide details on any PG list\nI read ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 26 Nov 2006 23:04:16 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers > 284263 on OS X "
},
{
"msg_contents": "On 27-Nov-06, at 4:04 AM, Tom Lane wrote:\n> Brendan Duddridge <[email protected]> writes:\n>> I think the main issue is that we can't seem to get PostgreSQL\n>> compiled for 64 bit on OS X on an Xserve G5. Has anyone done that?\n>\n> There is no obvious reason why it would not work, and if anyone has\n> tried and failed, they've not bothered to provide details on any PG \n> list\n> I read ...\n\nI'll post details of the problems I've had compiling for 64-bit on OS \nX Tiger to the pgsql-ports when I get a chance.\n\nBrian Wipf\n\n",
"msg_date": "Mon, 27 Nov 2006 04:10:25 +0000",
"msg_from": "Brian Wipf <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers > 284263 on OS X "
},
{
"msg_contents": "Am 27.11.2006 um 04:20 schrieb Brendan Duddridge:\n\n> I think the main issue is that we can't seem to get PostgreSQL \n> compiled for 64 bit on OS X on an Xserve G5. Has anyone done that?\n>\n> We have 8 GB of RAM on that server, but we can't seem to utilize it \n> all. At least not for the shared_buffers setting.\n\nOne VERY ugly idea is: if you have your stuff in more than one db, \nlet two PostgreSQL installations run on the same machine and put some \ndatabases on one and others on the second installation (on different \nports and different data directories of course) and give either one \nthe 2GB shared mem you like. So you can use the 50% of the available \nRAM.\n\nI don't know whether Mac OS X itself is able to handle a larger \namount of shared memory but I believe it can.\n\nBut nevertheless this is only a very ugly workaround on a problem \nthat shouldn't exist. The correct way would be to get a 64 Bit binary \nof PostgreSQL - which I wasn't able to create.\n\nBut, be aware of another thing here: As far as I have read about 64 \nBit applications on G5, these apps are definitely slower than their \n32 bit counterparts (I'm currently on the train so I can't be more \nprecise here without Google ...). Was it something with not enough \nregisters in the CPU? Something like that ... So it might be, that \nthe 64 bit version is able to use more shared memory but is slower \nthan the 32 bit version and you come out with the same performance. \nNobody knows ...\n\ncug\n",
"msg_date": "Mon, 27 Nov 2006 08:04:38 +0100",
"msg_from": "Guido Neitzer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers > 284263 on OS X"
},
{
"msg_contents": "Am 27.11.2006 um 00:25 schrieb Jim C. Nasby:\n\n> Got any data about that you can share? People have been wondering \n> about\n> cases where drastically increasing shared_buffers makes a difference.\n\nI have tried to compile PostgreSQL as a 64Bit application on my G5 \nbut wasn't successful. But I must admit, that I'm not a C programmer \nat all. I know enough to work with source packages and configure / \nmake but not enough to work with the errors I got from the compile. \nAnd as I'm extremely busy right now, I can't follow the trail and \nlearn more about it.\n\nPerhaps someone with more knowledge can take a look at it.\n\ncug\n",
"msg_date": "Mon, 27 Nov 2006 08:04:59 +0100",
"msg_from": "Guido Neitzer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers > 284263 on OS X"
},
{
"msg_contents": "On 26-Nov-06, at 11:25 PM, Jim C. Nasby wrote:\n> On Sat, Nov 18, 2006 at 08:13:26PM -0700, Brian Wipf wrote:\n>> It certainly is unfortunate if Guido's right and this is an upper\n>> limit for OS X. The performance benefit of having high shared_buffers\n>> on our mostly read database is remarkable.\n>\n> Got any data about that you can share? People have been wondering \n> about\n> cases where drastically increasing shared_buffers makes a difference.\n\nUnfortunately, there are more differences than just the \nshared_buffers setting in production right now; it's a completely \ndifferent set up, so the numbers I have to compare against aren't \nparticularly useful.\n\nWhen I get the chance, I will try to post data that shows the benefit \nof having a higher value of shared_buffers for our usage pattern \n(with all other settings being constant -- well, except maybe \neffective_cache_size). Basically, in our current configuration, we \ncan cache all of the data we care about 99% of the time in about 3GB \nof shared_buffers. Having shared_buffers set to 512MB as it was \noriginally, we were needlessly going to disk all of the time.\n\nBrian Wipf\n\n",
"msg_date": "Mon, 27 Nov 2006 07:23:47 +0000",
"msg_from": "Brian Wipf <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers > 284263 on OS X"
},
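For the question of whether those reads were really hitting disk or just the kernel cache, the contrib/pg_buffercache module shipped with 8.1 shows what is actually resident in shared_buffers. A sketch, assuming the module has been installed in the database being inspected:

-- Top relations by number of 8 kB buffers held in shared_buffers.
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = c.relfilenode
JOIN pg_database d ON b.reldatabase = d.oid
                  AND d.datname = current_database()
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;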
{
"msg_contents": "Am 27.11.2006 um 08:04 schrieb Guido Neitzer:\n\n> But, be aware of another thing here: As far as I have read about 64 \n> Bit applications on G5, these apps are definitely slower than their \n> 32 bit counterparts (I'm currently on the train so I can't be more \n> precise here without Google ...). Was it something with not enough \n> registers in the CPU? Something like that ... So it might be, that \n> the 64 bit version is able to use more shared memory but is slower \n> than the 32 bit version and you come out with the same performance. \n> Nobody knows ...\n\nSome information about that:\n\n<http://www.geekpatrol.ca/2006/09/32-bit-vs-64-bit-performance/>\n\nSo, the impact doesn't seem to high. So it seems to depend on the \nusage pattern whether the 32 bit with less RAM and slightly higher \nperformance might be faster than 64 bit with more shared memory and \nslightly lower performance.\n\ncug\n\n",
"msg_date": "Mon, 27 Nov 2006 08:35:27 +0100",
"msg_from": "Guido Neitzer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers > 284263 on OS X"
},
{
"msg_contents": "On Mon, Nov 27, 2006 at 07:23:47AM +0000, Brian Wipf wrote:\n> On 26-Nov-06, at 11:25 PM, Jim C. Nasby wrote:\n> >On Sat, Nov 18, 2006 at 08:13:26PM -0700, Brian Wipf wrote:\n> >>It certainly is unfortunate if Guido's right and this is an upper\n> >>limit for OS X. The performance benefit of having high shared_buffers\n> >>on our mostly read database is remarkable.\n> >\n> >Got any data about that you can share? People have been wondering \n> >about\n> >cases where drastically increasing shared_buffers makes a difference.\n> \n> Unfortunately, there are more differences than just the \n> shared_buffers setting in production right now; it's a completely \n> different set up, so the numbers I have to compare against aren't \n> particularly useful.\n> \n> When I get the chance, I will try to post data that shows the benefit \n> of having a higher value of shared_buffers for our usage pattern \n> (with all other settings being constant -- well, except maybe \n> effective_cache_size). Basically, in our current configuration, we \n> can cache all of the data we care about 99% of the time in about 3GB \n> of shared_buffers. Having shared_buffers set to 512MB as it was \n> originally, we were needlessly going to disk all of the time.\n\nDisk or to the kernel cache?\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Mon, 27 Nov 2006 03:22:56 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers > 284263 on OS X"
},
{
"msg_contents": "\nOn Nov 27, 2006, at 2:23 , Brian Wipf wrote:\n\n> On 26-Nov-06, at 11:25 PM, Jim C. Nasby wrote:\n>> On Sat, Nov 18, 2006 at 08:13:26PM -0700, Brian Wipf wrote:\n>>> It certainly is unfortunate if Guido's right and this is an upper\n>>> limit for OS X. The performance benefit of having high \n>>> shared_buffers\n>>> on our mostly read database is remarkable.\n>>\n>> Got any data about that you can share? People have been wondering \n>> about\n>> cases where drastically increasing shared_buffers makes a difference.\n>\n> Unfortunately, there are more differences than just the \n> shared_buffers setting in production right now; it's a completely \n> different set up, so the numbers I have to compare against aren't \n> particularly useful.\n>\n> When I get the chance, I will try to post data that shows the \n> benefit of having a higher value of shared_buffers for our usage \n> pattern (with all other settings being constant -- well, except \n> maybe effective_cache_size). Basically, in our current \n> configuration, we can cache all of the data we care about 99% of \n> the time in about 3GB of shared_buffers. Having shared_buffers set \n> to 512MB as it was originally, we were needlessly going to disk all \n> of the time.\n\nThere is a known unfortunate limitation on Darwin for SysV shared \nmemory which, incidentally, does not afflict POSIX or mmap'd shared \nmemory.\n\nhttp://archives.postgresql.org/pgsql-patches/2006-02/msg00176.php\n",
"msg_date": "Mon, 27 Nov 2006 11:05:12 -0500",
"msg_from": "AgentM <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers > 284263 on OS X"
},
{
"msg_contents": "Am 27.11.2006 um 17:05 schrieb AgentM:\n\n> There is a known unfortunate limitation on Darwin for SysV shared \n> memory which, incidentally, does not afflict POSIX or mmap'd shared \n> memory.\n\nHmmm. The article from Chris you have linked does not mention the \nsize of the mem segment you can allocate. Nevertheless - if you \ncompile a 32 Bit binary, there is the limitation Brian mentioned.\n\nYou can easily simulate this with a small C program that allocates \nmemory - if you compile it as 64 Bit binary - not problem, if you \ncompile as 32 Bit - crash.\n\ncug\n",
"msg_date": "Mon, 27 Nov 2006 17:21:37 +0100",
"msg_from": "Guido Neitzer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers > 284263 on OS X"
}
] |
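A small query for translating the shared_buffers values discussed in this thread into bytes, which is what has to fit under kern.sysv.shmmax and under the roughly 2 GB per-segment ceiling a 32-bit build runs into. In 8.1 the setting is a count of 8 kB buffers, so 284263 buffers works out to about 2.17 GB before the rest of the shared memory segment is added.

-- Approximate size of the shared buffer pool implied by the current setting.
SELECT name,
       setting::bigint AS buffers,
       setting::bigint * 8192 / (1024 * 1024) AS approx_mb
FROM pg_settings
WHERE name = 'shared_buffers';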
[
{
"msg_contents": "I see many of you folks singing the praises of the Areca and 3ware SATA \ncontrollers, but I've been trying to price some systems and am having trouble \nfinding a vendor who ships these controllers with their systems. Are you \nrolling your own white boxes or am I just looking in the wrong places?\n\nCurrently, I'm looking at Penguin, HP and Sun (though Sun's store isn't \nworking for me at the moment). Maybe I just need to order a Penguin and then \nbuy the controller separately, but was hoping to get support from a single \nentity.\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n",
"msg_date": "Fri, 17 Nov 2006 09:45:42 -0800 (PST)",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "availability of SATA vendors"
},
{
"msg_contents": "On 17-11-2006 18:45 Jeff Frost wrote:\n> I see many of you folks singing the praises of the Areca and 3ware SATA \n> controllers, but I've been trying to price some systems and am having \n> trouble finding a vendor who ships these controllers with their \n> systems. Are you rolling your own white boxes or am I just looking in \n> the wrong places?\n\nIn Holland it are indeed the smaller companies who supply such cards. \nBut luckily there is a very simple solution, all those big suppliers do \nsupply SAS-controllers. And as you may know, SATA disks can be used \nwithout any problem on a SAS controller. Of course they are less \nadvanced and normally slower than a SAS disk.\n\nSo you can have a nice SAS raid card and insert SATA disks in it. And \nthan you can shop at any major server vendor I know off.\n\nGood luck,\n\nArjen\n",
"msg_date": "Fri, 17 Nov 2006 20:19:59 +0100",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: availability of SATA vendors"
},
{
"msg_contents": "Contact Pogo Linux, www.pogolinux.com. I believe they OEM and VAR \nboth Areca and 3ware in their systems.\n\n3ware was bought out by AMCC, www.amcc.com\n\nAreca cards are distributed in NA by Tekram, www.tekram.com, and are \navailable from them as solo items as well as in OEM storage systems.\nAreca main offices are in Taiwan.\n\nRon\n\nAt 12:45 PM 11/17/2006, Jeff Frost wrote:\n>I see many of you folks singing the praises of the Areca and 3ware \n>SATA controllers, but I've been trying to price some systems and am \n>having trouble finding a vendor who ships these controllers with \n>their systems. Are you rolling your own white boxes or am I just \n>looking in the wrong places?\n>\n>Currently, I'm looking at Penguin, HP and Sun (though Sun's store \n>isn't working for me at the moment). Maybe I just need to order a \n>Penguin and then buy the controller separately, but was hoping to \n>get support from a single entity.\n\n",
"msg_date": "Fri, 17 Nov 2006 15:00:35 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: availability of SATA vendors"
},
{
"msg_contents": "\nOn Nov 17, 2006, at 9:45 AM, Jeff Frost wrote:\n\n> I see many of you folks singing the praises of the Areca and 3ware \n> SATA controllers, but I've been trying to price some systems and am \n> having trouble finding a vendor who ships these controllers with \n> their systems. Are you rolling your own white boxes or am I just \n> looking in the wrong places?\n>\n> Currently, I'm looking at Penguin, HP and Sun (though Sun's store \n> isn't working for me at the moment). Maybe I just need to order a \n> Penguin and then buy the controller separately, but was hoping to \n> get support from a single entity.\n\nI bought my last system pre-built from asacomputers.com, with the \n3ware controller in it. I don't recall if the controller was listed \non the webpage or not - I wrote their sales address and asked for a \nquote on \"one like that, with a 9550sx controller\".\n\n(\"one like that\" was their 5U, Opteron, 24 bay box.)\n\nCheers,\n Steve\n\n",
"msg_date": "Fri, 17 Nov 2006 12:41:57 -0800",
"msg_from": "Steve Atkins <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: availability of SATA vendors"
},
{
"msg_contents": "Jeff,\n\nOn 11/17/06 11:45 AM, \"Jeff Frost\" <[email protected]> wrote:\n\n> I see many of you folks singing the praises of the Areca and 3ware SATA\n> controllers, but I've been trying to price some systems and am having trouble\n> finding a vendor who ships these controllers with their systems. Are you\n> rolling your own white boxes or am I just looking in the wrong places?\n> \n> Currently, I'm looking at Penguin, HP and Sun (though Sun's store isn't\n> working for me at the moment). Maybe I just need to order a Penguin and then\n> buy the controller separately, but was hoping to get support from a single\n> entity.\n\nRackable or Asacomputers sell and support systems with the 3Ware or Areca\ncontrollers.\n\n- Luke\n\n\n",
"msg_date": "Fri, 17 Nov 2006 15:54:06 -0600",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: availability of SATA vendors"
},
{
"msg_contents": "On Fri, 17 Nov 2006, Luke Lonergan wrote:\n\n>> Currently, I'm looking at Penguin, HP and Sun (though Sun's store isn't\n>> working for me at the moment). Maybe I just need to order a Penguin and then\n>> buy the controller separately, but was hoping to get support from a single\n>> entity.\n>\n> Rackable or Asacomputers sell and support systems with the 3Ware or Areca\n> controllers.\n\nLuke,\n\nASAcomputers has been the most helpful of all the vendors so far, so thanks \nfor point me at them. I know you've been posting results with the Areca and \n3ware controllers, do you have a preference for one over the other? It seems \nthat you can only get 256MB cache with the 3ware 9550SX and you can get 512MB \nwith the 9650SE, but only the Areca cards go up to 1GB.\n\nI'm curious how big a performance gain we would see going from 256MB cache to \n512MB to 1GB. This is for a web site backend DB which is mostly read \nintensive, but occassionally has large burts of write activity due to new user \nsignups generated by the marketing engine.\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n",
"msg_date": "Tue, 21 Nov 2006 17:54:38 -0800 (PST)",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: availability of SATA vendors"
},
{
"msg_contents": "\n> ASAcomputers has been the most helpful of all the vendors so far, so thanks \n> for point me at them. I know you've been posting results with the Areca and \n> 3ware controllers, do you have a preference for one over the other? It seems \n> that you can only get 256MB cache with the 3ware 9550SX and you can get 512MB \n> with the 9650SE, but only the Areca cards go up to 1GB.\n\nDon't count out LSI either. They make a great SATA controller based off\ntheir very well respected SCSI controller.\n\nSincerely,\n\nJoshua D. Drake\n\n> \n> I'm curious how big a performance gain we would see going from 256MB cache to \n> 512MB to 1GB. This is for a web site backend DB which is mostly read \n> intensive, but occassionally has large burts of write activity due to new user \n> signups generated by the marketing engine.\n> \n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n\n\n",
"msg_date": "Tue, 21 Nov 2006 18:13:21 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: availability of SATA vendors"
},
{
"msg_contents": "On Tue, 21 Nov 2006, Joshua D. Drake wrote:\n\n>\n>> ASAcomputers has been the most helpful of all the vendors so far, so thanks\n>> for point me at them. I know you've been posting results with the Areca and\n>> 3ware controllers, do you have a preference for one over the other? It seems\n>> that you can only get 256MB cache with the 3ware 9550SX and you can get 512MB\n>> with the 9650SE, but only the Areca cards go up to 1GB.\n>\n> Don't count out LSI either. They make a great SATA controller based off\n> their very well respected SCSI controller.\n\nInteresting. Does it perform as well as the ARECAs and how much BBU cache can \nyou put in it? Oh, does it use the good ole megaraid_mbox driver as well?\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n",
"msg_date": "Tue, 21 Nov 2006 18:15:22 -0800 (PST)",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: availability of SATA vendors"
},
{
"msg_contents": "\n> > Don't count out LSI either. They make a great SATA controller based off\n> > their very well respected SCSI controller.\n> \n> Interesting. Does it perform as well as the ARECAs \n\nI don't know if it performs as well as the ARECAs but I can say, I have\nnever had a complaint.\n\n> and how much BBU cache can \n> you put in it? \n\nYes it support BBU and the max cache depends on the card.\n\n4 drive model, comes with a static 64 megs\n8 drive model, comes with a static 128 megs\n\nI don't know if they are expandable but keep.\n\n\n> Oh, does it use the good ole megaraid_mbox driver as well?\n> \n\nYeah it uses the long standing megaraid, stable as all get out and fast\ndriver :)\n\nSincerely,\n\nJoshua D. Drake\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n\n\n",
"msg_date": "Tue, 21 Nov 2006 18:37:34 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: availability of SATA vendors"
},
{
"msg_contents": "Dells (at least the 1950 and 2950) come with the Perc5, which is\nbasically just the LSI MegaRAID. The units I have come with a 256MB BBU,\nI'm not sure if it's upgradeable, but it looks like a standard DIMM in\nthere... \n\nI posted some dd and bonnie++ benchmarks of a 6-disk setup a while back\non a 2950, so you might search the archive for those numbers if you're\ninterested- you should be able to get the same or better from a\nsimilarly equipped LSI setup. I don't recall if I posted pgbench\nnumbers, but I can if that's of interest.\n\nDell's probably not the best performance you can get for the money, and\nif you're not running a supported Linux distro, you might have to do a\nlittle cajoling to get support from Dell (though in my experience, it's\nnot too difficult to get them to fix failed hw regardless of the OS),\nbut other than that I haven't had too many issues. \n\nHTH,\n\nBucky\n\n> -----Original Message-----\n> From: [email protected]\n[mailto:pgsql-performance-\n> [email protected]] On Behalf Of Joshua D. Drake\n> Sent: Tuesday, November 21, 2006 9:38 PM\n> To: Jeff Frost\n> Cc: Luke Lonergan; [email protected]\n> Subject: Re: [PERFORM] availability of SATA vendors\n> \n> \n> > > Don't count out LSI either. They make a great SATA controller\nbased\n> off\n> > > their very well respected SCSI controller.\n> >\n> > Interesting. Does it perform as well as the ARECAs\n> \n> I don't know if it performs as well as the ARECAs but I can say, I\nhave\n> never had a complaint.\n> \n> > and how much BBU cache can\n> > you put in it?\n> \n> Yes it support BBU and the max cache depends on the card.\n> \n> 4 drive model, comes with a static 64 megs\n> 8 drive model, comes with a static 128 megs\n> \n> I don't know if they are expandable but keep.\n> \n> \n> > Oh, does it use the good ole megaraid_mbox driver as well?\n> >\n> \n> Yeah it uses the long standing megaraid, stable as all get out and\nfast\n> driver :)\n> \n> Sincerely,\n> \n> Joshua D. Drake\n> \n> \n> --\n> \n> === The PostgreSQL Company: Command Prompt, Inc. ===\n> Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n> Providing the most comprehensive PostgreSQL solutions since 1997\n> http://www.commandprompt.com/\n> \n> Donate to the PostgreSQL Project:\nhttp://www.postgresql.org/about/donate\n> \n> \n> \n> \n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n",
"msg_date": "Wed, 22 Nov 2006 09:18:11 -0500",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: availability of SATA vendors"
},
{
"msg_contents": "On Wed, 22 Nov 2006, Bucky Jordan wrote:\n\n> Dells (at least the 1950 and 2950) come with the Perc5, which is\n> basically just the LSI MegaRAID. The units I have come with a 256MB BBU,\n> I'm not sure if it's upgradeable, but it looks like a standard DIMM in\n> there...\n>\n> I posted some dd and bonnie++ benchmarks of a 6-disk setup a while back\n> on a 2950, so you might search the archive for those numbers if you're\n> interested- you should be able to get the same or better from a\n> similarly equipped LSI setup. I don't recall if I posted pgbench\n> numbers, but I can if that's of interest.\n\nI could only find the 6 disk RAID5 numbers in the archives that were run with \nbonnie++1.03. Have you run the RAID10 tests since? Did you settle on 6 disk \nRAID5 or 2xRAID1 + 4XRAID10?\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n",
"msg_date": "Wed, 22 Nov 2006 08:36:11 -0800 (PST)",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: availability of SATA vendors"
},
{
"msg_contents": "On Wed, 2006-11-22 at 08:36 -0800, Jeff Frost wrote:\n> On Wed, 22 Nov 2006, Bucky Jordan wrote:\n> \n> > Dells (at least the 1950 and 2950) come with the Perc5, which is\n> > basically just the LSI MegaRAID. The units I have come with a 256MB BBU,\n> > I'm not sure if it's upgradeable, but it looks like a standard DIMM in\n> > there...\n> >\n> > I posted some dd and bonnie++ benchmarks of a 6-disk setup a while back\n> > on a 2950, so you might search the archive for those numbers if you're\n> > interested- you should be able to get the same or better from a\n> > similarly equipped LSI setup. I don't recall if I posted pgbench\n> > numbers, but I can if that's of interest.\n> \n> I could only find the 6 disk RAID5 numbers in the archives that were run with \n> bonnie++1.03. Have you run the RAID10 tests since? Did you settle on 6 disk \n> RAID5 or 2xRAID1 + 4XRAID10?\n\nWhy not 6 drive raid 10? IIRC you need 4 to start RAID 10 but only pairs\nafter that.\n\nJoshua D. Drake\n\n\n> \n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n\n\n",
"msg_date": "Wed, 22 Nov 2006 09:00:16 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: availability of SATA vendors"
},
{
"msg_contents": "On Wed, 22 Nov 2006, Joshua D. Drake wrote:\n\n>> I could only find the 6 disk RAID5 numbers in the archives that were run with\n>> bonnie++1.03. Have you run the RAID10 tests since? Did you settle on 6 disk\n>> RAID5 or 2xRAID1 + 4XRAID10?\n>\n> Why not 6 drive raid 10? IIRC you need 4 to start RAID 10 but only pairs\n> after that.\n\nA valid question. Does the caching raid controller negate the desire to \nseparate pg_xlog from PGDATA?\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n",
"msg_date": "Wed, 22 Nov 2006 09:02:04 -0800 (PST)",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: availability of SATA vendors"
},
{
"msg_contents": "Jeff,\n\nYou can find some (Dutch) results here on our website:\nhttp://tweakers.net/reviews/647/5\n\nYou'll find the AMCC/3ware 9550SX-12 with up to 12 disks, Areca 1280 and \n1160 with up to 14 disks and a Promise and LSI sata-raid controller with \neach up to 8 disks. Btw, that Dell Perc5 (sas) is afaik not the same \ncard as the LSI MegaRAID SATA 300-8X, but I have no idea whether they \nshare the same controllerchip.\nIn most of the graphs you also see a Areca 1160 with 1GB in stead of its \ndefault 256MB. Hover over the labels to see only that specific line, \nthat makes the graphs quite readable.\n\nYou'll also see a Dell Perc5/e in the results, but that was done using \nFujitsu SAS 15k rpm drives, not the WD Raptor 10k rpm's\n\nIf you dive deeper in our (still Dutch) \"benchmark database\" you may \nfind some results of several disk-configurations on several controllers \nin various storage related tests, like here:\nhttp://tweakers.net/benchdb/test/193\n\nIf you want to filter some results, look for \"Resultaatfilter & \ntabelgenerator\" and press on the \"Toon filteropties\"-tekst. I think \nyou'll be able to understand the selection-overview there, even if you \ndon't understand Dutch ;)\n\"Filter resultaten\" below means the same as in English (filter [the] \nresults)\n\nBest regards,\n\nArjen\n\nOn 22-11-2006 17:36 Jeff Frost wrote:\n> On Wed, 22 Nov 2006, Bucky Jordan wrote:\n> \n>> Dells (at least the 1950 and 2950) come with the Perc5, which is\n>> basically just the LSI MegaRAID. The units I have come with a 256MB BBU,\n>> I'm not sure if it's upgradeable, but it looks like a standard DIMM in\n>> there...\n>>\n>> I posted some dd and bonnie++ benchmarks of a 6-disk setup a while back\n>> on a 2950, so you might search the archive for those numbers if you're\n>> interested- you should be able to get the same or better from a\n>> similarly equipped LSI setup. I don't recall if I posted pgbench\n>> numbers, but I can if that's of interest.\n> \n> I could only find the 6 disk RAID5 numbers in the archives that were run \n> with bonnie++1.03. Have you run the RAID10 tests since? Did you settle \n> on 6 disk RAID5 or 2xRAID1 + 4XRAID10?\n> \n",
"msg_date": "Wed, 22 Nov 2006 18:07:17 +0100",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: availability of SATA vendors"
},
{
"msg_contents": "On Wed, 2006-11-22 at 09:02 -0800, Jeff Frost wrote:\n> On Wed, 22 Nov 2006, Joshua D. Drake wrote:\n> \n> >> I could only find the 6 disk RAID5 numbers in the archives that were run with\n> >> bonnie++1.03. Have you run the RAID10 tests since? Did you settle on 6 disk\n> >> RAID5 or 2xRAID1 + 4XRAID10?\n> >\n> > Why not 6 drive raid 10? IIRC you need 4 to start RAID 10 but only pairs\n> > after that.\n> \n> A valid question. Does the caching raid controller negate the desire to \n> separate pg_xlog from PGDATA?\n\nThere is a point where the seperate pg_xlog does not help with RAID 10.\nWhere that point is, entirely depends on your database usage :)\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n\n\n",
"msg_date": "Wed, 22 Nov 2006 09:09:50 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: availability of SATA vendors"
},
{
"msg_contents": "On Wed, 2006-11-22 at 11:02, Jeff Frost wrote:\n> On Wed, 22 Nov 2006, Joshua D. Drake wrote:\n> \n> >> I could only find the 6 disk RAID5 numbers in the archives that were run with\n> >> bonnie++1.03. Have you run the RAID10 tests since? Did you settle on 6 disk\n> >> RAID5 or 2xRAID1 + 4XRAID10?\n> >\n> > Why not 6 drive raid 10? IIRC you need 4 to start RAID 10 but only pairs\n> > after that.\n> \n> A valid question. Does the caching raid controller negate the desire to \n> separate pg_xlog from PGDATA?\n\nI remember seeing something on the list a while back that having\nseparate file systems was as important as having separate disks / arrays\nfor pg_xlog and PGDATA.\n\nSomething about the linux on the machine under test being better at\nordering of writes if they were to two separate file systems. Of\ncourse, the weird thing is how counter intuitive that is, knowing that\nthe heads will have to move from one partition to another on a single\ndisk.\n\nbut on a multi-disk RAID10, it starts to make sense that the writes to\npg_xlog and the writes to data would likely be happening to different\ndrives at once, and so having them be on separate file systems would\nmake it faster if the kernel was better at handling the ordering that\nway.\n\nIt's worth looking into at least.\n\nOh, and another vote of confidence for the LSI based controllers. I've\nhad good luck with both the \"genuine\" article from LSI and the Dell\naftermarket ones. Avoid the Dell - Adaptec controllers like the\nplague. If you're lucky being slow is the only problem you'll have with\nthose.\n\nI'm really hoping to spec out a data warehouse machine here in the next\nyear with lots of drives and an Areca or LSI controller in it... This\nthread (and all the ones that have come before it) has been most useful\nand will be archived.\n",
"msg_date": "Wed, 22 Nov 2006 11:34:23 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: availability of SATA vendors"
},
{
"msg_contents": "> \n> I could only find the 6 disk RAID5 numbers in the archives that were\nrun\n> with\n> bonnie++1.03. Have you run the RAID10 tests since? Did you settle on\n6\n> disk\n> RAID5 or 2xRAID1 + 4XRAID10?\n> \n\nUnfortunately most of the tests were run with bonnie 1.9 since they were\nbefore I realized that people didn't use the latest version. If I get\nthe chance to run additional tests, I'll post bonnie++ 1.03 numbers.\n\nWe ended up going with 6xRaid5 since we need as much storage as we can\nget at the moment.\n\nWhile I'm at it, if I have time I'll run pgbench with pg_log on a\nseparate RAID1, and one with it on a RAID10x6, but I don't know how\nuseful those results will be.\n\nThanks,\n\nBucky\n",
"msg_date": "Wed, 22 Nov 2006 16:35:37 -0500",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: availability of SATA vendors"
},
{
"msg_contents": "Arjen,\n\nAs usual, your articles are excellent!\n\nYour results show again that the 3Ware 9550SX is really poor at random I/O\nwith RAID5 and all of the Arecas are really good. 3Ware/AMCC have designed\nthe 96xx to do much better for RAID5, but I've not seen results - can you\nget a card and test it?\n\nWe now run the 3Ware controllers in RAID10 with 8 disks each and they have\nbeen excellent. Here (on your site) are results that bear this out:\n http://tweakers.net/reviews/639/9\n\n- Luke\n\n\nOn 11/22/06 11:07 AM, \"Arjen van der Meijden\" <[email protected]>\nwrote:\n\n> Jeff,\n> \n> You can find some (Dutch) results here on our website:\n> http://tweakers.net/reviews/647/5\n> \n> You'll find the AMCC/3ware 9550SX-12 with up to 12 disks, Areca 1280 and\n> 1160 with up to 14 disks and a Promise and LSI sata-raid controller with\n> each up to 8 disks. Btw, that Dell Perc5 (sas) is afaik not the same\n> card as the LSI MegaRAID SATA 300-8X, but I have no idea whether they\n> share the same controllerchip.\n> In most of the graphs you also see a Areca 1160 with 1GB in stead of its\n> default 256MB. Hover over the labels to see only that specific line,\n> that makes the graphs quite readable.\n> \n> You'll also see a Dell Perc5/e in the results, but that was done using\n> Fujitsu SAS 15k rpm drives, not the WD Raptor 10k rpm's\n> \n> If you dive deeper in our (still Dutch) \"benchmark database\" you may\n> find some results of several disk-configurations on several controllers\n> in various storage related tests, like here:\n> http://tweakers.net/benchdb/test/193\n> \n> If you want to filter some results, look for \"Resultaatfilter &\n> tabelgenerator\" and press on the \"Toon filteropties\"-tekst. I think\n> you'll be able to understand the selection-overview there, even if you\n> don't understand Dutch ;)\n> \"Filter resultaten\" below means the same as in English (filter [the]\n> results)\n> \n> Best regards,\n> \n> Arjen\n> \n> On 22-11-2006 17:36 Jeff Frost wrote:\n>> On Wed, 22 Nov 2006, Bucky Jordan wrote:\n>> \n>>> Dells (at least the 1950 and 2950) come with the Perc5, which is\n>>> basically just the LSI MegaRAID. The units I have come with a 256MB BBU,\n>>> I'm not sure if it's upgradeable, but it looks like a standard DIMM in\n>>> there...\n>>> \n>>> I posted some dd and bonnie++ benchmarks of a 6-disk setup a while back\n>>> on a 2950, so you might search the archive for those numbers if you're\n>>> interested- you should be able to get the same or better from a\n>>> similarly equipped LSI setup. I don't recall if I posted pgbench\n>>> numbers, but I can if that's of interest.\n>> \n>> I could only find the 6 disk RAID5 numbers in the archives that were run\n>> with bonnie++1.03. Have you run the RAID10 tests since? Did you settle\n>> on 6 disk RAID5 or 2xRAID1 + 4XRAID10?\n>> \n> \n\n\n",
"msg_date": "Wed, 22 Nov 2006 15:47:56 -0600",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: availability of SATA vendors"
},
{
"msg_contents": "Hi Luke,\n\nI forgot about that article, thanks for that link. That's indeed a nice \noverview of (in august) recent controllers. The Areca 1280 in that test \n(and the results I linked to earlier) is a pre-production model, so it \nmight actually perform even better than in that test.\n\nWe've been getting samples from AMCC in the past, so a 96xx should be \npossible. I've pointed it out to the author of the previous \nraid-articles. Thanks for pointing that out to me.\n\nBest regards,\n\nArjen\n\nOn 22-11-2006 22:47 Luke Lonergan wrote:\n> Arjen,\n> \n> As usual, your articles are excellent!\n> \n> Your results show again that the 3Ware 9550SX is really poor at random I/O\n> with RAID5 and all of the Arecas are really good. 3Ware/AMCC have designed\n> the 96xx to do much better for RAID5, but I've not seen results - can you\n> get a card and test it?\n> \n> We now run the 3Ware controllers in RAID10 with 8 disks each and they have\n> been excellent. Here (on your site) are results that bear this out:\n> http://tweakers.net/reviews/639/9\n> \n> - Luke\n> \n> \n> On 11/22/06 11:07 AM, \"Arjen van der Meijden\" <[email protected]>\n> wrote:\n> \n>> Jeff,\n>>\n>> You can find some (Dutch) results here on our website:\n>> http://tweakers.net/reviews/647/5\n>>\n>> You'll find the AMCC/3ware 9550SX-12 with up to 12 disks, Areca 1280 and\n>> 1160 with up to 14 disks and a Promise and LSI sata-raid controller with\n>> each up to 8 disks. Btw, that Dell Perc5 (sas) is afaik not the same\n>> card as the LSI MegaRAID SATA 300-8X, but I have no idea whether they\n>> share the same controllerchip.\n>> In most of the graphs you also see a Areca 1160 with 1GB in stead of its\n>> default 256MB. Hover over the labels to see only that specific line,\n>> that makes the graphs quite readable.\n>>\n>> You'll also see a Dell Perc5/e in the results, but that was done using\n>> Fujitsu SAS 15k rpm drives, not the WD Raptor 10k rpm's\n>>\n>> If you dive deeper in our (still Dutch) \"benchmark database\" you may\n>> find some results of several disk-configurations on several controllers\n>> in various storage related tests, like here:\n>> http://tweakers.net/benchdb/test/193\n>>\n>> If you want to filter some results, look for \"Resultaatfilter &\n>> tabelgenerator\" and press on the \"Toon filteropties\"-tekst. I think\n>> you'll be able to understand the selection-overview there, even if you\n>> don't understand Dutch ;)\n>> \"Filter resultaten\" below means the same as in English (filter [the]\n>> results)\n>>\n>> Best regards,\n>>\n>> Arjen\n>>\n>> On 22-11-2006 17:36 Jeff Frost wrote:\n>>> On Wed, 22 Nov 2006, Bucky Jordan wrote:\n>>>\n>>>> Dells (at least the 1950 and 2950) come with the Perc5, which is\n>>>> basically just the LSI MegaRAID. The units I have come with a 256MB BBU,\n>>>> I'm not sure if it's upgradeable, but it looks like a standard DIMM in\n>>>> there...\n>>>>\n>>>> I posted some dd and bonnie++ benchmarks of a 6-disk setup a while back\n>>>> on a 2950, so you might search the archive for those numbers if you're\n>>>> interested- you should be able to get the same or better from a\n>>>> similarly equipped LSI setup. I don't recall if I posted pgbench\n>>>> numbers, but I can if that's of interest.\n>>> I could only find the 6 disk RAID5 numbers in the archives that were run\n>>> with bonnie++1.03. Have you run the RAID10 tests since? 
Did you settle\n>>> on 6 disk RAID5 or 2xRAID1 + 4XRAID10?\n>>>\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n",
"msg_date": "Thu, 23 Nov 2006 09:40:43 +0100",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: availability of SATA vendors"
},
{
"msg_contents": "On Wed, Nov 22, 2006 at 09:02:04AM -0800, Jeff Frost wrote:\n> A valid question. Does the caching raid controller negate the desire to \n> separate pg_xlog from PGDATA?\n\nTheoretically, yes. But I don't think I've seen any hard numbers from\ntesting.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Sun, 26 Nov 2006 17:30:56 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: availability of SATA vendors"
},
{
"msg_contents": "On Wed, Nov 22, 2006 at 04:35:37PM -0500, Bucky Jordan wrote:\n> While I'm at it, if I have time I'll run pgbench with pg_log on a\n> separate RAID1, and one with it on a RAID10x6, but I don't know how\n> useful those results will be.\n\nVery, but only if the controller has write-caching enabled. For testing\npurposes it won't batter if it's actually got a BBU so long as the write\ncache works (of course you wouldn't run in production like that...)\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Sun, 26 Nov 2006 17:33:48 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: availability of SATA vendors"
}
] |
[
{
"msg_contents": "Berner,\n\nFirst, I've corrected you e-mail so that it goes to the list, and not to \nme directly.\n\n> I use my PostgreSQL 8.0.4 as Catalogue-Database for Bacula.\n> Bacula is a Backupsoftware.\n\nYes. The lead contributor to Bacula is a active PostgreSQL project \nparticipant; I'll see if he'll look into your issue.\n\n> When I backing up System (lot of very small Files) then PostgreSQL seams to by the bottleneck by inserting Catalogueinformation of every single File.\n> The System on which Postgres runs is a Sun Solaris 10 Server on a Sun Fire V240 with 1GB RAM, 1CPU (SUNW,UltraSPARC-IIIi at 1.3GHz), 2 Ultra SCSI-3 Disks 73GB at 10k RPM which are in Raid1 (Solaris Softraid).\n> \n> Can someone gif me a hint for compiling PostgreSQL or configuring the Database.\n> \n> fsync is already disabled..\n\nThis is a bad idea if you care about your database.\n\nSo, PostgreSQL 8.1 is now official supported by Sun and ships with \nSolaris 10 update 2 or later. It is recommended that you use that \nrather and an out-of-date version. Second, see \nwww.powerpostgresql.com/PerfList\n\n--Josh Berkus\n\n",
"msg_date": "Fri, 17 Nov 2006 10:44:12 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimicing Postgres for SunSolaris10 on V240"
},
{
"msg_contents": "Hi...\n\nBacula does no transaction right now, so every insert is done separately with \nautocommit.\nMoreover, the insert loop for the main table is done by several individual\nqueries to insert data in several tables (filename, dir, then file), so this\nis slow.\nThere's work underway to speed that up, using a big COPY to a temp table,\nthen queries to dispatch the records in the right places as fast as\npossible. The patch has been made, but as it is a noticeable change in the\ncore, will take some time to be integrated... See the thread about that in\nthe bacula devel list a few weeks ago... Anyhow, our benchmark for now shows\na 10-20 times speedup with postgresql, fsync stays on, and it becomes faster\nthan mysql, and scales with the number of cpus... I cannot tell when/if it\nwill be included, but there's work on this.\n\nFor now, the only thing you can do is fsync=off, knowing you're taking a\nchance with the data (but it's not that big a problem, as it's only bacula's\ndatabase, and can be rebuilt from the tapes or from a dump...) or a writeback\ndisk controller.\n\n\n\nOn Friday 17 November 2006 19:44, Josh Berkus wrote:\n> Berner,\n>\n> First, I've corrected you e-mail so that it goes to the list, and not to\n> me directly.\n>\n> > I use my PostgreSQL 8.0.4 as Catalogue-Database for Bacula.\n> > Bacula is a Backupsoftware.\n>\n> Yes. The lead contributor to Bacula is a active PostgreSQL project\n> participant; I'll see if he'll look into your issue.\n>\n> > When I backing up System (lot of very small Files) then PostgreSQL seams\n> > to by the bottleneck by inserting Catalogueinformation of every single\n> > File. The System on which Postgres runs is a Sun Solaris 10 Server on a\n> > Sun Fire V240 with 1GB RAM, 1CPU (SUNW,UltraSPARC-IIIi at 1.3GHz), 2\n> > Ultra SCSI-3 Disks 73GB at 10k RPM which are in Raid1 (Solaris Softraid).\n> >\n> > Can someone gif me a hint for compiling PostgreSQL or configuring the\n> > Database.\n> >\n> > fsync is already disabled..\n>\n> This is a bad idea if you care about your database.\n>\n> So, PostgreSQL 8.1 is now official supported by Sun and ships with\n> Solaris 10 update 2 or later. It is recommended that you use that\n> rather and an out-of-date version. Second, see\n> www.powerpostgresql.com/PerfList\n>\n> --Josh Berkus\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n",
"msg_date": "Sat, 18 Nov 2006 10:08:44 +0100",
"msg_from": "Marc Cousin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimicing Postgres for SunSolaris10 on V240"
}
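The batch pattern Marc describes can be sketched roughly as follows. This is only an illustration of the idea, not the actual Bacula patch: the staging table, the file path and the dispatch queries are invented for the example, and the real catalogue schema differs in detail.

    BEGIN;

    -- Hypothetical staging table, loaded once per backup job instead of
    -- issuing one autocommitted INSERT per file.
    CREATE TEMP TABLE batch_file (
        path   text,
        name   text,
        lstat  text,
        md5    text
    ) ON COMMIT DROP;

    COPY batch_file FROM '/tmp/bacula_batch.dat';

    -- Dispatch into the normalized catalogue tables, adding only missing rows.
    INSERT INTO path (path)
    SELECT DISTINCT b.path FROM batch_file b
    WHERE NOT EXISTS (SELECT 1 FROM path p WHERE p.path = b.path);

    INSERT INTO filename (name)
    SELECT DISTINCT b.name FROM batch_file b
    WHERE NOT EXISTS (SELECT 1 FROM filename f WHERE f.name = b.name);

    INSERT INTO file (pathid, filenameid, lstat, md5)
    SELECT p.pathid, f.filenameid, b.lstat, b.md5
    FROM batch_file b
    JOIN path p ON p.path = b.path
    JOIN filename f ON f.name = b.name;

    COMMIT;

A single COPY plus a handful of set-oriented INSERT ... SELECT statements replaces hundreds of thousands of individual autocommitted inserts, which is the kind of change behind the 10-20 times speedup Marc mentions.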
] |
[
{
"msg_contents": "Hi ,\n I wanted to know , how the start up cost is computed in postgresql\n. can u give me an example to illustrate the estimation of start up cost .\n thanku raa .\n\nHi , \n I wanted to know , how\nthe start up cost is computed in postgresql . can u give me an example\nto illustrate the estimation of start up cost .\n \nthanku raa .",
"msg_date": "Sat, 18 Nov 2006 15:03:48 +0530",
"msg_from": "\"rakesh kumar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "start up cost estimate"
},
{
"msg_contents": "On 11/18/06, rakesh kumar <[email protected]> wrote:\n>\n> Hi ,\n> I wanted to know , how the start up cost is computed in\n> postgresql . can u give me an example to illustrate the estimation of start\n> up cost .\n> thanku raa .\n>\n\nIt would be very helpful to have a lot more information. Some questions\nthat come to mind:\n\nApproximately how many records will you be storing?\nHow big do you think your biggest tables will be?\nHow frequently will your biggest tables be accessed (x/day, x/second,\nx/week)?\nWill those accesses be read only, or read/write?\nHow important is availability vs. cost?\nWho is your favorite Irish folk singer?\n\nOn 11/18/06, rakesh kumar <[email protected]> wrote:\nHi , \n I wanted to know , how\nthe start up cost is computed in postgresql . can u give me an example\nto illustrate the estimation of start up cost .\n \nthanku raa .\nIt would be very helpful to have a lot more information. Some questions that come to mind:Approximately how many records will you be storing?How big do you think your biggest tables will be?\nHow frequently will your biggest tables be accessed (x/day, x/second, x/week)?Will those accesses be read only, or read/write?How important is availability vs. cost?Who is your favorite Irish folk singer?",
"msg_date": "Sat, 18 Nov 2006 07:27:09 -0700",
"msg_from": "\"Joshua Marsh\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: start up cost estimate"
},
{
"msg_contents": "\"rakesh kumar\" <[email protected]> writes:\n> I wanted to know , how the start up cost is computed in postgresql\n\nLook into\nhttp://developer.postgresql.org/cvsweb.cgi/pgsql/src/backend/optimizer/path/costsize.c\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 18 Nov 2006 13:11:08 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: start up cost estimate "
},
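For readers who want the intuition before digging into costsize.c: the startup cost is the planner's estimate of the work done before the first row can be returned, and EXPLAIN prints it as the first of the two numbers in cost=startup..total. A made-up example (the table and the figures are invented, and the exact numbers depend on the cost parameters implemented in costsize.c):

    EXPLAIN SELECT * FROM orders ORDER BY o_totalprice;
    --  Sort  (cost=8342.11..8467.36 rows=50100 width=97)
    --    Sort Key: o_totalprice
    --    ->  Seq Scan on orders  (cost=0.00..1234.00 rows=50100 width=97)

The Seq Scan has a startup cost of 0.00 because it can hand back its first row almost immediately, while the Sort's startup cost is nearly its total cost because it must read and sort its whole input before emitting anything.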
{
"msg_contents": "---------- Forwarded message ----------\nFrom: rakesh kumar <[email protected]>\nDate: Nov 19, 2006 9:31 AM\nSubject: Re: [PERFORM] start up cost estimate\nTo: Joshua Marsh <[email protected]>\n\nHi\n for suppose if I had a\nquery , something like :\n\n\n select o_orderpriority\n\n from orders\n\n where o_totalprice < 156163 and exists\n\n( select *\n\n\n\nfrom lineitem\n\n\n\nwhere l_orderkey = o_orderkey\n\n\n\nand l_extendedprice < 38570\n\n\n\n)\n\nwhat is the amount of cost of EXISTS SUBQUERY involved in estimate of\ntotalcost .\n\n where l_orderkey is part of index of lineitem , and o_orderkey is primary\nkey of orders table.\n Thanks .\n\n\nOn 11/18/06, Joshua Marsh <[email protected]> wrote:\n\n>\n> On 11/18/06, rakesh kumar < [email protected]> wrote:\n> >\n> > Hi ,\n> > I wanted to know , how the start up cost is computed in\n> > postgresql . can u give me an example to illustrate the estimation of start\n> > up cost .\n> > thanku raa\n> > .\n> >\n>\n> It would be very helpful to have a lot more information. Some questions\n> that come to mind:\n>\n> Approximately how many records will you be storing?\n> How big do you think your biggest tables will be?\n> How frequently will your biggest tables be accessed (x/day, x/second,\n> x/week)?\n> Will those accesses be read only, or read/write?\n> How important is availability vs. cost?\n> Who is your favorite Irish folk singer?\n>\n>\n>\n\n---------- Forwarded message ----------From: rakesh kumar <[email protected]>Date: Nov 19, 2006 9:31 AM\nSubject: Re: [PERFORM] start up cost estimateTo: Joshua Marsh <[email protected]>Hi for suppose if I had a query , something like :\n select o_orderpriority\n from orders where o_totalprice < 156163\n and exists ( select * from lineitem\n where l_orderkey = o_orderkey\n and l_extendedprice < \n\n38570\n ) \n\nwhat is the amount of cost of EXISTS SUBQUERY involved in estimate of totalcost . where l_orderkey is part of index of lineitem , and o_orderkey is primary key of orders table. Thanks .\n On 11/18/06, Joshua Marsh <\[email protected]> wrote:\nOn 11/18/06, rakesh kumar <\[email protected]> wrote:\nHi , \n I wanted to know , how\nthe start up cost is computed in postgresql . can u give me an example\nto illustrate the estimation of start up cost .\n \nthanku raa .\nIt would be very helpful to have a lot more information. Some questions that come to mind:Approximately how many records will you be storing?How big do you think your biggest tables will be?\nHow frequently will your biggest tables be accessed (x/day, x/second, x/week)?Will those accesses be read only, or read/write?How important is availability vs. cost?Who is your favorite Irish folk singer?",
"msg_date": "Sun, 19 Nov 2006 09:35:08 +0530",
"msg_from": "\"rakesh kumar\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: start up cost estimate"
}
] |
[
{
"msg_contents": "Hi All,\n\n\tI have some problems with my sql query :\n\nselect distinct\nINTEGER_VALUE,DATE_VALUE,EI_ID,VALUE_TYPE,FLOAT_VALUE,ID,TEXT_VALUE,CATEGORY_ID,STRING_VALUE,CATEGORYATTR_ID,NAME from ((( select d_attribute as reqin2 where reqin2.CATEGORYATTR_ID = 1041947543 AND reqin2.TEXT_VALUE ilike '%autrefois%' and ei_id in ( select distinct ei_id as EIID from MPNG2_ei_attribute as reqin3 where reqin3.NAME = 'CategoryID-1084520156' AND reqin3.STRING_VALUE = '1084520156' ) ) ) ) ) as req0 join MPNG2_ei_attribute on req0.eiid = MPNG2_ei_attribute.ei_id order by ei_id asc;\n\n\tWhen enable_bitmapscan is enabled this query cost 51893.491 ms and when\nis disabled 117.709 ms. But i heard bitmapscan feature improved\nperformance, can you help me ?\n\n\tYou can read two results of EXPLAIN ANALYZE command here :\nhttp://sharengo.org/explain.txt\n\nBest Regards,\nJérôme.\n\n-- \nJérôme BENOIS\nOpen-Source : http://www.sharengo.org\nCorporate : http://www.argia-engineering.fr\nJabberId : jerome.benois AT gmail.com",
"msg_date": "Tue, 21 Nov 2006 10:21:29 +0100",
"msg_from": "=?ISO-8859-1?Q?J=E9r=F4me?= BENOIS <[email protected]>",
"msg_from_op": true,
"msg_subject": "BitMapScan performance degradation"
},
{
"msg_contents": "> When enable_bitmapscan is enabled this query cost 51893.491 ms and when\n> is disabled 117.709 ms. But i heard bitmapscan feature improved\n> performance, can you help me ?\n\nThe standard question we always ask first is if you have run VACUUM\nANALYZE recently?\n\nAre all the costs and estimated number of rows the same after you have run\nVACUUM ANALYZE? If not you might want to show that new plan as well.\n\n/Dennis\n",
"msg_date": "Tue, 21 Nov 2006 16:12:47 +0100 (CET)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: BitMapScan performance degradation"
},
{
"msg_contents": "Hi Dennis,\n\n\nLe mardi 21 novembre 2006 à 16:12 +0100, [email protected] a écrit :\n> > When enable_bitmapscan is enabled this query cost 51893.491 ms and when\n> > is disabled 117.709 ms. But i heard bitmapscan feature improved\n> > performance, can you help me ?\n> \n> The standard question we always ask first is if you have run VACUUM\n> ANALYZE recently?\n\nYes i ran VACCUUM ANALYZE just before my EXPLAIN.\n\n> Are all the costs and estimated number of rows the same after you have run\n> VACUUM ANALYZE? If not you might want to show that new plan as well.\n> \n> /Dennis\n\nJérôme.",
"msg_date": "Tue, 21 Nov 2006 16:35:56 +0100",
"msg_from": "=?ISO-8859-1?Q?J=E9r=F4me?= BENOIS <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BitMapScan performance degradation"
},
{
"msg_contents": "=?ISO-8859-1?Q?J=E9r=F4me?= BENOIS <[email protected]> writes:\n> \tYou can read two results of EXPLAIN ANALYZE command here :\n> http://sharengo.org/explain.txt\n\nI think the problem is the misestimation of the size of the reqin3\nresult:\n\n-> Bitmap Heap Scan on mpng2_ei_attribute reqin3 (cost=28.17..32.18 rows=1 width=4) (actual time=1.512..7.941 rows=1394 loops=1)\n Recheck Cond: (((string_value)::text = '1084520156'::text) AND ((name)::text = 'CategoryID-1084520156'::text))\n -> BitmapAnd (cost=28.17..28.17 rows=1 width=0) (actual time=1.275..1.275 rows=0 loops=1)\n -> Bitmap Index Scan on mpng2_ei_attribute_string_value (cost=0.00..4.78 rows=510 width=0) (actual time=0.534..0.534 rows=1394 loops=1)\n Index Cond: ((string_value)::text = '1084520156'::text)\n -> Bitmap Index Scan on mpng2_ei_attribute_name (cost=0.00..23.13 rows=2896 width=0) (actual time=0.590..0.590 rows=1394 loops=1)\n Index Cond: ((name)::text = 'CategoryID-1084520156'::text)\n\nAnytime a rowcount estimate is off by more than a factor of a thousand,\nyou can expect some poor choices in the rest of the plan :-(. It looks\nto me like the planner is expecting those two index conditions to be\nindependently selective, when in reality they are completely redundant.\nPerhaps rethinking your data model would be a useful activity.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Nov 2006 10:44:13 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BitMapScan performance degradation "
}
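One way to see how badly the estimate and the plan interact is to compare the two plans directly and to give the planner more detail about the skewed columns. A sketch, using the table and column names from the plan above (the statistics target value is arbitrary, and no amount of per-column statistics can tell the planner that the two conditions are redundant with each other):

    SET enable_bitmapscan = off;
    EXPLAIN ANALYZE
    SELECT ei_id FROM mpng2_ei_attribute
    WHERE name = 'CategoryID-1084520156' AND string_value = '1084520156';

    SET enable_bitmapscan = on;
    EXPLAIN ANALYZE
    SELECT ei_id FROM mpng2_ei_attribute
    WHERE name = 'CategoryID-1084520156' AND string_value = '1084520156';

    ALTER TABLE mpng2_ei_attribute ALTER COLUMN name SET STATISTICS 300;
    ALTER TABLE mpng2_ei_attribute ALTER COLUMN string_value SET STATISTICS 300;
    ANALYZE mpng2_ei_attribute;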
] |
[
{
"msg_contents": "I have a commonly run query that has been executing fine for the last \nyear or so. It usually completes in under 1 second. However, in the \npast week or so the performance of the query has become erratic, \ntaking anywhere from 100 ms to 100,000 ms to complete the same query \n(executed just seconds apart). It's the holiday season, so there has \nprobably been an increase in server activity.\n\nSELECT products.* FROM products LEFT JOIN product_identifiers ON \nproduct_identifiers.product_id = products.id WHERE \nproduct_identifiers.identifier = '21A40606099800168' OR \nproducts.part_number = '21A40606099800168';\n\nI'm just using this query as a concrete example. I have the same \nproblem with other queries (I suspect most queries). Here's the \nEXPLAIN ANALYZE output run twice. The first time is fast. The second \ntime is slow.\n\nThen, below that, I've attached some specs from my configuration... \nif anyone sees anything that is out of whack... I'm new to \ntroubleshooting this sort of thing, so any advise would be appreciated.\n\n-------\n\nofficelink=# EXPLAIN ANALYZE SELECT products.* FROM products LEFT \nJOIN product_identifiers ON product_identifiers.product_id = \nproducts.id WHERE product_identifiers.identifier = \n'21A40606099800168' OR products.part_number = '21A40606099800168';\n \n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n------------------------\nMerge Left Join (cost=0.00..4264.09 rows=40368 width=107) (actual \ntime=755.150..755.150 rows=0 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".product_id)\n Filter: (((\"inner\".identifier)::text = '21A40606099800168'::text) \nOR ((\"outer\".part_number)::text = '21A40606099800168'::text))\n -> Index Scan using products_id_idx on products \n(cost=0.00..3680.34 rows=40368 width=107) (actual time=8.762..643.550 \nrows=40382 loops=1)\n -> Index Scan using product_identifiers_product_id_idx on \nproduct_identifiers (cost=0.00..368.69 rows=6524 width=20) (actual \ntime=0.131..76.958 rows=6532 loops=1)\nTotal runtime: 755.301 ms\n(6 rows)\n\nofficelink=# EXPLAIN ANALYZE SELECT products.* FROM products LEFT \nJOIN product_identifiers ON product_identifiers.product_id = \nproducts.id WHERE product_identifiers.identifier = \n'21A40606099800168' OR products.part_number = '21A40606099800168';\n \n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n--------------------------\nMerge Left Join (cost=0.00..4264.09 rows=40368 width=107) (actual \ntime=25885.235..25885.235 rows=0 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".product_id)\n Filter: (((\"inner\".identifier)::text = '21A40606099800168'::text) \nOR ((\"outer\".part_number)::text = '21A40606099800168'::text))\n -> Index Scan using products_id_idx on products \n(cost=0.00..3680.34 rows=40368 width=107) (actual \ntime=0.070..23503.630 rows=40382 loops=1)\n -> Index Scan using product_identifiers_product_id_idx on \nproduct_identifiers (cost=0.00..368.69 rows=6524 width=20) (actual \ntime=0.058..2346.662 rows=6532 loops=1)\nTotal runtime: 25885.375 ms\n(6 rows)\n\n\nServer Specs:\nIntel Core Solo Mac Mini running OS 10.4.7\n1.25 GB RAM\n30 GB of space left on the 55 GB internal hard drive\n\nUsage:\n400 persistent connections from various clients\ntop usually sits at 85%-95% idle.\n\npostgresql.conf Settings [non-defualt]:\nmax_connections 
= 500\nshared_buffers = 10000\nwork_mem = 2048\nmax_fsm_pages = 150000\nmax_stack_depth = 6000\narchive_command = 'cp -i %p /Volumes/Backup/wal_archive/%f </dev/null'\neffective_cache_size = 30000\nlog_min_duration_statement = 2000\nlog_line_prefix = '%t %h '\nstats_start_collector = on\nstats_row_level = on\nautovacuum = on\nautovacuum_naptime = 60\nautovacuum_vacuum_threshold = 150\nautovacuum_vacuum_scale_factor = 0.00000001\n\n",
"msg_date": "Tue, 21 Nov 2006 12:03:34 -0500",
"msg_from": "Joe Lester <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow Query"
}
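Independent of the erratic timings (which look like the tables sometimes being in cache and sometimes not), a query with an OR spanning two joined tables often runs better rewritten as a UNION, so that each arm can use its own index instead of forcing a full merge join that is then filtered. A possible rewrite, untested against this schema and assuming indexes exist on products.part_number and product_identifiers.identifier (note that UNION also removes duplicate rows, which the original join would not):

    SELECT p.* FROM products p
     WHERE p.part_number = '21A40606099800168'
    UNION
    SELECT p.* FROM products p
      JOIN product_identifiers pi ON pi.product_id = p.id
     WHERE pi.identifier = '21A40606099800168';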
] |
[
{
"msg_contents": "Hi,\n\nWe have an application that is mission critical, normally very fast, but\nwhen an I/O or CPU bound transaction appears, the mission critical\napplication suffers. Is there a way go give some kind of priority to this\nkind of application?\nReimer\n\n\n\n\n\n\n\nHi,\n \nWe have an \napplication that is mission critical, normally very fast, but when an I/O or CPU \nbound transaction appears, the mission critical application suffers. Is there a \nway go give some kind of priority to this kind of \napplication?\nReimer",
"msg_date": "Tue, 21 Nov 2006 21:43:26 -0200",
"msg_from": "\"Carlos H. Reimer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Priority to a mission critical transaction"
},
{
"msg_contents": "On Tue, 2006-11-21 at 21:43 -0200, Carlos H. Reimer wrote:\n> Hi,\n> \n> We have an application that is mission critical, normally very fast,\n> but when an I/O or CPU bound transaction appears, the mission critical\n> application suffers. Is there a way go give some kind of priority to\n> this kind of application?\n> Reimer\n\n\nNot that I'm aware of. Depending on what the problems transactions are,\nsetting up a replica on a separate machine and running those\ntransactions against the replica might be the solution.\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n",
"msg_date": "Thu, 23 Nov 2006 15:40:15 -0500",
"msg_from": "Brad Nicholson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Priority to a mission critical transaction"
},
{
"msg_contents": "On Thu, Nov 23, 2006 at 03:40:15PM -0500, Brad Nicholson wrote:\n> On Tue, 2006-11-21 at 21:43 -0200, Carlos H. Reimer wrote:\n> > Hi,\n> > \n> > We have an application that is mission critical, normally very fast,\n> > but when an I/O or CPU bound transaction appears, the mission critical\n> > application suffers. Is there a way go give some kind of priority to\n> > this kind of application?\n> > Reimer\n> \n> \n> Not that I'm aware of. Depending on what the problems transactions are,\n> setting up a replica on a separate machine and running those\n> transactions against the replica might be the solution.\n\nThe BizGres project has been working on resource quotas, which might\neventually evolve to what you're looking for.\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Sun, 26 Nov 2006 18:51:43 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Priority to a mission critical transaction"
},
{
"msg_contents": "Hi,\n\nThere is an article about \"Lowering the priority of a PostgreSQL query\"\n(http://weblog.bignerdranch.com/?p=11) that explains how to use the\nsetpriority() to lower PostgreSQL processes.\n\nI?m wondering how much effective it would be for i/o bound systems.\n\nWill the setpriority() system call affect i/o queue too?\n\nReimer\n\n\n> -----Mensagem original-----\n> De: Jim C. Nasby [mailto:[email protected]]\n> Enviada em: domingo, 26 de novembro de 2006 22:52\n> Para: Brad Nicholson\n> Cc: [email protected]; [email protected]\n> Assunto: Re: [PERFORM] Priority to a mission critical transaction\n>\n>\n> On Thu, Nov 23, 2006 at 03:40:15PM -0500, Brad Nicholson wrote:\n> > On Tue, 2006-11-21 at 21:43 -0200, Carlos H. Reimer wrote:\n> > > Hi,\n> > >\n> > > We have an application that is mission critical, normally very fast,\n> > > but when an I/O or CPU bound transaction appears, the mission critical\n> > > application suffers. Is there a way go give some kind of priority to\n> > > this kind of application?\n> > > Reimer\n> >\n> >\n> > Not that I'm aware of. Depending on what the problems transactions are,\n> > setting up a replica on a separate machine and running those\n> > transactions against the replica might be the solution.\n>\n> The BizGres project has been working on resource quotas, which might\n> eventually evolve to what you're looking for.\n> --\n> Jim Nasby [email protected]\n> EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n>\n>\n\n",
"msg_date": "Tue, 28 Nov 2006 17:01:25 -0200",
"msg_from": "\"Carlos H. Reimer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RES: Priority to a mission critical transaction"
},
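For anyone who does want to experiment with renice'ing a backend despite the caveats raised in the replies below, the process id of each backend can be read from pg_stat_activity. The column names shown are the 8.x ones, and current_query is only populated when command-string collection is enabled:

    SELECT procpid, usename, current_query
      FROM pg_stat_activity;
    -- then, from a shell on the server, something like: renice +10 -p <procpid>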
{
"msg_contents": "\"Carlos H. Reimer\" <[email protected]> writes:\n> There is an article about \"Lowering the priority of a PostgreSQL query\"\n> (http://weblog.bignerdranch.com/?p=11) that explains how to use the\n> setpriority() to lower PostgreSQL processes.\n\n> I?m wondering how much effective it would be for i/o bound systems.\n\nThat article isn't worth the electrons it's written on. Aside from the\nI/O point, there's a little problem called \"priority inversion\". See\nthe archives for (many) past discussions of nice'ing backends.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 28 Nov 2006 14:07:05 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RES: Priority to a mission critical transaction "
},
{
"msg_contents": "* Carlos H. Reimer <[email protected]> [061128 20:02]:\n> Hi,\n> \n> There is an article about \"Lowering the priority of a PostgreSQL query\"\n> (http://weblog.bignerdranch.com/?p=11) that explains how to use the\n> setpriority() to lower PostgreSQL processes.\n> \n> I?m wondering how much effective it would be for i/o bound systems.\n> \n> Will the setpriority() system call affect i/o queue too?\n\nNope, and in fact the article shows the way not to do it.\n\nSee http://en.wikipedia.org/wiki/Priority_inversion\n\nBasically, lowering the priority of one backend in PostgreSQL can lead\nto reduced performance of all, especially also the backends with\nhigher priorities.\n\n(Think of priority inversion as a timed soft deadlock. It will\neventually resolve, because it's not a real deadlock, but it might\nmean halting important stuff for quite some time.)\n\nTaking the example above, consider the following processes and nice\nvalues:\n\n19x backends As nice = 0\n 1x backend B nice = 10 (doing maintenance work)\n 1x updatedb nice = 5 (running as a cronjob at night)\n \n Now, it possible (the probability depends upon your specific\nsituation), where backend B grabs some internal lock that is needed,\nand then it gets preempted by higher priority stuff. Well, the A\nbackends need that lock too, so they cannot run; instead we wait till\nupdatedb (which updates the locate search db, and goes through the\nwhole filesystem of the server) is finished.\n\nLuckily most if not all of these processes are disc io bound, so they\nget interrupted any way, and low priority processes don't starve.\nWell, replace updatedb with something hogging the CPU, and rethink the\nsituation.\n\nAndreas\n",
"msg_date": "Tue, 28 Nov 2006 20:20:36 +0100",
"msg_from": "Andreas Kostyrka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RES: Priority to a mission critical transaction"
},
{
"msg_contents": "All,\n\nThe Bizgres project is working on resource management for PostgreSQL. So far, \nhowever, they have been able to come up with schemes that work for BI/DW at \nthe expense of OLTP. Becuase of O^N lock checking issues, resource \nmanagement for OLTP which doesn't greatly reduce overall performance seems a \nnear-impossible task.\n\n-- \nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n",
"msg_date": "Tue, 28 Nov 2006 11:45:44 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RES: Priority to a mission critical transaction"
},
{
"msg_contents": "Short summary:\n * Papers studying priority inversion issues with\n databases including PosgreSQL and realistic workloads\n conclude setpriority() helps even in the presence of\n priority inversion issues for TCP-C and TCP-W like\n workloads.\n * Avoiding priority inversion with priority inheritance\n will further help some workloads (TCP-C) more than\n others (TCP-W) but even without such schedulers\n priority inversion does not cause as much harm\n as the benefit you get from indirectly scheduling\n I/O through setpriority() in any paper I've seen.\n\nAndreas Kostyrka wrote:\n> * Carlos H. Reimer <[email protected]> [061128 20:02]:\n>> Will the setpriority() system call affect i/o queue too?\n> \n> Nope, and in fact the article shows the way not to do it.\n\nActually *YES* setpriority() does have an indirect effect\non the I/O queue.\n\nThis paper: http://www.cs.cmu.edu/~bianca/icde04.pdf\nstudies setpriority() with non-trivial (TCP-W and TCP-C)\nworkloads on a variety of databases and shows that\nthat setpriority() is *extremely* effective for\nPostgreSQL.\n\n\"For TPC-C on MVCC DBMS, and in particular PostgreSQL,\n CPU scheduling is most effective, due to its ability\n to indirectly schedule the I/O bottleneck.\n\n For TPC-C running on PostgreSQL,\n the simplest CPU scheduling policy (CPU-Prio) provides\n a factor of 2 improvement for high-priority transactions,\n while adding priority inheritance (CPU-Prio-Inherit)\n provides a factor of 6 improvement while hardly\n penalizing low-priority transactions. Preemption\n (P-CPU) provides no appreciable benefit over\n CPU-Prio-Inherit.\"\n\n> See http://en.wikipedia.org/wiki/Priority_inversion\n\nPriority Inversion is a well studied problem; and depends\non both the workload and the database. In particular,\nTPC-W workloads have been studied on a variety of databases\nincluding PostgreSQL. Again, from:\n http://www.cs.cmu.edu/~bianca/icde04.pdf\n\nThey observe that avoiding priority inversion\nissues by enabling priority inheritance with PostgreSQL\nhas a negligible effect on TCP-W like workloads, but\na significant improvement on TCP-C like workloads.\n\n \"Recall from Section 5.3 that CPU scheduling (CPUPrio)\n is more effective than NP-LQ for TPC-W. Thus Figure 8\n compares the policies CPU-Prio-Inherit to CPU-Prio for\n the TPC-W workload on PostgreSQL.\n\n We find that there is no improvement for CPU-Prio-\n Inherit over CPU-Prio. This is to be expected given\n the low data contention found in the TPC-W workload; priority\n inversions can only occur during data contention. Results\n for low-priority transactions are not shown, but as in\n Figure 4, low-priority transactions are only negligibly\n penalized on average.\"\n\nYes, theoretically priority inversion can have pathologically\nbad effects (not unlike qsort), it affects some workloads more\nthan others.\n\nBut in particular, their paper concludes that\nPostgreSQL with TCP-C and TCP-W like workloads\ngain significant benefits and no drawbacks from\nindirectly tuning I/O scheduling with setpriority().\n\n\n\n\nIf anyone has references to papers or studies that suggest that\npriority inversion actually is a problem with RDBMS's - and\nPostgreSQL on Linux in particular, I'd be very interested.\n\nOtherwise it seems to me existing research points to\nsignificant benefits with only theoretical drawbacks\nin pathological cases.\n",
"msg_date": "Tue, 28 Nov 2006 12:31:28 -0800",
"msg_from": "Ron Mayer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RES: Priority to a mission critical transaction"
},
{
"msg_contents": "\nSomeone should ask them to remove the article.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> \"Carlos H. Reimer\" <[email protected]> writes:\n> > There is an article about \"Lowering the priority of a PostgreSQL query\"\n> > (http://weblog.bignerdranch.com/?p=11) that explains how to use the\n> > setpriority() to lower PostgreSQL processes.\n> \n> > I?m wondering how much effective it would be for i/o bound systems.\n> \n> That article isn't worth the electrons it's written on. Aside from the\n> I/O point, there's a little problem called \"priority inversion\". See\n> the archives for (many) past discussions of nice'ing backends.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n> \n> http://www.postgresql.org/about/donate\n\n-- \n Bruce Momjian [email protected]\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Tue, 28 Nov 2006 18:44:45 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RES: Priority to a mission critical transaction"
},
{
"msg_contents": "Josh Berkus wrote:\n> All,\n> \n> The Bizgres project is working on resource management for PostgreSQL. So far, \n> however, they have been able to come up with schemes that work for BI/DW at \n> the expense of OLTP. Becuase of O^N lock checking issues, resource \n> management for OLTP which doesn't greatly reduce overall performance seems a \n> near-impossible task.\n> \n\nRight - I guess it is probably more correct to say that the \nimplementation used in Bizgres is specifically targeted at BI/DW \nworkloads rather than OLTP.\n\nAt this point we have not measured its impact on concurrency in anything \nother than a handwaving manner - e.g pgbench on an older SMP system \nshowed what looked like about a 10% hit. However the noise level for \npgbench is typically >10% so - a better benchmark on better hardware is \n needed.\n\nCheers\n\nMark\n",
"msg_date": "Wed, 29 Nov 2006 14:11:12 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RES: Priority to a mission critical transaction"
},
{
"msg_contents": "Before asking them to remove it, are we sure priority inversion\nis really a problem?\n\nI thought this paper: http://www.cs.cmu.edu/~bianca/icde04.pdf\ndid a pretty good job at studying priority inversion on RDBMs's\nincluding PostgreSQL on various workloads (TCP-W and TCP-C) and\nfound that the benefits of setting priorities vastly outweighed\nthe penalties of priority inversion across all the databases and\nall the workloads they tested.\n\n\n\nBruce Momjian wrote:\n> Someone should ask them to remove the article.\n> \n> ---------------------------------------------------------------------------\n> \n> Tom Lane wrote:\n>> \"Carlos H. Reimer\" <[email protected]> writes:\n>>> There is an article about \"Lowering the priority of a PostgreSQL query\"\n>>> (http://weblog.bignerdranch.com/?p=11) that explains how to use the\n>>> setpriority() to lower PostgreSQL processes.\n>>> I?m wondering how much effective it would be for i/o bound systems.\n>> That article isn't worth the electrons it's written on. Aside from the\n>> I/O point, there's a little problem called \"priority inversion\". See\n>> the archives for (many) past discussions of nice'ing backends.\n>>\n>> \t\t\tregards, tom lane\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 7: You can help support the PostgreSQL project by donating at\n>>\n>> http://www.postgresql.org/about/donate\n> \n",
"msg_date": "Tue, 28 Nov 2006 17:20:38 -0800",
"msg_from": "Ron Mayer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RES: Priority to a mission critical transaction"
},
{
"msg_contents": "Ron Mayer wrote:\n> Short summary:\n> * Papers studying priority inversion issues with\n> databases including PosgreSQL and realistic workloads\n> conclude setpriority() helps even in the presence of\n> priority inversion issues for TCP-C and TCP-W like\n> workloads.\n> * Avoiding priority inversion with priority inheritance\n> will further help some workloads (TCP-C) more than\n> others (TCP-W) but even without such schedulers\n> priority inversion does not cause as much harm\n> as the benefit you get from indirectly scheduling\n> I/O through setpriority() in any paper I've seen.\n> \n> Andreas Kostyrka wrote:\n>> * Carlos H. Reimer <[email protected]> [061128 20:02]:\n>>> Will the setpriority() system call affect i/o queue too?\n>> Nope, and in fact the article shows the way not to do it.\n> \n> Actually *YES* setpriority() does have an indirect effect\n> on the I/O queue.\n> \n\nWhile I was at Greenplum a related point was made to me:\n\nFor a TPC-H/BI type workload on a well configured box the IO subsystem \ncan be fast enough so that CPU is the bottleneck for much of the time - \nso being able to use setpriority() as a resource controller makes sense.\n\nAlso, with such a workload being mainly SELECT type queries, the dangers \nconnected with priority inversion are considerably reduced.\n\nCheers\n\nMark\n",
"msg_date": "Wed, 29 Nov 2006 14:27:54 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RES: Priority to a mission critical transaction"
},
{
"msg_contents": "Mark Kirkwood wrote:\n> Ron Mayer wrote:\n>> Short summary:\n>> * Papers studying priority inversion issues with\n>> databases including PosgreSQL and realistic workloads\n>> conclude setpriority() helps even in the presence of\n>> priority inversion issues for TCP-C and TCP-W like\n>> workloads.\n>> * Avoiding priority inversion with priority inheritance\n>> will further help some workloads (TCP-C) more than\n>> others (TCP-W) but even without such schedulers\n>> priority inversion does not cause as much harm\n>> as the benefit you get from indirectly scheduling\n>> I/O through setpriority() in any paper I've seen.\n>>\n>> Andreas Kostyrka wrote:\n>>> * Carlos H. Reimer <[email protected]> [061128 20:02]:\n>>>> Will the setpriority() system call affect i/o queue too?\n>>> Nope, and in fact the article shows the way not to do it.\n>>\n>> Actually *YES* setpriority() does have an indirect effect\n>> on the I/O queue.\n>>\n> \n> While I was at Greenplum a related point was made to me:\n> \n> For a TPC-H/BI type workload on a well configured box the IO subsystem\n> can be fast enough so that CPU is the bottleneck for much of the time -\n> so being able to use setpriority() as a resource controller makes sense.\n\nPerhaps - but section 4 of the paper in question (pages 3 through 6\nof the 12 pages at http://www.cs.cmu.edu/~bianca/icde04.pdf) go\nthrough great lengths to identify the bottlenecks for each workload\nand each RDBMS. Indeed for the TCP-W on PostgreSQL and DB2, CPU\nwas a bottleneck but no so for TCP-C - which had primarily I/O\ncontention on PostgreSQL and lock contention on DB2.\n\n http://www.cs.cmu.edu/~bianca/icde04.pdf\n \"for TPC-C ... The main result shown in Figure 1 is that locks\n are the bottleneck resource for both Shore and DB2 (rows 1 and\n 2), while I/O tends to be the bottleneck resource for PostgreSQL\n (row 3). We now discuss these in more detail.\n ...\n Thus, CPU is the bottleneck resource for TPC-W 1.\"\n\n> Also, with such a workload being mainly SELECT type queries, the dangers\n> connected with priority inversion are considerably reduced.\n\nAnd indeed the TCP-W benchmark did not show further improvement\nfor high priority transactions with Priority Inheritance enabled\nin the scheduler (which mitigates the priority inversion problem) -\nbut the TCP-C benchmark did show further improvement -- which agrees\nwith Mark's observation. However even with priority inversion\nproblems; the indirect benefits of setpriority() on I/O scheduling\noutweighed the penalties of priority inversion in each of their\ntest cases.\n",
"msg_date": "Wed, 29 Nov 2006 02:55:48 -0800",
"msg_from": "Ron Mayer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RES: Priority to a mission critical transaction"
},
{
"msg_contents": "Ron Mayer wrote:\n\n>Before asking them to remove it, are we sure priority inversion\n>is really a problem?\n>\n>I thought this paper: http://www.cs.cmu.edu/~bianca/icde04.pdf\n>did a pretty good job at studying priority inversion on RDBMs's\n>including PostgreSQL on various workloads (TCP-W and TCP-C) and\n>found that the benefits of setting priorities vastly outweighed\n>the penalties of priority inversion across all the databases and\n>all the workloads they tested.\n>\n> \n>\nI have the same question. I've done some embedded real-time \nprogramming, so my innate reaction to priority inversions is that \nthey're evil. But, especially given priority inheritance, is there any \nsituation where priority inversion provides *worse* performance than \nrunning everything at the same priority? I can easily come up with \nsituations where it devolves to that case- where all processes get \npromoted to the same high priority. But I can't think of one where \nusing priorities makes things worse, and I can think of plenty where it \nmakes things better.\n\nBrian\n\n",
"msg_date": "Wed, 29 Nov 2006 08:25:57 -0500",
"msg_from": "Brian Hurt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RES: Priority to a mission critical transaction"
},
{
"msg_contents": "On Wed, 2006-11-29 at 08:25 -0500, Brian Hurt wrote:\n...\n> I have the same question. I've done some embedded real-time \n> programming, so my innate reaction to priority inversions is that \n> they're evil. But, especially given priority inheritance, is there any \n> situation where priority inversion provides *worse* performance than \n> running everything at the same priority? I can easily come up with \n> situations where it devolves to that case- where all processes get \n> promoted to the same high priority. But I can't think of one where \n> using priorities makes things worse, and I can think of plenty where it \n> makes things better.\n...\n\nIt can make things worse when there are at least 3 priority levels\ninvolved. The canonical sequence looks as follows:\n\nLOW: Aquire a lock\nMED: Start a long-running batch job that hogs CPU\nHIGH: Wait on lock held by LOW task\n\nat this point, the HIGH task can't run until the LOW task releases its\nlock. but the LOW task can't run to completion and release its lock\nuntil the MED job completes.\n\n(random musing): I wonder if PG could efficiently be made to temporarily\nraise the priority of any task holding a lock that a high priority task\nwaits on. I guess that would just make it so that instead of HIGH tasks\nbeing effectively reduced to LOW, then LOW tasks could be promoted to\nHIGH.\n\n-- Mark Lewis\n",
"msg_date": "Wed, 29 Nov 2006 07:03:44 -0800",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RES: Priority to a mission critical transaction"
},
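The LOW-holds-a-lock step in that sequence is easy to reproduce from two psql sessions. A toy illustration against a hypothetical table t (this only shows the lock hand-off; the MED part is whatever CPU hog keeps the nice'd session from finishing its transaction):

    -- session LOW (the nice'd backend):
    BEGIN;
    UPDATE t SET val = val + 1 WHERE id = 1;  -- row lock taken, transaction left open

    -- session HIGH (the high-priority backend):
    UPDATE t SET val = val + 1 WHERE id = 1;  -- blocks until LOW commits or rolls back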
{
"msg_contents": "Mark Lewis wrote:\n\n>On Wed, 2006-11-29 at 08:25 -0500, Brian Hurt wrote:\n>...\n> \n>\n>>I have the same question. I've done some embedded real-time \n>>programming, so my innate reaction to priority inversions is that \n>>they're evil. But, especially given priority inheritance, is there any \n>>situation where priority inversion provides *worse* performance than \n>>running everything at the same priority? I can easily come up with \n>>situations where it devolves to that case- where all processes get \n>>promoted to the same high priority. But I can't think of one where \n>>using priorities makes things worse, and I can think of plenty where it \n>>makes things better.\n>> \n>>\n>...\n>\n>It can make things worse when there are at least 3 priority levels\n>involved. The canonical sequence looks as follows:\n>\n>LOW: Aquire a lock\n>MED: Start a long-running batch job that hogs CPU\n>HIGH: Wait on lock held by LOW task\n>\n>at this point, the HIGH task can't run until the LOW task releases its\n>lock. but the LOW task can't run to completion and release its lock\n>until the MED job completes.\n>\n> \n>\n>(random musing): I wonder if PG could efficiently be made to temporarily\n>raise the priority of any task holding a lock that a high priority task\n>waits on. I guess that would just make it so that instead of HIGH tasks\n>being effectively reduced to LOW, then LOW tasks could be promoted to\n>HIGH.\n>\n> \n>\n\nI thought that was what priority inheritance did- once HIGH blocks on a \nlock held by LOW, LOW gets it's priority raised to that of HIGH. Then \nLOW takes precedence over MED. If LOW blocks on a lock held by MED when \nit has the same priority of HIGH, MED gets it's priority raised to \nHIGH. Note that now all three processes are running with HIGH priority- \nbut is this any different from the default case of running them as the \nsame priority? This is what I was talking about when I said I could \nimagine priority inheritance \"devolving\" to the single priority case.\n\nOf course, this is a little tricky to implement. I haven't looked at \nhow difficult it'd be within Postgres.\n\nBrian\n\n\n\n\n\n\n\n\nMark Lewis wrote:\n\nOn Wed, 2006-11-29 at 08:25 -0500, Brian Hurt wrote:\n...\n \n\nI have the same question. I've done some embedded real-time \nprogramming, so my innate reaction to priority inversions is that \nthey're evil. But, especially given priority inheritance, is there any \nsituation where priority inversion provides *worse* performance than \nrunning everything at the same priority? I can easily come up with \nsituations where it devolves to that case- where all processes get \npromoted to the same high priority. But I can't think of one where \nusing priorities makes things worse, and I can think of plenty where it \nmakes things better.\n \n\n...\n\nIt can make things worse when there are at least 3 priority levels\ninvolved. The canonical sequence looks as follows:\n\nLOW: Aquire a lock\nMED: Start a long-running batch job that hogs CPU\nHIGH: Wait on lock held by LOW task\n\nat this point, the HIGH task can't run until the LOW task releases its\nlock. but the LOW task can't run to completion and release its lock\nuntil the MED job completes.\n\n \n\n\n(random musing): I wonder if PG could efficiently be made to temporarily\nraise the priority of any task holding a lock that a high priority task\nwaits on. 
I guess that would just make it so that instead of HIGH tasks\nbeing effectively reduced to LOW, then LOW tasks could be promoted to\nHIGH.\n\n \n\n\nI thought that was what priority inheritance did- once HIGH blocks on a\nlock held by LOW, LOW gets it's priority raised to that of HIGH. Then\nLOW takes precedence over MED. If LOW blocks on a lock held by MED\nwhen it has the same priority of HIGH, MED gets it's priority raised to\nHIGH. Note that now all three processes are running with HIGH\npriority- but is this any different from the default case of running\nthem as the same priority? This is what I was talking about when I\nsaid I could imagine priority inheritance \"devolving\" to the single\npriority case.\n\nOf course, this is a little tricky to implement. I haven't looked at\nhow difficult it'd be within Postgres.\n\nBrian",
"msg_date": "Wed, 29 Nov 2006 10:17:10 -0500",
"msg_from": "Brian Hurt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RES: Priority to a mission critical transaction"
},
{
"msg_contents": "Brian Hurt wrote:\n> Mark Lewis wrote:\n>> On Wed, 2006-11-29 at 08:25 -0500, Brian Hurt wrote:\n>> \n>>> I have the same question. I've done some embedded real-time \n>>> programming, so my innate reaction to priority inversions is that \n>>> they're evil. But, especially given priority inheritance, is there any \n>>> situation where priority inversion provides *worse* performance than \n>>> running everything at the same priority? \n\nYes, there are certainly cases where a single high priority\ntransaction will suffer far worse than it otherwise would have.\n\nApparently there are plenty of papers stating that priority inversion\nis a major problem in RDBMs's for problems that require that specific\ndeadlines have to be met (such as in real time systems). However the\npapers using the much weaker criteria of \"most high priority things\nfinish faster than they would have otherwise, and the others aren't\nhurt too bad\" suggest that it's not as much of a problem. Two of\nthe articles referenced by the paper being discussed here apparently\ngo into these cases.\n\nThe question in my mind is whether overall the benefits outweigh\nthe penalties - in much the same way that qsort's can have O(n^2)\nbehavior but in practice outweigh the penalties of many alternatives.\n\n>> It can make things worse when there are at least 3 priority levels\n>> involved. The canonical sequence looks as follows:\n>>\n>> LOW: Aquire a lock\n>> MED: Start a long-running batch job that hogs CPU\n>> HIGH: Wait on lock held by LOW task\n>>\n>> at this point, the HIGH task can't run until the LOW task releases its\n>> lock. but the LOW task can't run to completion and release its lock\n>> until the MED job completes.\n\nDon't many OS's dynamically tweak priorities such that processes\nthat don't use most of their timeslice (like LOW) get priority\nboosts and those that do use a lot of CPU (like MED) get\npenalized -- which may help protect against this particular\nsequence if you don't set LOW and MED too far apart?\n\n>> (random musing): I wonder if PG could efficiently be made to temporarily\n>> raise the priority of any task holding a lock that a high priority task\n>> waits on. ...\n> \n> I thought that was what priority inheritance did-\n\nYes, me too..\n\n> Of course, this is a little tricky to implement. I haven't looked at\n> how difficult it'd be within Postgres.\n\nISTM that it would be rather OS-dependent anyway. Different OS's\nhave different (or no) hooks - heck, even different 2.6.* linuxes\n(pre 2.6.18 vs post) have different hooks for priority\ninheritance - so I wouldn't really expect to see cpu scheduling\npolicy details like that merged with postgresql except maybe from\na patched version from a RTOS vendor.\n\n",
"msg_date": "Wed, 29 Nov 2006 08:43:53 -0800",
"msg_from": "Ron Mayer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RES: Priority to a mission critical transaction"
},
{
"msg_contents": "Ron Mayer wrote:\n\n>Brian Hurt wrote:\n> \n>\n>>Mark Lewis wrote:\n>> \n>>\n>>>On Wed, 2006-11-29 at 08:25 -0500, Brian Hurt wrote:\n>>> \n>>> \n>>>\n>>>>I have the same question. I've done some embedded real-time \n>>>>programming, so my innate reaction to priority inversions is that \n>>>>they're evil. But, especially given priority inheritance, is there any \n>>>>situation where priority inversion provides *worse* performance than \n>>>>running everything at the same priority? \n>>>> \n>>>>\n>\n>Yes, there are certainly cases where a single high priority\n>transaction will suffer far worse than it otherwise would have.\n> \n>\nOK.\n\nAlthough I'm tempted to make the issue more complex by throwing Software \nTransactional Memory into the mix:\nhttp://citeseer.ist.psu.edu/shavit95software.html\nhttp://citeseer.ist.psu.edu/anderson95realtime.html\n\nThat second paper is interesting in that it says that STM solves the \npriority inversion problem. Basically the higher priority process \nforces the lower priority process to abort it's transaction and retry it.\n\nIs it possible to recast Postgres' use of locks to use STM instead? How \nwould STM interact with Postgres' existing transactions? I don't know. \nThis would almost certainly require Postgres to write it's own locking, \nwith all the problems it entails (does the source currently use inline \nassembly anywhere? I'd guess not.).\n\n>Apparently there are plenty of papers stating that priority inversion\n>is a major problem in RDBMs's for problems that require that specific\n>deadlines have to be met (such as in real time systems). \n>\nIt's definately a problem in realtime systems, not just realtime DBMS. \nIn this case, running everything at the same priority doesn't work and \nisn't an option.\n\n>The question in my mind is whether overall the benefits outweigh\n>the penalties - in much the same way that qsort's can have O(n^2)\n>behavior but in practice outweigh the penalties of many alternatives.\n>\n> \n>\nAlso, carefull choice of pivot values, and switching to other sorting \nmethods like heapsort when you detect you're in a pathological case, \nhelp. Make the common case fast and the pathological case not something \nthat causes the database to fall over.\n\nSetting priorities would be a solution to a problem I haven't hit yet, \nbut can see myself needing to deal with. Which is why I'm interested in \nthis issue. If it's a case of \"setting priorities can make things \nbetter, and doesn't make things worse\" is great. If it's a case of \n\"setting priorities can make things better, but occassionally makes \nthings much worse\" is a problem.\n\n\n>>Of course, this is a little tricky to implement. I haven't looked at\n>>how difficult it'd be within Postgres.\n>> \n>>\n>\n>ISTM that it would be rather OS-dependent anyway. Different OS's\n>have different (or no) hooks - heck, even different 2.6.* linuxes\n>(pre 2.6.18 vs post) have different hooks for priority\n>inheritance - so I wouldn't really expect to see cpu scheduling\n>policy details like that merged with postgresql except maybe from\n>a patched version from a RTOS vendor.\n>\n> \n>\nHmm. I was thinking of Posix.4's setpriority() call.\n\nBrian\n\n\n\n\n\n\n\n\nRon Mayer wrote:\n\nBrian Hurt wrote:\n \n\nMark Lewis wrote:\n \n\nOn Wed, 2006-11-29 at 08:25 -0500, Brian Hurt wrote:\n \n \n\nI have the same question. I've done some embedded real-time \nprogramming, so my innate reaction to priority inversions is that \nthey're evil. 
But, especially given priority inheritance, is there any \nsituation where priority inversion provides *worse* performance than \nrunning everything at the same priority? \n \n\n\n\n\nYes, there are certainly cases where a single high priority\ntransaction will suffer far worse than it otherwise would have.\n \n\nOK.\n\nAlthough I'm tempted to make the issue more complex by throwing\nSoftware Transactional Memory into the mix:\nhttp://citeseer.ist.psu.edu/shavit95software.html\nhttp://citeseer.ist.psu.edu/anderson95realtime.html\n\nThat second paper is interesting in that it says that STM solves the\npriority inversion problem.� Basically the higher priority process\nforces the lower priority process to abort it's transaction and retry\nit.\n\nIs it possible to recast Postgres' use of locks to use STM instead?�\nHow would STM interact with Postgres' existing transactions?� I don't\nknow.� This would almost certainly require Postgres to write it's own\nlocking, with all the problems it entails (does the source currently\nuse inline assembly anywhere?� I'd guess not.).\n\n\nApparently there are plenty of papers stating that priority inversion\nis a major problem in RDBMs's for problems that require that specific\ndeadlines have to be met (such as in real time systems). \n\nIt's definately a problem in realtime systems, not just realtime DBMS.�\nIn this case,� running everything at the same priority doesn't work and\nisn't an option.\n\n\n\nThe question in my mind is whether overall the benefits outweigh\nthe penalties - in much the same way that qsort's can have O(n^2)\nbehavior but in practice outweigh the penalties of many alternatives.\n\n \n\nAlso, carefull choice of pivot values, and switching to other sorting\nmethods like heapsort when you detect you're in a pathological case,\nhelp.� Make the common case fast and the pathological case not\nsomething that causes the database to fall over.\n\nSetting priorities would be a solution to a problem I haven't hit yet,\nbut can see myself needing to deal with.� Which is why I'm interested\nin this issue.� If it's a case of \"setting priorities can make things\nbetter, and doesn't make things worse\" is great.� If it's a case of\n\"setting priorities can make things better, but occassionally makes\nthings much worse\" is a problem.\n\n\n\n\nOf course, this is a little tricky to implement. I haven't looked at\nhow difficult it'd be within Postgres.\n \n\n\nISTM that it would be rather OS-dependent anyway. Different OS's\nhave different (or no) hooks - heck, even different 2.6.* linuxes\n(pre 2.6.18 vs post) have different hooks for priority\ninheritance - so I wouldn't really expect to see cpu scheduling\npolicy details like that merged with postgresql except maybe from\na patched version from a RTOS vendor.\n\n \n\nHmm.� I was thinking of Posix.4's setpriority() call.\n\nBrian",
"msg_date": "Wed, 29 Nov 2006 12:21:31 -0500",
"msg_from": "Brian Hurt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RES: Priority to a mission critical transaction"
},
{
"msg_contents": "Brian Hurt wrote:\n> Ron Mayer wrote:\n>> Brian Hurt wrote: \n>>> Mark Lewis wrote: \n>>>> On Wed, 2006-11-29 at 08:25 -0500, Brian Hurt wrote:\n>>>>> But, especially given priority inheritance, is there any \n> \n> That second paper is interesting in that it says that STM solves the\n> priority inversion problem. Basically the higher priority process\n> forces the lower priority process to abort it's transaction and retry it.\n> \n> Is it possible to recast Postgres' use of locks to use STM instead? \n\nIf I read the CMU paper right (http://www.cs.cmu.edu/~bianca/icde04.pdf),\nthat's equivalent to what they call \"preemptive abort scheduling\"\nand tested as well as priority inversion.\n\nThey did test this and compared it to priority inversion with\npostgresql and found them about equivalent.\n\n \"Preemptive scheduling (P-LQ and P-CPU) attempts to eliminate the\n wait excess for high-priority transactions by preempting low-priority\n lock holders in the way of high-priority transactions. We find that\n preemptive policies provide little benefit\n ...\n TPC-C running on PostgreSQL ... Preemption (P-CPU) provides\n no appreciable benefit over CPU-Prio-Inherit.\"\n\n\n> Setting priorities would be a solution to a problem I haven't hit yet,\n> but can see myself needing to deal with. Which is why I'm interested in\n> this issue. If it's a case of \"setting priorities can make things\n> better, and doesn't make things worse\" is great. If it's a case of\n> \"setting priorities can make things better, but occassionally makes\n> things much worse\" is a problem.\n\n From the papers, it seems to depend quite a bit on the workload.\n\nEvery actual experiment I've seen published suggests that on the\naverage the higher-priority transactions will do better - but that\nthere is the risk of specific individual high priority transactions\nthat can be slower than they would have otherwise been.\n\nI have yet to see a case where anyone measured anything getting\n\"much\" worse, though.\n\n\n>>> Of course, this is a little tricky to implement. I haven't looked at\n>>> how difficult it'd be within Postgres.\n>>\n>> ISTM that it would be rather OS-dependent anyway. Different OS's\n>> have different (or no) hooks - heck, even different 2.6.* linuxes\n>> (pre 2.6.18 vs post) have different hooks for priority\n>> inheritance - so I wouldn't really expect to see cpu scheduling\n>> policy details like that merged with postgresql except maybe from\n>> a patched version from a RTOS vendor.\n>>\n>> \n> Hmm. I was thinking of Posix.4's setpriority() call.\n> \n\nHmm - I thought you were thinking the priority inheritance would\nbe using something like the priority-inheriting\nfutexes that were added to the linux kernel in 2.6.18.\nhttp://lwn.net/Articles/178253/\nhttp://www.linuxhq.com/kernel/v2.6/18/Documentation/pi-futex.txt\n",
"msg_date": "Wed, 29 Nov 2006 10:47:00 -0800",
"msg_from": "Ron Mayer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RES: Priority to a mission critical transaction"
},
{
"msg_contents": "Bruce,\n\n> Someone should ask them to remove the article.\n\n\"Someone\".\n\nUm, *who* taught for Big Nerd Ranch for several years, Bruce?\n\n-- \nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n",
"msg_date": "Wed, 29 Nov 2006 20:29:11 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RES: Priority to a mission critical transaction"
},
{
"msg_contents": "Is there any experience with Postgresql and really huge tables? I'm \ntalking about terabytes (plural) here in a single table. Obviously the \ntable will be partitioned, and probably spread among several different \nfile systems. Any other tricks I should know about?\n\nWe have a problem of that form here. When I asked why postgres wasn't \nbeing used, the opinion that postgres would \"just <explicitive> die\" was \ngiven. Personally, I'd bet money postgres could handle the problem (and \nbetter than the ad-hoc solution we're currently using). But I'd like a \ncouple of replies of the form \"yeah, we do that here- no problem\" to \nwave around.\n\nBrian\n\n\n",
"msg_date": "Thu, 18 Jan 2007 15:31:35 -0500",
"msg_from": "Brian Hurt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Postgres and really huge tables"
},
{
"msg_contents": "Brian Hurt wrote:\n> Is there any experience with Postgresql and really huge tables? I'm\n> talking about terabytes (plural) here in a single table. Obviously the\n> table will be partitioned, and probably spread among several different\n> file systems. Any other tricks I should know about?\n> \n> We have a problem of that form here. When I asked why postgres wasn't\n> being used, the opinion that postgres would \"just <explicitive> die\" was\n> given. Personally, I'd bet money postgres could handle the problem (and\n> better than the ad-hoc solution we're currently using). But I'd like a\n> couple of replies of the form \"yeah, we do that here- no problem\" to\n> wave around.\n\nIt entirely depends on the machine and how things are accessed. In\ntheory you could have a multi-terabyte table but my question of course\nis why in the world would you do that? That is what partitioning is for.\n\nRegardless, appropriate use of things like partial indexes should make\nit possible.\n\nJoshua D. Drake\n\n\n> \n> Brian\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n> \n> http://www.postgresql.org/about/donate\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\nProviding the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n",
"msg_date": "Thu, 18 Jan 2007 12:39:45 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres and really huge tables"
},
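For reference, the partitioning Joshua mentions is done in the 8.x series with table inheritance plus constraint exclusion; a minimal sketch with invented table and column names:

    CREATE TABLE measurements (
        logdate   date NOT NULL,
        device_id integer,
        reading   numeric
    );

    CREATE TABLE measurements_2007_01 (
        CHECK (logdate >= DATE '2007-01-01' AND logdate < DATE '2007-02-01')
    ) INHERITS (measurements);

    CREATE INDEX measurements_2007_01_logdate
        ON measurements_2007_01 (logdate);

    -- route inserts to the right child; a rule is shown here, a trigger works too
    CREATE RULE measurements_insert_2007_01 AS
        ON INSERT TO measurements
        WHERE (logdate >= DATE '2007-01-01' AND logdate < DATE '2007-02-01')
        DO INSTEAD INSERT INTO measurements_2007_01 VALUES (NEW.*);

    SET constraint_exclusion = on;
    -- a query constrained on logdate now scans only the matching children

Each child table can also live in its own tablespace, which is how a multi-terabyte table gets spread over several file systems.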
{
"msg_contents": "On Thu, 2007-01-18 at 14:31, Brian Hurt wrote:\n> Is there any experience with Postgresql and really huge tables? I'm \n> talking about terabytes (plural) here in a single table. Obviously the \n> table will be partitioned, and probably spread among several different \n> file systems. Any other tricks I should know about?\n> \n> We have a problem of that form here. When I asked why postgres wasn't \n> being used, the opinion that postgres would \"just <explicitive> die\" was \n> given. Personally, I'd bet money postgres could handle the problem (and \n> better than the ad-hoc solution we're currently using). But I'd like a \n> couple of replies of the form \"yeah, we do that here- no problem\" to \n> wave around.\n\nIt really depends on what you're doing.\n\nAre you updating every row by a single user every hour, or are you\nupdating dozens of rows by hundreds of users at the same time?\n\nPostgreSQL probably wouldn't die, but it may well be that for certain\nbatch processing operations it's a poorer choice than awk/sed or perl.\n\nIf you do want to tackle it with PostgreSQL, you'll likely want to build\na truly fast drive subsystem. Something like dozens to hundreds of\ndrives in a RAID-10 setup with battery backed cache, and a main server\nwith lots of memory on board.\n\nBut, really, it depends on what you're doing to the data.\n",
"msg_date": "Thu, 18 Jan 2007 15:04:07 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Postgres and really huge tables"
},
{
"msg_contents": "\n> Is there any experience with Postgresql and really huge tables? I'm \n> talking about terabytes (plural) here in a single table. Obviously the \n> table will be partitioned, and probably spread among several different \n> file systems. Any other tricks I should know about?\n> \n> We have a problem of that form here. When I asked why postgres wasn't \n> being used, the opinion that postgres would \"just <explicitive> die\" was \n> given. Personally, I'd bet money postgres could handle the problem (and \n> better than the ad-hoc solution we're currently using). But I'd like a \n> couple of replies of the form \"yeah, we do that here- no problem\" to \n> wave around.\n\nI've done a project using 8.1 on solaris that had a table that was \nclosed to 2TB. The funny thing is that it just worked fine even without \npartitioning.\n\nBut, then again: the size of a single record was huge too: ~ 50K.\nSo there were not insanly many records: \"just\" something\nin the order of 10ths of millions.\n\nThe queries just were done on some int fields, so the index of the\nwhole thing fit into RAM.\n\nA lot of data, but not a lot of records... I don't know if that's\nvalid. I guess the people at Greenplum and/or Sun have more exciting\nstories ;)\n\n\nBye, Chris.\n\n\n\n",
"msg_date": "Thu, 18 Jan 2007 22:42:40 +0100",
"msg_from": "Chris Mair <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres and really huge tables"
},
{
"msg_contents": "Brian Hurt <[email protected]> writes:\n> Is there any experience with Postgresql and really huge tables? I'm \n> talking about terabytes (plural) here in a single table.\n\nThe 2MASS sky survey point-source catalog\nhttp://www.ipac.caltech.edu/2mass/releases/allsky/doc/sec2_2a.html\nis 470 million rows by 60 columns; I don't have it loaded up but\na very conservative estimate would be a quarter terabyte. (I've\ngot a copy of the data ... 5 double-sided DVDs, gzipped ...)\nI haven't heard from Rae Stiening recently but I know he's been using\nPostgres to whack that data around since about 2001 (PG 7.1 or so,\nwhich is positively medieval compared to current releases). So at\nleast for static data, it's certainly possible to get useful results.\nWhat are your processing requirements?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 18 Jan 2007 16:52:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Postgres and really huge tables "
},
{
"msg_contents": "Chris,\n\nOn 1/18/07 1:42 PM, \"Chris Mair\" <[email protected]> wrote:\n\n> A lot of data, but not a lot of records... I don't know if that's\n> valid. I guess the people at Greenplum and/or Sun have more exciting\n> stories ;)\n\nYou guess correctly :-)\n\nGiven that we're Postgres 8.2, etc compatible, that might answer Brian's\ncoworker's question. Soon we will be able to see that Greenplum/Postgres\nare handling the world's largest databases both in record count and size.\n\nWhile the parallel scaling technology we employ is closed source, we are\nstill contributing scaling technology to the community (partitioning, bitmap\nindex, sort improvements, resource management, more to come), so Postgres as\na \"bet\" is likely safer and better than a completely closed source\ncommercial product.\n\n- Luke\n\n\n",
"msg_date": "Thu, 18 Jan 2007 14:41:30 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres and really huge tables"
},
{
"msg_contents": "Hi Brian,\n\nOn Thu, 18 Jan 2007, Brian Hurt wrote:\n\n> Is there any experience with Postgresql and really huge tables? I'm\n> talking about terabytes (plural) here in a single table. Obviously the\n> table will be partitioned, and probably spread among several different\n> file systems. Any other tricks I should know about?\n\nHere is a blog post from a user who is in the multi-tb range:\n\nhttp://www.lethargy.org/~jesus/archives/49-PostreSQL-swelling.html\n\nI think Theo sums up some of the pros and cons well.\n\nYour best bet is a test on scale. Be sure to get our feed back if you\nencounter issues.\n\nGavin\n",
"msg_date": "Fri, 19 Jan 2007 10:09:57 +1100 (EST)",
"msg_from": "Gavin Sherry <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres and really huge tables"
},
{
"msg_contents": "On Thu, 18 Jan 2007, Tom Lane wrote:\n\n> Brian Hurt <[email protected]> writes:\n>> Is there any experience with Postgresql and really huge tables? I'm\n>> talking about terabytes (plural) here in a single table.\n>\n> The 2MASS sky survey point-source catalog\n> http://www.ipac.caltech.edu/2mass/releases/allsky/doc/sec2_2a.html\n> is 470 million rows by 60 columns; I don't have it loaded up but\n> a very conservative estimate would be a quarter terabyte. (I've\n> got a copy of the data ... 5 double-sided DVDs, gzipped ...)\n> I haven't heard from Rae Stiening recently but I know he's been using\n> Postgres to whack that data around since about 2001 (PG 7.1 or so,\n> which is positively medieval compared to current releases). So at\n> least for static data, it's certainly possible to get useful results.\n> What are your processing requirements?\n\nWe are working in production with 2MASS and other catalogues, and\n2MASS is not the biggest. The nomad catalog has more than milliard records.\nYou could query them online\nhttp://vo.astronet.ru/cas/conesearch.php\nEverything is in PostgreSQL 8.1.5 and at present migrate to the 8.2.1,\nwhich is very slow, since slow COPY.\nThe hardware we use is HP rx1620, dual Itanium2, MSA 20, currently\n4.5 Tb.\n\n\n\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n",
"msg_date": "Fri, 19 Jan 2007 13:03:05 +0300 (MSK)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Postgres and really huge tables "
},
{
"msg_contents": "On 1/18/07, Brian Hurt <[email protected]> wrote:\n> Is there any experience with Postgresql and really huge tables? I'm\n> talking about terabytes (plural) here in a single table. Obviously the\n> table will be partitioned, and probably spread among several different\n> file systems. Any other tricks I should know about?\n\nA pretty effective partitioning strategy that works in some cases is\nto identify a criteria in your dataset that isolates your data on a\nsession basis. For example, if you have a company_id that divides up\nyour company data and a session only needs to deal with company_id,\nyou can separate out all your tables based on company_id into\ndifferent schemas and have the session set the search_path variable\nwhen it logs in. Data that does not partition on your criteria sits\nin public schemas that all the companies can see.\n\nThis takes advantage of a special trick regarding stored procedures\nthat they do not attach to tables until the first time they are\nexecuted in a session -- keeping you from having to make a function\nfor each schema. (note: views do not have this property). You can\nstill cross query using views and the like or hand rolled sql.\n\nI would call this type of partitioning logical partitioning since you\nare leveraging logical divisions in your data. It obviously doesn't\nwork in all cases but when it does it works great.\n\n> We have a problem of that form here. When I asked why postgres wasn't\n> being used, the opinion that postgres would \"just <explicitive> die\" was\n> given. Personally, I'd bet money postgres could handle the problem (and\n> better than the ad-hoc solution we're currently using). But I'd like a\n> couple of replies of the form \"yeah, we do that here- no problem\" to\n> wave around.\n\npg will of course not die as when your dataset hits a certain\nthreshold. It will become slower based on well know mathematical\npatterns that grow with your working set size. One of the few things\nthat gets to be a pain with large tables is vacuum -- since you can't\nvacuum a piece of table and there are certain annoyances with having a\nlong running vacuum this is something to think about.\n\nSpeaking broadly about table partitioning, it optimizes one case at\nthe expense of another. Your focus (IMO) should be on reducing your\nworking set size under certain conditions -- not the physical file\nsize. If you have a properly laid out and logical dataset and can\nidentify special cases where you need some information and not other\ninformation, the partitioning strategy should fall into place, whether\nit is to do nothing, isolate data into separate schemas/tables/files,\nor use the built in table partitioning feature (which to be honest I\nam not crazy about).\n\nmerlin\n",
"msg_date": "Fri, 19 Jan 2007 09:11:31 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Postgres and really huge tables"
},
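A minimal sketch of the schema-per-client layout described in the message above, assuming a hypothetical split by company; all schema, table and column names here are invented for illustration and are not from the original thread:

    -- one schema per company; data shared by everyone stays in public
    CREATE SCHEMA company_1;
    CREATE TABLE company_1.orders (
        order_id   serial PRIMARY KEY,
        placed_on  date,
        amount     numeric
    );

    -- a session working on behalf of company 1 sets its search path at login
    SET search_path TO company_1, public;

    -- unqualified table names now resolve to company_1 first
    SELECT count(*) FROM orders;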
{
"msg_contents": "\n> A lot of data, but not a lot of records... I don't know if that's\n> valid. I guess the people at Greenplum and/or Sun have more exciting\n> stories ;)\n\nNot really. Pretty much multi-terabyte tables are fine on vanilla \nPostgreSQL if you can stick to partitioned and/or indexed access. If you \nneed to do unindexed fishing expeditions on 5tb of data, then talk to \nGreenplum.\n\nhttp://www.powerpostgresql.com/Downloads/terabytes_osc2005.pdf\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n",
"msg_date": "Fri, 19 Jan 2007 16:04:12 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres and really huge tables"
}
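The "partitioned and/or indexed access" mentioned above can be spelled out with the inheritance-based partitioning available in stock 8.1/8.2; a minimal sketch, with invented table and column names, might look like this:

    CREATE TABLE measurements (id bigint, taken_on date, value float8);

    CREATE TABLE measurements_2006
        (CHECK (taken_on >= DATE '2006-01-01' AND taken_on < DATE '2007-01-01'))
        INHERITS (measurements);
    CREATE TABLE measurements_2007
        (CHECK (taken_on >= DATE '2007-01-01' AND taken_on < DATE '2008-01-01'))
        INHERITS (measurements);

    CREATE INDEX measurements_2006_taken_on ON measurements_2006 (taken_on);
    CREATE INDEX measurements_2007_taken_on ON measurements_2007 (taken_on);

    -- with constraint exclusion enabled, partitions whose CHECK constraint
    -- contradicts the WHERE clause are skipped entirely
    SET constraint_exclusion = on;
    SELECT count(*) FROM measurements WHERE taken_on >= DATE '2007-01-01';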
] |
[
{
"msg_contents": "Hi,\n\nPostgreSQL 8.1 (and, back then, 7.4) have the tendency to underestimate\nthe costs of sort operations, compared to index scans.\n\nThe Backend allocates gigs of memory (we've set sort_mem to 1 gig), and\nthen starts spilling out more Gigs of temporary data to the disk. So the\nexecution gets - in the end - much slower compared to an index scan, and\nwastes lots of disk space.\n\nWe did not manage to tune the config values appropriately, at least not\nwithout causing other query plans to suffer badly.\n\nAre there some nice ideas how to shift the planners preferences slightly\ntowards index scans, without affecting other queries?\n\nThere's one thing that most of those queries have in common: They\ninclude TOAST data (large strings, PostGIS geometries etc.), and I\nremember that there are known problems with estimating the TOAST costs.\nThis may be part of the problem, or may be irrelevant.\n\n\nThanks,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in Europe! www.ffii.org\nwww.nosoftwarepatents.org\n",
"msg_date": "Wed, 22 Nov 2006 11:17:23 +0100",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL underestimates sorting"
},
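One way to bias individual queries toward index scans, without changing the global configuration discussed above, is to override the relevant planner settings only for the transaction that runs the problematic query. A sketch of the idea follows; the values, table and column names are placeholders, not recommendations:

    BEGIN;
    -- only for this transaction: make random I/O look cheaper and
    -- discourage explicit sorts, so index scans are preferred
    SET LOCAL random_page_cost = 2;
    SET LOCAL enable_sort = off;
    SELECT * FROM big_table ORDER BY some_indexed_column;
    COMMIT;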
{
"msg_contents": "On Wed, Nov 22, 2006 at 11:17:23AM +0100, Markus Schaber wrote:\n> The Backend allocates gigs of memory (we've set sort_mem to 1 gig), and\n> then starts spilling out more Gigs of temporary data to the disk.\n\nHow much RAM is in the server? Remember that sort_mem is _per sort_, so if\nyou have multiple sorts, it might allocate several multiples of the amount\nyou set up.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Wed, 22 Nov 2006 14:54:34 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL underestimates sorting"
},
{
"msg_contents": "Hi, Steinar,\n\nSteinar H. Gunderson wrote:\n> On Wed, Nov 22, 2006 at 11:17:23AM +0100, Markus Schaber wrote:\n>> The Backend allocates gigs of memory (we've set sort_mem to 1 gig), and\n>> then starts spilling out more Gigs of temporary data to the disk.\n> \n> How much RAM is in the server? Remember that sort_mem is _per sort_, so if\n> you have multiple sorts, it might allocate several multiples of the amount\n> you set up.\n\nThat one machine has 16 Gigs of ram, and about 10 Gigs tend to be \"free\"\n/ part of the Linux blocklayer cache.\n\nThe temporary data is not swapping, it's the Postgres on-disk sort\nalgorithm.\n\nHTH,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in Europe! www.ffii.org\nwww.nosoftwarepatents.org\n",
"msg_date": "Wed, 22 Nov 2006 15:28:12 +0100",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL underestimates sorting"
},
{
"msg_contents": "On Wed, 22 Nov 2006 15:28:12 +0100\nMarkus Schaber <[email protected]> wrote:\n\n> Hi, Steinar,\n> \n> Steinar H. Gunderson wrote:\n> > On Wed, Nov 22, 2006 at 11:17:23AM +0100, Markus Schaber wrote:\n> >> The Backend allocates gigs of memory (we've set sort_mem to 1\n> >> gig), and then starts spilling out more Gigs of temporary data to\n> >> the disk.\n> > \n> > How much RAM is in the server? Remember that sort_mem is _per\n> > sort_, so if you have multiple sorts, it might allocate several\n> > multiples of the amount you set up.\n> \n> That one machine has 16 Gigs of ram, and about 10 Gigs tend to be\n> \"free\" / part of the Linux blocklayer cache.\n> \n> The temporary data is not swapping, it's the Postgres on-disk sort\n> algorithm.\n\n Are you actually running a query where you have a GB of data\n you need to sort? If not I fear you may be causing the system\n to swap by setting it this high. \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n",
"msg_date": "Wed, 22 Nov 2006 10:53:41 -0600",
"msg_from": "Frank Wiles <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL underestimates sorting"
},
{
"msg_contents": "Hi, Frank,\n\nFrank Wiles wrote:\n\n\n>> The temporary data is not swapping, it's the Postgres on-disk sort\n>> algorithm.\n> \n> Are you actually running a query where you have a GB of data\n> you need to sort? If not I fear you may be causing the system\n> to swap by setting it this high. \n\nYes, the table itself is about 40 Gigs in size, thus much larger than\nthe memory. The machine has 16 Gigs of ram, and 10-12 Gigs are available\nfor PostgreSQL + Disk Cache.\n\nThere's no swapping, only 23 MB of swap are used (40 Gigs are available).\n\nThat's one example configuration, there are others on different machines\nwhere it turns out that forcing index usage leads to faster queries, and\nless overall ressource consumption. (Or, at least, faster delivery of\nthe first part of the result so the application can begin to process it\nasynchroneously).\n\nThanks,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in Europe! www.ffii.org\nwww.nosoftwarepatents.org\n",
"msg_date": "Wed, 22 Nov 2006 17:59:46 +0100",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL underestimates sorting"
},
{
"msg_contents": "On Wed, 2006-11-22 at 11:17 +0100, Markus Schaber wrote:\n\n> PostgreSQL 8.1 (and, back then, 7.4) have the tendency to underestimate\n> the costs of sort operations, compared to index scans.\n> \n> The Backend allocates gigs of memory (we've set sort_mem to 1 gig), and\n> then starts spilling out more Gigs of temporary data to the disk. So the\n> execution gets - in the end - much slower compared to an index scan, and\n> wastes lots of disk space.\n> \n> We did not manage to tune the config values appropriately, at least not\n> without causing other query plans to suffer badly.\n\n8.2 has substantial changes to sort code, so you may want to give the\nbeta version a try to check for how much better it works. That's another\nway of saying that sort in 8.1 and before has some performance problems\nwhen you are sorting more than 6 * 2 * work_mem (on randomly sorted\ndata) and the cost model doesn't get this right, as you observe.\n\nTry enabling trace_sort (available in both 8.1 and 8.2) and post the\nresults here please, which would be very useful to have results on such\na large real-world sort.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 23 Nov 2006 15:20:35 +0000",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL underestimates sorting"
}
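The trace_sort setting suggested above can be switched on for a single session; a small sketch, where the table and column names are only placeholders:

    -- send LOG-level messages to the client as well as the server log
    SET client_min_messages = log;
    SET trace_sort = on;
    EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY some_column;
    -- the traces then show whether the sort finished in memory or roughly
    -- how many disk blocks the external sort used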
] |
[
{
"msg_contents": "I've been trying to optimize a Linux system where benchmarking suggests \nlarge performance differences between the various wal_sync_method options \n(with o_sync being the big winner). I started that by using \nsrc/tools/fsync/test_fsync to get an idea what I was dealing with (and to \nspot which drives had write caching turned on). Since those results \ndidn't match what I was seeing in the benchmarks, I've been browsing the \nbackend source to figure out why. I noticed test_fsync appears to be, \nahem, out of sync with what the engine is doing.\n\nIt looks like V8.1 introduced O_DIRECT writes to the WAL, determined at \ncompile time by a series of preprocessor tests in \nsrc/backend/access/transam/xlog.c When O_DIRECT is available, \nO_SYNC/O_FSYNC/O_DSYNC writes use it. test_fsync doesn't do that.\n\nI moved the new code (in 8.2 beta 3, lines 61-92 in xlog.c) into \ntest_fsync; all the flags had the same name so it dropped right in. You \ncan get the version I made at http://www.westnet.com/~gsmith/test_fsync.c \n(fixed a compiler warning, too)\n\nThe results I get now look fishy. I'm not sure if I screwed up a step, or \nif I'm seeing a real problem. The system here is running RedHat Linux, \nRHEL ES 4.0 kernel 2.6.9, and the disk I'm writing to is a standard \n7200RPM IDE drive. I turned off write caching with hdparm -W 0\n\nHere's an excerpt from the stock test_fsync:\n\nCompare one o_sync write to two:\n one 16k o_sync write 8.717944\n two 8k o_sync writes 17.501980\n\nCompare file sync methods with 2 8k writes:\n (o_dsync unavailable)\n open o_sync, write 17.018495\n write, fdatasync 8.842473\n write, fsync, 8.809117\n\nAnd here's the version I tried to modify to include O_DIRECT support:\n\nCompare one o_sync write to two:\n one 16k o_sync write 0.004995\n two 8k o_sync writes 0.003027\n\nCompare file sync methods with 2 8k writes:\n (o_dsync unavailable)\n open o_sync, write 0.004978\n write, fdatasync 8.845498\n write, fsync, 8.834037\n\nObivously the o_sync writes aren't waiting for the disk. Is this a \nproblem with O_DIRECT under Linux? Or is my code just not correctly \ntesting this behavior?\n\nJust as a sanity check, I did try this on another system, running SuSE \nwith drives connected to a cciss SCSI device, and I got exactly the same \nresults. I'm concerned that Linux users who use O_SYNC because they \nnotice it's faster will be losing their WAL integrity without being aware \nof the problem, especially as the whole O_DIRECT business isn't even \nmentioned in the WAL documentation--it really deserves to be brought up in \nthe wal_sync_method notes at \nhttp://developer.postgresql.org/pgdocs/postgres/runtime-config-wal.html\n\nAnd while I'm mentioning improvements to that particular documentation \npage...the wal_buffers notes there are so sparse they misled me initially. \nThey suggest only bumping it up for situations with very large \ntransactions; since I was testing with small ones I left it woefully \nundersized initially. I would suggest copying the text from \nhttp://developer.postgresql.org/pgdocs/postgres/wal-configuration.html to \nhere: \"When full_page_writes is set and the system is very busy, setting \nthis value higher will help smooth response times during the period \nimmediately following each checkpoint.\" That seems to match what I found \nin testing.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Thu, 23 Nov 2006 01:30:24 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Direct I/O issues"
},
{
"msg_contents": "I have applied your test_fsync patch for 8.2. Thanks.\n\n---------------------------------------------------------------------------\n\nGreg Smith wrote:\n> I've been trying to optimize a Linux system where benchmarking suggests \n> large performance differences between the various wal_sync_method options \n> (with o_sync being the big winner). I started that by using \n> src/tools/fsync/test_fsync to get an idea what I was dealing with (and to \n> spot which drives had write caching turned on). Since those results \n> didn't match what I was seeing in the benchmarks, I've been browsing the \n> backend source to figure out why. I noticed test_fsync appears to be, \n> ahem, out of sync with what the engine is doing.\n> \n> It looks like V8.1 introduced O_DIRECT writes to the WAL, determined at \n> compile time by a series of preprocessor tests in \n> src/backend/access/transam/xlog.c When O_DIRECT is available, \n> O_SYNC/O_FSYNC/O_DSYNC writes use it. test_fsync doesn't do that.\n> \n> I moved the new code (in 8.2 beta 3, lines 61-92 in xlog.c) into \n> test_fsync; all the flags had the same name so it dropped right in. You \n> can get the version I made at http://www.westnet.com/~gsmith/test_fsync.c \n> (fixed a compiler warning, too)\n> \n> The results I get now look fishy. I'm not sure if I screwed up a step, or \n> if I'm seeing a real problem. The system here is running RedHat Linux, \n> RHEL ES 4.0 kernel 2.6.9, and the disk I'm writing to is a standard \n> 7200RPM IDE drive. I turned off write caching with hdparm -W 0\n> \n> Here's an excerpt from the stock test_fsync:\n> \n> Compare one o_sync write to two:\n> one 16k o_sync write 8.717944\n> two 8k o_sync writes 17.501980\n> \n> Compare file sync methods with 2 8k writes:\n> (o_dsync unavailable)\n> open o_sync, write 17.018495\n> write, fdatasync 8.842473\n> write, fsync, 8.809117\n> \n> And here's the version I tried to modify to include O_DIRECT support:\n> \n> Compare one o_sync write to two:\n> one 16k o_sync write 0.004995\n> two 8k o_sync writes 0.003027\n> \n> Compare file sync methods with 2 8k writes:\n> (o_dsync unavailable)\n> open o_sync, write 0.004978\n> write, fdatasync 8.845498\n> write, fsync, 8.834037\n> \n> Obivously the o_sync writes aren't waiting for the disk. Is this a \n> problem with O_DIRECT under Linux? Or is my code just not correctly \n> testing this behavior?\n> \n> Just as a sanity check, I did try this on another system, running SuSE \n> with drives connected to a cciss SCSI device, and I got exactly the same \n> results. I'm concerned that Linux users who use O_SYNC because they \n> notice it's faster will be losing their WAL integrity without being aware \n> of the problem, especially as the whole O_DIRECT business isn't even \n> mentioned in the WAL documentation--it really deserves to be brought up in \n> the wal_sync_method notes at \n> http://developer.postgresql.org/pgdocs/postgres/runtime-config-wal.html\n> \n> And while I'm mentioning improvements to that particular documentation \n> page...the wal_buffers notes there are so sparse they misled me initially. \n> They suggest only bumping it up for situations with very large \n> transactions; since I was testing with small ones I left it woefully \n> undersized initially. 
I would suggest copying the text from \n> http://developer.postgresql.org/pgdocs/postgres/wal-configuration.html to \n> here: \"When full_page_writes is set and the system is very busy, setting \n> this value higher will help smooth response times during the period \n> immediately following each checkpoint.\" That seems to match what I found \n> in testing.\n> \n> --\n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n\n-- \n Bruce Momjian [email protected]\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +",
"msg_date": "Thu, 23 Nov 2006 11:41:36 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Direct I/O issues"
},
{
"msg_contents": "Greg Smith <[email protected]> writes:\n> The results I get now look fishy.\n\nThere are at least two things wrong with this program:\n\n* It does not respect the alignment requirement for O_DIRECT buffers\n (reportedly either 512 or 4096 bytes depending on filesystem).\n\n* It does not check for errors (if it had, you might have realized the\n other problem).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 23 Nov 2006 11:45:42 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O issues "
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> I have applied your test_fsync patch for 8.2. Thanks.\n\n... which means test_fsync is now broken. Why did you apply a patch\nwhen the author pointed out that the program isn't working?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 23 Nov 2006 11:49:29 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Direct I/O issues "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > I have applied your test_fsync patch for 8.2. Thanks.\n> \n> ... which means test_fsync is now broken. Why did you apply a patch\n> when the author pointed out that the program isn't working?\n\nI thought his code was OK, but the OS had issues. Clearly we need to\nupdate test_fsync.c because it doesn't match the code. I have reverted\nthe patch but some day we need a fixed version.\n\n-- \n Bruce Momjian [email protected]\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Thu, 23 Nov 2006 12:20:58 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Direct I/O issues"
},
{
"msg_contents": "On Thu, 23 Nov 2006, Tom Lane wrote:\n\n> * It does not check for errors (if it had, you might have realized the\n> other problem).\n\nAll the test_fsync code needs to check for errors better; there have been \nmultiple occasions where I've run that with quesiontable input and it \ndidn't complain, it just happily ran and reported times that were almost \n0.\n\nThanks for the note about alignment, I had seen something about that in \nthe xlog.c but wasn't sure if that was important in this case.\n\nIt's very important to the project I'm working on that I get this cleared \nup, and I think I'm in a good position to fix it myself now. I just \nwanted to report the issue and get some initial feedback on what's wrong. \nI'll try to rewrite that code with an eye toward the \"Determine optimal \nfdatasync/fsync, O_SYNC/O_DSYNC options\" to-do item, which is what I'd \nreally like to have.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n",
"msg_date": "Thu, 23 Nov 2006 13:09:54 -0500 (EST)",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Direct I/O issues "
},
{
"msg_contents": "Greg Smith wrote:\n> On Thu, 23 Nov 2006, Tom Lane wrote:\n> \n> > * It does not check for errors (if it had, you might have realized the\n> > other problem).\n> \n> All the test_fsync code needs to check for errors better; there have been \n> multiple occasions where I've run that with quesiontable input and it \n> didn't complain, it just happily ran and reported times that were almost \n> 0.\n> \n> Thanks for the note about alignment, I had seen something about that in \n> the xlog.c but wasn't sure if that was important in this case.\n> \n> It's very important to the project I'm working on that I get this cleared \n> up, and I think I'm in a good position to fix it myself now. I just \n> wanted to report the issue and get some initial feedback on what's wrong. \n> I'll try to rewrite that code with an eye toward the \"Determine optimal \n> fdatasync/fsync, O_SYNC/O_DSYNC options\" to-do item, which is what I'd \n> really like to have.\n\nPlease send an updated patch for test_fsync.c so we can get it working\nfor 8.2.\n\n-- \n Bruce Momjian [email protected]\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Thu, 23 Nov 2006 23:06:05 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Direct I/O issues"
},
{
"msg_contents": "Greg Smith wrote:\n> On Thu, 23 Nov 2006, Tom Lane wrote:\n> \n> > * It does not check for errors (if it had, you might have realized the\n> > other problem).\n> \n> All the test_fsync code needs to check for errors better; there have been \n> multiple occasions where I've run that with quesiontable input and it \n> didn't complain, it just happily ran and reported times that were almost \n> 0.\n> \n> Thanks for the note about alignment, I had seen something about that in \n> the xlog.c but wasn't sure if that was important in this case.\n> \n> It's very important to the project I'm working on that I get this cleared \n> up, and I think I'm in a good position to fix it myself now. I just \n> wanted to report the issue and get some initial feedback on what's wrong. \n> I'll try to rewrite that code with an eye toward the \"Determine optimal \n> fdatasync/fsync, O_SYNC/O_DSYNC options\" to-do item, which is what I'd \n> really like to have.\n\nI have developed a patch that moves the defines into a include file\nwhere they can be used by the backend and test_fsync.c. I have also set\nup things so there is proper alignment for O_DIRECT, and added error\nchecking.\n\nNot sure if people want this for 8.2. I think we can modify\ntest_fsync.c anytime but the movement of the defines into an include\nfile is a backend code change.\n\n-- \n Bruce Momjian [email protected]\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +",
"msg_date": "Fri, 24 Nov 2006 13:58:45 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Direct I/O issues"
},
{
"msg_contents": "Bruce Momjian <[email protected]> writes:\n> Not sure if people want this for 8.2. I think we can modify\n> test_fsync.c anytime but the movement of the defines into an include\n> file is a backend code change.\n\nI think fooling with this on the day before RC1 is an unreasonable risk ...\nand I disapprove of moving this code into a widely-used include file\nlike xlog.h, too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 24 Nov 2006 14:08:34 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Direct I/O issues "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > Not sure if people want this for 8.2. I think we can modify\n> > test_fsync.c anytime but the movement of the defines into an include\n> > file is a backend code change.\n> \n> I think fooling with this on the day before RC1 is an unreasonable risk ...\n> and I disapprove of moving this code into a widely-used include file\n> like xlog.h, too.\n\nOK, you want a separate include or xlog_internal.h? And should I put in\njust the test_fsync changes next week so at least we are closer to\nhaving it work for 8.2?\n\n-- \n Bruce Momjian [email protected]\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Fri, 24 Nov 2006 18:43:36 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] [PERFORM] Direct I/O issues"
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > Not sure if people want this for 8.2. I think we can modify\n> > test_fsync.c anytime but the movement of the defines into an include\n> > file is a backend code change.\n> \n> I think fooling with this on the day before RC1 is an unreasonable risk ...\n> and I disapprove of moving this code into a widely-used include file\n> like xlog.h, too.\n\nfsync method defines moved to /include/access/xlogdefs.h so they can be\nused by test_fsync.c.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +",
"msg_date": "Wed, 14 Feb 2007 00:01:27 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Direct I/O issues"
}
] |
[
{
"msg_contents": "Hi all,\n\n \n\nI have a postgres installation thats running under 70-80% CPU usage\nwhile\n\nan MSSQL7 installation did 'roughly' the same thing with 1-2% CPU load.\n\n \n\nHere's the scenario,\n\n300 queries/second\n\nServer: Postgres 8.1.4 on win2k server\n\nCPU: Dual Xeon 3.6 Ghz, \n\nMemory: 4GB RAM\n\nDisks: 3 x 36gb , 15K RPM SCSI\n\nC# based web application calling postgres functions using npgsql 0.7.\n\nIts almost completely read-only db apart from fortnightly updates.\n\n \n\nTable 1 - About 300,000 rows with simple rectangles\n\nTable 2 - 1 million rows \n\nTotal size: 300MB\n\n \n\nFunctions : Simple coordinate reprojection and intersection query +\ninner join of table1 and table2.\n\nI think I have all the right indexes defined and indeed the performance\nfor queries under low loads is fast.\n\n \n\n \n\n========================================================================\n==========\n\npostgresql.conf has following settings\n\nmax_connections = 150\n\nhared_buffers = 20000 # min 16 or\nmax_connections*2, 8KB each\n\ntemp_buffers = 2000 # min 100, 8KB each\n\nmax_prepared_transactions = 25 # can be 0 or more\n\n# note: increasing max_prepared_transactions costs ~600 bytes of shared\nmemory\n\n# per transaction slot, plus lock space (see max_locks_per_transaction).\n\nwork_mem = 512 # min 64, size in KB\n\n#maintenance_work_mem = 16384 # min 1024, size in\nKB\n\nmax_stack_depth = 2048\n\neffective_cache_size = 82728 # typically 8KB each\n\nrandom_page_cost = 4 # units are one\nsequential page fetch \n\n========================================================================\n==========\n\n \n\nSQL server caches all the data in memory which is making it faster(uses\nabout 1.2GB memory- which is fine).\n\nBut postgres has everything spread across 10-15 processes, with each\nprocess using about 10-30MB, not nearly enough to cache all the data and\nends up doing a lot of disk reads.\n\nI've read that postgres depends on OS to cache the files, I wonder if\nthis is not happenning on windows.\n\n \n\nIn any case I cannot believe that having 15-20 processes running on\nwindows helps. Why not spwan of threads instead of processes, which\nmight\n\nbe far less expensive and more efficient. Is there any way of doing\nthis?\n\n \n\nMy question is, should I just accept the performance I am getting as the\nlimit on windows or should I be looking at some other params that I\nmight have missed?\n\n \n\nThanks,\n\nGopal\n\n\n________________________________________________________________________\nThis e-mail has been scanned for all viruses by Star. The\nservice is powered by MessageLabs. 
For more information on a proactive\nanti-virus service working around the clock, around the globe, visit:\nhttp://www.star.net.uk\n________________________________________________________________________\n\n\n\n\n\n\n\n\n\n\n\nHi all,\n \nI have a postgres installation thats\nrunning under 70-80% CPU usage while\nan MSSQL7 installation did 'roughly' the\nsame thing with 1-2% CPU load.\n \nHere’s the scenario,\n300 queries/second\nServer: Postgres 8.1.4 on win2k server\nCPU: Dual Xeon 3.6 Ghz, \nMemory: 4GB RAM\nDisks: 3 x 36gb , 15K RPM SCSI\nC# based web application calling postgres\nfunctions using npgsql 0.7.\nIts almost completely read-only db apart\nfrom fortnightly updates.\n \nTable 1 - About 300,000 rows with simple\nrectangles\nTable 2 – 1 million rows \nTotal size: 300MB\n \nFunctions : Simple coordinate reprojection\nand intersection query + inner join of table1 and table2.\nI think I have all the right indexes\ndefined and indeed the performance for queries under low loads is fast.\n \n \n==================================================================================\npostgresql.conf has following settings\nmax_connections = 150\nhared_buffers =\n20000 \n# min 16 or max_connections*2, 8KB each\ntemp_buffers = 2000 \n# min 100, 8KB each\nmax_prepared_transactions =\n25 #\ncan be 0 or more\n# note: increasing\nmax_prepared_transactions costs ~600 bytes of shared memory\n# per transaction slot, plus lock space\n(see max_locks_per_transaction).\nwork_mem =\n512 \n# min 64, size in KB\n#maintenance_work_mem =\n16384 \n# min 1024, size in KB\nmax_stack_depth = 2048\neffective_cache_size =\n82728 \n# typically 8KB each\nrandom_page_cost =\n4 \n# units are one sequential page fetch \n==================================================================================\n \nSQL server caches all the data in memory\nwhich is making it faster(uses about 1.2GB memory- which is fine).\nBut postgres has everything spread across\n10-15 processes, with each process using about 10-30MB, not nearly enough to\ncache all the data and ends up doing a lot of disk reads.\nI've read that postgres depends on OS to\ncache the files, I wonder if this is not happenning on windows.\n \nIn any case I cannot believe that having\n15-20 processes running on windows helps. Why not spwan of threads instead of\nprocesses, which might\nbe far less expensive and more efficient.\nIs there any way of doing this?\n \nMy question is, should I just accept the\nperformance I am getting as the limit on windows or should I be looking at some\nother params that I might have missed?\n \nThanks,\nGopal\n\n________________________________________________________________________\nThis e-mail has been scanned for all viruses by Star. The\nservice is powered by MessageLabs. For more information on a proactive\nanti-virus service working around the clock, around the globe, visit:\nhttp://www.star.net.uk\n________________________________________________________________________",
"msg_date": "Thu, 23 Nov 2006 22:37:08 -0000",
"msg_from": "\"Gopal\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres scalability and performance on windows"
},
{
"msg_contents": "Gopal wrote:\n\n> Functions : Simple coordinate reprojection and intersection query +\n> inner join of table1 and table2.\n> \n> I think I have all the right indexes defined and indeed the performance\n> for queries under low loads is fast.\n\nCan you do a EXPLAIN ANALYZE on your queries, and send the results back \nto the list just to be sure?\n\n> SQL server caches all the data in memory which is making it faster(uses\n> about 1.2GB memory- which is fine).\n> \n> But postgres has everything spread across 10-15 processes, with each\n> process using about 10-30MB, not nearly enough to cache all the data and\n> ends up doing a lot of disk reads.\n\nI don't know Windows memory management very well, but let me just say \nthat it's not that simple.\n\n> I've read that postgres depends on OS to cache the files, I wonder if\n> this is not happenning on windows.\n\nUsing the Task Manager, or whatever it's called these days, you can see \nhow much memory is used for caching.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Thu, 23 Nov 2006 22:50:45 +0000",
"msg_from": "\"Heikki Linnakangas\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres scalability and performance on windows"
},
{
"msg_contents": "Am 23.11.2006 um 23:37 schrieb Gopal:\n> hared_buffers = 20000 # min 16 or \n> max_connections*2, 8KB each\nIf this is not a copy & paste error, you should add the \"s\" at the \nbeginning of the line.\n\nAlso you might want to set this to a higher number. You are setting \nabout 20000 * 8k = 160MB, this number might be a bit too small if you \ndo a lot of queries spread over the whole dataset. I don't know \nwhether the memory management on Windows handles this well, but you \ncan give it a try.\n> effective_cache_size = 82728 # typically 8KB each\nHmm. I don't know what the real effect of this might be as the doc \nstates:\n\n\"This parameter has no effect on the size of shared memory allocated \nby PostgreSQL, nor does it reserve kernel disk cache; it is used only \nfor estimation purposes.\"\n\nYou should try optimizing your shared_buffers to cache more of the data.\n> But postgres has everything spread across 10-15 processes, with \n> each process using about 10-30MB, not nearly enough to cache all \n> the data and ends up doing a lot of disk reads.\nIt's not soo easy. PostgreSQL maintains a shared_buffer which is \naccessible by all processes for reading. On a Unix system you can see \nthis in the output of top - don't know how this works on Windows.\n> In any case I cannot believe that having 15-20 processes running on \n> windows helps. Why not spwan of threads instead of processes, which \n> migh be far less expensive and more efficient. Is there any way of \n> doing this?\nBecause it brings you a whole lot of other problems? And because \nPostgreSQL is not \"made for Windows\". PostgreSQL runs very good on \nLinux, BSD, Mac OS X and others. The Windows version is quite young.\n\nBut before you blame stuff on PostgreSQL you should give more \ninformation about the query itself.\n> My question is, should I just accept the performance I am getting \n> as the limit on windows or should I be looking at some other params \n> that I might have missed?\nPost the \"explain analyse select <your query here>\" output here. That \nmight help to understand, why you get such a high CPU load.\n\ncug\n",
"msg_date": "Fri, 24 Nov 2006 09:22:45 +0100",
"msg_from": "Guido Neitzer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres scalability and performance on windows"
},
{
"msg_contents": "On Fri, 24 Nov 2006 09:22:45 +0100\nGuido Neitzer <[email protected]> wrote:\n\n> > effective_cache_size = 82728 # typically 8KB each\n> Hmm. I don't know what the real effect of this might be as the doc \n> states:\n> \n> \"This parameter has no effect on the size of shared memory allocated \n> by PostgreSQL, nor does it reserve kernel disk cache; it is used\n> only for estimation purposes.\"\n\n This is a hint to the optimizer about how much of the database may\n be in the OS level cache. \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n",
"msg_date": "Fri, 24 Nov 2006 11:04:51 -0600",
"msg_from": "Frank Wiles <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres scalability and performance on windows"
},
{
"msg_contents": "Tom,\n\nThis is the query and the schema....\n\nQuery is :\nSELECT subq.percentCover, ds.datasetname, ds.maxresolution\n FROM \n ( \n select\nsum(area(intersection(snaptogrid(chunkgeometry,0.00000001), \n GeometryFromText('POLYGON((-0.140030845589332\n50.8208343077265,-0.138958398039148 50.8478005422809,-0.0963639712296823\n50.8471133071392,-0.0974609286275892 50.8201477285483,-0.140030845589332\n50.8208343077265))',4326))) * 100/ (0.00114901195862628)) as\npercentCover, \n datasetid as did from \n tbl_metadata_chunks \n where chunkgeometry &&\nGeometryFromText('POLYGON((-0.140030845589332\n50.8208343077265,-0.138958398039148 50.8478005422809,-0.0963639712296823\n50.8471133071392,-0.0974609286275892 50.8201477285483,-0.140030845589332\n50.8208343077265))',4326)\n and datasetid in (select datasetid from\ntbl_metadata_dataset where typeofdataid=1)\n group by did\n order by did desc \n )\n AS subq INNER JOIN tbl_metadata_dataset AS \n ds ON subq.did = ds.datasetid \n ORDER by ceil(subq.percentCover),1/ds.maxresolution\nDESC;\n\n\nSchema is\n\nTable 1\nCREATE TABLE public.tbl_metadata_dataset\n(\ndatasetname varchar(70) NOT NULL,\nmaxresolution real,\ntypeofdataid integer NOT NULL,\ndatasetid serial NOT NULL,\nCONSTRAINT \"PK_Dataset\" PRIMARY KEY (datasetid)\n);\n-- Indexes\nCREATE INDEX dsnameindex ON tbl_metadata_dataset USING btree\n(datasetname);-- Owner\nALTER TABLE public.tbl_metadata_dataset OWNER TO postgres;\n-- Triggers\nCREATE CONSTRAINT TRIGGER \"RI_ConstraintTrigger_2196039\" AFTER DELETE ON\ntbl_metadata_dataset FROM tbl_metadata_chunks NOT DEFERRABLE INITIALLY\nIMMEDIATE FOR EACH ROW EXECUTE PROCEDURE \"RI_FKey_noaction_del\"('dsid',\n'tbl_metadata_chunks', 'tbl_metadata_dataset', 'UNSPECIFIED',\n'datasetid', 'datasetid');\nCREATE CONSTRAINT TRIGGER \"RI_ConstraintTrigger_2196040\" AFTER UPDATE ON\ntbl_metadata_dataset FROM tbl_metadata_chunks NOT DEFERRABLE INITIALLY\nIMMEDIATE FOR EACH ROW EXECUTE PROCEDURE \"RI_FKey_noaction_upd\"('dsid',\n'tbl_metadata_chunks', 'tbl_metadata_dataset', 'UNSPECIFIED',\n'datasetid', 'datasetid');\n\n\nTable 2\n\nCREATE TABLE public.tbl_metadata_chunks\n(\nchunkid serial NOT NULL,\nchunkgeometry geometry NOT NULL,\ndatasetid integer NOT NULL,\nCONSTRAINT tbl_metadata_chunks_pkey PRIMARY KEY (chunkid),\nCONSTRAINT dsid FOREIGN KEY (datasetid) REFERENCES\ntbl_metadata_dataset(datasetid)\n);\n-- Indexes\nCREATE INDEX idx_dsid ON tbl_metadata_chunks USING btree (datasetid);\nCREATE UNIQUE INDEX tbl_metadata_chunks_idx2 ON tbl_metadata_chunks\nUSING btree (nativetlx, nativetly, datasetid);\nCREATE INDEX tbl_metadata_chunks_idx3 ON tbl_metadata_chunks USING gist\n(chunkgeometry);-- Owner\nALTER TABLE public.tbl_metadata_chunks OWNER TO postgres;\n-- Triggers\nCREATE CONSTRAINT TRIGGER \"RI_ConstraintTrigger_2194515\" AFTER DELETE ON\ntbl_metadata_chunks FROM tbl_metadata_chunkinfo NOT DEFERRABLE INITIALLY\nIMMEDIATE FOR EACH ROW EXECUTE PROCEDURE \"RI_FKey_restrict_del\"('fk',\n'tbl_metadata_chunkinfo', 'tbl_metadata_chunks', 'UNSPECIFIED',\n'chunkid', 'chunkid');\nCREATE CONSTRAINT TRIGGER \"RI_ConstraintTrigger_2194516\" AFTER UPDATE ON\ntbl_metadata_chunks FROM tbl_metadata_chunkinfo NOT DEFERRABLE INITIALLY\nIMMEDIATE FOR EACH ROW EXECUTE PROCEDURE \"RI_FKey_restrict_upd\"('fk',\n'tbl_metadata_chunkinfo', 'tbl_metadata_chunks', 'UNSPECIFIED',\n'chunkid', 'chunkid');\nCREATE CONSTRAINT TRIGGER \"RI_ConstraintTrigger_2196037\" AFTER INSERT ON\ntbl_metadata_chunks FROM tbl_metadata_dataset NOT DEFERRABLE INITIALLY\nIMMEDIATE FOR 
EACH ROW EXECUTE PROCEDURE \"RI_FKey_check_ins\"('dsid',\n'tbl_metadata_chunks', 'tbl_metadata_dataset', 'UNSPECIFIED',\n'datasetid', 'datasetid');\nCREATE CONSTRAINT TRIGGER \"RI_ConstraintTrigger_2196038\" AFTER UPDATE ON\ntbl_metadata_chunks FROM tbl_metadata_dataset NOT DEFERRABLE INITIALLY\nIMMEDIATE FOR EACH ROW EXECUTE PROCEDURE \"RI_FKey_check_upd\"('dsid',\n'tbl_metadata_chunks', 'tbl_metadata_dataset', 'UNSPECIFIED',\n'datasetid', 'datasetid');\n\n\n\n-----Original Message-----\nFrom: Frank Wiles [mailto:[email protected]] \nSent: 24 November 2006 17:05\nTo: Guido Neitzer\nCc: Gopal; [email protected]\nSubject: Re: [PERFORM] Postgres scalability and performance on windows\n\nOn Fri, 24 Nov 2006 09:22:45 +0100\nGuido Neitzer <[email protected]> wrote:\n\n> > effective_cache_size = 82728 # typically 8KB each\n> Hmm. I don't know what the real effect of this might be as the doc \n> states:\n> \n> \"This parameter has no effect on the size of shared memory allocated \n> by PostgreSQL, nor does it reserve kernel disk cache; it is used\n> only for estimation purposes.\"\n\n This is a hint to the optimizer about how much of the database may\n be in the OS level cache. \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n",
"msg_date": "Tue, 28 Nov 2006 12:22:31 -0000",
"msg_from": "\"Gopal\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres scalability and performance on windows"
},
{
"msg_contents": "\"Gopal\" <[email protected]> writes:\n> This is the query and the schema....\n> ...\n> select\n> sum(area(intersection(snaptogrid(chunkgeometry,0.00000001), \n> GeometryFromText('POLYGON((-0.140030845589332\n> 50.8208343077265,-0.138958398039148 50.8478005422809,-0.0963639712296823\n> 50.8471133071392,-0.0974609286275892 50.8201477285483,-0.140030845589332\n> 50.8208343077265))',4326))) * 100/ (0.00114901195862628)) as\n> percentCover, \n\nSo evidently area(intersection(snaptogrid(...))) takes about 300\nmicrosec per row. The PostGIS hackers would have to comment on whether\nthat seems out-of-line or not, and whether you can make it faster.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 28 Nov 2006 11:24:42 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres scalability and performance on windows "
},
{
"msg_contents": "\nOn Nov 28, 2006, at 8:24 AM, Tom Lane wrote:\n> \"Gopal\" <[email protected]> writes:\n>> This is the query and the schema....\n>> ...\n>> select\n>> sum(area(intersection(snaptogrid(chunkgeometry,0.00000001),\n>> GeometryFromText('POLYGON((-0.140030845589332\n>> 50.8208343077265,-0.138958398039148 \n>> 50.8478005422809,-0.0963639712296823\n>> 50.8471133071392,-0.0974609286275892 \n>> 50.8201477285483,-0.140030845589332\n>> 50.8208343077265))',4326))) * 100/ (0.00114901195862628)) as\n>> percentCover,\n>\n> So evidently area(intersection(snaptogrid(...))) takes about 300\n> microsec per row. The PostGIS hackers would have to comment on \n> whether\n> that seems out-of-line or not, and whether you can make it faster.\n\n\nThis is consistent with the typical cost for GIS geometry ops -- they \nare relatively expensive. When running queries against PostGIS \nfields for our apps, about half the CPU time will be spent inside the \ngeometry ops. Fortunately, there is significant opportunity for \nimprovement in the performance of the underlying code if anyone found \nthe time to optimize (and uglify) it for raw speed.\n\n\nCheers,\n\nJ. Andrew Rogers\n\n",
"msg_date": "Tue, 28 Nov 2006 08:51:05 -0800",
"msg_from": "\"J. Andrew Rogers\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres scalability and performance on windows"
}
] |
[
{
"msg_contents": "Hi,\n\nThanks for your suggestions. Here's an output of the explain analyse.\nI'll change the shared_buffers and look at the behaviour again.\n\n\"Limit (cost=59.53..59.53 rows=1 width=28) (actual time=15.681..15.681\nrows=1 loops=1)\"\n\" -> Sort (cost=59.53..59.53 rows=1 width=28) (actual\ntime=15.678..15.678 rows=1 loops=1)\"\n\" Sort Key: ceil(subq.percentcover), (1::double precision /\nds.maxresolution)\"\n\" -> Hash Join (cost=58.19..59.52 rows=1 width=28) (actual\ntime=15.630..15.663 rows=2 loops=1)\"\n\" Hash Cond: (\"outer\".datasetid = \"inner\".did)\"\n\" -> Seq Scan on tbl_metadata_dataset ds (cost=0.00..1.21\nrows=21 width=24) (actual time=0.006..0.021 rows=21 loops=1)\"\n\" -> Hash (cost=58.18..58.18 rows=1 width=12) (actual\ntime=15.591..15.591 rows=2 loops=1)\"\n\" -> Sort (cost=58.17..58.17 rows=1 width=117)\n(actual time=15.585..15.586 rows=2 loops=1)\"\n\" Sort Key: tbl_metadata_chunks.datasetid\"\n\" -> HashAggregate (cost=58.13..58.16 rows=1\nwidth=117) (actual time=15.572..15.573 rows=2 loops=1)\"\n\" -> Hash IN Join (cost=3.34..58.10\nrows=7 width=117) (actual time=0.261..0.544 rows=50 loops=1)\"\n\" Hash Cond: (\"outer\".datasetid =\n\"inner\".datasetid)\"\n\" -> Bitmap Heap Scan on\ntbl_metadata_chunks (cost=2.05..56.67 rows=14 width=117) (actual\ntime=0.204..0.384 rows=60 loops=1)\"\n\" Filter: (chunkgeometry &&\n'0103000020E6100000010000000500000058631EDF87ECC1BF608F3D1911694940A0958\nA8763C9C1BF535069BA846C494026B5F1284FABB8BFAB1577356E6C494094E1170D33F3B\n8BF7700CC99FA68494058631EDF87ECC1BF608F3D1 (..)\"\n\" -> Bitmap Index Scan on\ntbl_metadata_chunks_idx3 (cost=0.00..2.05 rows=14 width=0) (actual\ntime=0.192..0.192 rows=60 loops=1)\"\n\" Index Cond:\n(chunkgeometry &&\n'0103000020E6100000010000000500000058631EDF87ECC1BF608F3D1911694940A0958\nA8763C9C1BF535069BA846C494026B5F1284FABB8BFAB1577356E6C494094E1170D33F3B\n8BF7700CC99FA68494058631EDF87ECC (..)\"\n\" -> Hash (cost=1.26..1.26\nrows=10 width=4) (actual time=0.037..0.037 rows=10 loops=1)\"\n\" -> Seq Scan on\ntbl_metadata_dataset (cost=0.00..1.26 rows=10 width=4) (actual\ntime=0.005..0.024 rows=10 loops=1)\"\n\" Filter: (typeofdataid\n= 1)\"\n\"Total runtime: 15.871 ms\"\n\n\n\nGopal\n\n\n\n\n\nRe: Postgres scalability and performance on windows\n\n\n\nHi,\nThanks for your suggestions. Here’s an output of the explain analyse. 
I’ll change the shared_buffers and look at the behaviour again.\n\"Limit (cost=59.53..59.53 rows=1 width=28) (actual time=15.681..15.681 rows=1 loops=1)\"\n\" -> Sort (cost=59.53..59.53 rows=1 width=28) (actual time=15.678..15.678 rows=1 loops=1)\"\n\" Sort Key: ceil(subq.percentcover), (1::double precision / ds.maxresolution)\"\n\" -> Hash Join (cost=58.19..59.52 rows=1 width=28) (actual time=15.630..15.663 rows=2 loops=1)\"\n\" Hash Cond: (\"outer\".datasetid = \"inner\".did)\"\n\" -> Seq Scan on tbl_metadata_dataset ds (cost=0.00..1.21 rows=21 width=24) (actual time=0.006..0.021 rows=21 loops=1)\"\n\" -> Hash (cost=58.18..58.18 rows=1 width=12) (actual time=15.591..15.591 rows=2 loops=1)\"\n\" -> Sort (cost=58.17..58.17 rows=1 width=117) (actual time=15.585..15.586 rows=2 loops=1)\"\n\" Sort Key: tbl_metadata_chunks.datasetid\"\n\" -> HashAggregate (cost=58.13..58.16 rows=1 width=117) (actual time=15.572..15.573 rows=2 loops=1)\"\n\" -> Hash IN Join (cost=3.34..58.10 rows=7 width=117) (actual time=0.261..0.544 rows=50 loops=1)\"\n\" Hash Cond: (\"outer\".datasetid = \"inner\".datasetid)\"\n\" -> Bitmap Heap Scan on tbl_metadata_chunks (cost=2.05..56.67 rows=14 width=117) (actual time=0.204..0.384 rows=60 loops=1)\"\n\" Filter: (chunkgeometry && '0103000020E6100000010000000500000058631EDF87ECC1BF608F3D1911694940A0958A8763C9C1BF535069BA846C494026B5F1284FABB8BFAB1577356E6C494094E1170D33F3B8BF7700CC99FA68494058631EDF87ECC1BF608F3D1 (..)\"\n\" -> Bitmap Index Scan on tbl_metadata_chunks_idx3 (cost=0.00..2.05 rows=14 width=0) (actual time=0.192..0.192 rows=60 loops=1)\"\n\" Index Cond: (chunkgeometry && '0103000020E6100000010000000500000058631EDF87ECC1BF608F3D1911694940A0958A8763C9C1BF535069BA846C494026B5F1284FABB8BFAB1577356E6C494094E1170D33F3B8BF7700CC99FA68494058631EDF87ECC (..)\"\n\" -> Hash (cost=1.26..1.26 rows=10 width=4) (actual time=0.037..0.037 rows=10 loops=1)\"\n\" -> Seq Scan on tbl_metadata_dataset (cost=0.00..1.26 rows=10 width=4) (actual time=0.005..0.024 rows=10 loops=1)\"\n\" Filter: (typeofdataid = 1)\"\n\"Total runtime: 15.871 ms\"\n\n\nGopal",
"msg_date": "Fri, 24 Nov 2006 10:11:57 -0000",
"msg_from": "\"Gopal\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres scalability and performance on windows"
},
{
"msg_contents": "\"Gopal\" <[email protected]> writes:\n> Thanks for your suggestions. Here's an output of the explain analyse.\n\nWhat's the query exactly, and what are the schemas of the tables it\nuses (psql \\d descriptions would do)?\n\nThe actual runtime seems to be almost all spent in the hash aggregation\nstep:\n\n> -> HashAggregate (cost=58.13..58.16 rows=1 width=117) (actual time=15.572..15.573 rows=2 loops=1)\n> -> Hash IN Join (cost=3.34..58.10 rows=7 width=117) (actual time=0.261..0.544 rows=50 loops=1)\n\n15 msec seems like a long time to aggregate only 50 rows, so I'm\nwondering what aggregates are being calculated and over what\ndatatypes...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 24 Nov 2006 19:10:05 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres scalability and performance on windows "
}
] |
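Gopal mentions changing shared_buffers before re-testing. Purely as a point of reference, a minimal postgresql.conf sketch of the kind of values typically tried first on a dedicated box follows; the numbers are assumptions, since the thread never says how much RAM the Windows server has, and should be scaled to the machine (bare integers are 8kB pages for shared_buffers/effective_cache_size and kB for work_mem):

    shared_buffers = 32768          # ~256MB of shared cache (assumes a few GB of RAM)
    work_mem = 16384                # ~16MB per sort/hash step, e.g. the HashAggregate above
    effective_cache_size = 131072   # ~1GB; rough guess at what the OS will cache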
[
{
"msg_contents": "Hi everyone,\n\ndoes anyone have the TPC-H benchmark for PostgreSQL? Can you tell me where can i find the database and queries?\n\nThks,\nFelipe\n\n\n\n\n\n\nHi everyone,\n \ndoes anyone have the TPC-H benchmark for \nPostgreSQL? Can you tell me where can i find the database and \nqueries?\n \nThks,\nFelipe",
"msg_date": "Fri, 24 Nov 2006 11:47:06 -0300",
"msg_from": "\"Felipe Rondon Rocha\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "TPC-H Benchmark"
},
{
"msg_contents": "http://www.tpc.org/tpch/spec/tpch_20060831.tar.gz\n\n- Luke\n\nOn 11/24/06 8:47 AM, \"Felipe Rondon Rocha\" <[email protected]> wrote:\n\n> Hi everyone,\n> \n> does anyone have the TPC-H benchmark for PostgreSQL? Can you tell me where can\n> i find the database and queries?\n> \n> Thks,\n> Felipe\n> \n\n\n\n\n\nRe: [PERFORM] TPC-H Benchmark\n\n\nhttp://www.tpc.org/tpch/spec/tpch_20060831.tar.gz\n\n- Luke\n\nOn 11/24/06 8:47 AM, \"Felipe Rondon Rocha\" <[email protected]> wrote:\n\nHi everyone,\n \ndoes anyone have the TPC-H benchmark for PostgreSQL? Can you tell me where can i find the database and queries?\n \nThks,\nFelipe",
"msg_date": "Fri, 24 Nov 2006 10:30:57 -0600",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TPC-H Benchmark"
}
] |
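The kit behind the link Luke posted contains the dbgen and qgen tools from the TPC-H specification. A rough sketch of generating and loading the data, assuming the TPC-H tables have already been created in a database called tpch; the trailing-delimiter cleanup reflects how dbgen is commonly said to format its output, so treat the sed step as an assumption to verify:

    ./dbgen -s 1                                  # scale factor 1, roughly 1GB of *.tbl files
    sed 's/|$//' lineitem.tbl > lineitem.dat      # strip the trailing '|' dbgen emits
    psql tpch -c "COPY lineitem FROM '/path/to/lineitem.dat' WITH DELIMITER '|'"

The queries come from qgen, which substitutes parameters into the 22 query templates shipped with the kit.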
[
{
"msg_contents": "Hi all,\n\n I have a table with statistics with more than 15 million rows. I'd\nlike to delete the oldest statistics and this can be about 7 million\nrows. Which method would you recommend me to do this? I'd be also\ninterested in calculate some kind of statistics about these deleted\nrows, like how many rows have been deleted for date. I was thinking in\ncreating a function, any recommendations?\n\nThank you very much\n-- \nArnau\n",
"msg_date": "Fri, 24 Nov 2006 19:43:34 +0100",
"msg_from": "Arnau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Massive delete of rows, how to proceed?"
},
{
"msg_contents": "On 24/11/06, Arnau <[email protected]> wrote:\n> Hi all,\n>\n> I have a table with statistics with more than 15 million rows. I'd\n> like to delete the oldest statistics and this can be about 7 million\n> rows. Which method would you recommend me to do this? I'd be also\n> interested in calculate some kind of statistics about these deleted\n> rows, like how many rows have been deleted for date. I was thinking in\n> creating a function, any recommendations?\n\n\nCopy and drop old table. If you delete you will have a massive problem\nwith a bloated table and vacuum will not help unless you expect the\ntable to grow to this size regulally otherwise vacuum full will take\nages.\n\nPeter.\n",
"msg_date": "Sat, 25 Nov 2006 15:45:19 +0000",
"msg_from": "\"Peter Childs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Massive delete of rows, how to proceed?"
},
{
"msg_contents": "On 11/25/06, Arnau <[email protected]> wrote:\n> Hi all,\n>\n> I have a table with statistics with more than 15 million rows. I'd\n> like to delete the oldest statistics and this can be about 7 million\n> rows. Which method would you recommend me to do this? I'd be also\n> interested in calculate some kind of statistics about these deleted\n> rows, like how many rows have been deleted for date. I was thinking in\n> creating a function, any recommendations?\n\na function, like an sql statement, operates in a single transaction\nand you are locking quite a few records in this operation. merlin's\n3rd rule: long running transactions are (usually) evil.\n\nmy gut says moving the keeper records to a swap table, dropping the\nmain table, and swapping the tables back might be better. However,\nthis kind of stuff can cause problems with logged in sessions because\nof plan issues, beware.\n\ndo not write a function to delete records row by row unless you have\nexhausted all other courses of action.\n\nmerlin\n",
"msg_date": "Mon, 27 Nov 2006 14:14:28 +0530",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive delete of rows, how to proceed?"
}
] |
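A minimal sketch of the copy-and-swap approach Peter and Merlin describe, which also captures the per-date counts Arnau asked about; the table and column names (statistics, stat_date) are invented for illustration, and indexes, constraints and grants have to be recreated on the new table afterwards:

    BEGIN;
    -- keep only the rows that are young enough
    CREATE TABLE statistics_new AS
        SELECT * FROM statistics WHERE stat_date >= current_date - 365;
    -- how many rows per day are about to be dropped
    SELECT stat_date, count(*) AS deleted_rows
        FROM statistics
        WHERE stat_date < current_date - 365
        GROUP BY stat_date
        ORDER BY stat_date;
    DROP TABLE statistics;
    ALTER TABLE statistics_new RENAME TO statistics;
    COMMIT;

This keeps the heavy work to one pass over the table and leaves no bloat behind, at the cost of briefly locking out other sessions during the swap.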
[
{
"msg_contents": "I don't believe DROP is necessary; use TRUNCATE instead. No need to re-create dependent objects.\n\n\"Peter Childs\" <[email protected]> wrote ..\n> On 24/11/06, Arnau <[email protected]> wrote:\n> > Hi all,\n> >\n> > I have a table with statistics with more than 15 million rows. I'd\n> > like to delete the oldest statistics and this can be about 7 million\n> > rows. Which method would you recommend me to do this? I'd be also\n> > interested in calculate some kind of statistics about these deleted\n> > rows, like how many rows have been deleted for date. I was thinking in\n> > creating a function, any recommendations?\n> \n> \n> Copy and drop old table. If you delete you will have a massive problem\n> with a bloated table and vacuum will not help unless you expect the\n> table to grow to this size regulally otherwise vacuum full will take\n> ages.\n> \n> Peter.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n",
"msg_date": "Sat, 25 Nov 2006 16:35:01 -0800",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: Massive delete of rows, how to proceed?"
}
] |
[
{
"msg_contents": "Hi,\n\nAre there guidelines (or any empirical data) available how to determine\nhow often a table should be vacuumed for optimum performance or is this\nan experience / trial-and-error thing?\n\nTIA \n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\nweb: www.askesis.nl\n",
"msg_date": "Sun, 26 Nov 2006 12:24:17 +0100",
"msg_from": "Joost Kraaijeveld <[email protected]>",
"msg_from_op": true,
"msg_subject": "When to vacuum a table?"
},
{
"msg_contents": "Hi,\n\nFrom: http://www.postgresql.org/docs/7.4/interactive/sql-vacuum.html\n\n\n\"VACUUM reclaims storage occupied by deleted tuples. In normal\nPostgreSQLoperation, tuples that are deleted or obsoleted by an update\nare not\nphysically removed from their table; they remain present until a VACUUM is\ndone. Therefore it's necessary to do VACUUM periodically, especially on\nfrequently-updated tables.\"\n\n\"The \"vacuum analyze\" form additionally collects statistics on the\ndisbursion of columns in the database, which the optimizer uses when it\ncalculates just how to execute queries. The availability of this data can\nmake a tremendous difference in the execution speed of queries. This command\ncan also be run from cron, but it probably makes more sense to run this\ncommand as part of your nightly backup procedure - if \"vacuum\" is going to\nscrew up the database, you'd prefer it to happen immediately after (not\nbefore!) you've made a backup! The \"vacuum\" command is very reliable, but\nconservatism is the key to good system management. So, if you're using the\nexport procedure described above, you don't need to do this extra step\".\n\nAll its tables constantly manipulated (INSERT, UPDATE, DELETE) they need a\nVACUUM, therefore the necessity to execute at least one time to the day\nnormally of dawn if its database will be very great .\n\n[],s\n\nMarcelo Costa\nSecretaria Executiva de Educação do Pará\nAmazonia - Pará - Brazil\n\n2006/11/26, Joost Kraaijeveld <[email protected]>:\n>\n> Hi,\n>\n> Are there guidelines (or any empirical data) available how to determine\n> how often a table should be vacuumed for optimum performance or is this\n> an experience / trial-and-error thing?\n>\n> TIA\n>\n> --\n> Groeten,\n>\n> Joost Kraaijeveld\n> Askesis B.V.\n> Molukkenstraat 14\n> 6524NB Nijmegen\n> tel: 024-3888063 / 06-51855277\n> fax: 024-3608416\n> web: www.askesis.nl\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\n\n\n-- \nMarcelo Costa\n\nHi,From: http://www.postgresql.org/docs/7.4/interactive/sql-vacuum.html\"VACUUM reclaims storage occupied by deleted tuples. In normal \nPostgreSQL\noperation, tuples that are deleted or obsoleted by an update are not\nphysically removed from their table; they remain present until a VACUUM is done. Therefore it's necessary to do VACUUM periodically, \nespecially on frequently-updated tables.\"\"The \"vacuum analyze\" form\n additionally collects statistics on the \n disbursion of columns in the database, which the optimizer uses when\n it calculates just how to execute queries. The availability of this\n data can make a tremendous difference in the execution speed of\n queries. This command can also be run from cron, but it probably makes\n more sense to run this command as part of your nightly backup\n procedure - if \"vacuum\" is going to screw up the database, you'd\n prefer it to happen immediately after (not before!) you've made a\n backup! The \"vacuum\" command is very reliable, but conservatism is\n the key to good system management. 
So, if you're using the export\n procedure described above, you don't need to do this extra\n step\".\nAll its tables constantly manipulated\n\t (INSERT, UPDATE, DELETE) they need a VACUUM, therefore the necessity to execute at least one time to the day normally of dawn if its database will be very great\n.[],sMarcelo CostaSecretaria Executiva de Educação do ParáAmazonia - Pará - Brazil 2006/11/26, Joost Kraaijeveld <\[email protected]>:Hi,\nAre there guidelines (or any empirical data) available how to determinehow often a table should be vacuumed for optimum performance or is thisan experience / trial-and-error thing?TIA--Groeten,\nJoost KraaijeveldAskesis B.V.Molukkenstraat 146524NB Nijmegentel: 024-3888063 / 06-51855277fax: 024-3608416web: www.askesis.nl---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [email protected] so that your message can get through to the mailing list cleanly\n-- Marcelo Costa",
"msg_date": "Sun, 26 Nov 2006 09:43:11 -0300",
"msg_from": "\"Marcelo Costa\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When to vacuum a table?"
},
{
"msg_contents": "On Sun, Nov 26, 2006 at 09:43:11AM -0300, Marcelo Costa wrote:\n> All its tables constantly manipulated (INSERT, UPDATE, DELETE) they need a\n> VACUUM\n\nJust a minor clarification here: INSERT does not create dead rows, only\nUPDATE and DELETE do. Thus, if you only insert rows, you do not need to\nvacuum (although you probably need to analyze).\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Sun, 26 Nov 2006 14:11:47 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When to vacuum a table?"
},
{
"msg_contents": "Sorry,\n\nrealy you are correct.\n\n[],s\n\nMarcelo Costa.\n\n2006/11/26, Steinar H. Gunderson <[email protected]>:\n>\n> On Sun, Nov 26, 2006 at 09:43:11AM -0300, Marcelo Costa wrote:\n> > All its tables constantly manipulated (INSERT, UPDATE, DELETE) they need\n> a\n> > VACUUM\n>\n> Just a minor clarification here: INSERT does not create dead rows, only\n> UPDATE and DELETE do. Thus, if you only insert rows, you do not need to\n> vacuum (although you probably need to analyze).\n>\n> /* Steinar */\n> --\n> Homepage: http://www.sesse.net/\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\n\n\n-- \nMarcelo Costa\n\nSorry, realy you are correct.[],sMarcelo Costa.2006/11/26, Steinar H. Gunderson <[email protected]>:\nOn Sun, Nov 26, 2006 at 09:43:11AM -0300, Marcelo Costa wrote:> All its tables constantly manipulated (INSERT, UPDATE, DELETE) they need a\n> VACUUMJust a minor clarification here: INSERT does not create dead rows, onlyUPDATE and DELETE do. Thus, if you only insert rows, you do not need tovacuum (although you probably need to analyze).\n/* Steinar */--Homepage: http://www.sesse.net/---------------------------(end of broadcast)---------------------------TIP 1: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your message can get through to the mailing list cleanly\n-- Marcelo Costa",
"msg_date": "Sun, 26 Nov 2006 10:46:41 -0300",
"msg_from": "\"Marcelo Costa\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When to vacuum a table?"
},
{
"msg_contents": "\nOn 26-Nov-06, at 8:11 AM, Steinar H. Gunderson wrote:\n\n> On Sun, Nov 26, 2006 at 09:43:11AM -0300, Marcelo Costa wrote:\n>> All its tables constantly manipulated (INSERT, UPDATE, DELETE) \n>> they need a\n>> VACUUM\n>\n> Just a minor clarification here: INSERT does not create dead rows, \n> only\n> UPDATE and DELETE do. Thus, if you only insert rows, you do not \n> need to\n> vacuum (although you probably need to analyze).\n\nNot entirely true. An insert & rollback will create dead rows. If you \nattempt and fail a large number of insert transactions then you will \nstill need to vacuum.\n",
"msg_date": "Sun, 26 Nov 2006 09:24:29 -0500",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When to vacuum a table?"
},
{
"msg_contents": "On Sun, Nov 26, 2006 at 09:24:29AM -0500, Rod Taylor wrote:\n> attempt and fail a large number of insert transactions then you will \n> still need to vacuum.\n\nAnd you still need to vacuum an insert-only table sometimes, because\nof the system-wide vacuum requirement.\n\nA\n\n\n-- \nAndrew Sullivan | [email protected]\nThe whole tendency of modern prose is away from concreteness.\n\t\t--George Orwell\n",
"msg_date": "Sun, 26 Nov 2006 09:33:52 -0500",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When to vacuum a table?"
},
{
"msg_contents": "Rod Taylor wrote:\n>> Just a minor clarification here: INSERT does not create dead rows, only\n>> UPDATE and DELETE do. Thus, if you only insert rows, you do not need to\n>> vacuum (although you probably need to analyze).\n\nIs there no real-time garbage collection at all in Postgres? And if so, is this because nobody has had time to implement garbage collection, or for a more fundamental reason, or because VACUUM is seen as sufficient?\n\nI'm just curious ... Vacuum has always seemed to me like an ugly wart on the pretty face of Postgres. (I say this even though I implemented an identical solution on a non-relational chemistry database system a long time ago. I didn't like it then, either.) \n\nCraig\n",
"msg_date": "Sun, 26 Nov 2006 08:26:52 -0800",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When to vacuum a table?"
},
{
"msg_contents": "\"Craig A. James\" <[email protected]> writes:\n> Is there no real-time garbage collection at all in Postgres?\n\nNo.\n\n> And if so, is this because nobody has had time to implement garbage\n> collection, or for a more fundamental reason, or because VACUUM is\n> seen as sufficient?\n\nIf you really want to know, read the mountains of (mostly) junk that\nhave been written about replacing VACUUM in pgsql-hackers. The short\nanswer (with apologies to Winston Churchill) is that VACUUM is the worst\nsolution, except for all the others that have been suggested.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 26 Nov 2006 12:06:06 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When to vacuum a table? "
},
{
"msg_contents": "\n> If you really want to know, read the mountains of (mostly) junk that\n> have been written about replacing VACUUM in pgsql-hackers. The short\n> answer (with apologies to Winston Churchill) is that VACUUM is the worst\n> solution, except for all the others that have been suggested.\n\nThe lesser of 50 evils? ;)\n\nJoshua D. Drake\n\n\n\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n",
"msg_date": "Sun, 26 Nov 2006 09:47:38 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When to vacuum a table?"
},
{
"msg_contents": "On Sun, Nov 26, 2006 at 12:24:17PM +0100, Joost Kraaijeveld wrote:\n> Hi,\n> \n> Are there guidelines (or any empirical data) available how to determine\n> how often a table should be vacuumed for optimum performance or is this\n> an experience / trial-and-error thing?\n\nMost of the time I just turn autovac on, set the scale factors to\n0.2/0.1 and the thresholds to 300/200 and turn on vacuum_cost_delay\n(usually set to 20). That's a pretty decent setup for most applications.\nIt also doesn't hurt to run a periodic vacuumdb -av and look at the tail\nend of it's output to make sure you have adequate FSM settings.\n\nThe exception to that rule is for tables that are very small and have a\nlot of churn; I'll vacuum those by hand very frequently (every 60\nseconds or better).\n-- \nJim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n",
"msg_date": "Sun, 26 Nov 2006 19:01:35 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When to vacuum a table?"
},
{
"msg_contents": ">>> On Sun, Nov 26, 2006 at 5:24 AM, in message\n<1164540257.7902.0.camel@panoramix>, Joost Kraaijeveld\n<[email protected]> wrote: \n> \n> Are there guidelines (or any empirical data) available how to\ndetermine\n> how often a table should be vacuumed for optimum performance or is\nthis\n> an experience / trial- and- error thing?\n \nFor most of our databases we use a daily \"VACUUM ANALYZE VERBOSE;\". We\ngrep the results to show large numbers of removable or dead rows and to\nshow the fsm numbers at the end. There are a few small tables with high\nupdate rates which this doesn't cover. To handle that we set the\nautovacuum to 0.2/0.1 and 1/1 on a ten second interval and do a daily\nCLUSTER of these tables. We really don't have many tables which can hit\nthese autovacuum thresholds in one day.\n \nWe have databases which are 400 GB with the vast majority of that being\nin tables which are insert-only except for a weekly purge of data over a\nyear old. We do nightly vacuums on the few tables with update/delete\nactivity, and a weekly vacuum of the whole database -- right after the\ndelete of old rows from the big tables.\n \n-Kevin\n \n\n",
"msg_date": "Mon, 27 Nov 2006 12:59:48 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When to vacuum a table?"
}
] |
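For reference, the settings Jim describes map onto the 8.1/8.2-era autovacuum parameters roughly as in the sketch below; the exact numbers are workload-dependent, and the small high-churn tables he and Kevin mention still benefit from an extra manual VACUUM on a tight schedule. (On 8.1 the row-level statistics collector also has to be enabled for autovacuum to work.)

    autovacuum = on
    autovacuum_naptime = 60                  # seconds between autovacuum checks
    autovacuum_vacuum_scale_factor = 0.2     # vacuum after ~20% of a table has changed
    autovacuum_analyze_scale_factor = 0.1    # analyze after ~10% has changed
    autovacuum_vacuum_threshold = 300        # minimum updated/deleted tuples before vacuum
    autovacuum_analyze_threshold = 200       # minimum changed tuples before analyze
    vacuum_cost_delay = 20                   # ms; throttles vacuum I/O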
[
{
"msg_contents": "\nThe following bug has been logged online:\n\nBug reference: 2784\nLogged by: Michael Simms\nEmail address: [email protected]\nPostgreSQL version: 8.1.4\nOperating system: Linux kernel 2.6.12\nDescription: Performance serious degrades over a period of a month\nDetails: \n\nOK, we have a database that runs perfectly well after a dump and restore,\nbut over a period of a month or two, it just degrades to the point of\nuselessness.\nvacuumdb -a is run every 24 hours. We have also run for months at a time\nusing -a -z but the effect doesnt change.\n\nThe database is for a counter, not the most critical part of the system, but\na part of the system nonetheless. Other tables we have also degrade over\ntime, but the counter is the most pronounced. There seems to be no common\nfeature of the tables that degrade. All I know is that a series of queries\nthat are run on the database every 24 hours, after a dump/restore takes 2\nhours. Now, 2 months after, it is taking over 12. We are seriously\nconsidering switching to mysql to avoid this issue. \n\nBut I wanted to let you guys have a chance to resolve the issue, we dont\nhave the manpower or expertise to fix it ourselves. I am willing to let\nsomeone from the postgres development team have access to our server for a\nperiod of time to have a look at the issue. This would need to be someone\nextremely trustworthy as the database contains confidential client\ninformation.\n\nI am willing to wait 2 days for a response and for someone to take a look at\nthe problem. The performance degridation isnt something we can leave as it\nis for long, and in 2 days time I will have to dump and restore the\ndatabase, which will reset it to a good state, and will mean I will have to\nresort to the mysql switch instead.\n\nSorry this sounds a bit rushed, but it cant be helped, this is causing\n*problems* and we need a solution, either a fix or a switch to another\ndatabase. Id rather a fix cos I like postgres, but Im willing to bite the\nmysql bullet if I have to...\n",
"msg_date": "Sun, 26 Nov 2006 16:35:52 GMT",
"msg_from": "\"Michael Simms\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "BUG #2784: Performance serious degrades over a period of a month"
},
{
"msg_contents": "\"Michael Simms\" <[email protected]> writes:\n> OK, we have a database that runs perfectly well after a dump and restore,\n> but over a period of a month or two, it just degrades to the point of\n> uselessness.\n> vacuumdb -a is run every 24 hours. We have also run for months at a time\n> using -a -z but the effect doesnt change.\n\nYou probably need significantly-more-frequent vacuuming. Have you\nconsidered autovacuum?\n\nThis is not a bug --- you'd get better help on the pgsql-performance\nmailing list.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 27 Nov 2006 21:59:06 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #2784: Performance serious degrades over a period of a month "
},
{
"msg_contents": "This really should have been asked on pgsql-performance and would probably\nget a better response there..\n\nOn Sun, Nov 26, 2006 at 16:35:52 +0000,\n Michael Simms <[email protected]> wrote:\n> PostgreSQL version: 8.1.4\n> Operating system: Linux kernel 2.6.12\n> Description: Performance serious degrades over a period of a month\n> Details: \n> \n> OK, we have a database that runs perfectly well after a dump and restore,\n> but over a period of a month or two, it just degrades to the point of\n> uselessness.\n> vacuumdb -a is run every 24 hours. We have also run for months at a time\n> using -a -z but the effect doesnt change.\n> \n\nThis sounds like you either need to increase your FSM setting or vacuum\nmore often. I think vacuumdb -v will give you enough information to tell\nif FSM is too low at the frequency you are vacuuming.\n\n> The database is for a counter, not the most critical part of the system, but\n> a part of the system nonetheless. Other tables we have also degrade over\n> time, but the counter is the most pronounced. There seems to be no common\n> feature of the tables that degrade. All I know is that a series of queries\n> that are run on the database every 24 hours, after a dump/restore takes 2\n> hours. Now, 2 months after, it is taking over 12. We are seriously\n> considering switching to mysql to avoid this issue. \n\nYou probably will want to vacuum the counter table more often than the other\ntables in the database. Depending on how often the counter(s) are being\nupdated and how many separate counters are in the table you might want to\nvacuum that table as often as once a minute.\n\nDepending on your requirements you might also want to consider using a sequence\ninstead of a table row for the counter.\n",
"msg_date": "Mon, 27 Nov 2006 22:26:27 -0600",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #2784: Performance serious degrades over a period of a month"
},
{
"msg_contents": "Michael Simms wrote:\n> The following bug has been logged online:\n> \n> Bug reference: 2784\n> Logged by: Michael Simms\n> Email address: [email protected]\n> PostgreSQL version: 8.1.4\n> Operating system: Linux kernel 2.6.12\n> Description: Performance serious degrades over a period of a month\n> Details: \n> \n> OK, we have a database that runs perfectly well after a dump and restore,\n> but over a period of a month or two, it just degrades to the point of\n> uselessness.\n> vacuumdb -a is run every 24 hours. We have also run for months at a time\n> using -a -z but the effect doesnt change.\n\nYou might have a hung transaction that never finishes open, which \nprevents vacuum from removing old tuple versions. Or you might have too \nlow FSM settings as others suggested.\n\nI'd try running VACUUM VERBOSE by hand, and taking a good look at the \noutput. If there's nothing obviously wrong with it, please send the \noutput back to the list (or pgsql-performance, as Tom suggested), and \nmaybe we can help.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n",
"msg_date": "Tue, 28 Nov 2006 11:08:15 +0000",
"msg_from": "\"Heikki Linnakangas\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #2784: Performance serious degrades over a period of a month"
},
{
"msg_contents": "Bruno Wolff III <[email protected]> wrote:\n>\n> This really should have been asked on pgsql-performance and would probably\n> get a better response there..\n> \n> On Sun, Nov 26, 2006 at 16:35:52 +0000,\n> Michael Simms <[email protected]> wrote:\n> > PostgreSQL version: 8.1.4\n> > Operating system: Linux kernel 2.6.12\n> > Description: Performance serious degrades over a period of a month\n> > Details: \n> > \n> > OK, we have a database that runs perfectly well after a dump and restore,\n> > but over a period of a month or two, it just degrades to the point of\n> > uselessness.\n> > vacuumdb -a is run every 24 hours. We have also run for months at a time\n> > using -a -z but the effect doesnt change.\n> > \n> \n> This sounds like you either need to increase your FSM setting or vacuum\n> more often. I think vacuumdb -v will give you enough information to tell\n> if FSM is too low at the frequency you are vacuuming.\n> \n> > The database is for a counter, not the most critical part of the system, but\n> > a part of the system nonetheless. Other tables we have also degrade over\n> > time, but the counter is the most pronounced. There seems to be no common\n> > feature of the tables that degrade. All I know is that a series of queries\n> > that are run on the database every 24 hours, after a dump/restore takes 2\n> > hours. Now, 2 months after, it is taking over 12. We are seriously\n> > considering switching to mysql to avoid this issue. \n> \n> You probably will want to vacuum the counter table more often than the other\n> tables in the database. Depending on how often the counter(s) are being\n> updated and how many separate counters are in the table you might want to\n> vacuum that table as often as once a minute.\n> \n> Depending on your requirements you might also want to consider using a sequence\n> instead of a table row for the counter.\n\nJust to throw it in to the mix: you might also be in a usage pattern that would\nbenefit from a scheduled reindex every so often.\n",
"msg_date": "Tue, 28 Nov 2006 06:34:39 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #2784: Performance serious degrades over a period"
}
] |
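The FSM check suggested above boils down to running a manual vacuum and reading the last few lines it prints. A sketch, with the caveat that the exact wording of the output varies between versions and the configuration values below are placeholders to be sized from that output (both settings need a server restart on 8.1):

    VACUUM VERBOSE;
    -- the tail of the output summarises free-space-map usage: how many page
    -- slots are needed versus max_fsm_pages; a warning there means the FSM is too small

    # postgresql.conf
    max_fsm_pages = 500000
    max_fsm_relations = 2000

A frequently-updated counter table is also a good candidate for its own cron'd "VACUUM countertable;" every minute or so, as Bruno suggests; the table name here is of course illustrative.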
[
{
"msg_contents": "Hi.\n\nAfter I had my hands on an Intel MacBook Pro (2 GHz Core Duo, 1GB \nRAM), I made some comparisons between the machines I have here at the \ncompany.\n\nFor the ease of it and the simple way of reproducing the tests, I \ntook pgbench for the test.\n\nKonfigurations:\n\n1. PowerMac G5 (G5 Mac OS X) with two 1.8 CPUs (not a dual core), \n1.25GB RAM, Mac OS X Tiger 10.4.8, Single S-ATA harddrive, fsync on\n\n2. PowerMac G5 from above but with Yellow Dog Linux 4.1\n\n3. MacBook Pro, 2GHz Core Duo, 1GB RAM, Mac OS X Tiger 10.4.8, \ninternal harddrive (5k4, 120GB).\n\nPostgreSQL version is 8.2beta3 compiled with same settings on all \nplattforms, on Mac OS X Spotlight was turned off, same memory \nsettings on all plattforms (320MB of shmmax on Mac OS X, 128MB \nshared_buffers for PostgreSQL).\n\nHere we go:\n\nResults with 2 concurrent connections:\n\nG5 Mac OS X: 495\nG5 YD Linux: 490 - 520\nMBP X: 1125\n\nResults with 10 concurrent connections:\n\nG5 Mac OS X: 393\nG5 YD Linux: 410 - 450\nMBP: 1060\n\nResults with 50 concurrent connections:\n\nG5 Mac OS X: 278\nG5 YD Linux: 232\nMBP X: 575\n\nResults with 90 concurrent connections:\n\nMac OS X: 210\nYD Linux: 120\nMBP X: 378\n\nThe tests were taken with:\n\n[cug@localhost ~]$ for n in `seq 0 9`; do pgbench -U postgres -c 10 - \nt 100 benchdb; done | perl -nle '/tps = (\\d+)/ or next; $cnt++; $tps \n+=$1; END{ $avg = $tps/$cnt; print $avg }'\n\nYesterday a friend had a chance to test with a 2.16GHz MacBook Pro \nCore 2 Duo (Mac OS X, 5k4 160GB internal harddrive):\n\n10 connections: ~1150 tps\n50 connections: ~640 tps\n\nTo quantify the performance hit from the harddrive we tested also \nwith fsync off:\n\n10 connections: ~1500 tps\n50 connections: ~860 tps\n\nThe G5 with fsync off had only 5% more performance, so the harddrive \ndidn't have such a high impact on the performance there.\n\nOkay, nothing really special so far, but interesting enough. Only \nwanted to share the results with you.\n\ncug\n",
"msg_date": "Mon, 27 Nov 2006 09:08:58 +0100",
"msg_from": "Guido Neitzer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Plattform comparison (lies, damn lies and benchmarks)"
}
] |
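For anyone wanting to repeat the comparison, the benchmark database is created with pgbench's own initialization mode. The scale factor below is an assumption (Guido doesn't say which he used), and results are only comparable between runs with the same scale:

    createdb benchdb
    pgbench -i -s 10 -U postgres benchdb       # populate; scale 10 = ~1,000,000 account rows
    pgbench -U postgres -c 10 -t 100 benchdb   # 10 clients, 100 transactions each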
[
{
"msg_contents": "Gentlemen,\n\n\nI use a modeling language which compiles down to a fairly verbose SQL DDL. If I\nuse semantically transparent identifiers in the modeling language, the compiler\neasily generates identifiers much longer than the default value of NAMEDATALEN.\nI am considering the possibility of rebuilding the server with NAMEDATALEN equal\nto 256. I have seen an interesting thread [1] about the performance impact of \nraising NAMEDATALEN, but it did not seem conclusive. Are there commonly accepted \ncorrelations between NAMEDATALEN and server performance? How would raising its \nvalue impact disk usage?\n\n\n[1] http://archives.postgresql.org/pgsql-hackers/2002-04/msg01253.php\n\n-- \n*********************************************************************\n\nIng. Alessandro Baretta\n\nStudio Baretta\nhttp://studio.baretta.com/\n\nConsulenza Tecnologica e Ingegneria Industriale\nTechnological Consulting and Industrial Engineering\n\nHeadquarters\ntel. +39 02 370 111 55\nfax. +39 02 370 111 54\n\nLab\ntel. +39 02 9880 271\nfax. +39 02 9828 0296\n",
"msg_date": "Wed, 29 Nov 2006 12:31:55 +0100",
"msg_from": "Alessandro Baretta <[email protected]>",
"msg_from_op": true,
"msg_subject": "NAMEDATALEN and performance"
},
{
"msg_contents": "Alessandro Baretta <[email protected]> writes:\n> I am considering the possibility of rebuilding the server with\n> NAMEDATALEN equal to 256. I have seen an interesting thread [1] about\n> the performance impact of raising NAMEDATALEN, but it did not seem\n> conclusive.\n\nMore to the point, tests done on 7.2-era code shouldn't be assumed to be\nrelevant to modern PG releases. I think you'll need to do your own\nbenchmarking if you want to find out the costs of doing this.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 29 Nov 2006 10:50:45 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NAMEDATALEN and performance "
},
{
"msg_contents": "Tom Lane wrote:\n> Alessandro Baretta <[email protected]> writes:\n>> I am considering the possibility of rebuilding the server with\n>> NAMEDATALEN equal to 256. I have seen an interesting thread [1] about\n>> the performance impact of raising NAMEDATALEN, but it did not seem\n>> conclusive.\n> \n> More to the point, tests done on 7.2-era code shouldn't be assumed to be\n> relevant to modern PG releases. I think you'll need to do your own\n> benchmarking if you want to find out the costs of doing this.\n> \n> \t\t\tregards, tom lane\n> \n\nThat's sensible. Now, performance in my case is much less critical than the \nrobustness and scalability of the application, so I guess I could just leave it \nto that and go with raising namedatalen. Yet, I would like to receive some \ninsight on the implications of such a choice. Beside the fact that the parser \nhas more work to do to decipher queries and whatnot, what other parts of the \nserver would be stressed by a verbose naming scheme? Where should I expect the \nbottlenecks to be?\n\nAlso, I could imagine a solution where I split the names in a schema part and a \nlocal name, thereby refactoring my namespace. I'd get the approximate effect of \ndoubling namedatalen, but at the expense of having a much wider searchpath. \nBased on your experience, which of two possible strategies is more prone to \ncause trouble?\n\nAlex\n\n\n-- \n*********************************************************************\n\nIng. Alessandro Baretta\n\nStudio Baretta\nhttp://studio.baretta.com/\n\nConsulenza Tecnologica e Ingegneria Industriale\nTechnological Consulting and Industrial Engineering\n\nHeadquarters\ntel. +39 02 370 111 55\nfax. +39 02 370 111 54\n\nLab\ntel. +39 02 9880 271\nfax. +39 02 9828 0296\n",
"msg_date": "Fri, 01 Dec 2006 09:55:37 +0100",
"msg_from": "Alessandro Baretta <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: NAMEDATALEN and performance"
},
{
"msg_contents": "Alessandro Baretta <[email protected]> writes:\n> ... I would like to receive some \n> insight on the implications of such a choice. Beside the fact that the parser \n> has more work to do to decipher queries and whatnot, what other parts of the \n> server would be stressed by a verbose naming scheme? Where should I expect the \n> bottlenecks to be?\n\nI suppose the thing that would be notable is bloat in the size of the\nsystem catalogs and particularly their indexes; hence extra I/O.\n\n> Also, I could imagine a solution where I split the names in a schema part and a \n> local name, thereby refactoring my namespace. I'd get the approximate effect of \n> doubling namedatalen, but at the expense of having a much wider searchpath. \n\nThis might be worth thinking about, simply because it'd avoid the need\nfor custom executables.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 01 Dec 2006 10:16:17 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NAMEDATALEN and performance "
}
] |
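The schema-splitting alternative Tom calls worth thinking about looks roughly like the sketch below; the identifiers are invented, but the idea is that the generated common prefix moves into the schema name so each part stays under the default 63-byte limit without a custom build (NAMEDATALEN itself is a compile-time constant in the server headers, so raising it means rebuilding the server and keeping any client-side code that embeds the value in sync):

    CREATE SCHEMA order_management_subsystem;
    CREATE TABLE order_management_subsystem.customer_invoice_line_item (
        id integer PRIMARY KEY
    );
    SET search_path TO order_management_subsystem, public;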
[
{
"msg_contents": "Hi List;\n\nI have a client looking to host/co-locate multiple PostgreSQL clusters \n(inclusive of PL/pgSQL application code) per server. I did some co-location \nwork several years back with one of the bigger telco's and remember there \nwere dire consequences for not carefully evaluating the expected resource \nusage of each database/app and matching apps based on some common sense (i.e. \nmatch a memory intensive app with a CPU intensive app)\n\nMy question is:\nDoes anyone have any templates, guidelines, thoughts, etc with respect to how \nto eval and match multiple databases for a single server. Such as a checklist \nof resource items to look at and guidelines per how to line up db clusters \nthat should play nicely together?\n\nThanks in advance...\n",
"msg_date": "Wed, 29 Nov 2006 11:03:03 -0700",
"msg_from": "Kevin Kempter <[email protected]>",
"msg_from_op": true,
"msg_subject": "OT - how to size/match multiple databases/apps for a single server"
}
] |
[
{
"msg_contents": "posting this here instead of the GENERAL list...richard is right, this is more of a performance question than a general question.\n\nthanks,\n \n____________________________________\nMark Jensen\n\n----- Forwarded Message ----\nFrom: Mark Jensen <[email protected]>\nTo: Richard Huxton <[email protected]>\nCc: [email protected]\nSent: Wednesday, November 29, 2006 2:40:58 PM\nSubject: Re: [GENERAL] Including unique users in huge data warehouse in Postgresql...\n\nthanks Richard. I've talking to Ron Mayer about this as well offline. I think the main problem is dedupping users, and not being able to aggregate visits in the fact table. that's where most of the query time takes place. but the business guys just won't accept using visits, but not actual uniques dedupped. if visits was in the fact table as an integer i could sum up, i'd be fine. Ron also said he's put the unique user ids into arrays so it's faster to count them, but placing them into aggregate tables. only problem is i'd still have to know what things to aggregate by to create these, which is impossible since we have so many dimensions and facts that are going to be ad-hoc. i have about 20 summary tables i create per day, but most of the time, they have something new they want to query that's not in summary. and will never come up again.\n\nI tried installing Bizgres using their Bizgres loader and custom postgresql package with bitmap indexes, but doesn't seem to increase performance \"that\" much. or as much as i would like compared to the normal postgresql install. loads are pretty slow when using their bitmap indexes compared to just using btree indexes in the standard postgresql install. Query time is pretty good, but i also have to make sure load times are acceptable as well. and had some problems with the bizgres loader losing connection to the database for no reason at all, but when using the normal copy command in 8.2RC1, works fine. love the new query inclusion in the copy command by the way, makes it so easy to aggregrate hourly fact tables into daily/weekly/monthly in one shot :)\n\nand yes, work_mem is optimized as much as possible. postgresql is using about 1.5 gigs of working memory when it runs these queries. looking into getting 64 bit hardware with 16-32 gigs of RAM so i can throw most of this into memory to speed it up. we're also using 3par storage which is pretty fast. we're going to try and put postgresql on a local disk array using RAID 5 as well to see if it makes a difference.\n\nand yes, right now, these are daily aggregate tables summed up from the hourly. so about 17 million rows per day. hourly fact tables are impossible to query right now, so i have to at least put these into daily fact tables. so when you have 30 days in this database, then yes, table scans are going to be huge, thus why it's taking so long, plus dedupping on unique user id :)\n\nand you're right, i should put this on the performance mailing list... see you there :)\n\nthanks guys.\n \n____________________________________\nMark Jensen\n\n----- Original Message ----\nFrom: Richard Huxton <[email protected]>\nTo: Mark Jensen <[email protected]>\nCc: [email protected]\nSent: Wednesday, November 29, 2006 2:29:35 PM\nSubject: Re: [GENERAL] Including unique users in huge data warehouse in Postgresql...\n\nMark Jensen wrote:\n> So i've been given the task of designing a data warehouse in\n> either Postgresql or Mysql for our clickstream data for our sites. 
I\n> started with Mysql but the joins in Mysql are just way too slow\n> compared to Postgresql when playing with star schemas.\n\nMark - it's not my usual area, but no-one else has picked up your \nposting, so I'll poke my nose in. The other thing you might want to do \nis post this on the performance list - that's probably the best place. \nMight be worth talking to those at www.bizgres.org too (although I think \nthey all hang out on the performance list).\n\n > I can't say\n> which sites i'm working on, but we get close to 3-5 million uniques\n> users per day, so over time, that's a lot of unique users to keep\n> around and de-dup your fact tables by. Need to be able to query normal\n> analytics like:\n<snip>\n\n> i've\n> made a lot of optimizations in postgresql.conf by playing with work_mem\n> and shared_buffers and such and i think the database is using as much\n> as it can disk/memory/cpu wise. \n\nBig work_mem, I'm guessing. Limiting factor is presumably disk I/O.\n\n<snip>\n> here's a sample query that takes a while to run... just a simple report that shows gender by area of the site.\n> \n> select A.gender as gender, B.area as area, sum(C.imps) as imps, sum(C.clicks) as clicks, count(distinct(C.uu_id)) as users\n> from uus as A, areas as B, daily_area_fact as C\n> where A.uu_id = C.uu_id\n> and B.area_id = C.area_id\n> group by gender,area;\n> \n> so\n> by just having one day of data, with 3,168,049 rows in the user\n> dimension table (uus), 17,213,420 in the daily_area_fact table that\n> joins all the dimension tables, takes about 15 minutes. if i had 30-90\n> days in this fact table, who knows how long this would take... i know\n> doing a distinct on uu_id is very expensive, so that's the main problem\n> here i guess and would want to know if anyone else is doing it this way\n> or better.\n\nIn the end, I'd suspect the seq-scan over the fact table will be your \nbiggest problem. Can you pre-aggregate your fact-table into daily summaries?\n\nSee you over on the performance list, where there are more experienced \npeople than myself to help you.\n-- \n Richard Huxton\n Archonet Ltd\n\n\n\n\n\n \n____________________________________________________________________________________\nDo you Yahoo!?\nEveryone is raving about the all-new Yahoo! Mail beta.\nhttp://new.mail.yahoo.com\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings\n\n\n\n\n\n \n____________________________________________________________________________________\nYahoo! Music Unlimited\nAccess over 1 million songs.\nhttp://music.yahoo.com/unlimited\n",
"msg_date": "Wed, 29 Nov 2006 11:43:39 -0800 (PST)",
"msg_from": "Mark Jensen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fw: [GENERAL] Including unique users in huge data warehouse in\n\tPostgresql..."
},
{
"msg_contents": "Mark,\n\nThis fits the typical pattern of the \"Big Honking Datamart\" for clickstream\nanalysis, a usage pattern that stresses the capability of all DBMS. Large\ncompanies spend $1M + on combinations of SW and HW to solve this problem,\nand only the large scale parallel DBMS can handle the load. Players in the\nmarket include Oracle, IBM, Teradata, Netezza and of course Greenplum.\n\nUnfortunately, techniques like bitmap indexes only improve things by factors\nO(10). Parallelism is the only proven answer to get O(10,000) improvements\nin response time. Furthermore, simply speeding the I/O underneath one CPU\nper query is insufficient, the query and loading engine need to scale CPU\nand storage access together.\n\n=========== start description of commercial Postgres solution ===========\n=========== If commercial solutions offend, skip this section ===========\n\nThe parallel I/O and CPU of Greenplum DB (formerly Bizgres MPP) is designed\nfor exactly this workload, where a combination of scalable I/O and CPU is\nrequired to speed these kinds of queries (sessionizing weblogs, creating\naggregates, direct ad-hoc analysis).\n\nOne of our customers doing clickstream analysis uses a combination of\nsessionizing ELT processing with Greenplum DB + Bizgres KETL and\nMicrostrategy for the reporting frontend. The complete system is 1/100 the\nprice of systems that are slower.\n\nWe routinely see speedups of over 100 compared to large scale multi-million\ndollar commercial solutions and have reference customers who are regularly\nworking with Terabytes of data.\n\n============ end commercial solution description ========================\n\n- Luke \n\nOn 11/29/06 11:43 AM, \"Mark Jensen\" <[email protected]> wrote:\n\n> posting this here instead of the GENERAL list...richard is right, this is more\n> of a performance question than a general question.\n> \n> thanks,\n> \n> ____________________________________\n> Mark Jensen\n> \n> ----- Forwarded Message ----\n> From: Mark Jensen <[email protected]>\n> To: Richard Huxton <[email protected]>\n> Cc: [email protected]\n> Sent: Wednesday, November 29, 2006 2:40:58 PM\n> Subject: Re: [GENERAL] Including unique users in huge data warehouse in\n> Postgresql...\n> \n> thanks Richard. I've talking to Ron Mayer about this as well offline. I\n> think the main problem is dedupping users, and not being able to aggregate\n> visits in the fact table. that's where most of the query time takes place.\n> but the business guys just won't accept using visits, but not actual uniques\n> dedupped. if visits was in the fact table as an integer i could sum up, i'd\n> be fine. Ron also said he's put the unique user ids into arrays so it's\n> faster to count them, but placing them into aggregate tables. only problem is\n> i'd still have to know what things to aggregate by to create these, which is\n> impossible since we have so many dimensions and facts that are going to be\n> ad-hoc. i have about 20 summary tables i create per day, but most of the\n> time, they have something new they want to query that's not in summary. and\n> will never come up again.\n> \n> I tried installing Bizgres using their Bizgres loader and custom postgresql\n> package with bitmap indexes, but doesn't seem to increase performance \"that\"\n> much. or as much as i would like compared to the normal postgresql install.\n> loads are pretty slow when using their bitmap indexes compared to just using\n> btree indexes in the standard postgresql install. 
Query time is pretty good,\n> but i also have to make sure load times are acceptable as well. and had some\n> problems with the bizgres loader losing connection to the database for no\n> reason at all, but when using the normal copy command in 8.2RC1, works fine.\n> love the new query inclusion in the copy command by the way, makes it so easy\n> to aggregrate hourly fact tables into daily/weekly/monthly in one shot :)\n> \n> and yes, work_mem is optimized as much as possible. postgresql is using about\n> 1.5 gigs of working memory when it runs these queries. looking into getting\n> 64 bit hardware with 16-32 gigs of RAM so i can throw most of this into memory\n> to speed it up. we're also using 3par storage which is pretty fast. we're\n> going to try and put postgresql on a local disk array using RAID 5 as well to\n> see if it makes a difference.\n> \n> and yes, right now, these are daily aggregate tables summed up from the\n> hourly. so about 17 million rows per day. hourly fact tables are impossible\n> to query right now, so i have to at least put these into daily fact tables.\n> so when you have 30 days in this database, then yes, table scans are going to\n> be huge, thus why it's taking so long, plus dedupping on unique user id :)\n> \n> and you're right, i should put this on the performance mailing list... see you\n> there :)\n> \n> thanks guys.\n> \n> ____________________________________\n> Mark Jensen\n> \n> ----- Original Message ----\n> From: Richard Huxton <[email protected]>\n> To: Mark Jensen <[email protected]>\n> Cc: [email protected]\n> Sent: Wednesday, November 29, 2006 2:29:35 PM\n> Subject: Re: [GENERAL] Including unique users in huge data warehouse in\n> Postgresql...\n> \n> Mark Jensen wrote:\n>> So i've been given the task of designing a data warehouse in\n>> either Postgresql or Mysql for our clickstream data for our sites. I\n>> started with Mysql but the joins in Mysql are just way too slow\n>> compared to Postgresql when playing with star schemas.\n> \n> Mark - it's not my usual area, but no-one else has picked up your\n> posting, so I'll poke my nose in. The other thing you might want to do\n> is post this on the performance list - that's probably the best place.\n> Might be worth talking to those at www.bizgres.org too (although I think\n> they all hang out on the performance list).\n> \n>> I can't say\n>> which sites i'm working on, but we get close to 3-5 million uniques\n>> users per day, so over time, that's a lot of unique users to keep\n>> around and de-dup your fact tables by. Need to be able to query normal\n>> analytics like:\n> <snip>\n> \n>> i've\n>> made a lot of optimizations in postgresql.conf by playing with work_mem\n>> and shared_buffers and such and i think the database is using as much\n>> as it can disk/memory/cpu wise.\n> \n> Big work_mem, I'm guessing. Limiting factor is presumably disk I/O.\n> \n> <snip>\n>> here's a sample query that takes a while to run... just a simple report that\n>> shows gender by area of the site.\n>> \n>> select A.gender as gender, B.area as area, sum(C.imps) as imps, sum(C.clicks)\n>> as clicks, count(distinct(C.uu_id)) as users\n>> from uus as A, areas as B, daily_area_fact as C\n>> where A.uu_id = C.uu_id\n>> and B.area_id = C.area_id\n>> group by gender,area;\n>> \n>> so\n>> by just having one day of data, with 3,168,049 rows in the user\n>> dimension table (uus), 17,213,420 in the daily_area_fact table that\n>> joins all the dimension tables, takes about 15 minutes. 
if i had 30-90\n>> days in this fact table, who knows how long this would take... i know\n>> doing a distinct on uu_id is very expensive, so that's the main problem\n>> here i guess and would want to know if anyone else is doing it this way\n>> or better.\n> \n> In the end, I'd suspect the seq-scan over the fact table will be your\n> biggest problem. Can you pre-aggregate your fact-table into daily summaries?\n> \n> See you over on the performance list, where there are more experienced\n> people than myself to help you.\n\n\n",
"msg_date": "Wed, 29 Nov 2006 13:55:33 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fw: [GENERAL] Including unique users in huge data"
}
] |
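Richard's pre-aggregation suggestion, applied to the query quoted in the thread, would look something like the sketch below (the summary table name is invented; the other names come from Mark's query). The caveat that drives the rest of the discussion still applies: distinct user counts cannot be summed across days or across other dimension combinations, so each report shape needs its own summary or another full pass over the fact table:

    CREATE TABLE daily_gender_area_summary AS
    SELECT A.gender,
           B.area,
           sum(C.imps)             AS imps,
           sum(C.clicks)           AS clicks,
           count(DISTINCT C.uu_id) AS users
    FROM uus A
    JOIN daily_area_fact C ON C.uu_id = A.uu_id
    JOIN areas B ON B.area_id = C.area_id
    GROUP BY A.gender, B.area;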