[ { "msg_contents": "Dear PostgreSQL developers,\n\nI'm confused about the absence of a very simple optimization\nin PostgreSQL. Suppose we have a VIEW where some columns are\nexpensive to be calculated:\n\n CREATE VIEW a AS\n SELECT\n (... expensive calculation ...) as expensive,\n count(*) as cheap\n FROM\n x;\n\nwhere \"x\" is a sufficiently large table. I would expect the\nfollowing query to be very fast:\n\n SELECT cheap FROM a;\n\nHowever, it takes the same time as \"SELECT * FROM a;\".\n\nIn other words: The column \"expensive\" is calculated although\nit hasn't been asked for. Of course, there are work-arounds\nfor that, but I wonder why PostgreSQL doesn't perform this\nsmall optimization by itself.\n\nI checked that behaviour with PostgreSQL 8.3.7 (Debian/Etch)\nand 8.4.1 (Debian/Lenny).\n\n\nGreets,\n\n Volker\n\n-- \nVolker Grabsch\n---<<(())>>---\nAdministrator\nNotJustHosting GbR\n", "msg_date": "Sun, 18 Oct 2009 00:58:16 +0200", "msg_from": "Volker Grabsch <[email protected]>", "msg_from_op": true, "msg_subject": "Calculation of unused columns" }, { "msg_contents": "Volker Grabsch <[email protected]> writes:\n> I'm confused about the absence of a very simple optimization\n> in PostgreSQL. Suppose we have a VIEW where some columns are\n> expensive to be calculated:\n\n> CREATE VIEW a AS\n> SELECT\n> (... expensive calculation ...) as expensive,\n> count(*) as cheap\n> FROM\n> x;\n\n> where \"x\" is a sufficiently large table. I would expect the\n> following query to be very fast:\n\n> SELECT cheap FROM a;\n\n> However, it takes the same time as \"SELECT * FROM a;\".\n\nI think your main error is in supposing that count(*) is cheap ...\n\nPG will suppress unused columns when a view can be flattened,\nbut typically not otherwise. In particular a view involving\naggregates won't get flattened; but given that the aggregates\nwill force a whole-table scan anyway, I can't get that excited\nabout removing a subset of them.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 Oct 2009 20:07:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Calculation of unused columns " }, { "msg_contents": "[ please keep the list cc'd ]\n\nVolker Grabsch <[email protected]> writes:\n> The \"count(*)\" in the example seems to be distracting. In fact,\n> it could be replaced with a simple constant value, the effect\n> is the same:\n\n> CREATE VIEW a AS\n> SELECT\n> (... expensive calculation ...) as expensive,\n> 1 as cheap\n> FROM\n> x;\n\nWell, if you try that case, you'll find that the \"expensive\" column\n*does* get thrown away. (Using EXPLAIN VERBOSE under 8.4 may help you\nsee what's going on here.) It's only when there's some reason why the\nview can't get flattened that there's an issue.\n\nI've been thinking about this since your earlier mail, and I think it\nwould probably be possible to suppress unused columns in a non-flattened\nsubquery. I remain unconvinced that it's worth the trouble though.\nA real (not handwavy) example would help make the case here.\n\nAs an example of why I'm not convinced, one thing we'd have to consider\nis whether it is okay to suppress calculation of columns containing\nvolatile functions. I'd be inclined to think not, since that could\ncause side-effects to not happen that the user might be expecting to\nhappen. (We got beat up in the past for letting the planner be cavalier\nabout that consideration.) 
I suspect, though, that that case and other\nnon-optimizable cases might account for the bulk of situations where the\nexisting optimization doesn't happen.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 Oct 2009 21:41:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Calculation of unused columns " }, { "msg_contents": "On Sat, Oct 17, 2009 at 9:41 PM, Tom Lane <[email protected]> wrote:\n> I've been thinking about this since your earlier mail, and I think it\n> would probably be possible to suppress unused columns in a non-flattened\n> subquery.  I remain unconvinced that it's worth the trouble though.\n> A real (not handwavy) example would help make the case here.\n\nAre there any situations where this would enable join removal that\notherwise wouldn't be possible? Maybe a query involving an\nunflattenable VIEW?\n\n...Robert\n", "msg_date": "Sat, 17 Oct 2009 21:53:11 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Calculation of unused columns" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Sat, Oct 17, 2009 at 9:41 PM, Tom Lane <[email protected]> wrote:\n>> I've been thinking about this since your earlier mail, and I think it\n>> would probably be possible to suppress unused columns in a non-flattened\n>> subquery. �I remain unconvinced that it's worth the trouble though.\n>> A real (not handwavy) example would help make the case here.\n\n> Are there any situations where this would enable join removal that\n> otherwise wouldn't be possible?\n\nHmm, yeah maybe, now that we have some join removal logic. Seems a bit\nof a stretch to claim that's a common real-world case though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 Oct 2009 22:19:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Calculation of unused columns " }, { "msg_contents": "I have a very common example which would illustrate the above problem a \nbit more. Guess the following view on a company table, which references \nthe country of that company in another table. The view itself just \nreturns the company-id and the country-name,\n\n create view companys_and_countries as\n select company.id, country.name from company left join country on \n(company.country_id = country.id);\n\nPleaso note we have a left join here, so the contents of country do by \nno means affect the contents of the \"id\" row in that view. Lets see what \nhappens when we just query for the ids:\n\n explain select id from companys_and_countries;\n\nThe join is done anyway, even if its removed (At least on Postgres 8.3). \nThe more common usecase would be having Display-Tables, where are \nforeign keys are dereferenced to their values. One could store this in a \nview, and then query only the columns one needs (This is especially \nuseful if the user is able to configure its client for which columns he \nneeds).\n\nI would like if unnessecary joins would be cut off here, as well as \nunnessecary columns. 
I know this would be a performance hit, so maybe a \nsession option would be the right way here?\n\nRegards,\nDaniel Migowski\n", "msg_date": "Sun, 18 Oct 2009 16:38:18 +0200", "msg_from": "Daniel Migowski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Calculation of unused columns" }, { "msg_contents": "Daniel Migowski <[email protected]> writes:\n> I have a very common example which would illustrate the above problem a \n> bit more.\n\nThis is (a) still handwaving, not a testable example, and (b) unrelated\nto the question at hand, because the suggested view is flattenable.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 18 Oct 2009 11:59:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Calculation of unused columns " }, { "msg_contents": "Daniel Migowski <[email protected]> wrote:\n\n> I have a very common example which would illustrate the\n> above problem a bit more. Guess the following view on a\n> company table, which references the country of that company\n> in another table. The view itself just returns the\n> company-id and the country-name,\n\n> create view companys_and_countries as\n> select company.id, country.name from company left join\n> country on (company.country_id = country.id);\n\n> Pleaso note we have a left join here, so the contents of\n> country do by no means affect the contents of the \"id\" row\n> in that view. Lets see what happens when we just query for\n> the ids:\n\n> explain select id from companys_and_countries;\n\n> The join is done anyway, even if its removed (At least on\n> Postgres 8.3). [...]\n\nHow could that be done otherwise? PostgreSQL *must* look at\ncountry to determine how many rows the left join produces.\n\nTim\n\n", "msg_date": "Sun, 18 Oct 2009 17:35:29 +0000", "msg_from": "Tim Landscheidt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Calculation of unused columns" }, { "msg_contents": "On Sun, Oct 18, 2009 at 10:35 AM, Tim Landscheidt\n<[email protected]> wrote:\n> Daniel Migowski <[email protected]> wrote:\n>\n>> I have a very common example which would illustrate the\n>> above problem a bit more. Guess the following view on a\n>> company table, which references the country of that company\n>> in another table. The view itself just returns the\n>> company-id and the country-name,\n>\n>>    create view companys_and_countries as\n>>    select company.id, country.name from company left join\n>> country on (company.country_id = country.id);\n>\n>> Pleaso note we have a left join here, so the contents of\n>> country do by no means affect the contents of the \"id\" row\n>> in that view. Lets see what happens when we just query for\n>> the ids:\n>\n>>    explain select id from companys_and_countries;\n>\n>> The join is done anyway, even if its removed (At least on\n>> Postgres 8.3). [...]\n>\n> How could that be done otherwise? PostgreSQL *must* look at\n> country to determine how many rows the left join produces.\n\nEven if country.id is a primary or unique key?\n\nJeff\n", "msg_date": "Sun, 18 Oct 2009 10:59:04 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Calculation of unused columns" }, { "msg_contents": "On Sun, Oct 18, 2009 at 1:59 PM, Jeff Janes <[email protected]> wrote:\n> On Sun, Oct 18, 2009 at 10:35 AM, Tim Landscheidt\n> <[email protected]> wrote:\n>> Daniel Migowski <[email protected]> wrote:\n>>\n>>> I have a very common example which would illustrate the\n>>> above problem a bit more. 
Guess the following view on a\n>>> company table, which references the country of that company\n>>> in another table. The view itself just returns the\n>>> company-id and the country-name,\n>>\n>>>    create view companys_and_countries as\n>>>    select company.id, country.name from company left join\n>>> country on (company.country_id = country.id);\n>>\n>>> Pleaso note we have a left join here, so the contents of\n>>> country do by no means affect the contents of the \"id\" row\n>>> in that view. Lets see what happens when we just query for\n>>> the ids:\n>>\n>>>    explain select id from companys_and_countries;\n>>\n>>> The join is done anyway, even if its removed (At least on\n>>> Postgres 8.3). [...]\n>>\n>> How could that be done otherwise? PostgreSQL *must* look at\n>> country to determine how many rows the left join produces.\n>\n> Even if country.id is a primary or unique key?\n\nWell, we currently don't have any logic for making inferences based on\nunique constraints. I have dreams of fixing that at some point (or\nmaybe I'll get lucky and someone else will beat me to it) but it's\ncurrently in the category of \"things for which I won't get paid but\nwould like to spend some of my spare time in the evenings on\", so it\nmay be a while (unless of course it moves into the category of \"things\npeople are paying me a lot of money to get done\", in which case it\nwill likely happen quite a bit sooner...).\n\n...Robert\n", "msg_date": "Sun, 18 Oct 2009 16:00:25 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Calculation of unused columns" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Sun, Oct 18, 2009 at 1:59 PM, Jeff Janes <[email protected]> wrote:\n>> Even if country.id is a primary or unique key?\n\n> Well, we currently don't have any logic for making inferences based on\n> unique constraints.\n\nHuh?\nhttp://archives.postgresql.org/pgsql-committers/2009-09/msg00159.php\n\nAdmittedly it's just one case and there's lots more to be done, but it's\nmore than nothing. So this is a *potential* argument for trying to trim\nsubquery outputs. What I'm not sure about is whether there are common\ncases where this would be applicable below a non-flattenable subquery.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 18 Oct 2009 16:54:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Calculation of unused columns " }, { "msg_contents": "On Sun, Oct 18, 2009 at 4:54 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> On Sun, Oct 18, 2009 at 1:59 PM, Jeff Janes <[email protected]> wrote:\n>>> Even if country.id is a primary or unique key?\n>\n>> Well, we currently don't have any logic for making inferences based on\n>> unique constraints.\n>\n> Huh?\n> http://archives.postgresql.org/pgsql-committers/2009-09/msg00159.php\n>\n> Admittedly it's just one case and there's lots more to be done, but it's\n> more than nothing.  So this is a *potential* argument for trying to trim\n> subquery outputs.  What I'm not sure about is whether there are common\n> cases where this would be applicable below a non-flattenable subquery.\n\nSorry, I have to stop writing emails when I'm half-asleep. 
Obviously\nwhat we don't have is logic for making deductions based on *foreign\nkey* constraints, but that's not relevant here.\n\nMaybe I should shut up before I say any more dumb things, but one\npossible case where we don't currently do join removal but it would be\nnice if we did is:\n\nSELECT ... FROM a.x LEFT JOIN (SELECT bb.x, SUM(1) FROM bb GROUP BY\nbb.x) b ON a.x = b.x;\n\nOr even:\n\nSELECT ... FROM a.x LEFT JOIN (SELECT DISTINCT ON (bb.x) ... FROM bb)\nb ON a.x = b.x;\n\nYour commit message for the join removal patch mentions\nmachine-generated SQL, but where join removal really comes up a lot\nfor me is when using views. I like to define a view that includes all\nthe columns that seem potentially useful and then let the user pick\nwhich ones they'd like to see. The trouble is that you don't want to\nincur the cost of computing the columns that the user doesn't select.\nIt's probably true that in MOST of the cases where this comes up, the\nsubquery can be flattened, from_collapse_limit permitting. But I\nthink there are other cases, too.\n\nAnother thing to keep in mind is that, in OLTP environments, it's\nsometimes important to minimize the number of server round-trips. The\ntype of construction suggested by the OP might be someone's way of\ngather two somewhat-unrelated values with a single query. Except\nsometimes they only need one of them, but they end up paying for both\nanyway. They could probably work around this with a little bit\ndifferent setup, but I don't think they're entirely wrong to find the\ncurrent behavior a little bit surprising.\n\n...Robert\n", "msg_date": "Sun, 18 Oct 2009 18:26:58 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Calculation of unused columns" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> It's probably true that in MOST of the cases where this comes up, the\n> subquery can be flattened, from_collapse_limit permitting. But I\n> think there are other cases, too.\n\nRight ... and from_collapse_limit is not relevant here; only the form of\nthe subquery is. So I'd sure like to see some actual use cases before\nwe decide to expend planning cycles on this.\n\nJust for fun, I hacked together a first cut at this. It's only about\n120 lines but it's a bit cheesy (the limitation to not handling\nappendrel members in particular). 
It passes regression tests and\nseems to do what's wanted, but I'm not convinced it's worth the extra\ncycles as-is, let alone with the appendrel limitation fixed.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 18 Oct 2009 19:41:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Calculation of unused columns " }, { "msg_contents": "I wrote:\n> Just for fun, I hacked together a first cut at this.\n\nOh, just for the archives: I forgot about not suppressing volatile\nexpressions --- checking that would increase the cost of this\nsignificantly, though it's only another line or two.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Oct 2009 12:01:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Calculation of unused columns " }, { "msg_contents": "On Sat, 2009-10-17 at 21:41 -0400, Tom Lane wrote:\n\n> one thing we'd have to consider\n> is whether it is okay to suppress calculation of columns containing\n> volatile functions.\n\nI think we should have a 4th class of functions,\nvolatile-without-side-effects (better name needed, obviously).\n\nThat would allow us to optimize such calls away, if appropriate.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n\n", "msg_date": "Mon, 19 Oct 2009 18:36:15 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Calculation of unused columns" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> On Sat, 2009-10-17 at 21:41 -0400, Tom Lane wrote:\n>> one thing we'd have to consider\n>> is whether it is okay to suppress calculation of columns containing\n>> volatile functions.\n\n> I think we should have a 4th class of functions,\n> volatile-without-side-effects (better name needed, obviously).\n\nWhat for? There wouldn't be that many, I think. random() and\nclock_timestamp(), yeah, but most volatile user-defined functions\nare either volatile-with-side-effects or misdeclared.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Oct 2009 13:43:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Calculation of unused columns " }, { "msg_contents": "Simon Riggs <[email protected]> wrote:\n \n> I think we should have a 4th class of functions,\n> volatile-without-side-effects \n \nSounds reasonable to me.\n \n> (better name needed, obviously).\n \nWell, from this list (which is where volatile points), mutable seems\nclosest to OK, but I'm not sure I like any of them.\n \nhttp://www.merriam-webster.com/thesaurus/fickle\n \nAnyone else have an idea?\n \n-Kevin\n", "msg_date": "Mon, 19 Oct 2009 12:48:42 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Calculation of unused columns" }, { "msg_contents": "On Mon, 2009-10-19 at 13:43 -0400, Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n> > On Sat, 2009-10-17 at 21:41 -0400, Tom Lane wrote:\n> >> one thing we'd have to consider\n> >> is whether it is okay to suppress calculation of columns containing\n> >> volatile functions.\n> \n> > I think we should have a 4th class of functions,\n> > volatile-without-side-effects (better name needed, obviously).\n> \n> What for? There wouldn't be that many, I think. random() and\n> clock_timestamp(), yeah, but most volatile user-defined functions\n> are either volatile-with-side-effects or misdeclared.\n\nRead only vs. 
read write?\n\n-- \n Simon Riggs www.2ndQuadrant.com\n\n", "msg_date": "Mon, 19 Oct 2009 18:48:58 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Calculation of unused columns" }, { "msg_contents": "On Sun, 18 Oct 2009, Tom Lane wrote:\n\n> Robert Haas <[email protected]> writes:\n>> On Sun, Oct 18, 2009 at 1:59 PM, Jeff Janes <[email protected]> wrote:\n>>> Even if country.id is a primary or unique key?\n>\n>> Well, we currently don't have any logic for making inferences based on\n>> unique constraints.\n>\n> Huh?\n> http://archives.postgresql.org/pgsql-committers/2009-09/msg00159.php\n>\n> Admittedly it's just one case and there's lots more to be done, but it's\n> more than nothing. So this is a *potential* argument for trying to trim\n> subquery outputs. What I'm not sure about is whether there are common\n> cases where this would be applicable below a non-flattenable subquery.\n>\n\nWow. That's IHMO a major improvement in the optimizer. Is this also valid \nfor views?\n\nIn the area of views this might even be a killer feature since one can \ndefine a view with many columns and when only e.g. one is used query is \nstill optimal. I had today such a situation where I created a new view to \nbe ~24 times faster (with a lot of left outer joins).\n\nIs the patch only for 8.5 or even backported to 8.4 and 8.3?\n\nThnx.\n\nCiao,\nGerhard\n\n--\nhttp://www.wiesinger.com/\n", "msg_date": "Mon, 19 Oct 2009 19:54:45 +0200 (CEST)", "msg_from": "Gerhard Wiesinger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Calculation of unused columns" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> On Mon, 2009-10-19 at 13:43 -0400, Tom Lane wrote:\n>> Simon Riggs <[email protected]> writes:\n>>> I think we should have a 4th class of functions,\n>>> volatile-without-side-effects (better name needed, obviously).\n>> \n>> What for? There wouldn't be that many, I think. random() and\n>> clock_timestamp(), yeah, but most volatile user-defined functions\n>> are either volatile-with-side-effects or misdeclared.\n\n> Read only vs. read write?\n\nMost read-only functions are stable or even immutable. I don't say\nthat there's zero usefulness in a fourth class, but I do say it's\nunlikely to be worth the trouble. (The only reason it even came\nup in this connection is that the default for user-defined functions\nis \"volatile\" which would defeat this optimization ... but we could\nhardly make the default anything else.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Oct 2009 13:58:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Calculation of unused columns " }, { "msg_contents": "Gerhard Wiesinger <[email protected]> writes:\n> Is the patch only for 8.5 or even backported to 8.4 and 8.3?\n\nThat patch will *not* be backported. It hasn't even got through beta yet.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Oct 2009 14:05:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Calculation of unused columns " }, { "msg_contents": "On Mon, 2009-10-19 at 13:58 -0400, Tom Lane wrote:\n> \n> Most read-only functions are stable or even immutable.\n\nHuh? I mean a function that only contains SELECTs. 
(How would those ever\nbe Stable or Immutable??)\n\n-- \n Simon Riggs www.2ndQuadrant.com\n\n", "msg_date": "Mon, 19 Oct 2009 19:09:27 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Calculation of unused columns" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> On Mon, 2009-10-19 at 13:58 -0400, Tom Lane wrote:\n>> Most read-only functions are stable or even immutable.\n\n> Huh? I mean a function that only contains SELECTs. (How would those ever\n> be Stable or Immutable??)\n\nUh, a function containing SELECTs is exactly the use-case for STABLE.\nMaybe you need to go re-read the definitions?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Oct 2009 14:23:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Calculation of unused columns " } ]
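A minimal sketch of the flattening distinction discussed in this thread (all object names here are hypothetical, and md5(repeat(...)) merely stands in for an expensive calculation):

    CREATE TABLE x (id serial PRIMARY KEY, payload text NOT NULL);

    -- Flattenable view: no aggregates, so the planner can pull the subquery up
    -- and drop output columns the outer query never references.
    CREATE VIEW v_flat AS
    SELECT md5(repeat(payload, 1000)) AS expensive,
           id                         AS cheap
    FROM x;

    -- Non-flattenable view: the aggregates keep it a subquery, so the expensive
    -- column is still evaluated even when only "cheap" is selected.
    CREATE VIEW v_agg AS
    SELECT max(md5(repeat(payload, 1000))) AS expensive,
           count(*)                        AS cheap
    FROM x;

    -- In 8.4, EXPLAIN VERBOSE prints the output list of every plan node, which
    -- shows whether the expensive expression survives into the plan.
    EXPLAIN VERBOSE SELECT cheap FROM v_flat;
    EXPLAIN VERBOSE SELECT cheap FROM v_agg;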
[ { "msg_contents": "Hi.\n\nI'm currently testing out PostgreSQL's Full Text Search capabillities.\nWe're currenly using Xapian, it has some nice features and some\ndrawbacks (sorting), so it is especially this area I'm investigating.\n\nI've loaded the database with 50K documents, and the table definition\nis:\n\nftstest=# \\d uniprot\n Table \"public.uniprot\"\n Column | Type | Modifiers\n\n------------------+----------+------------------------------------------------------\n id | integer | not null default\nnextval('textbody_id_seq'::regclass)\n body | text | not null default ''::text\n textbody_body_fts | tsvector |\n accession_number | text | not null default ''::text\nIndexes:\n \"accno_unique_idx\" UNIQUE, btree (accession_number)\n \"textbody_tfs_idx\" gin (textbody_body_fts)\nTriggers:\n tsvectorupdate BEFORE INSERT OR UPDATE ON textbody FOR EACH ROW\nEXECUTE PROCEDURE tsvector_update_trigger('textbody_body_fts',\n'pg_catalog.english', 'body')\n\n\"commonterm\" matches 37K of the 50K documents (majority), but the query\nplan is \"odd\" in my eyes.\n\n* Why does it mis-guess the cost of a Seq Scan on textbody so much?\n* Why doesn't it use the index in \"id\" to fetch the 10 records?\n\nftstest=# ANALYZE textbody;\nANALYZE\nftstest=# explain analyze select body from textbody where\ntextbody_body_fts @@ to_tsquery('commonterm') order by id limit 10 offset 0\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=2841.08..2841.11 rows=10 width=5) (actual\ntime=48031.563..48031.568 rows=10 loops=1)\n -> Sort (cost=2841.08..2933.01 rows=36771 width=5) (actual\ntime=48031.561..48031.564 rows=10 loops=1)\n Sort Key: id\n Sort Method: top-N heapsort Memory: 31kB\n -> Seq Scan on textbody (cost=0.00..2046.47 rows=36771\nwidth=5) (actual time=100.107..47966.590 rows=37133 loops=1)\n Filter: (textbody_body_fts @@ to_tsquery('commonterm'::text))\n Total runtime: 48031.612 ms\n(7 rows)\n\nThis query-plan doesn't answer the questions above, but it does indeed\nspeed it up significantly (by heading into a Bitmap Index Scan instead\nof a Seq Scan)\n\nftstest=# set enable_seqscan=off;\nSET\n\nftstest=# explain analyze select body from textbody where\ntextbody_body_fts @@ to_tsquery('commonterm') order by id limit 10 offset 0\n\nQUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=269942.41..269942.43 rows=10 width=5) (actual\ntime=47.567..47.572 rows=10 loops=1)\n -> Sort (cost=269942.41..270034.34 rows=36771 width=5) (actual\ntime=47.565..47.567 rows=10 loops=1)\n Sort Key: id\n Sort Method: top-N heapsort Memory: 31kB\n -> Bitmap Heap Scan on textbody (cost=267377.23..269147.80\nrows=36771 width=5) (actual time=15.763..30.576 rows=37133 loops=1)\n Recheck Cond: (textbody_body_fts @@\nto_tsquery('commonterm'::text))\n -> Bitmap Index Scan on textbody_tfs_idx\n(cost=0.00..267368.04 rows=36771 width=0) (actual time=15.419..15.419\nrows=37134 loops=1)\n Index Cond: (textbody_body_fts @@\nto_tsquery('commonterm'::text))\n Total runtime: 47.634 ms\n(9 rows)\n\nTo me it seems like the query planner could do a better job?\n\nOn \"rare\" terms everything seems to work excellent.\n\nN.B.: looks a lot like this:\nhttp://archives.postgresql.org/pgsql-performance/2009-07/msg00190.php\n\n-- \nJesper\n", "msg_date": "Sun, 18 Oct 2009 18:49:45 +0200", "msg_from": "Jesper Krogh <[email 
protected]>", "msg_from_op": true, "msg_subject": "Full text search - query plan? PG 8.4.1 " }, { "msg_contents": "Jesper Krogh <[email protected]> writes:\n> \"commonterm\" matches 37K of the 50K documents (majority), but the query\n> plan is \"odd\" in my eyes.\n\n> * Why does it mis-guess the cost of a Seq Scan on textbody so much?\n\nThe cost looks about right to me. The cost units are not milliseconds.\n\n> * Why doesn't it use the index in \"id\" to fetch the 10 records?\n\nYou haven't *got* an index on id, according to the \\d output.\n\nThe only part of your results that looks odd to me is the very high cost\nestimate for the bitmapscan:\n\n> -> Bitmap Heap Scan on textbody (cost=267377.23..269147.80\n> rows=36771 width=5) (actual time=15.763..30.576 rows=37133 loops=1)\n> Recheck Cond: (textbody_body_fts @@\n> to_tsquery('commonterm'::text))\n> -> Bitmap Index Scan on textbody_tfs_idx\n> (cost=0.00..267368.04 rows=36771 width=0) (actual time=15.419..15.419\n> rows=37134 loops=1)\n> Index Cond: (textbody_body_fts @@\n> to_tsquery('commonterm'::text))\n\nWhen I try this with a 64K-row table having 'commonterm' in half of the\nrows, what I get is estimates of 1530 cost units for the seqscan and\n1405 for the bitmapscan (so it prefers the latter). It will switch over\nto using an index on id if I add one, but that's not the point at the\nmoment. There's something strange about your tsvector index. Maybe\nit's really huge because the documents are huge?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 18 Oct 2009 14:20:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full text search - query plan? PG 8.4.1 " }, { "msg_contents": "Tom Lane wrote:\n> Jesper Krogh <[email protected]> writes:\n>> \"commonterm\" matches 37K of the 50K documents (majority), but the query\n>> plan is \"odd\" in my eyes.\n> \n>> * Why does it mis-guess the cost of a Seq Scan on textbody so much?\n> \n> The cost looks about right to me. The cost units are not milliseconds.\n> \n>> * Why doesn't it use the index in \"id\" to fetch the 10 records?\n> \n> You haven't *got* an index on id, according to the \\d output.\n\nThanks (/me bangs my head against the table). I somehow assumed that \"id\nSERIAL\" automatically created it for me. Even enough to not looking for\nit to confirm.\n\n> The only part of your results that looks odd to me is the very high cost\n> estimate for the bitmapscan:\n> \n>> -> Bitmap Heap Scan on textbody (cost=267377.23..269147.80\n>> rows=36771 width=5) (actual time=15.763..30.576 rows=37133 loops=1)\n>> Recheck Cond: (textbody_body_fts @@\n>> to_tsquery('commonterm'::text))\n>> -> Bitmap Index Scan on textbody_tfs_idx\n>> (cost=0.00..267368.04 rows=36771 width=0) (actual time=15.419..15.419\n>> rows=37134 loops=1)\n>> Index Cond: (textbody_body_fts @@\n>> to_tsquery('commonterm'::text))\n> \n> When I try this with a 64K-row table having 'commonterm' in half of the\n> rows, what I get is estimates of 1530 cost units for the seqscan and\n> 1405 for the bitmapscan (so it prefers the latter). It will switch over\n> to using an index on id if I add one, but that's not the point at the\n> moment. There's something strange about your tsvector index. Maybe\n> it's really huge because the documents are huge?\n\nhuge is a relative term, but length(ts_vector(body)) is about 200 for\neach document. Is that huge? 
I can postprocess them a bit to get it down\nand will eventually do that before going to \"production\".\n\nThanks alot.\n\n-- \nJesper\n", "msg_date": "Sun, 18 Oct 2009 20:55:49 +0200", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Full text search - query plan? PG 8.4.1" }, { "msg_contents": "Jesper Krogh <[email protected]> writes:\n> Tom Lane wrote:\n>> ... There's something strange about your tsvector index. Maybe\n>> it's really huge because the documents are huge?\n\n> huge is a relative term, but length(ts_vector(body)) is about 200 for\n> each document. Is that huge?\n\nIt's bigger than the toy example I was trying, but not *that* much\nbigger. I think maybe your index is bloated. Try dropping and\nrecreating it and see if the estimates change any.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 18 Oct 2009 15:48:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full text search - query plan? PG 8.4.1 " }, { "msg_contents": "Tom Lane wrote:\n> Jesper Krogh <[email protected]> writes:\n>> Tom Lane wrote:\n>>> ... There's something strange about your tsvector index. Maybe\n>>> it's really huge because the documents are huge?\n> \n>> huge is a relative term, but length(ts_vector(body)) is about 200 for\n>> each document. Is that huge?\n> \n> It's bigger than the toy example I was trying, but not *that* much\n> bigger. I think maybe your index is bloated. Try dropping and\n> recreating it and see if the estimates change any.\n\nI'm a bit reluctant to dropping it and re-creating it. It'll take a\ncouple of days to regenerate, so this should hopefully not be an common\nsituation for the system.\n\nI have set the statistics target to 1000 for the tsvector, the\ndocumentation didn't specify any heavy negative sides of doing that and\nsince that I haven't seen row estimates that are orders of magnitude off.\n\nIt is build from scratch using inserts all the way to around 10m now,\nshould that result in index-bloat? 
Can I inspect the size of bloat\nwithout rebuilding (or similar locking operation)?\n\nThe query still has a \"wrong\" tipping point between the two query-plans:\n\nftstest=# explain analyze select body from ftstest where\nftstest_body_fts @@ to_tsquery('testterm') order by id limit 100;\n\nQUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..7357.77 rows=100 width=738) (actual\ntime=3978.974..8595.086 rows=100 loops=1)\n -> Index Scan using ftstest_id_pri_idx on ftstest\n(cost=0.00..1436458.05 rows=19523 width=738) (actual\ntime=3978.971..8594.932 rows=100 loops=1)\n Filter: (ftstest_body_fts @@ to_tsquery('testterm'::text))\n Total runtime: 8595.222 ms\n(4 rows)\n\nftstest=# set enable_indexscan=off;\nSET\nftstest=# explain analyze select body from ftstest where\nftstest_body_fts @@ to_tsquery('testterm') order by id limit 100;\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=59959.61..59959.86 rows=100 width=738) (actual\ntime=338.832..339.055 rows=100 loops=1)\n -> Sort (cost=59959.61..60008.41 rows=19523 width=738) (actual\ntime=338.828..338.908 rows=100 loops=1)\n Sort Key: id\n Sort Method: top-N heapsort Memory: 32kB\n -> Bitmap Heap Scan on ftstest (cost=22891.18..59213.45\nrows=19523 width=738) (actual time=5.097..316.780 rows=19444 loops=1)\n Recheck Cond: (ftstest_body_fts @@\nto_tsquery('testterm'::text))\n -> Bitmap Index Scan on ftstest_tfs_idx\n(cost=0.00..22886.30 rows=19523 width=0) (actual time=4.259..4.259\nrows=20004 loops=1)\n Index Cond: (ftstest_body_fts @@\nto_tsquery('testterm'::text))\n Total runtime: 339.201 ms\n(9 rows)\n\nSo for getting 100 rows where the term exists in 19.444 of 10.000.000\ndocuments it chooses the index-scan where it (given random distribution\nof the documents) should scan: 100*(10000000/19444) = 51429 documents.\nSo it somehow believes that the cost for the bitmap index scan is higher\nthan it actually is or the cost for the index-scan is lower than it\nactually is.\n\nIs is possible to manually set the cost for the @@ operator? It seems\nnatural that matching up a ts_vector to a ts_query, which is a much\nheavier operation than = and even is stored in EXTENDED storage should\nbe much higher than a integer in plain storage.\n\nI tried to search docs for operator cost, but I only found the overall\nones in the configuration file that are base values.\n\nJesper\n-- \nJesper\n", "msg_date": "Fri, 23 Oct 2009 22:32:53 +0200", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Full text search - query plan? PG 8.4.1" }, { "msg_contents": "Jesper Krogh <[email protected]> writes:\n> Is is possible to manually set the cost for the @@ operator?\n\nYou want to set the cost for the underlying function.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Oct 2009 18:06:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full text search - query plan? PG 8.4.1 " }, { "msg_contents": "On Fri, Oct 23, 2009 at 2:32 PM, Jesper Krogh <[email protected]> wrote:\n> Tom Lane wrote:\n>> Jesper Krogh <[email protected]> writes:\n>>> Tom Lane wrote:\n>>>> ... There's something strange about your tsvector index.  
Maybe\n>>>> it's really huge because the documents are huge?\n>>\n>>> huge is a relative term, but length(ts_vector(body)) is about 200 for\n>>> each document. Is that huge?\n>>\n>> It's bigger than the toy example I was trying, but not *that* much\n>> bigger.  I think maybe your index is bloated.  Try dropping and\n>> recreating it and see if the estimates change any.\n>\n> I'm a bit reluctant to dropping it and re-creating it. It'll take a\n> couple of days to regenerate, so this should hopefully not be an common\n> situation for the system.\n\nNote that if it is bloated, you can create the replacement index with\na concurrently created one, then drop the old one when the new one\nfinishes. So, no time spent without an index.\n\n> I have set the statistics target to 1000 for the tsvector, the\n> documentation didn't specify any heavy negative sides of doing that and\n> since that I haven't seen row estimates that are orders of magnitude off.\n\nIt increases planning time mostly. Also increases analyze times but\nnot that much.\n\n> It is build from scratch using inserts all the way to around 10m now,\n> should that result in index-bloat? Can I inspect the size of bloat\n> without rebuilding (or similar locking operation)?\n\nDepends on how many lost inserts there were. If 95% of all your\ninserts failed then yeah, it would be bloated.\n", "msg_date": "Fri, 23 Oct 2009 16:08:44 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full text search - query plan? PG 8.4.1" }, { "msg_contents": "Scott Marlowe wrote:\n> On Fri, Oct 23, 2009 at 2:32 PM, Jesper Krogh <[email protected]> wrote:\n>> Tom Lane wrote:\n>>> Jesper Krogh <[email protected]> writes:\n>>>> Tom Lane wrote:\n>>>>> ... There's something strange about your tsvector index. Maybe\n>>>>> it's really huge because the documents are huge?\n>>>> huge is a relative term, but length(ts_vector(body)) is about 200 for\n>>>> each document. Is that huge?\n>>> It's bigger than the toy example I was trying, but not *that* much\n>>> bigger. I think maybe your index is bloated. Try dropping and\n>>> recreating it and see if the estimates change any.\n>> I'm a bit reluctant to dropping it and re-creating it. It'll take a\n>> couple of days to regenerate, so this should hopefully not be an common\n>> situation for the system.\n> \n> Note that if it is bloated, you can create the replacement index with\n> a concurrently created one, then drop the old one when the new one\n> finishes. So, no time spent without an index.\n\nNice tip, thanks.\n\n>> It is build from scratch using inserts all the way to around 10m now,\n>> should that result in index-bloat? Can I inspect the size of bloat\n>> without rebuilding (or similar locking operation)?\n> \n> Depends on how many lost inserts there were. If 95% of all your\n> inserts failed then yeah, it would be bloated.\n\nLess than 10.000 I'd bet, the import-script more or less ran by itself\nthe only failures where when I manually stopped it to add some more code\nin it.\n\n-- \nJesper\n", "msg_date": "Sat, 24 Oct 2009 06:06:11 +0200", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Full text search - query plan? 
PG 8.4.1" }, { "msg_contents": "Tom Lane wrote:\n> Jesper Krogh <[email protected]> writes:\n>> Is is possible to manually set the cost for the @@ operator?\n> \n> You want to set the cost for the underlying function.\n\nalter function ts_match_vq(tsvector,tsquery) cost 500\n\nseems to change my test-queries in a very positive way (e.g. resolve to\nbitmap index scan on most queryies but fall onto index-scans on\nalternative columns when queriterms are common enough).\n\nAccording to the documentation the default cost is 1 for builin\nfunctions and 100 for others, is this true for the ts-stuff also?\nCan I query the database for the cost of the functions?\n\nIt somehow seems natural that comparing a 1,3 term tsquery to a 200+\nterm tsvector is orders of magitude more expensive than simple operations.\n\nI somehow suspect that this is a bad thing to do if I have other\ngin-indexes where the tsvector is much smaller in the same database. But\nthen I can just use ts_match_qv for those queries or add my own which\njusty raises the cost.\n\nThanks.\n\n-- \nJesper\n", "msg_date": "Mon, 26 Oct 2009 18:09:32 +0100", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Full text search - query plan? PG 8.4.1" }, { "msg_contents": "Jesper Krogh <[email protected]> writes:\n> According to the documentation the default cost is 1 for builin\n> functions and 100 for others, is this true for the ts-stuff also?\n\nYeah. There was some recent discussion of pushing up the default cost\nfor some of these obviously-not-so-cheap functions, but nothing's been\ndone yet.\n\n> Can I query the database for the cost of the functions?\n\nSee pg_proc.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Oct 2009 13:31:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full text search - query plan? PG 8.4.1 " } ]
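Pulling the resolution of this thread together as a hedged sketch (table, column and index names are the ones from the ftstest example above):

    -- The planner's cost for the text-search match function defaults to 1 for
    -- built-ins and 100 for user-defined functions; check the current setting:
    SELECT proname, procost FROM pg_proc WHERE proname LIKE 'ts_match%';

    -- Raising it lets the GIN bitmap scan win for common terms, as found above:
    ALTER FUNCTION ts_match_vq(tsvector, tsquery) COST 500;

    -- If the GIN index is suspected to be bloated, rebuild it without losing
    -- the index while the build runs:
    CREATE INDEX CONCURRENTLY ftstest_tfs_idx_new
        ON ftstest USING gin (ftstest_body_fts);
    DROP INDEX ftstest_tfs_idx;
    ALTER INDEX ftstest_tfs_idx_new RENAME TO ftstest_tfs_idx;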
[ { "msg_contents": "Jeff, Robert, I am still working on the \"low cardinality\" info you requested. Please bear with me.\n\nIn the meantime, have the following question:\n\nAre there known \"scenarios\" where certain types of SQL queries perform worse in PG\nthan they do in ORacle ?\n\nFor example, I have observed some discussion where MAX (In Oracle) was replaced with ORDER/DESC/LIMIT\nin PG.\n\nI realize this is a loaded question, but it would be great if any of you would share some observed\ngeneralities in this context. \n\nThanks\nVK\n\nJeff, Robert, I am still working on the \"low cardinality\" info you requested. Please bear with me.In the meantime, have the following question:Are there known \"scenarios\" where certain types of SQL queries perform worse in PGthan they do in ORacle ?For example, I have observed some discussion where MAX (In Oracle) was replaced with ORDER/DESC/LIMITin PG.I realize this is a loaded question, but it would be great if any of you would share some observedgeneralities in this context. ThanksVK", "msg_date": "Mon, 19 Oct 2009 06:43:06 -0700 (PDT)", "msg_from": "Vikul Khosla <[email protected]>", "msg_from_op": true, "msg_subject": "Known Bottlenecks" }, { "msg_contents": "On Mon, Oct 19, 2009 at 2:43 PM, Vikul Khosla <[email protected]> wrote:\n\n> Jeff, Robert, I am still working on the \"low cardinality\" info you\n> requested. Please bear with me.\n>\n> In the meantime, have the following question:\n>\n> Are there known \"scenarios\" where certain types of SQL queries perform\n> worse in PG\n> than they do in ORacle ?\n>\n> For example, I have observed some discussion where MAX (In Oracle) was\n> replaced with ORDER/DESC/LIMIT\n> in PG.\n>\n> I realize this is a loaded question, but it would be great if any of you\n> would share some observed\n> generalities in this context.\n>\n\nother one would be SELECT .. WHERE foo IN (SELECT ...); (use join instead,\nand in case of NOT IN , use left join).\n\n\n\n-- \nGJ\n\nOn Mon, Oct 19, 2009 at 2:43 PM, Vikul Khosla <[email protected]> wrote:\nJeff, Robert, I am still working on the \"low cardinality\" info you requested. Please bear with me.\nIn the meantime, have the following question:Are there known \"scenarios\" where certain types of SQL queries perform worse in PGthan they do in ORacle ?For example, I have observed some discussion where MAX (In Oracle) was replaced with ORDER/DESC/LIMIT\nin PG.I realize this is a loaded question, but it would be great if any of you would share some observedgeneralities in this context. other one would be SELECT .. WHERE foo IN (SELECT ...);   (use join instead, and in case of NOT IN , use left join).\n-- GJ", "msg_date": "Mon, 19 Oct 2009 14:46:58 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Known Bottlenecks" } ]
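A small illustration of the two rewrites mentioned in this thread, using a hypothetical orders/customers pair of tables (the join form assumes customers.id is unique, otherwise it can duplicate rows):

    -- MAX() versus the ORDER BY ... DESC LIMIT form mentioned for older releases:
    SELECT max(created_at) FROM orders;
    SELECT created_at FROM orders ORDER BY created_at DESC LIMIT 1;

    -- IN (subquery) rewritten as a plain join:
    SELECT o.*
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    WHERE c.region = 'EU';

    -- NOT IN rewritten as a left join with an IS NULL test:
    SELECT o.*
    FROM orders o
    LEFT JOIN customers c ON c.id = o.customer_id
    WHERE c.id IS NULL;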
[ { "msg_contents": "Hi all,\n\nThe current discussion about \"Indexes on low cardinality columns\" let me discover this \n\"grouped index tuples\" patch (http://community.enterprisedb.com/git/) and its associated \n\"maintain cluster order\" patch (http://community.enterprisedb.com/git/maintain_cluster_order_v5.patch)\n\nThis last patch seems to cover the TODO item named \"Automatically maintain clustering on a table\". \nAs this patch is not so new (2007), I would like to know why it has not been yet integrated in a standart version of PG (not well finalized ? not totaly sure ? not corresponding to the way the core team would like to address this item ?) and if there are good chance to see it committed in a near future.\n\nI currently work for a large customer who is migrating a lot of databases used by an application that currently largely takes benefit from well clustered tables, especialy for batch processing. The migration brings a lot of benefits. In fact, the only regression, compared to the old RDBMS, is the fact that tables organisation level decreases more quickly, generating more frequent heavy cluster operations. \n\nSo this \"maintain cluster order\" patch (and may be \"git\" also) should fill the lack. But leaving the way of the \"standart PG\" is not something very attractive...\n\nRegards. \nPhilippe Beaudoin.\n\n\n\n\n", "msg_date": "Mon, 19 Oct 2009 21:32:18 +0200", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "maintain_cluster_order_v5.patch" }, { "msg_contents": "On Mon, 2009-10-19 at 21:32 +0200, [email protected] wrote:\n> Hi all,\n> \n> The current discussion about \"Indexes on low cardinality columns\" let\n> me discover this \n> \"grouped index tuples\" patch (http://community.enterprisedb.com/git/)\n> and its associated \n> \"maintain cluster order\" patch\n> (http://community.enterprisedb.com/git/maintain_cluster_order_v5.patch)\n> \n> This last patch seems to cover the TODO item named \"Automatically\n> maintain clustering on a table\".\n\nThe TODO item isn't clear about whether the order should be strictly\nmaintained, or whether it should just make an effort to keep the table\nmostly clustered. The patch mentioned above makes an effort, but does\nnot guarantee cluster order.\n\n> As this patch is not so new (2007), I would like to know why it has\n> not been yet integrated in a standart version of PG (not well\n> finalized ? not totaly sure ? not corresponding to the way the core\n> team would like to address this item ?) and if there are good chance\n> to see it committed in a near future.\n\nSearch the archives on -hackers for discussion. I don't think either of\nthese features were rejected, but some of the work and benchmarking have\nnot been completed.\n\nIf you can help (either benchmark work or C coding), try reviving the\nfeatures by testing them and merging them with the current tree. I\nrecommend reading the discussion first, to see if there are any major\nproblems.\n\nPersonally, I'd like to see the GIT feature finished as well. When I\nhave time, I was planning to take a look into it.\n\nRegards,\n\tJeff Davis\n\n", "msg_date": "Mon, 19 Oct 2009 15:05:53 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: maintain_cluster_order_v5.patch" } ]
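For the batch-processing situation described above, a couple of hedged building blocks available in stock PostgreSQL (table and index names are hypothetical): pg_stats.correlation shows how far the physical order has drifted from the index order, and CLUSTER ... USING re-establishes it, at the price of an exclusive lock for the duration.

    -- Correlation near 1.0 (or -1.0) means the column is still well clustered:
    SELECT attname, correlation
    FROM pg_stats
    WHERE schemaname = 'public' AND tablename = 'big_table';

    -- Rewrite the table in index order once the correlation has degraded:
    CLUSTER big_table USING big_table_pkey;
    ANALYZE big_table;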
[ { "msg_contents": "Hi,\n\nI have two large tables T1 and T2, such that T2 has a FK to T1 (i.e. T2.FK-->\nT1.PK, possibly multiple T2 rows may reference the same T1 row). I have\ndeleted about 2/3 of table T2. I now want to delete all rows in T1 that are\nnot referenced by T2, i.e. all rows in T1 that cannot join with (any row in)\nT2 on the condition T2.FK = T1.PK (the opposite of a join...)\n\nI assume this will work but will take a long time:\n\nDELETE * FROM T1 where T1.PK NOT IN\n(SELECT T1.PK FROM T1, T2 where T1.PK = T2.FK)\n\nWhat is an *efficient* way to do this?\nThanks,\n\n-- Shaul\n\nHi,I have two large tables T1 and T2, such that T2 has a FK to T1 (i.e. T2.FK --> T1.PK, possibly multiple T2 rows may reference the same T1 row). I have deleted about 2/3 of table T2. I now want to delete all rows in T1 that are not referenced by T2, i.e. all rows in T1 that cannot join with (any row in) T2 on the condition T2.FK = T1.PK (the opposite of a join...)\nI assume this will work but will take a long time:DELETE * FROM T1 where T1.PK NOT IN(SELECT T1.PK FROM T1, T2 where T1.PK = T2.FK)\nWhat is an efficient way to do this?Thanks,-- Shaul", "msg_date": "Tue, 20 Oct 2009 14:37:04 +0200", "msg_from": "Shaul Dar <[email protected]>", "msg_from_op": true, "msg_subject": "Finding rows in table T1 that DO NOT MATCH any row in table T2" }, { "msg_contents": "How about:\n\nDELETE * FROM T1 LEFT JOIN T2 ON T1.PK <http://t1.pk/> = T2.FK<http://t2.fk/>\nWHERE T2.FK IS NULL\n\nShaul\n\n\nOn Tue, Oct 20, 2009 at 2:37 PM, Shaul Dar <[email protected]> wrote:\n\n> Hi,\n>\n> I have two large tables T1 and T2, such that T2 has a FK to T1 (i.e. T2.FK-->\n> T1.PK, possibly multiple T2 rows may reference the same T1 row). I have\n> deleted about 2/3 of table T2. I now want to delete all rows in T1 that are\n> not referenced by T2, i.e. all rows in T1 that cannot join with (any row in)\n> T2 on the condition T2.FK = T1.PK (the opposite of a join...)\n>\n> I assume this will work but will take a long time:\n>\n> DELETE * FROM T1 where T1.PK NOT IN\n> (SELECT T1.PK FROM T1, T2 where T1.PK = T2.FK)\n>\n> What is an *efficient* way to do this?\n> Thanks,\n>\n> -- Shaul\n>\n>\n\nHow about:DELETE * FROM T1 LEFT JOIN T2 ON T1.PK = T2.FKWHERE T2.FK IS NULL\nShaul\nOn Tue, Oct 20, 2009 at 2:37 PM, Shaul Dar <[email protected]> wrote:\nHi,I have two large tables T1 and T2, such that T2 has a FK to T1 (i.e. T2.FK --> T1.PK, possibly multiple T2 rows may reference the same T1 row). I have deleted about 2/3 of table T2. I now want to delete all rows in T1 that are not referenced by T2, i.e. all rows in T1 that cannot join with (any row in) T2 on the condition T2.FK = T1.PK (the opposite of a join...)\nI assume this will work but will take a long time:DELETE * FROM T1 where T1.PK NOT IN(SELECT T1.PK FROM T1, T2 where T1.PK = T2.FK)\nWhat is an efficient way to do this?Thanks,-- Shaul", "msg_date": "Tue, 20 Oct 2009 14:46:23 +0200", "msg_from": "Shaul Dar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Finding rows in table T1 that DO NOT MATCH any row in table T2" }, { "msg_contents": "In response to Shaul Dar :\n> Hi,\n> \n> I have two large tables T1 and T2, such that T2 has a FK to T1 (i.e. T2.FK -->\n> T1.PK, possibly multiple T2 rows may reference the same T1 row). I have deleted\n> about 2/3 of table T2. I now want to delete all rows in T1 that are not\n> referenced by T2, i.e. 
all rows in T1 that cannot join with (any row in) T2 on\n> the condition T2.FK = T1.PK (the opposite of a join...)\n> \n> I assume this will work but will take a long time:\n> \n> DELETE * FROM T1 where T1.PK NOT IN\n> (SELECT T1.PK FROM T1, T2 where T1.PK = T2.FK)\n> \n> What is an efficient way to do this?\n> Thanks,\n\nMaybe this one:\n\n(my id is your pk):\n\ndelete from t1 where t1.id in (select t1.id from t1 left join t2 using\n(id) where t2.id is null);\n\nTry it, and/or use explain for both versions and see which which is\nfaster.\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\n", "msg_date": "Tue, 20 Oct 2009 14:54:05 +0200", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Finding rows in table T1 that DO NOT MATCH any row in table T2" }, { "msg_contents": "Shaul Dar <[email protected]> writes:\n> I assume this will work but will take a long time:\n\n> DELETE * FROM T1 where T1.PK NOT IN\n> (SELECT T1.PK FROM T1, T2 where T1.PK = T2.FK)\n\nWell, yeah, but it's unnecessarily inefficient --- why not just\n\nDELETE FROM T1 where T1.PK NOT IN\n(SELECT T2.FK FROM T2)\n\nHowever, that still won't be tremendously fast unless the subselect fits\nin work_mem. As of 8.4 this variant should be reasonable:\n\nDELETE FROM T1 where NOT EXISTS\n(SELECT 1 FROM T2 where T1.PK = T2.FK)\n\nPre-8.4 you should resort to the \"left join where is null\" trick,\nbut there's no need to be so obscure as of 8.4.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Oct 2009 09:59:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Finding rows in table T1 that DO NOT MATCH any row in table T2 " }, { "msg_contents": "How about\nDELETE FROM T1 WHERE T1.PK IN\n(SELECT T1.PK FROM T1 EXCEPT SELECT T2.FK FROM T2);\n\nMel\n\nOn Tue, Oct 20, 2009 at 7:59 AM, Tom Lane <[email protected]> wrote:\n\n> Shaul Dar <[email protected]> writes:\n> > I assume this will work but will take a long time:\n>\n> > DELETE * FROM T1 where T1.PK NOT IN\n> > (SELECT T1.PK FROM T1, T2 where T1.PK = T2.FK)\n>\n> Well, yeah, but it's unnecessarily inefficient --- why not just\n>\n> DELETE FROM T1 where T1.PK NOT IN\n> (SELECT T2.FK FROM T2)\n>\n> However, that still won't be tremendously fast unless the subselect fits\n> in work_mem. As of 8.4 this variant should be reasonable:\n>\n> DELETE FROM T1 where NOT EXISTS\n> (SELECT 1 FROM T2 where T1.PK = T2.FK)\n>\n> Pre-8.4 you should resort to the \"left join where is null\" trick,\n> but there's no need to be so obscure as of 8.4.\n>\n> regards, tom lane\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHow aboutDELETE FROM T1 WHERE T1.PK IN(SELECT T1.PK FROM T1 EXCEPT SELECT T2.FK FROM T2);\nMelOn Tue, Oct 20, 2009 at 7:59 AM, Tom Lane <[email protected]> wrote:\nShaul Dar <[email protected]> writes:\n> I assume this will work but will take a long time:\n\n> DELETE * FROM T1 where T1.PK NOT IN\n> (SELECT T1.PK FROM T1, T2 where T1.PK = T2.FK)\n\nWell, yeah, but it's unnecessarily inefficient --- why not just\n\nDELETE FROM T1 where T1.PK NOT IN\n(SELECT T2.FK FROM T2)\n\nHowever, that still won't be tremendously fast unless the subselect fits\nin work_mem.  
As of 8.4 this variant should be reasonable:\n\nDELETE FROM T1 where NOT EXISTS\n(SELECT 1 FROM T2 where T1.PK = T2.FK)\n\nPre-8.4 you should resort to the \"left join where is null\" trick,\nbut there's no need to be so obscure as of 8.4.\n\n                        regards, tom lane\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 20 Oct 2009 10:14:13 -0600", "msg_from": "Melton Low <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Finding rows in table T1 that DO NOT MATCH any row in\n\ttable T2" }, { "msg_contents": "Tom,\n\n1. Actually I just tested you suggestion\n\nSELECT COUNT (*) FROM T1 where NOT EXISTS\n(SELECT 1 FROM T2 where T1.PK <http://t1.pk/> = T2.FK <http://t2.fk/>)\n\nand in worked in PG 8.3.8. On a DB with 6M T1 records and 5M T2 records it\ntook 1m8s,\n\nMy suggestion, i.e.\n\nSELECT COUNT(*) FROM T1 LEFT JOIN T2 ON T1.PK <http://t1.pk/> =\nT2.FK<http://t2.fk/>\nWHERE T2.FK <http://t2.fk/> IS NULL\n\nwas about twice as fast, 37s. (both returned same number of rows, about 2/3\nof T1)\n\nHowever I can use DELETE with your version (instead of \"SELECT COUNT (*)\"\nabove) but not with mine (can't have LEFT JOIN in DELETE), so YOU WIN.\nThanks!\n\n2. BTW. I presented my question earlier in an overly simplified fashion.\nSorry. In actuality the two tables are joined on two columns,\nsay Ka and Kb (a composite key column), e.g. T1.PKa = T2.FKa and T1.PKb =\nT2.FKb. So the IN versions suggested will not work\nsince AFAIK IN only works for a single value.\n\n-- Shaul\n\nOn Tue, Oct 20, 2009 at 3:59 PM, Tom Lane <[email protected]> wrote:\n\n> Shaul Dar <[email protected]> writes:\n> > I assume this will work but will take a long time:\n>\n> > DELETE * FROM T1 where T1.PK NOT IN\n> > (SELECT T1.PK FROM T1, T2 where T1.PK = T2.FK)\n>\n> Well, yeah, but it's unnecessarily inefficient --- why not just\n>\n> DELETE FROM T1 where T1.PK NOT IN\n> (SELECT T2.FK FROM T2)\n>\n> However, that still won't be tremendously fast unless the subselect fits\n> in work_mem. As of 8.4 this variant should be reasonable:\n>\n> DELETE FROM T1 where NOT EXISTS\n> (SELECT 1 FROM T2 where T1.PK = T2.FK)\n>\n> Pre-8.4 you should resort to the \"left join where is null\" trick,\n> but there's no need to be so obscure as of 8.4.\n>\n> regards, tom lane\n>\n\nTom,\n\n1. Actually I just tested you suggestion\n\nSELECT COUNT (*) FROM T1 where NOT EXISTS\n\n(SELECT 1 FROM T2 where T1.PK = T2.FK)\n\nand in worked in PG 8.3.8. On a DB with 6M T1 records and 5M T2 records it took 1m8s,\n\nMy suggestion, i.e. \n\nSELECT COUNT(*) FROM T1 LEFT JOIN T2 ON T1.PK = T2.FK\nWHERE T2.FK IS NULL\n\nwas about twice as fast, 37s. (both returned same number of rows, about 2/3 of T1)\n\nHowever I can use DELETE with your version (instead of \"SELECT COUNT\n(*)\" above) but not with mine (can't have LEFT JOIN in DELETE), so YOU\nWIN. Thanks!\n\n2. BTW. I presented my question earlier in an overly simplified fashion. Sorry. In actuality the two tables are joined on two columns,\nsay Ka and Kb (a composite key column), e.g. T1.PKa = T2.FKa and T1.PKb = T2.FKb. 
So the IN versions suggested will not work\nsince AFAIK IN only works for a single value.-- ShaulOn Tue, Oct 20, 2009 at 3:59 PM, Tom Lane <[email protected]> wrote:\nShaul Dar <[email protected]> writes:\n\n> I assume this will work but will take a long time:\n\n> DELETE * FROM T1 where T1.PK NOT IN\n> (SELECT T1.PK FROM T1, T2 where T1.PK = T2.FK)\n\nWell, yeah, but it's unnecessarily inefficient --- why not just\n\nDELETE FROM T1 where T1.PK NOT IN\n(SELECT T2.FK FROM T2)\n\nHowever, that still won't be tremendously fast unless the subselect fits\nin work_mem.  As of 8.4 this variant should be reasonable:\n\nDELETE FROM T1 where NOT EXISTS\n(SELECT 1 FROM T2 where T1.PK = T2.FK)\n\nPre-8.4 you should resort to the \"left join where is null\" trick,\nbut there's no need to be so obscure as of 8.4.\n\n                        regards, tom lane", "msg_date": "Wed, 21 Oct 2009 13:52:44 +0200", "msg_from": "Shaul Dar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Finding rows in table T1 that DO NOT MATCH any row in\n\ttable T2" }, { "msg_contents": "\nOn 10/21/09 4:52 AM, \"Shaul Dar\" <[email protected]> wrote:\n\n> Tom,\n> \n> 1. Actually I just tested you suggestion\n> \n> SELECT COUNT (*) FROM T1 where NOT EXISTS\n> (SELECT 1 FROM T2 where T1.PK <http://t1.pk/> = T2.FK <http://t2.fk/> )\n> \n> and in worked in PG 8.3.8. On a DB with 6M T1 records and 5M T2 records it\n> took 1m8s,\n> \n> My suggestion, i.e.\n> \n> SELECT COUNT(*) FROM T1 LEFT JOIN T2 ON T1.PK <http://t1.pk/> = T2.FK\n> <http://t2.fk/> \n> WHERE T2.FK <http://t2.fk/> IS NULL\n> \n> was about twice as fast, 37s. (both returned same number of rows, about 2/3 of\n> T1)\n> \n> However I can use DELETE with your version (instead of \"SELECT COUNT (*)\"\n> above) but not with mine (can't have LEFT JOIN in DELETE), so YOU WIN. Thanks!\n> \n> 2. BTW. I presented my question earlier in an overly simplified fashion.\n> Sorry. In actuality the two tables are joined on two columns,\n> say Ka and Kb (a composite key column), e.g. T1.PKa = T2.FKa and T1.PKb =\n> T2.FKb. So the IN versions suggested will not work\n> since AFAIK IN only works for a single value.\n\nThe performance will stink in many cases, but IN and NOT IN can work on\nmultiple values, for example:\n\nWHERE (a.key1, a.key2) NOT IN (SELECT b.key1, b.key2 FROM b).\n\nThe fastest (in 8.4) is definitely NOT EXISTS.\n\nWHERE NOT EXISTS (SELECT 1 FROM b WHERE (b.key1, b.key2) = (a.key1, a.key2))\n\n I've done this, deleting from tables with 15M + rows where I need a \"not\nin\" on two or three columns on multiple other tables.\nHowever, NOT EXISTS is only fast if every NOT EXISTS clause is a select on\none table, if it is multiple tables and a join, things can get ugly and the\nplanner might not optimize it right. In that case use two NOT EXISTS\nclauses. 
Always look at the EXPLAIN plan.\n\nWith 8.4 -- for performance generally prefer the following:\n* prefer JOIN and implicit joins to IN and EXISTS.\n* prefer 'NOT EXISTS' to 'NOT IN' or 'LEFT JOIN where (right is null)'\n\n\n> \n> -- Shaul\n> \n> On Tue, Oct 20, 2009 at 3:59 PM, Tom Lane <[email protected]> wrote:\n>> Shaul Dar <[email protected]> writes:\n>>> I assume this will work but will take a long time:\n>> \n>>> DELETE * FROM T1 where T1.PK <http://T1.PK> NOT IN\n>>> (SELECT T1.PK <http://T1.PK> FROM T1, T2 where T1.PK <http://T1.PK> =\n>>> T2.FK <http://T2.FK> )\n>> \n>> Well, yeah, but it's unnecessarily inefficient --- why not just\n>> \n>> DELETE FROM T1 where T1.PK <http://T1.PK> NOT IN\n>> (SELECT T2.FK <http://T2.FK> FROM T2)\n>> \n>> However, that still won't be tremendously fast unless the subselect fits\n>> in work_mem.  As of 8.4 this variant should be reasonable:\n>> \n>> DELETE FROM T1 where NOT EXISTS\n>> (SELECT 1 FROM T2 where T1.PK <http://T1.PK> = T2.FK <http://T2.FK> )\n>> \n>> Pre-8.4 you should resort to the \"left join where is null\" trick,\n>> but there's no need to be so obscure as of 8.4.\n>> \n>>                        regards, tom lane\n> \n> \n\n", "msg_date": "Wed, 21 Oct 2009 11:00:37 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Finding rows in table T1 that DO NOT MATCH any row in\n table T2" } ]
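Pulling the suggestions in the thread above together: a minimal sketch of the 8.4-style anti-join DELETE for the two-column key case Shaul described. T1/T2 and the PKa/PKb/FKa/FKb names are the thread's own placeholders, not a real schema, and as Scott notes the EXPLAIN plan should be checked before running either form.

DELETE FROM T1
WHERE NOT EXISTS (
    SELECT 1
    FROM   T2
    WHERE  T2.FKa = T1.PKa
    AND    T2.FKb = T1.PKb
);

-- pre-8.4 alternative using the "left join where right side is null" trick,
-- wrapped in a row-wise IN because DELETE has no LEFT JOIN form of its own:
DELETE FROM T1
WHERE (T1.PKa, T1.PKb) IN (
    SELECT T1.PKa, T1.PKb
    FROM   T1
    LEFT JOIN T2 ON T2.FKa = T1.PKa AND T2.FKb = T1.PKb
    WHERE  T2.FKa IS NULL
);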
[ { "msg_contents": "Hi (running PG8.4.1)\n\nAs far as I have gotten in my test of PG Full Text Search.. I have got\nover 6m documents indexed so far and the index has grown to 37GB. The\nsystems didnt do any autovacuums in the process but I manually vacuumed a\nfew times and that stopped growth for a short period of time.\n\n table_name | index_name | times_used | table_size | index_size |\nnum_writes | definition\n------------+-----------------+------------+------------+------------+------------+----------------------------------------------------------------------\n ftstest | body_tfs_idx | 171 | 5071 MB | 37 GB | \n6122086 | CREATE INDEX ftstest_tfs_idx ON ftstest USING gin\n(ftstest_body_fts)\n(1 row)\n\nThis is sort of what I'd expect this is not more scary than the Xapian\nindex it is comparing with. Search speed seems excellent. But I feel I'm\ngetting a significant drop-off in indexing speed as time goes by, I dont\nhave numbers to confirm this.\n\nIf i understand the technicalities correct then INSERT/UPDATES to the\nindex will be accumulated in the \"maintainance_work_mem\" and the \"user\"\nbeing unlucky to fill it up will pay the penalty of merging all the\nchanges into the index?\n\nI currently have \"maintainance_work_mem\" set to 128MB and according to\n\"pg_stat_activity\" i currently have a insert sitting for over 1 hour. If I\nstrace the postgres process-id it is reading and writing a lot on the\nfilesystem and imposing an IO-wait load of 1 cpu.\n\nCan I do something to prevent this from happening? Is it \"by design\"?\n\n-- \nJesper\n\n", "msg_date": "Wed, 21 Oct 2009 17:03:09 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Random penalties on GIN index updates? " }, { "msg_contents": "[email protected] writes:\n> If i understand the technicalities correct then INSERT/UPDATES to the\n> index will be accumulated in the \"maintainance_work_mem\" and the \"user\"\n> being unlucky to fill it up will pay the penalty of merging all the\n> changes into the index?\n\nYou can turn off the \"fastupdate\" index parameter to disable that,\nbut I think there may be a penalty in index bloat as well as insertion\nspeed. It would be better to use a more conservative work_mem\n(work_mem, not maintenance_work_mem, is what limits the amount of stuff\naccumulated during normal inserts).\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Oct 2009 11:13:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Random penalties on GIN index updates? " }, { "msg_contents": "Tom Lane wrote:\n> [email protected] writes:\n>> If i understand the technicalities correct then INSERT/UPDATES to the\n>> index will be accumulated in the \"maintainance_work_mem\" and the \"user\"\n>> being unlucky to fill it up will pay the penalty of merging all the\n>> changes into the index?\n> \n> You can turn off the \"fastupdate\" index parameter to disable that,\n> but I think there may be a penalty in index bloat as well as insertion\n> speed. It would be better to use a more conservative work_mem\n> (work_mem, not maintenance_work_mem, is what limits the amount of stuff\n> accumulated during normal inserts). \n\nOk, I read the manual about that. Seems worth testing, hat I'm seeing is\nstuff like this:\n\n2009-10-21T16:32:21\n2009-10-21T16:32:25\n2009-10-21T16:32:30\n2009-10-21T16:32:35\n2009-10-21T17:10:50\n2009-10-21T17:10:59\n2009-10-21T17:11:09\n... 
then it went on steady for another 180.000 documents.\n\nEach row is a printout from the application doing INSERTS, it print the\ntime for each 1000 rows it gets through. It is the 38minutes in the\nmiddle I'm a bit worried about.\n\nwork_mem is set to 512MB, that may translate into 180.000 documents in\nmy system?\n\nWhat I seems to miss a way to make sure som \"background\" application is\nthe one getting the penalty, so a random user doing a single insert\nwon't get stuck. Is that doable?\n\nIt also seems to lock out other inserts while being in this state.\n\n-- \nJesper\n", "msg_date": "Wed, 21 Oct 2009 19:58:34 +0200", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Random penalties on GIN index updates?" }, { "msg_contents": "Jesper Krogh <[email protected]> writes:\n> What I seems to miss a way to make sure som \"background\" application is\n> the one getting the penalty, so a random user doing a single insert\n> won't get stuck. Is that doable?\n\nYou could force a vacuum every so often, but I don't think that will\nhelp the locking situation. You really need to back off work_mem ---\n512MB is probably not a sane global value for that anyway.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Oct 2009 14:35:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Random penalties on GIN index updates? " }, { "msg_contents": "On Wed, Oct 21, 2009 at 2:35 PM, Tom Lane <[email protected]> wrote:\n> Jesper Krogh <[email protected]> writes:\n>> What I seems to miss a way to make sure som \"background\" application is\n>> the one getting the penalty, so a random user doing a single insert\n>> won't get stuck. Is that doable?\n>\n> You could force a vacuum every so often, but I don't think that will\n> help the locking situation.  You really need to back off work_mem ---\n> 512MB is probably not a sane global value for that anyway.\n\nYeah, it's hard to imagine a system where that doesn't threaten all\nkinds of other bad results. I bet setting this to 4MB will make this\nproblem largely go away.\n\nArguably we shouldn't be using work_mem to control this particular\nbehavior, but...\n\n...Robert\n", "msg_date": "Wed, 21 Oct 2009 23:16:28 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Random penalties on GIN index updates?" }, { "msg_contents": "Robert Haas wrote:\n> On Wed, Oct 21, 2009 at 2:35 PM, Tom Lane <[email protected]> wrote:\n>> Jesper Krogh <[email protected]> writes:\n>>> What I seems to miss a way to make sure som \"background\" application is\n>>> the one getting the penalty, so a random user doing a single insert\n>>> won't get stuck. Is that doable?\n>> You could force a vacuum every so often, but I don't think that will\n>> help the locking situation. You really need to back off work_mem ---\n>> 512MB is probably not a sane global value for that anyway.\n> \n> Yeah, it's hard to imagine a system where that doesn't threaten all\n> kinds of other bad results. I bet setting this to 4MB will make this\n> problem largely go away.\n> \n> Arguably we shouldn't be using work_mem to control this particular\n> behavior, but...\n\nI came from Xapian, where you only can have one writer process, but\nbatching up in several GB's improved indexing performance dramatically.\nLowering work_mem to 16MB gives \"batches\" of 11.000 documents and stall\nbetween 45 and 90s. 
~ 33 docs/s\n\n-- \nJesper\n", "msg_date": "Thu, 22 Oct 2009 06:57:49 +0200", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Random penalties on GIN index updates?" } ]
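For reference, the knobs discussed in this thread combine roughly as below. The table and index names are the ones from the test setup at the top of the thread, and the 4MB figure is simply the value suggested in the discussion, not a measured optimum.

-- 8.4+: turn the GIN pending-list ("fast update") mechanism off entirely
ALTER INDEX ftstest_tfs_idx SET (fastupdate = off);

-- or keep fastupdate on, but cap how much pending work a foreground INSERT
-- can be forced to merge, by lowering work_mem in the inserting session only:
SET work_mem = '4MB';

-- a periodic (auto)vacuum also folds the pending list into the main index,
-- taking that cost away from whichever client happens to insert next:
VACUUM ftstest;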
[ { "msg_contents": "Hi guys,\n\nImagine if you will that I have a table thus\n\nCREATE TABLE \"lumps\" (\n \"id\" SERIAL PRIMARY KEY,\n \"name\" TEXT NOT NULL,\n \"data\" BYTEA NOT NULL\n);\n\nImagine I have stored say 1000 rows.\n\nIn each row, we have stored on average\n20 bytes in column \"name\",\n10 megabytes in column \"data\".\n\nSo my table contains 10 gigabytes of \"data\" and 20 kilobytes of \"name\"s.\n\nThe values in colum \"data\" will presumably be TOASTed.\n\nNow, I go ahead and run the following query:\n\nSELECT \"name\" FROM \"lumps\";\n\nClearly the query will need to retrieve something from all 1000 rows.\n\nAnd now we get to the question:\n\nWill the query engine retrieve the entire row (including 10 megabytes of \nout-of-line TOASTed data) for every row, and then pick out column \n\"name\", and take an age to do so, OR will the query engine retrive just \nthe \"direct\" row, which includes \"name\" in-line, and return those to me, \nin the blink of an eye?\n\nClearly the former would be slow and undesirable, and the latter quick \nand desirable.\n\nRegards,\n\nBill\n", "msg_date": "Wed, 21 Oct 2009 18:26:16 +0100", "msg_from": "William Blunn <[email protected]>", "msg_from_op": true, "msg_subject": "Are unreferenced TOASTed values retrieved?" }, { "msg_contents": "William Blunn <[email protected]> writes:\n> Will the query engine retrieve the entire row (including 10 megabytes of \n> out-of-line TOASTed data) for every row, and then pick out column \n> \"name\", and take an age to do so,\n\nNo. That's pretty much the whole point of the TOAST mechanism;\nout-of-line values are not fetched unless actually needed. See\nhttp://developer.postgresql.org/pgdocs/postgres/storage-toast.html\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Oct 2009 15:12:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are unreferenced TOASTed values retrieved? " } ]
[ { "msg_contents": "Hi Jeff,\n\n>> Hi all,\n>> \n>> The current discussion about \"Indexes on low cardinality columns\" let\n>> me discover this \n>> \"grouped index tuples\" patch (http://community.enterprisedb.com/git/)\n>> and its associated \n>> \"maintain cluster order\" patch\n>> (http://community.enterprisedb.com/git/maintain_cluster_order_v5.patch)\n>> \n>> This last patch seems to cover the TODO item named \"Automatically\n>> maintain clustering on a table\".\n>\n>The TODO item isn't clear about whether the order should be strictly\n>maintained, or whether it should just make an effort to keep the table\n>mostly clustered. The patch mentioned above makes an effort, but does\n>not guarantee cluster order.\n>\nYou are right, there are 2 different visions : a strictly maintained order or a possibly maintained order.\nThis later is already a good enhancement as it largely decrease the time interval between 2 CLUSTER operations, in particular if the FILLFACTOR is properly set. In term of performance, having 99% of rows in the \"right\" page is not realy worse than having totaly optimized storage. \nThe only benefit of a strictly maintained order is that there is no need for CLUSTER at all, which could be very interesting for very large databases with 24/24 access constraint.\nFor our need, the \"possibly maintained order\" is enough.\n\n>> As this patch is not so new (2007), I would like to know why it has\n>> not been yet integrated in a standart version of PG (not well\n>> finalized ? not totaly sure ? not corresponding to the way the core\n>> team would like to address this item ?) and if there are good chance\n>> to see it committed in a near future.\n>\n>Search the archives on -hackers for discussion. I don't think either of\n>these features were rejected, but some of the work and benchmarking have\n>not been completed.\nOK, I will have a look.\n>\n>If you can help (either benchmark work or C coding), try reviving the\n>features by testing them and merging them with the current tree.\nOK, that's the rule of the game in such a community.\nI am not a good C writer, but I will see what I could do.\n\n> I recommend reading the discussion first, to see if there are any major\n>problems.\n\n>\n>Personally, I'd like to see the GIT feature finished as well. When I\n>have time, I was planning to take a look into it.\n>\n>Regards,\n>\tJeff Davis\n\n\n", "msg_date": "Wed, 21 Oct 2009 19:55:18 +0200", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: maintain_cluster_order_v5.patch" }, { "msg_contents": "[email protected] wrote:\n> Hi Jeff,\n>> If you can help (either benchmark work or C coding), try reviving the\n>> features by testing them and merging them with the current tree.\n> OK, that's the rule of the game in such a community.\n> I am not a good C writer, but I will see what I could do.\n\nThe FSM rewrite in 8.4 opened up more options for implementing this. The\npatch used to check the index for the block the nearest key is stored\nin, read that page in, and insert there if there's enough free space on\nit. with the new FSM, you can check how much space there is on that\nparticular page before fetching it. And if it's full, the new FSM data\nstructure can be searched for a page with enough free space as close as\npossible to the old page, although there's no interface to do that yet.\n\nA completely different line of attack would be to write a daemon that\nconcurrently moves tuples in order to keep the table clustered. 
It would\ninterfere with UPDATEs and DELETEs, and ctids of the tuples would\nchange, but for many use cases it would be just fine. We discussed a\nutility like that as a replacement for VACUUM FULL on hackers a while\nago, see thread \"Feedback on getting rid of VACUUM FULL\". A similar\napproach would work here, the logic for deciding which tuples to move\nand where would just be different.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 22 Oct 2009 09:56:23 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: maintain_cluster_order_v5.patch" } ]
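Until something like either patch is merged, the status quo the thread keeps referring to is periodic CLUSTER combined with a FILLFACTOR that leaves room near each row's neighbours. A rough sketch, with purely illustrative table, column and index names:

ALTER TABLE orders SET (fillfactor = 90);          -- leave ~10% slack in each heap page
CREATE INDEX orders_customer_idx ON orders (customer_id);
CLUSTER orders USING orders_customer_idx;          -- 8.3+ syntax; rewrites the table in index order
-- later maintenance runs reuse the recorded clustering index:
CLUSTER orders;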
[ { "msg_contents": "I have a reporting query that is taking nearly all of it's time in aggregate\nfunctions and I'm trying to figure out how to optimize it. The query takes\napproximately 170ms when run with \"select *\", but when run with all the\naggregate functions the query takes 18 seconds. The slowness comes from our\nattempt to find distribution data using selects of the form:\n\nSUM(CASE WHEN field >= x AND field < y THEN 1 ELSE 0 END)\n\nrepeated across many different x,y values and fields to build out several\nhistograms of the data. The main culprit appears to be the CASE statement,\nbut I'm not sure what to use instead. I'm sure other people have had\nsimilar queries and I was wondering what methods they used to build out data\nlike this?\nThanks for your help,\nDoug\n\nI have a reporting query that is taking nearly all of it's time in aggregate functions and I'm trying to figure out how to optimize it.  The query takes approximately 170ms when run with \"select *\", but when run with all the aggregate functions the query takes 18 seconds.  The slowness comes from our attempt to find distribution data using selects of the form: \nSUM(CASE WHEN field >= x AND field < y THEN 1 ELSE 0 END)repeated across many different x,y values and fields to build out several histograms of the data.  The main culprit appears to be the CASE statement, but I'm not sure what to use instead.  I'm sure other people have had similar queries and I was wondering what methods they used to build out data like this?\nThanks for your help,Doug", "msg_date": "Wed, 21 Oct 2009 15:51:25 -0700", "msg_from": "Doug Cole <[email protected]>", "msg_from_op": true, "msg_subject": "optimizing query with multiple aggregates" }, { "msg_contents": "On Wed, Oct 21, 2009 at 6:51 PM, Doug Cole <[email protected]> wrote:\n> I have a reporting query that is taking nearly all of it's time in aggregate\n> functions and I'm trying to figure out how to optimize it.  The query takes\n> approximately 170ms when run with \"select *\", but when run with all the\n> aggregate functions the query takes 18 seconds.  The slowness comes from our\n> attempt to find distribution data using selects of the form:\n>\n> SUM(CASE WHEN field >= x AND field < y THEN 1 ELSE 0 END)\n>\n> repeated across many different x,y values and fields to build out several\n> histograms of the data.  The main culprit appears to be the CASE statement,\n> but I'm not sure what to use instead.  I'm sure other people have had\n> similar queries and I was wondering what methods they used to build out data\n> like this?\n\nhave you tried:\n\ncount(*) where field >= x AND field < y;\n\n??\n\nmerlin\n", "msg_date": "Wed, 21 Oct 2009 20:39:48 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing query with multiple aggregates" }, { "msg_contents": "On Wed, Oct 21, 2009 at 5:39 PM, Merlin Moncure <[email protected]> wrote:\n>\n> On Wed, Oct 21, 2009 at 6:51 PM, Doug Cole <[email protected]> wrote:\n> > I have a reporting query that is taking nearly all of it's time in aggregate\n> > functions and I'm trying to figure out how to optimize it.  The query takes\n> > approximately 170ms when run with \"select *\", but when run with all the\n> > aggregate functions the query takes 18 seconds.  
The slowness comes from our\n> > attempt to find distribution data using selects of the form:\n> >\n> > SUM(CASE WHEN field >= x AND field < y THEN 1 ELSE 0 END)\n> >\n> > repeated across many different x,y values and fields to build out several\n> > histograms of the data.  The main culprit appears to be the CASE statement,\n> > but I'm not sure what to use instead.  I'm sure other people have had\n> > similar queries and I was wondering what methods they used to build out data\n> > like this?\n>\n> have you tried:\n>\n> count(*) where field >= x AND field < y;\n>\n> ??\n>\n> merlin\n\nUnless I'm misunderstanding you, that would require breaking each bin\ninto a separate sql statement and since I'm trying to calculate more\nthan 100 bins between the different fields any improvement in the\naggregate functions would be overwhelmed by the cost of the actual\nquery, which is about 170ms.\nThanks,\nDoug\n", "msg_date": "Wed, 21 Oct 2009 19:21:36 -0700", "msg_from": "Doug Cole <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimizing query with multiple aggregates" }, { "msg_contents": "On Wed, Oct 21, 2009 at 6:51 PM, Doug Cole <[email protected]> wrote:\n\n>\n> repeated across many different x,y values and fields to build out several\n> histograms of the data. The main culprit appears to be the CASE statement,\n> but I'm not sure what to use instead. I'm sure other people have had\n> similar queries and I was wondering what methods they used to build out data\n> like this?\n>\n\nUse group by with an appropriate division/rounding to create the appropriate\nbuckets, if they're all the same size.\n\nselect round(field/100) as bucket, count(*) as cnt from foo group by\nround(field/100);\n\n-- \n- David T. Wilson\[email protected]\n\nOn Wed, Oct 21, 2009 at 6:51 PM, Doug Cole <[email protected]> wrote:\nrepeated across many different x,y values and fields to build out several histograms of the data.  The main culprit appears to be the CASE statement, but I'm not sure what to use instead.  I'm sure other people have had similar queries and I was wondering what methods they used to build out data like this?\nUse group by with an appropriate division/rounding to create the appropriate buckets, if they're all the same size.select round(field/100) as bucket, count(*) as cnt from foo group by round(field/100);\n-- - David T. [email protected]", "msg_date": "Wed, 21 Oct 2009 22:47:31 -0400", "msg_from": "David Wilson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing query with multiple aggregates" }, { "msg_contents": "So you've got a query like:\nSELECT SUM(CASE WHEN field >= 0 AND field < 10 THEN 1 ELSE 0 END) as\nzeroToTen,\n SUM(CASE WHEN field >= 10 AND field < 20 THEN 1 ELSE 0 END) as\ntenToTwenty,\n SUM(CASE WHEN field >= 20 AND field < 30 THEN 1 ELSE 0 END) as\ntenToTwenty,\n...\nFROM bigtable\n\n\nMy guess is this forcing a whole bunch of if checks and your getting cpu\nbound. Could you try something like:\n\nSELECT SUM(CASE WHEN field >= 0 AND field < 10 THEN count ELSE 0 END) as\nzeroToTen,\n SUM(CASE WHEN field >= 10 AND field < 20 THEN count ELSE 0\nEND) as tenToTwenty,\n SUM(CASE WHEN field >= 20 AND field < 30 THEN count ELSE 0\nEND) as tenToTwenty,\n...\nFROM (SELECT field, count(*) FROM bigtable GROUP BY field)\n\nwhich will allow a hash aggregate? 
You'd do a hash aggregate on the whole\ntable which should be quick and then you'd summarize your bins.\n\nThis all supposes that you don't want to just query postgres's column\nstatistics.\n\nOn Wed, Oct 21, 2009 at 10:21 PM, Doug Cole <[email protected]> wrote:\n\n> On Wed, Oct 21, 2009 at 5:39 PM, Merlin Moncure <[email protected]>\n> wrote:\n> >\n> > On Wed, Oct 21, 2009 at 6:51 PM, Doug Cole <[email protected]> wrote:\n> > > I have a reporting query that is taking nearly all of it's time in\n> aggregate\n> > > functions and I'm trying to figure out how to optimize it. The query\n> takes\n> > > approximately 170ms when run with \"select *\", but when run with all the\n> > > aggregate functions the query takes 18 seconds. The slowness comes\n> from our\n> > > attempt to find distribution data using selects of the form:\n> > >\n> > > SUM(CASE WHEN field >= x AND field < y THEN 1 ELSE 0 END)\n> > >\n> > > repeated across many different x,y values and fields to build out\n> several\n> > > histograms of the data. The main culprit appears to be the CASE\n> statement,\n> > > but I'm not sure what to use instead. I'm sure other people have had\n> > > similar queries and I was wondering what methods they used to build out\n> data\n> > > like this?\n> >\n> > have you tried:\n> >\n> > count(*) where field >= x AND field < y;\n> >\n> > ??\n> >\n> > merlin\n>\n> Unless I'm misunderstanding you, that would require breaking each bin\n> into a separate sql statement and since I'm trying to calculate more\n> than 100 bins between the different fields any improvement in the\n> aggregate functions would be overwhelmed by the cost of the actual\n> query, which is about 170ms.\n> Thanks,\n> Doug\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nSo you've got a query like:SELECT SUM(CASE WHEN field >= 0 AND field < 10 THEN 1 ELSE 0 END) as zeroToTen,\n              SUM(CASE WHEN field >= 10 AND field < 20 THEN 1 ELSE 0 END) as tenToTwenty,\n              SUM(CASE WHEN field >= 20 AND field < 30 THEN 1 ELSE 0 END) as tenToTwenty,\n...FROM  bigtable\n\nMy guess is this forcing a whole bunch of if checks and your getting cpu bound.  Could you try something like:\n\n\nSELECT SUM(CASE WHEN field >= 0 AND field < 10 THEN count ELSE 0 END) as zeroToTen,\n              SUM(CASE WHEN field >= 10 AND field < 20 THEN count ELSE 0 END) as tenToTwenty,\n              SUM(CASE WHEN field >= 20 AND field < 30 THEN count ELSE 0 END) as tenToTwenty,\n...FROM  (SELECT field, count(*) FROM bigtable GROUP BY field)\nwhich will allow a hash aggregate?  You'd do a hash aggregate on the whole table which should be quick and then you'd summarize your bins.\nThis all supposes that you don't want to just query postgres's column statistics.\nOn Wed, Oct 21, 2009 at 10:21 PM, Doug Cole <[email protected]> wrote:\nOn Wed, Oct 21, 2009 at 5:39 PM, Merlin Moncure <[email protected]> wrote:\n\n\n>\n> On Wed, Oct 21, 2009 at 6:51 PM, Doug Cole <[email protected]> wrote:\n> > I have a reporting query that is taking nearly all of it's time in aggregate\n> > functions and I'm trying to figure out how to optimize it.  The query takes\n> > approximately 170ms when run with \"select *\", but when run with all the\n> > aggregate functions the query takes 18 seconds.  
The slowness comes from our\n> > attempt to find distribution data using selects of the form:\n> >\n> > SUM(CASE WHEN field >= x AND field < y THEN 1 ELSE 0 END)\n> >\n> > repeated across many different x,y values and fields to build out several\n> > histograms of the data.  The main culprit appears to be the CASE statement,\n> > but I'm not sure what to use instead.  I'm sure other people have had\n> > similar queries and I was wondering what methods they used to build out data\n> > like this?\n>\n> have you tried:\n>\n> count(*) where field >= x AND field < y;\n>\n> ??\n>\n> merlin\n\nUnless I'm misunderstanding you, that would require breaking each bin\ninto a separate sql statement and since I'm trying to calculate more\nthan 100 bins between the different fields any improvement in the\naggregate functions would be overwhelmed by the cost of the actual\nquery, which is about 170ms.\nThanks,\nDoug\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 21 Oct 2009 22:47:45 -0400", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing query with multiple aggregates" }, { "msg_contents": "On Wed, Oct 21, 2009 at 03:51:25PM -0700, Doug Cole wrote:\n> I have a reporting query that is taking nearly all of it's time in aggregate\n> functions and I'm trying to figure out how to optimize it. The query takes\n> approximately 170ms when run with \"select *\", but when run with all the\n> aggregate functions the query takes 18 seconds. The slowness comes from our\n> attempt to find distribution data using selects of the form:\n> \n> SUM(CASE WHEN field >= x AND field < y THEN 1 ELSE 0 END)\n> \n> repeated across many different x,y values and fields to build out several\n> histograms of the data. The main culprit appears to be the CASE statement,\n> but I'm not sure what to use instead. I'm sure other people have had\n> similar queries and I was wondering what methods they used to build out data\n> like this?\n> Thanks for your help,\n> Doug\n\nHi Doug,\n\nHave you tried using the width_bucket() function? Here is a nice\narticle describing its use for making histograms:\n\nhttp://quantmeditate.blogspot.com/2005/03/creating-histograms-using-sql-function.html\n\nRegards,\nKen\n", "msg_date": "Thu, 22 Oct 2009 08:22:14 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing query with multiple aggregates" }, { "msg_contents": "On Wed, Oct 21, 2009 at 10:21 PM, Doug Cole <[email protected]> wrote:\n> On Wed, Oct 21, 2009 at 5:39 PM, Merlin Moncure <[email protected]> wrote:\n>>\n>> On Wed, Oct 21, 2009 at 6:51 PM, Doug Cole <[email protected]> wrote:\n>> > I have a reporting query that is taking nearly all of it's time in aggregate\n>> > functions and I'm trying to figure out how to optimize it.  The query takes\n>> > approximately 170ms when run with \"select *\", but when run with all the\n>> > aggregate functions the query takes 18 seconds.  The slowness comes from our\n>> > attempt to find distribution data using selects of the form:\n>> >\n>> > SUM(CASE WHEN field >= x AND field < y THEN 1 ELSE 0 END)\n>> >\n>> > repeated across many different x,y values and fields to build out several\n>> > histograms of the data.  The main culprit appears to be the CASE statement,\n>> > but I'm not sure what to use instead.  
I'm sure other people have had\n>> > similar queries and I was wondering what methods they used to build out data\n>> > like this?\n>>\n>> have you tried:\n>>\n>> count(*) where field >= x AND field < y;\n>>\n>> ??\n>>\n>> merlin\n>\n> Unless I'm misunderstanding you, that would require breaking each bin\n> into a separate sql statement and since I'm trying to calculate more\n> than 100 bins between the different fields any improvement in the\n> aggregate functions would be overwhelmed by the cost of the actual\n> query, which is about 170ms.\n\nWell, you might be able to use subselects to fetch all the results in\na single query, but it might still be slow.\n\n...Robert\n", "msg_date": "Thu, 22 Oct 2009 09:26:56 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing query with multiple aggregates" }, { "msg_contents": "\n\n\nOn 10/21/09 3:51 PM, \"Doug Cole\" <[email protected]> wrote:\n\n> I have a reporting query that is taking nearly all of it's time in aggregate\n> functions and I'm trying to figure out how to optimize it.  The query takes\n> approximately 170ms when run with \"select *\", but when run with all the\n> aggregate functions the query takes 18 seconds.  The slowness comes from our\n> attempt to find distribution data using selects of the form:\n> \n> SUM(CASE WHEN field >= x AND field < y THEN 1 ELSE 0 END)\n> \n> repeated across many different x,y values and fields to build out several\n> histograms of the data.  The main culprit appears to be the CASE statement,\n> but I'm not sure what to use instead.  I'm sure other people have had similar\n> queries and I was wondering what methods they used to build out data like\n> this?\n\nYou might be able to do this with plain aggregates. Define a function that\ngenerates your partitions that you can group by, then aggregate functions\nfor the outputs\n\nIn either case, rather than each result being a column in one result row,\neach result will be its own row.\n\nEach row would have a column that defines the type of the result (that you\ngrouped on), and one with the result value. If each is just a sum, its\neasy. If there are lots of different calculation types, it would be harder.\nPotentially, you could wrap that in a subselect to pull out each into its\nown column but that is a bit messy.\n\nAlso, in 8.4 window functions could be helpful. PARTITION BY something that\nrepresents your buckets perhaps?\nhttp://developer.postgresql.org/pgdocs/postgres/tutorial-window.html\n\nThis will generally force a sort, but shouldn't be that bad.\n\nThe function used for the group by or partition by could just be a big case\nstatement to generate a unique int per bucket, or a truncate/rounding\nfunction. 
It just needs to spit out a unique result for each bucket for the\ngroup or partition.\n\n\n> Thanks for your help,\n> Doug\n> \n\n", "msg_date": "Thu, 22 Oct 2009 14:48:29 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing query with multiple aggregates" }, { "msg_contents": "Hello,\n \nI didn't try it, but following should be slightly faster:\n \nCOUNT( CASE WHEN field >= x AND field < y THEN true END)\nintead of \nSUM( CASE WHEN field >= x AND field < y THEN 1 ELSE 0 END)\n \nHTH,\n \nMarc Mamin\n\n\n________________________________\n\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Nikolas\nEverett\nSent: Thursday, October 22, 2009 4:48 AM\nTo: Doug Cole\nCc: pgsql-performance\nSubject: Re: [PERFORM] optimizing query with multiple aggregates\n\n\nSo you've got a query like: \n\nSELECT SUM(CASE WHEN field >= 0 AND field < 10 THEN 1 ELSE 0 END) as\nzeroToTen,\n SUM(CASE WHEN field >= 10 AND field < 20 THEN 1 ELSE 0\nEND) as tenToTwenty,\n SUM(CASE WHEN field >= 20 AND field < 30 THEN 1 ELSE 0\nEND) as tenToTwenty,\n...\nFROM bigtable\n\n\n\n\nMy guess is this forcing a whole bunch of if checks and your getting cpu\nbound. Could you try something like:\n\n\nSELECT SUM(CASE WHEN field >= 0 AND field < 10 THEN count ELSE 0 END) as\nzeroToTen,\n SUM(CASE WHEN field >= 10 AND field < 20 THEN count ELSE 0\nEND) as tenToTwenty,\n SUM(CASE WHEN field >= 20 AND field < 30 THEN count ELSE 0\nEND) as tenToTwenty,\n...\nFROM (SELECT field, count(*) FROM bigtable GROUP BY field)\n\n\nwhich will allow a hash aggregate? You'd do a hash aggregate on the\nwhole table which should be quick and then you'd summarize your bins.\n\n\nThis all supposes that you don't want to just query postgres's column\nstatistics.\n\n\nOn Wed, Oct 21, 2009 at 10:21 PM, Doug Cole <[email protected]> wrote:\n\n\n\tOn Wed, Oct 21, 2009 at 5:39 PM, Merlin Moncure\n<[email protected]> wrote:\n\t>\n\t> On Wed, Oct 21, 2009 at 6:51 PM, Doug Cole\n<[email protected]> wrote:\n\t> > I have a reporting query that is taking nearly all of it's\ntime in aggregate\n\t> > functions and I'm trying to figure out how to optimize it.\nThe query takes\n\t> > approximately 170ms when run with \"select *\", but when run\nwith all the\n\t> > aggregate functions the query takes 18 seconds. The\nslowness comes from our\n\t> > attempt to find distribution data using selects of the form:\n\t> >\n\t> > SUM(CASE WHEN field >= x AND field < y THEN 1 ELSE 0 END)\n\t> >\n\t> > repeated across many different x,y values and fields to\nbuild out several\n\t> > histograms of the data. The main culprit appears to be the\nCASE statement,\n\t> > but I'm not sure what to use instead. 
I'm sure other people\nhave had\n\t> > similar queries and I was wondering what methods they used\nto build out data\n\t> > like this?\n\t>\n\t> have you tried:\n\t>\n\t> count(*) where field >= x AND field < y;\n\t>\n\t> ??\n\t>\n\t> merlin\n\t\n\t\n\tUnless I'm misunderstanding you, that would require breaking\neach bin\n\tinto a separate sql statement and since I'm trying to calculate\nmore\n\tthan 100 bins between the different fields any improvement in\nthe\n\taggregate functions would be overwhelmed by the cost of the\nactual\n\tquery, which is about 170ms.\n\tThanks,\n\tDoug\n\t\n\n\t--\n\tSent via pgsql-performance mailing list\n([email protected])\n\tTo make changes to your subscription:\n\thttp://www.postgresql.org/mailpref/pgsql-performance\n\t\n\n\n\n\n\n\n\nHello,\n \nI didn't try it, but following should be slightly \nfaster:\n \nCOUNT( CASE WHEN field >= x AND field < y THEN true \nEND)intead of \nSUM( CASE WHEN field >= x AND field < y THEN 1 ELSE 0 \nEND)\n \nHTH,\n \nMarc \nMamin\n\n\n\nFrom: [email protected] \n[mailto:[email protected]] On Behalf Of Nikolas \nEverettSent: Thursday, October 22, 2009 4:48 AMTo: Doug \nColeCc: pgsql-performanceSubject: Re: [PERFORM] optimizing \nquery with multiple aggregates\nSo you've got a query like:\n\nSELECT SUM(CASE \nWHEN field >= 0 AND field < 10 THEN 1 ELSE 0 END) as \nzeroToTen,\n           \n   SUM(CASE WHEN \nfield >= 10 AND field < 20 THEN 1 ELSE 0 END) as \ntenToTwenty,\n         \n     SUM(CASE WHEN field >= 20 AND field < 30 THEN 1 ELSE 0 \nEND) as tenToTwenty,\n...\nFROM \n bigtable\n\n\nMy guess is this \nforcing a whole bunch of if checks and your getting cpu bound.  Could you \ntry something like:\n\n\nSELECT SUM(CASE \nWHEN field >= 0 AND field < 10 THEN count ELSE 0 END) as \nzeroToTen,\n           \n   SUM(CASE WHEN \nfield >= 10 AND field < 20 THEN count ELSE 0 END) as \ntenToTwenty,\n         \n     SUM(CASE WHEN field >= 20 AND field < 30 THEN count \nELSE 0 END) as tenToTwenty,\n...\nFROM  (SELECT \nfield, count(*) FROM bigtable GROUP BY field)\n\nwhich will allow a hash \naggregate?  You'd do a hash aggregate on the whole table which should be \nquick and then you'd summarize your bins.\n\nThis all supposes that \nyou don't want to just query postgres's column statistics.\n\n\nOn Wed, Oct 21, 2009 at 10:21 PM, Doug Cole <[email protected]> wrote:\n\nOn Wed, Oct 21, 2009 at 5:39 PM, Merlin Moncure <[email protected]> \n wrote:>> On Wed, Oct 21, 2009 at 6:51 PM, Doug Cole <[email protected]> wrote:> \n > I have a reporting query that is taking nearly all of it's time in \n aggregate> > functions and I'm trying to figure out how to optimize \n it.  The query takes> > approximately 170ms when run with \n \"select *\", but when run with all the> > aggregate functions the \n query takes 18 seconds.  The slowness comes from our> > attempt \n to find distribution data using selects of the form:> >> > \n SUM(CASE WHEN field >= x AND field < y THEN 1 ELSE 0 END)> \n >> > repeated across many different x,y values and fields to \n build out several> > histograms of the data.  The main culprit \n appears to be the CASE statement,> > but I'm not sure what to use \n instead.  
I'm sure other people have had> > similar queries and \n I was wondering what methods they used to build out data> > like \n this?>> have you tried:>> count(*) where field \n >= x AND field < y;>> ??>> \n merlinUnless I'm misunderstanding you, that would require \n breaking each bininto a separate sql statement and since I'm trying to \n calculate morethan 100 bins between the different fields any improvement \n in theaggregate functions would be overwhelmed by the cost of the \n actualquery, which is about 170ms.Thanks,Doug\n\n\n--Sent via pgsql-performance mailing list ([email protected])To \n make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Mon, 26 Oct 2009 10:39:48 +0100", "msg_from": "\"Marc Mamin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimizing query with multiple aggregates" }, { "msg_contents": "On Thu, Oct 22, 2009 at 6:22 AM, Kenneth Marshall <[email protected]> wrote:\n> On Wed, Oct 21, 2009 at 03:51:25PM -0700, Doug Cole wrote:\n>> I have a reporting query that is taking nearly all of it's time in aggregate\n>> functions and I'm trying to figure out how to optimize it.  The query takes\n>> approximately 170ms when run with \"select *\", but when run with all the\n>> aggregate functions the query takes 18 seconds.  The slowness comes from our\n>> attempt to find distribution data using selects of the form:\n>>\n>> SUM(CASE WHEN field >= x AND field < y THEN 1 ELSE 0 END)\n>>\n>> repeated across many different x,y values and fields to build out several\n>> histograms of the data.  The main culprit appears to be the CASE statement,\n>> but I'm not sure what to use instead.  I'm sure other people have had\n>> similar queries and I was wondering what methods they used to build out data\n>> like this?\n>> Thanks for your help,\n>> Doug\n>\n> Hi Doug,\n>\n> Have you tried using the width_bucket() function? Here is a nice\n> article describing its use for making histograms:\n>\n> http://quantmeditate.blogspot.com/2005/03/creating-histograms-using-sql-function.html\n>\n> Regards,\n> Ken\n>\n\nThanks Ken,\n I ended up going with this approach - it meant I had to break it\ninto a lot more queries, one for each histogram, but even with that\nadded overhead I cut the time down from 18 seconds to right around 1\nsecond.\nDoug\n", "msg_date": "Thu, 29 Oct 2009 15:24:53 -0700", "msg_from": "Doug Cole <[email protected]>", "msg_from_op": true, "msg_subject": "Re: optimizing query with multiple aggregates" } ]
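For completeness, a minimal sketch of the width_bucket() rewrite Doug settled on, together with Marc's COUNT(CASE ...) variant; bigtable, field and the 0 to 100 range are placeholders here.

-- one scan, one output row per bucket; values outside the range land in buckets 0 and 11
SELECT width_bucket(field, 0, 100, 10) AS bucket,
       count(*)                        AS cnt
FROM   bigtable
GROUP  BY 1
ORDER  BY 1;

-- single-bin form that counts matches instead of summing 1/0:
SELECT count(CASE WHEN field >= 0 AND field < 10 THEN true END) AS zero_to_ten
FROM   bigtable;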
[ { "msg_contents": "Hi\n\nMy indexing base is now up to 7.5m documents, I have raise statistics\ntarget to 1000 for the tsvector column in order to make the\nquery-planner choose more correctly. That works excellent.\n\nTable structure is still:\nftstest=# \\d ftsbody\n Table \"public.ftsbody\"\n Column | Type | Modifiers\n\n------------------+----------+------------------------------------------------------\n id | integer | not null default\nnextval('ftsbody_id_seq'::regclass)\n body | text | not null default ''::text\n ftsbody_body_fts | tsvector |\nIndexes:\n \"ftsbody_body_md5\" UNIQUE, btree (md5(body))\n \"ftsbody_id_pri_idx\" UNIQUE, btree (id)\n \"ftsbody_tfs_idx\" gin (ftsbody_body_fts)\nTriggers:\n tsvectorupdate BEFORE INSERT OR UPDATE ON uniprot FOR EACH ROW\nEXECUTE PROCEDURE tsvector_update_trigger('ftsbody_body_fts',\n'pg_catalog.english', 'body')\n\n\nI'm searching the gin-index for 1-5 terms, where all of them matches the\nsame document. TERM1 is unique by itself, TERM2 is a bit more common (52\nrows), TERM3 more common, TERM4 close to all and TERM5 all records.\n\nJust quering for a unique value and add in several values that match\neverything makes the run-time go significantly up.\n\nI somehow would expect the index-search to take advantage of the MCV's\ninformations in the statistics that sort of translate it into a search\nand post-filtering (as PG's queryplanner usually does at the SQL-level).\n\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=102.45..102.45 rows=1 width=751) (actual time=3.726..3.729\nrows=1 loops=1)\n -> Sort (cost=102.45..102.45 rows=1 width=751) (actual\ntime=3.722..3.723 rows=1 loops=1)\n Sort Key: id\n Sort Method: quicksort Memory: 27kB\n -> Bitmap Heap Scan on ftsbody (cost=100.42..102.44 rows=1\nwidth=751) (actual time=3.700..3.702 rows=1 loops=1)\n Recheck Cond: (ftsbody_body_fts @@ to_tsquery('TERM1 &\nTERM2'::text))\n -> Bitmap Index Scan on ftsbody_tfs_idx\n(cost=0.00..100.42 rows=1 width=0) (actual time=3.683..3.683 rows=1 loops=1)\n Index Cond: (ftsbody_body_fts @@ to_tsquery('TERM1\n& TERM2'::text))\n Total runtime: 3.790 ms\n(9 rows)\n\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=102.45..102.45 rows=1 width=751) (actual\ntime=850.017..850.020 rows=1 loops=1)\n -> Sort (cost=102.45..102.45 rows=1 width=751) (actual\ntime=850.013..850.015 rows=1 loops=1)\n Sort Key: id\n Sort Method: quicksort Memory: 27kB\n -> Bitmap Heap Scan on ftsbody (cost=100.42..102.44 rows=1\nwidth=751) (actual time=849.991..849.993 rows=1 loops=1)\n Recheck Cond: (ftsbody_body_fts @@ to_tsquery('TERM1 &\nTERM2 & TERM3'::text))\n -> Bitmap Index Scan on ftsbody_tfs_idx\n(cost=0.00..100.42 rows=1 width=0) (actual time=849.970..849.970 rows=1\nloops=1)\n Index Cond: (ftsbody_body_fts @@ to_tsquery('TERM1\n& TERM2 & TERM3'::text))\n Total runtime: 850.084 ms\n(9 rows)\n\n QUERY\nPLAN\n--------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=102.45..102.45 rows=1 width=751) (actual\ntime=1152.065..1152.068 rows=1 loops=1)\n -> Sort (cost=102.45..102.45 rows=1 width=751) (actual\ntime=1152.061..1152.062 rows=1 loops=1)\n Sort Key: id\n Sort Method: quicksort Memory: 27kB\n -> Bitmap Heap Scan on ftsbody (cost=100.42..102.44 
rows=1\nwidth=751) (actual time=1152.039..1152.041 rows=1 loops=1)\n Recheck Cond: (ftsbody_body_fts @@ to_tsquery('TERM1 &\nTERM2 & TERM3 & TERM4'::text))\n -> Bitmap Index Scan on ftsbody_tfs_idx\n(cost=0.00..100.42 rows=1 width=0) (actual time=1152.020..1152.020\nrows=1 loops=1)\n Index Cond: (ftsbody_body_fts @@ to_tsquery('TERM1\n& TERM2 & TERM3 & TERM4'::text))\n Total runtime: 1152.129 ms\n(9 rows)\n\n QUERY\nPLAN\n--------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=102.45..102.45 rows=1 width=751) (actual\ntime=1509.043..1509.046 rows=1 loops=1)\n -> Sort (cost=102.45..102.45 rows=1 width=751) (actual\ntime=1509.040..1509.040 rows=1 loops=1)\n Sort Key: id\n Sort Method: quicksort Memory: 27kB\n -> Bitmap Heap Scan on ftsbody (cost=100.42..102.44 rows=1\nwidth=751) (actual time=1509.018..1509.020 rows=1 loops=1)\n Recheck Cond: (ftsbody_body_fts @@ to_tsquery('TERM1 &\nTERM2 & TERM3 & TERM4 & TERM5'::text))\n -> Bitmap Index Scan on ftsbody_tfs_idx\n(cost=0.00..100.42 rows=1 width=0) (actual time=1508.998..1508.998\nrows=1 loops=1)\n Index Cond: (ftsbody_body_fts @@ to_tsquery('TERM1\n& TERM2 & TERM3 & TERM4 & TERM5'::text))\n Total runtime: 1509.109 ms\n(9 rows)\n\nCan (perhaps more readable) be found at http://krogh.cc/~jesper/test.out\n\nCan this be optimized? (I cannot really prevent users from typing stuff\nin that are common).\n\n-- \nJesper\n", "msg_date": "Thu, 22 Oct 2009 18:28:13 +0200", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": true, "msg_subject": "Queryplan within FTS/GIN index -search. " }, { "msg_contents": "Jesper Krogh <[email protected]> wrote:\n \n> I'm searching the gin-index for 1-5 terms, where all of them matches\n> the same document. TERM1 is unique by itself, TERM2 is a bit more\n> common (52 rows), TERM3 more common, TERM4 close to all and TERM5\n> all records.\n \n> Recheck Cond: (ftsbody_body_fts @@ to_tsquery('TERM1\n> & TERM2 & TERM3 & TERM4 & TERM5'::text))\n> -> Bitmap Index Scan on ftsbody_tfs_idx\n> (cost=0.00..100.42 rows=1 width=0) (actual time=1508.998..1508.998\n> rows=1 loops=1)\n> Index Cond: (ftsbody_body_fts @@\n> to_tsquery('TERM1 & TERM2 & TERM3 & TERM4 & TERM5'::text))\n> Total runtime: 1509.109 ms\n \n> Can this be optimized? (I cannot really prevent users from typing\n> stuff in that are common).\n \nI've wondered that myself. Perhaps a term which is ANDed with others\nand is too common could be dropped from the Index Cond and just left\nin the Recheck Cond?\n \n-Kevin\n", "msg_date": "Thu, 22 Oct 2009 13:51:30 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queryplan within FTS/GIN index -search." }, { "msg_contents": "On Thu, 2009-10-22 at 18:28 +0200, Jesper Krogh wrote:\n> I somehow would expect the index-search to take advantage of the MCV's\n> informations in the statistics that sort of translate it into a search\n> and post-filtering (as PG's queryplanner usually does at the SQL-level).\n\nMCVs are full values that are found in columns or indexes -- you aren't\nlikely to have two entire documents that are exactly equal, so MCVs are\nuseless in your example.\n\nI believe that stop words are a more common way of accomplishing what\nyou want to do, but they are slightly more limited: they won't be\nchecked at any level, and so are best used for truly common words like\n\"and\". 
From your example, I assume that you still want the word checked,\nbut it's not selective enough to be usefully checked by the index.\n\nIn effect, what you want are words that aren't searched (or stored) in\nthe index, but are included in the tsvector (so the RECHECK still\nworks). That sounds like it would solve your problem and it would reduce\nindex size, improve update performance, etc. I don't know how difficult\nit would be to implement, but it sounds reasonable to me.\n\nThe only disadvantage is that it's more metadata to manage -- all of the\nexisting data like dictionaries and stop words, plus this new \"common\nwords\". Also, it would mean that almost every match requires RECHECK. It\nwould be interesting to know how common a word needs to be before it's\nbetter to leave it out of the index.\n\nRegards,\n\tJeff Davis\n\n", "msg_date": "Thu, 22 Oct 2009 15:56:56 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queryplan within FTS/GIN index -search." }, { "msg_contents": "Jeff Davis wrote:\n> On Thu, 2009-10-22 at 18:28 +0200, Jesper Krogh wrote:\n>> I somehow would expect the index-search to take advantage of the MCV's\n>> informations in the statistics that sort of translate it into a search\n>> and post-filtering (as PG's queryplanner usually does at the SQL-level).\n> \n> MCVs are full values that are found in columns or indexes -- you aren't\n> likely to have two entire documents that are exactly equal, so MCVs are\n> useless in your example.\n\nAccording to my testing, this is not the case and if it was the case,\nthe queryplanner most likely wouldn't be able to plan this query correct:\nselect id from ftstable where tsvectorcol @@ to_tsquery('commonterm')\norder by id limit 10;\n(into a index-scan on ID\nand\nselect id from ftstable where tsvectorcol @@ to_tsquery('rareterm');\ninto a bitmap index scan on the tsvectorcol and a subsequent sort.\n\nThis is indeed information on individual terms from the statistics that\nenable this.\n\n> I believe that stop words are a more common way of accomplishing what\n> you want to do, but they are slightly more limited: they won't be\n> checked at any level, and so are best used for truly common words like\n> \"and\". From your example, I assume that you still want the word checked,\n> but it's not selective enough to be usefully checked by the index.\n\nthe terms are really common non-stop-words.\n\n> In effect, what you want are words that aren't searched (or stored) in\n> the index, but are included in the tsvector (so the RECHECK still\n> works). That sounds like it would solve your problem and it would reduce\n> index size, improve update performance, etc. I don't know how difficult\n> it would be to implement, but it sounds reasonable to me.\n> \n> The only disadvantage is that it's more metadata to manage -- all of the\n> existing data like dictionaries and stop words, plus this new \"common\n> words\". Also, it would mean that almost every match requires RECHECK. 
It\n> would be interesting to know how common a word needs to be before it's\n> better to leave it out of the index.\n\nThat sounds like it could require an index rebuild if the distribution\nchanges?\n\nThat would be another plan to pursue, but the MCV is allready there\n:\nftstest=# select * from ftsbody;\n id | body |\nftsbody_body_fts\n----+----------------------------------------------+-------------------------------------------------\n 1 | the cat is not a rat uniqueterm1 uniqueterm2 | 'cat':2 'rat':6\n'uniqueterm1':7 'uniqueterm2':8\n 2 | elephant uniqueterm1 uniqueterm2 | 'eleph':1\n'uniqueterm1':2 'uniqueterm2':3\n 3 | cannon uniqueterm1 uniqueterm2 | 'cannon':1\n'uniqueterm1':2 'uniqueterm2':3\n(3 rows)\n\nftstest=# select most_common_vals, most_common_freqs from pg_stats where\ntablename = 'ftsbody' and attname = 'ftsbody_body_fts';\n most_common_vals | most_common_freqs\n---------------------------+-------------------\n {uniqueterm1,uniqueterm2} | {1,1,1,1}\n(1 row)\n\nAnd the query-planner uses this information.\n\n-- \nJesper.\n\n", "msg_date": "Fri, 23 Oct 2009 07:18:32 +0200", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queryplan within FTS/GIN index -search." }, { "msg_contents": "On Fri, 2009-10-23 at 07:18 +0200, Jesper Krogh wrote:\n> This is indeed information on individual terms from the statistics that\n> enable this.\n\nMy mistake, I didn't know it was that smart about it.\n\n> > In effect, what you want are words that aren't searched (or stored) in\n> > the index, but are included in the tsvector (so the RECHECK still\n> > works). That sounds like it would solve your problem and it would reduce\n> > index size, improve update performance, etc. I don't know how difficult\n> > it would be to implement, but it sounds reasonable to me.\n\n\n> That sounds like it could require an index rebuild if the distribution\n> changes?\n\nMy thought was that the common words could be declared to be common the\nsame way stop words are. As long as words are only added to this list,\nit should be OK.\n\n> That would be another plan to pursue, but the MCV is allready there\n\nThe problem with MCVs is that the index search can never eliminate\ndocuments because they don't contain a match, because it might contain a\nmatch that was previously an MCV, but is no longer.\n\nAlso, MCVs are relatively few -- you only get ~1000 or so. There might\nbe a lot of common words you'd like to track.\n\nPerhaps ANALYZE can automatically add the common words above some\nfrequency threshold to the list?\n\nRegards,\n\tJeff Davis\n\n", "msg_date": "Thu, 22 Oct 2009 22:39:18 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queryplan within FTS/GIN index -search." }, { "msg_contents": "> On Fri, 2009-10-23 at 07:18 +0200, Jesper Krogh wrote:\n>> > In effect, what you want are words that aren't searched (or stored) in\n>> > the index, but are included in the tsvector (so the RECHECK still\n>> > works). That sounds like it would solve your problem and it would\n>> reduce\n>> > index size, improve update performance, etc. I don't know how\n>> difficult\n< > it would be to implement, but it sounds reasonable to me.\n>\n>> That sounds like it could require an index rebuild if the distribution\n>> changes?\n>\n> My thought was that the common words could be declared to be common the\n> same way stop words are. 
As long as words are only added to this list,\n> it should be OK.\n>\n>> That would be another plan to pursue, but the MCV is allready there\n>\n> The problem with MCVs is that the index search can never eliminate\n> documents because they don't contain a match, because it might contain a\n> match that was previously an MCV, but is no longer.\n\nNo, it definately has to go visit the index/table to confirm findings, but\nthat why I wrote Queryplan in the subject line, because this os only about\nthe strategy to pursue to obtain the results. And a strategy about\nlimiting the amout of results as early as possible (as PG usually does)\nwould be what I'd expect and MCV can help it guess on that.\n\nSimilar finding, rewrite the query: (now i took the extreme and made\n\"raretem\" a spellingerror), so result is 0.\n\nftstest=# explain analyze select body from ftsbody where ftsbody_body_fts\n@@ to_tsquery('commonterm & spellerror') limit 100;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=132.63..188.89 rows=28 width=739) (actual\ntime=862.714..862.714 rows=0 loops=1)\n -> Bitmap Heap Scan on ftsbody (cost=132.63..188.89 rows=28\nwidth=739) (actual time=862.711..862.711 rows=0 loops=1)\n Recheck Cond: (ftsbody_body_fts @@ to_tsquery('commonterm &\nspellerror'::text))\n -> Bitmap Index Scan on ftsbody_tfs_idx (cost=0.00..132.62\nrows=28 width=0) (actual time=862.702..862.702 rows=0 loops=1)\n Index Cond: (ftsbody_body_fts @@ to_tsquery('commonterm &\nspellerror'::text))\n Total runtime: 862.771 ms\n(6 rows)\n\nftstest=# explain analyze select body from ftsbody where ftsbody_body_fts\n@@ to_tsquery('commonterm') and ftsbody_body_fts @@\nto_tsquery('spellerror') limit 100;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=132.70..189.11 rows=28 width=739) (actual time=8.669..8.669\nrows=0 loops=1)\n -> Bitmap Heap Scan on ftsbody (cost=132.70..189.11 rows=28\nwidth=739) (actual time=8.665..8.665 rows=0 loops=1)\n Recheck Cond: ((ftsbody_body_fts @@\nto_tsquery('commonterm'::text)) AND (ftsbody_body_fts @@\nto_tsquery('spellerror'::text)))\n -> Bitmap Index Scan on ftsbody_tfs_idx (cost=0.00..132.70\nrows=28 width=0) (actual time=8.658..8.658 rows=0 loops=1)\n Index Cond: ((ftsbody_body_fts @@\nto_tsquery('commonterm'::text)) AND (ftsbody_body_fts @@\nto_tsquery('spellerror'::text)))\n Total runtime: 8.724 ms\n(6 rows)\n\nSo getting them with AND inbetween gives x100 better performance. All\nqueries are run on \"hot disk\" repeated 3-5 times and the number are from\nthe last run, so disk-read effects should be filtered away.\n\nShouldn't it somehow just do what it allready are capable of doing?\n\n-- \nJesper\n\n", "msg_date": "Fri, 23 Oct 2009 09:45:44 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Queryplan within FTS/GIN index -search." }, { "msg_contents": "[email protected] wrote:\n> \n> So getting them with AND inbetween gives x100 better performance. All\n> queries are run on \"hot disk\" repeated 3-5 times and the number are from\n> the last run, so disk-read effects should be filtered away.\n> \n> Shouldn't it somehow just do what it allready are capable of doing?\n\nI'm guessing to_tsquery(...) will produce a tree of search terms (since\nit allows for quite complex expressions). 
Presumably there's a standard\norder it gets processed in too, so it should be possible to generate a\nmore or less efficient ordering.\n\nThat structure isn't exposed to the planner though, so it doesn't\nbenefit from any re-ordering the planner would normally do for normal\n(exposed) AND/OR clauses.\n\nNow, to_tsquery() can't re-order the search terms because it doesn't\nknow what column it's being compared against. In fact, it might not be a\nsimple column at all.\n\nSo - there would either need to be:\n1. Some hooks from the planner to reach into the tsquery datatype.\n2. A variant to_tsquery_with_sorting() which would take the column-name\nor something and look up the stats to work against.\n\n#1 is the better solution, but #2 might well be simpler to implement as\na work-around for now.\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 23 Oct 2009 09:26:26 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queryplan within FTS/GIN index -search." }, { "msg_contents": "> [email protected] wrote:\n>>\n>> So getting them with AND inbetween gives x100 better performance. All\n>> queries are run on \"hot disk\" repeated 3-5 times and the number are from\n>> the last run, so disk-read effects should be filtered away.\n>>\n>> Shouldn't it somehow just do what it allready are capable of doing?\n>\n> I'm guessing to_tsquery(...) will produce a tree of search terms (since\n> it allows for quite complex expressions). Presumably there's a standard\n> order it gets processed in too, so it should be possible to generate a\n> more or less efficient ordering.\n>\n> That structure isn't exposed to the planner though, so it doesn't\n> benefit from any re-ordering the planner would normally do for normal\n> (exposed) AND/OR clauses.\n>\n> Now, to_tsquery() can't re-order the search terms because it doesn't\n> know what column it's being compared against. In fact, it might not be a\n> simple column at all.\n\nI cant follow this logic based on explain output, but I may have\nmisunderstood something. The only difference in these two query-plans is\nthat we have an additional or'd term in the to_tsquery().\n\nWhat we see is that, the query-planner indeed has knowledge about changes\nin the row estimates based on changes in the query to to_tsquery(). 
My\nguess is that it is because to_tsquery actually parses the query and give\nthe estimates, now how can to_tsquery give estimates without having access\nto the statistics for the column?\n\nftstest=# explain select id from ftsbody where ftsbody_body_fts @@\nto_tsquery('reallyrare');\n QUERY PLAN\n---------------------------------------------------------------------------------\n Bitmap Heap Scan on ftsbody (cost=132.64..190.91 rows=29 width=4)\n Recheck Cond: (ftsbody_body_fts @@ to_tsquery('reallyrare'::text))\n -> Bitmap Index Scan on ftsbody_tfs_idx (cost=0.00..132.63 rows=29\nwidth=0)\n Index Cond: (ftsbody_body_fts @@ to_tsquery('reallyrare'::text))\n(4 rows)\n\nftstest=# explain select id from ftsbody where ftsbody_body_fts @@\nto_tsquery('reallyrare | morerare');\n QUERY PLAN\n-------------------------------------------------------------------------------------\n Bitmap Heap Scan on ftsbody (cost=164.86..279.26 rows=57 width=4)\n Recheck Cond: (ftsbody_body_fts @@ to_tsquery('reallyrare |\nmorerare'::text))\n -> Bitmap Index Scan on ftsbody_tfs_idx (cost=0.00..164.84 rows=57\nwidth=0)\n Index Cond: (ftsbody_body_fts @@ to_tsquery('reallyrare |\nmorerare'::text))\n(4 rows)\n\nftstest=# explain select id from ftsbody where ftsbody_body_fts @@\nto_tsquery('reallyrare | reallycommon');\n QUERY PLAN\n--------------------------------------------------------------------------\n Seq Scan on ftsbody (cost=0.00..1023249.39 rows=5509293 width=4)\n Filter: (ftsbody_body_fts @@ to_tsquery('reallyrare |\nreallycommon'::text))\n(2 rows)\n\n\n> 2. A variant to_tsquery_with_sorting() which would take the column-name\n> or something and look up the stats to work against.\n\nDoes above not seem like its there allready?\n\n(sorry.. looking at C-code from my point of view would set me a couple of\nweeks back, so I have troble getting closer to the answer than\ninterpreting the output and guessing the rest).\n\n-- \nJesper\n\n", "msg_date": "Fri, 23 Oct 2009 11:04:22 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Queryplan within FTS/GIN index -search." }, { "msg_contents": "[email protected] wrote:\n>> That structure isn't exposed to the planner though, so it doesn't\n>> benefit from any re-ordering the planner would normally do for normal\n>> (exposed) AND/OR clauses.\n>>\n>> Now, to_tsquery() can't re-order the search terms because it doesn't\n>> know what column it's being compared against. In fact, it might not be a\n>> simple column at all.\n> \n> I cant follow this logic based on explain output, but I may have\n> misunderstood something. The only difference in these two query-plans is\n> that we have an additional or'd term in the to_tsquery().\n\nHmm - I've had a poke through the source. I've slightly misled you...\n\n> What we see is that, the query-planner indeed has knowledge about changes\n> in the row estimates based on changes in the query to to_tsquery(). \n\nYes, new in 8.4 - sorry, thought that hadn't made it in.\n\nThe two plan-nodes in question are in:\n backend/executor/nodeBitmapIndexscan.c\n backend/executor/nodeBitmapHeapscan.c\nThe actual tsearch stuff is in\n src/backend/utils/adt/ts*.c\n\nIt looks like TS_execute (tsvector_op.c) is the bit of code that handles\nthe tsquery tree. That uses a callback to actually check values\n(checkcondition_gin). 
The gin_extract_tsquery function is presumably the\nextractQuery function as described in the manuals (Ch 52).\n\nSo, I'm guessing you would want to do is generate a reduced query tree\nfor the indexscan (A & B & C => A if A is an uncommon word) and use the\nfull query tree for the heap check. Now, what isn't clear to me on first\nglance is how to determine which phase of the bitmap scan we are in.\n\nHTH\n\nJust checking, because I don't think it's useful in this case. But, you\ndon know about \"gin_fuzzy_search_limit\"?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 23 Oct 2009 15:30:56 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queryplan within FTS/GIN index -search." }, { "msg_contents": "On Fri, 2009-10-23 at 09:26 +0100, Richard Huxton wrote:\n> That structure isn't exposed to the planner though, so it doesn't\n> benefit from any re-ordering the planner would normally do for normal\n> (exposed) AND/OR clauses.\n\nI don't think that explains it, because in the second plan you only see\na single index scan with two quals:\n\n Index Cond: ((ftsbody_body_fts @@\n to_tsquery('commonterm'::text)) AND (ftsbody_body_fts @@\n to_tsquery('spellerror'::text)))\n\nSo it's entirely up to GIN how to execute that.\n\nRegards,\n\tJeff Davis\n\n", "msg_date": "Fri, 23 Oct 2009 08:51:11 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queryplan within FTS/GIN index -search." }, { "msg_contents": "On Fri, 2009-10-23 at 09:45 +0200, [email protected] wrote:\n> No, it definately has to go visit the index/table to confirm findings, but\n> that why I wrote Queryplan in the subject line, because this os only about\n> the strategy to pursue to obtain the results. And a strategy about\n> limiting the amout of results as early as possible (as PG usually does)\n> would be what I'd expect and MCV can help it guess on that.\n\nI see what you're saying: you could still index the common terms like\nnormal, but just not look for anything in the index if it's an MCV. That\nsounds reasonable, based on the numbers you provided.\n\n> Index Cond: (ftsbody_body_fts @@ to_tsquery('commonterm &\n> spellerror'::text))\n> Total runtime: 862.771 ms\n> (6 rows)\n\n...\n\n> Index Cond: ((ftsbody_body_fts @@\n> to_tsquery('commonterm'::text)) AND (ftsbody_body_fts @@\n> to_tsquery('spellerror'::text)))\n> Total runtime: 8.724 ms\n> (6 rows)\n> \n\nSomething seems strange here. Both are a single index scan, but having a\nsingle complex search key is worse than having two simple search keys. \n\nPerhaps the real problem is that there's a difference between these\ncases at all? I don't see any reason why the first should be more\nexpensive than the second.\n\nRegards,\n\tJeff Davis\n\n", "msg_date": "Fri, 23 Oct 2009 09:01:02 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queryplan within FTS/GIN index -search." 
}, { "msg_contents": "Jeff Davis wrote:\n> On Fri, 2009-10-23 at 09:26 +0100, Richard Huxton wrote:\n>> That structure isn't exposed to the planner though, so it doesn't\n>> benefit from any re-ordering the planner would normally do for normal\n>> (exposed) AND/OR clauses.\n> \n> I don't think that explains it, because in the second plan you only see\n> a single index scan with two quals:\n> \n> Index Cond: ((ftsbody_body_fts @@\n> to_tsquery('commonterm'::text)) AND (ftsbody_body_fts @@\n> to_tsquery('spellerror'::text)))\n> \n> So it's entirely up to GIN how to execute that.\n\nhttp://www.postgresql.org/docs/8.4/static/gin-extensibility.html\nDatum *extractQuery(...)\nReturns an array of keys given a value to be queried; that is, query is\nthe value on the right-hand side of an indexable operator whose\nleft-hand side is the indexed column\n\nSo - that is presumably two separate arrays of keys being matched\nagainst, and the AND means if the first fails it'll never check the second.\n\nWhat I'm not sure about is if tsquery('commonterm & spellerror')\nproduces two sets of keys or if it just produces one.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 23 Oct 2009 17:27:54 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queryplan within FTS/GIN index -search." }, { "msg_contents": "Jeff Davis wrote:\n> On Fri, 2009-10-23 at 09:45 +0200, [email protected] wrote:\n>> No, it definately has to go visit the index/table to confirm findings, but\n>> that why I wrote Queryplan in the subject line, because this os only about\n>> the strategy to pursue to obtain the results. And a strategy about\n>> limiting the amout of results as early as possible (as PG usually does)\n>> would be what I'd expect and MCV can help it guess on that.\n> \n> I see what you're saying: you could still index the common terms like\n> normal, but just not look for anything in the index if it's an MCV. That\n> sounds reasonable, based on the numbers you provided.\n\nI'm not sure if thats what I'm saying. If i should rephrase it then:\nGiven an AND operator (which translates into an intersection of the left\nand right side), then it should go for the side with the least expected\nresults (SetLeast) and subsequent use the other expression for\nprocessing only that set.\n\n>> Index Cond: (ftsbody_body_fts @@ to_tsquery('commonterm &\n>> spellerror'::text))\n>> Total runtime: 862.771 ms\n>> (6 rows)\n> \n> ...\n> \n>> Index Cond: ((ftsbody_body_fts @@\n>> to_tsquery('commonterm'::text)) AND (ftsbody_body_fts @@\n>> to_tsquery('spellerror'::text)))\n>> Total runtime: 8.724 ms\n>> (6 rows)\n>>\n> \n> Something seems strange here. Both are a single index scan, but having a\n> single complex search key is worse than having two simple search keys. \n> \n> Perhaps the real problem is that there's a difference between these\n> cases at all? I don't see any reason why the first should be more\n> expensive than the second.\n\n\n\n-- \nJesper\n", "msg_date": "Fri, 23 Oct 2009 20:12:51 +0200", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queryplan within FTS/GIN index -search." 
}, { "msg_contents": "On Fri, 2009-10-23 at 17:27 +0100, Richard Huxton wrote:\n> Returns an array of keys given a value to be queried; that is, query is\n> the value on the right-hand side of an indexable operator whose\n> left-hand side is the indexed column\n> \n> So - that is presumably two separate arrays of keys being matched\n> against, and the AND means if the first fails it'll never check the second.\n\nMy point was that if it's only one index scan in both cases, then GIN\nshould have the same information in both cases, right? So why are they\nbeing treated differently?\n\nI must be missing something.\n\nRegards,\n\tJeff Davis\n\n", "msg_date": "Fri, 23 Oct 2009 12:22:03 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queryplan within FTS/GIN index -search." }, { "msg_contents": "Hi.\n\nI've now got a test-set that can reproduce the problem where the two\nfully equivalent queries (\nbody_fts @@ to_tsquery(\"commonterm & nonexistingterm\")\n and\nbody_fts @@ to_tsquery(\"coomonterm\") AND body_fts @@\nto_tsquery(\"nonexistingterm\")\n\ngive a difference of x300 in execution time. (grows with\ndocument-base-size).\n\nthis can now be reproduced using:\n\n* http://krogh.cc/~jesper/fts-queryplan.pl and\nhttp://krogh.cc/~jesper/words.txt\n\nIt build up a table with 200.000 documents where \"commonterm\" exists in\nall of them. \"nonexistingterm\" is in 0.\n\nTo get the query-planner get a \"sane\" query I need to do a:\nftstest# set enable_seqscan=off\n\nThen:\n ftstest=# explain analyze select id from ftstest where body_fts @@\nto_tsquery('nonexistingterm & commonterm');\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on ftstest (cost=5563.09..7230.93 rows=1000 width=4)\n(actual time=30.861..30.861 rows=0 loops=1)\n Recheck Cond: (body_fts @@ to_tsquery('nonexistingterm &\ncommonterm'::text))\n -> Bitmap Index Scan on ftstest_gin_idx (cost=0.00..5562.84\nrows=1000 width=0) (actual time=30.856..30.856 rows=0 loops=1)\n Index Cond: (body_fts @@ to_tsquery('nonexistingterm &\ncommonterm'::text))\n Total runtime: 30.907 ms\n(5 rows)\n\nftstest=# explain analyze select id from ftstest where body_fts @@\nto_tsquery('nonexistingterm') and body_fts @@ to_tsquery('commonterm');\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on ftstest (cost=5565.59..7238.43 rows=1000 width=4)\n(actual time=0.059..0.059 rows=0 loops=1)\n Recheck Cond: ((body_fts @@ to_tsquery('nonexistingterm'::text)) AND\n(body_fts @@ to_tsquery('commonterm'::text)))\n -> Bitmap Index Scan on ftstest_gin_idx (cost=0.00..5565.34\nrows=1000 width=0) (actual time=0.057..0.057 rows=0 loops=1)\n Index Cond: ((body_fts @@ to_tsquery('nonexistingterm'::text))\nAND (body_fts @@ to_tsquery('commonterm'::text)))\n Total runtime: 0.111 ms\n(5 rows)\n\n\nRun repeatedly to get a full memory recident dataset.\n\nIn this situation the former query end up being 300x slower than the\nlatter allthough they are fully equivalent.\n\n\n\n-- \nJesper\n", "msg_date": "Fri, 30 Oct 2009 20:46:37 +0100", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queryplan within FTS/GIN index -search." 
}, { "msg_contents": "Jesper Krogh <[email protected]> writes:\n> I've now got a test-set that can reproduce the problem where the two\n> fully equivalent queries (\n> body_fts @@ to_tsquery(\"commonterm & nonexistingterm\")\n> and\n> body_fts @@ to_tsquery(\"coomonterm\") AND body_fts @@\n> to_tsquery(\"nonexistingterm\")\n> give a difference of x300 in execution time. (grows with\n> document-base-size).\n\nI looked into this a bit. It seems the reason the first is much slower\nis that the AND nature of the query is not exposed to the GIN control\nlogic (ginget.c). It has to fetch every index-entry combination that\ninvolves any of the terms, which of course is going to be the whole\nindex in this case. This is obvious when you realize that the control\nlogic doesn't know the difference between tsqueries \"commonterm &\nnonexistingterm\" and \"commonterm | nonexistingterm\". The API for\nopclass extractQuery functions just isn't powerful enough to show that.\n\nI think a possible solution to this could involve allowing extractQuery\nto mark individual keys as \"required\" or \"optional\". Then the control\nlogic could know not to bother with combinations that haven't got all\nthe \"required\" keys. There might be other better answers though.\n\nBut having said that, this particular test case is far from compelling.\nAny sane text search application is going to try to filter out\ncommon words as stopwords; it's only the failure to do that that's\nmaking this run slow.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Oct 2009 23:11:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queryplan within FTS/GIN index -search. " }, { "msg_contents": "Tom Lane wrote:\n> But having said that, this particular test case is far from compelling.\n> Any sane text search application is going to try to filter out\n> common words as stopwords; it's only the failure to do that that's\n> making this run slow.\n\nBelow is tests-runs not with a \"commonterm\" but and 80% term and a 60%\nterm.\n\nThere are two issues in this, one is the way PG \"blows up\" when\nsearching for a stop-word (and it even performs excellent when searching\nfor a term in the complete doc-base):\n\nftstest=# select id from ftstest where body_fts @@\nto_tsquery('commonterm') limit 10;\n id\n----\n 1\n 2\n 3\n 4\n 5\n 6\n 7\n 8\n 9\n 10\n(10 rows)\n\nTime: 1.004 ms\nftstest=# select id from ftstest where body_fts @@ to_tsquery('the')\nlimit 10;\nNOTICE: text-search query contains only stop words or doesn't contain\nlexemes, ignored\nNOTICE: text-search query contains only stop words or doesn't contain\nlexemes, ignored\n id\n----\n(0 rows)\n\nTime: 0.587 ms\n\nI can definetely effort the index-size for getting the first behavior to\nmy application. Stop words will first be really useful when searches for\nthem translates into full results not errors.\n\nI also think you're trying to limit the scope of the problem more than\nwhats fair.\n\nftstest=# select id from ftstest where body_fts @@\nto_tsquery('nonexistingterm & commonterm');\n id\n----\n(0 rows)\n\nTime: 28.230 ms\nftstest=# select id from ftstest where body_fts @@\nto_tsquery('nonexistingterm') and body_fts @@ to_tsquery('commonterm');\n id\n----\n(0 rows)\n\nTime: 0.930 ms\n(so explain analyze is not a fair measurement .. it seems to make the\nproblem way worse). 
This is \"only\" x28\nTime: 22.432 ms\nftstest=# select id from ftstest where body_fts @@\nto_tsquery('nonexistingterm') and body_fts @@ to_tsquery('commonterm80');\n id\n----\n(0 rows)\n\nTime: 0.992 ms\nftstest=# select id from ftstest where body_fts @@\nto_tsquery('nonexistingterm & commonterm80');\n id\n----\n(0 rows)\n\nTime: 22.393 ms\nftstest=#\nAnd for a 80% term .. x23\n\nftstest=# select id from ftstest where body_fts @@\nto_tsquery('nonexistingterm') and body_fts @@ to_tsquery('commonterm60');\n id\n----\n(0 rows)\n\nTime: 0.954 ms\nftstest=# select id from ftstest where body_fts @@\nto_tsquery('nonexistingterm & commonterm60');\n id\n----\n(0 rows)\n\nTime: 17.006 ms\n\nand x17\n\nJust trying to say that the body of the problem isn't a discussion about\nstop-words.\n\nThat being said, if you coin the term \"stopword\" to mean \"any term that\nexists in all or close to all documents\" then the way it behaves when\nsearching for only one of them is a situation that we'll hit all the\ntime. (when dealing with user typed input).\n\nJesper\n-- \nJesper\n", "msg_date": "Sat, 31 Oct 2009 07:20:48 +0100", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queryplan within FTS/GIN index -search." }, { "msg_contents": "On Fri, Oct 30, 2009 at 8:11 PM, Tom Lane <[email protected]> wrote:\n> But having said that, this particular test case is far from compelling.\n> Any sane text search application is going to try to filter out\n> common words as stopwords; it's only the failure to do that that's\n> making this run slow.\n\nWell it would be nice if that wasn't necessary. There are plenty of\napplications where that isn't really an option. Consider searching for\nphrases like \"The The\" or \"The Office\". The sanity of doing this is\npurely a function of implementation quality and not of actual user\ninterface design.\n\n\n-- \ngreg\n", "msg_date": "Sat, 31 Oct 2009 01:55:34 -0700", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queryplan within FTS/GIN index -search." } ]
[ { "msg_contents": "I am running several servers with Postgres 8.3 that are used to house\nlocation data from thousands of devices. Location updates are quite\nfrequent, so our tables rapidly become fairly large (often about 2GB per\nday of growth). We've been using Postgres for close to 10 years now and\nhave been very happy until recent performance issues with larger data\nsets. \n\nOur primary location table is clustered by \"reporttime\" (bigint). Many\nof the queries we need to perform are of the nature : \"get me all\npositions from a given device for yesterday\". Similar queries are \"get\nme the most recent 10 positions from a given device\".\n\nThese are pretty simple and straightforward queries that run\nsurprisingly quickly either on small tables or tables that have been\nclustered by reporttime. If the tables are large and haven't been\nactively clustered for a while, the simplest looking queries will take\nclose to a minute to execute.\n\nUnfortunately, the clustering operation now takes far too long to run in\nany reasonable maintenance window. On smaller datasets clustering will\ntake from 30 minutes to an hour and on our large datasets several hours\n(gave up at our 4 hour maintenance window limit). Obviously I would\nlove to have Postgres support \"online\" clustering, however I need to\nfigure a way around this problem now or start planning the port to\nanother database server. \n\nWe have tried the \"SELECT INTO ... ORDER BY REPORTTIME\" trick instead or\nrunning cluster, but since the tables are quite large that is still\ntaking too long (although quicker than clustering).\n\nI have spent more time than I would like looking into clustering options\nand other load balancing techniques. I was thinking that I'd like to\ntake one server set down for clustering, while failing over to the\nsecondary database set. The problem is I don't see an efficient or easy\nway to then synchronize the sizable amount of updates that will have\noccurred since the clustering started back to the now clustered primary\ndatabase without significantly affecting write performance. If it is\ndifficult to solve, porting to Oracle or *gasp* SQL Server will be an\neasier solution.\n\nI have spent a lot of time Googling, and so far no obvious solutions\njump to mind besides either hoping (or helping) online clustering become\na reality on Postgres, or to migrate to a different DB engine. I'd\nreally appreciate any thoughts or suggestions. \n\n-Kevin\n\n", "msg_date": "Thu, 22 Oct 2009 11:50:44 -0700", "msg_from": "Kevin Buckham <[email protected]>", "msg_from_op": true, "msg_subject": "Table Clustering & Time Range Queries" }, { "msg_contents": "Kevin Buckham <[email protected]> wrote:\n \n> Our primary location table is clustered by \"reporttime\" (bigint). \n> Many of the queries we need to perform are of the nature : \"get me\n> all positions from a given device for yesterday\". Similar queries\n> are \"get me the most recent 10 positions from a given device\".\n \nHave you looked at table partitioning? You would then only need to\ncluster the most recent partition or two. 
I *seems* like a good fit\nfor your application.\n \nhttp://www.postgresql.org/docs/8.4/interactive/ddl-partitioning.html\n \n-Kevin\n", "msg_date": "Thu, 22 Oct 2009 14:25:05 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table Clustering & Time Range Queries" }, { "msg_contents": "\n\nOn 10/22/09 12:25 PM, \"Kevin Grittner\" <[email protected]> wrote:\n\n> Kevin Buckham <[email protected]> wrote:\n> \n>> Our primary location table is clustered by \"reporttime\" (bigint).\n>> Many of the queries we need to perform are of the nature : \"get me\n>> all positions from a given device for yesterday\". Similar queries\n>> are \"get me the most recent 10 positions from a given device\".\n> \n> Have you looked at table partitioning? You would then only need to\n> cluster the most recent partition or two. I *seems* like a good fit\n> for your application.\n> \n> http://www.postgresql.org/docs/8.4/interactive/ddl-partitioning.html\n> \n> -Kevin\n\nPartitioning by time should help a lot here as Kevin says.\n\nAlso, you might want to experiment with things like pg_reorg:\nhttp://reorg.projects.postgresql.org/\nhttp://pgfoundry.org/projects/reorg/\nhttp://reorg.projects.postgresql.org/pg_reorg.html\n\nWhich is basically an online, optimized cluster or vacuum full. However it\nhas several caveats. I have not used it in production myself, just\nexperiments with it.\n\n\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Thu, 22 Oct 2009 14:33:35 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table Clustering & Time Range Queries" }, { "msg_contents": "> Also, you might want to experiment with things like\n> pg_reorg:\n\n\nDo you happen to know if that works with 8.4? \n\n\n \n", "msg_date": "Fri, 23 Oct 2009 07:23:06 +0000 (GMT)", "msg_from": "Scara Maccai <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table Clustering & Time Range Queries" }, { "msg_contents": "I came across links to pg_reorg previously but it seemed that the\nproject was a bit \"dead\". There is active development but not much\ninformation, and not much in the way of discussions. I will definitely\nbe testing both partitioning and pg_reorg. I am curious to see if\npg_reorg will be stable enough for us to use or not.\n\nThanks to everyone who provided answers for great and quick responses!\nWow, it makes me really want to keep Postgres around. :)\n\n-Kevin\n\nOn Thu, 2009-10-22 at 14:33 -0700, Scott Carey wrote:\n> \n> Partitioning by time should help a lot here as Kevin says.\n> \n> Also, you might want to experiment with things like pg_reorg:\n> http://reorg.projects.postgresql.org/\n> http://pgfoundry.org/projects/reorg/\n> http://reorg.projects.postgresql.org/pg_reorg.html\n> \n> Which is basically an online, optimized cluster or vacuum full. However it\n> has several caveats. I have not used it in production myself, just\n> experiments with it.\n\n\n", "msg_date": "Fri, 23 Oct 2009 12:09:41 -0700", "msg_from": "Kevin Buckham <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Table Clustering & Time Range Queries" }, { "msg_contents": "* Kevin Buckham ([email protected]) wrote:\n> I came across links to pg_reorg previously but it seemed that the\n> project was a bit \"dead\". 
There is active development but not much\n> information, and not much in the way of discussions. I will definitely\n> be testing both partitioning and pg_reorg. I am curious to see if\n> pg_reorg will be stable enough for us to use or not.\n> \n> Thanks to everyone who provided answers for great and quick responses!\n> Wow, it makes me really want to keep Postgres around. :)\n\nI've been following this but havn't commented since it seemed well in\nhand. A few specific things I would mention:\n\nBe sure to read:\nhttp://www.postgresql.org/docs/current/static/ddl-partitioning.html\n\nI'd recommend partitioning using inheiritance.\nMake sure to set constraint_exclusion = on unless you're using 8.4\n(in 8.4, constraint_exclusion is tri-state: 'partition', where it will\nbe used when UNION ALL or inheiritance is used in queries, 'on' where it\nwill try to be used for all queries, and 'off' where it won't be used at\nall; 8.4's default is 'partition').\nYou may want to consider upgrading to 8.4 if you're not on it already.\nYou probably want to use triggers on your 'input' table to handle\nincoming traffic.\nDecide on a sensible partitioning scheme and then test, test, test.\nMake sure it does what you want. explain analyze and all that.\n\n\tEnjoy,\n\n\t\tStephen", "msg_date": "Fri, 23 Oct 2009 18:55:40 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table Clustering & Time Range Queries" }, { "msg_contents": "I'm surprised clustering as your main optimization has scaled up for you \nas long as it has, I normally see that approach fall apart once you're \npast a few hundred GB of data. You're putting a lot of work into a \ntechnique that only is useful for smaller data sets than you have now. \nThere are two basic approaches to optimizing queries against large \narchives of time-series data that do scale up when you can use them:\n\n1) Partition the tables downward until you reach a time scale where the \nworking set fits in RAM.\n\n2) Create materialized views that roll up the data needed for the most \ncommon reports people need run in real-time. Optimize when those run to \nkeep overhead reasonable (which sounds possible given your comments about \nregular maintenance windows). Switch the app over to running against the \nmaterialized versions of any data it's possible to do so on. The two \nstandard intros to this topic are at \nhttp://tech.jonathangardner.net/wiki/PostgreSQL/Materialized_Views and \nhttp://www.pgcon.org/2008/schedule/events/69.en.html\n\n From what you've said about your app, I'd expect both of these would be \nworth considering.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Sat, 24 Oct 2009 20:16:46 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table Clustering & Time Range Queries" } ]
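To make the partitioning advice in the thread above concrete, here is a minimal sketch of the inheritance scheme Kevin Grittner, Scott and Stephen point to, written against the kind of schema described in the original post. The parent table name "positions", the assumption that the bigint reporttime column holds Unix epoch seconds, and the month boundaries are illustrative guesses rather than details taken from the thread; in practice a trigger on the insert path to route rows into the right child, plus per-child indexes on the device identifier, would also be needed.

    -- each child carries a CHECK constraint the planner can use to prune
    CREATE TABLE positions_2009_10 (
        CHECK (reporttime >= 1254355200 AND reporttime < 1257033600)   -- Oct 2009
    ) INHERITS (positions);

    CREATE TABLE positions_2009_11 (
        CHECK (reporttime >= 1257033600 AND reporttime < 1259625600)   -- Nov 2009
    ) INHERITS (positions);

    CREATE INDEX positions_2009_10_reporttime_idx ON positions_2009_10 (reporttime);
    CREATE INDEX positions_2009_11_reporttime_idx ON positions_2009_11 (reporttime);

    -- 8.3 needs SET constraint_exclusion = on; the 8.4 default ('partition')
    -- already prunes children when the WHERE clause constrains reporttime.

Laid out this way, only the newest child ever needs CLUSTER (or pg_reorg), so the maintenance window stays bounded no matter how much history accumulates, which is the point behind Greg's comment that clustering the whole table stops scaling past a few hundred gigabytes.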
[ { "msg_contents": "\n\r\nHi,\r\n\r\nIs there any way to get the query plan of the query run in the stored\r\nprocedure?\r\nI am running the following one and it takes 10 minutes in the procedure\r\nwhen it is pretty fast standalone.\r\n\r\nAny ideas would be welcome!\r\n\r\n# EXPLAIN ANALYZE SELECT m.domain_id, nsr_id FROM nsr_meta m, last_snapshot\r\nl WHERE m.domain_id = l.domain_id;\r\n \r\nQUERY PLAN \r\n \r\n------------------------------------------------------------------------------------------------------------------------------------------------------------\r\n Nested Loop (cost=0.00..562432.32 rows=12227848 width=16) (actual\r\ntime=1430.034..7476.081 rows=294418 loops=1)\r\n -> Seq Scan on last_snapshot l (cost=0.00..3983.68 rows=60768 width=8)\r\n(actual time=0.010..57.304 rows=60641 loops=1)\r\n -> Index Scan using idx_nsr_meta_domain_id on nsr_meta m \r\n(cost=0.00..6.68 rows=201 width=16) (actual time=0.111..0.115 rows=5\r\nloops=60641)\r\n Index Cond: (m.domain_id = l.domain_id)\r\n Total runtime: 7635.625 ms\r\n(5 rows)\r\n\r\nTime: 7646.243 ms\r\n\r\nMany thanks,\r\nMichal\r\n\r\n-- \r\nI hear and I forget. I see and I believe. I do and I understand.\r\n(Confucius)\n\n", "msg_date": "Fri, 23 Oct 2009 16:38:04 +0100", "msg_from": "Michal J. Kubski <[email protected]>", "msg_from_op": true, "msg_subject": "query planning different in =?UTF-8?Q?plpgsql=3F?=" }, { "msg_contents": "On Fri, Oct 23, 2009 at 11:38 AM, Michal J. Kubski <[email protected]>wrote:\n\n>\n>\n> Hi,\n>\n> Is there any way to get the query plan of the query run in the stored\n> procedure?\n> I am running the following one and it takes 10 minutes in the procedure\n> when it is pretty fast standalone.\n>\n> Any ideas would be welcome!\n>\n> # EXPLAIN ANALYZE SELECT m.domain_id, nsr_id FROM nsr_meta m, last_snapshot\n> l WHERE m.domain_id = l.domain_id;\n>\n> QUERY PLAN\n>\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=0.00..562432.32 rows=12227848 width=16) (actual\n> time=1430.034..7476.081 rows=294418 loops=1)\n> -> Seq Scan on last_snapshot l (cost=0.00..3983.68 rows=60768 width=8)\n> (actual time=0.010..57.304 rows=60641 loops=1)\n> -> Index Scan using idx_nsr_meta_domain_id on nsr_meta m\n> (cost=0.00..6.68 rows=201 width=16) (actual time=0.111..0.115 rows=5\n> loops=60641)\n> Index Cond: (m.domain_id = l.domain_id)\n> Total runtime: 7635.625 ms\n> (5 rows)\n>\n> Time: 7646.243 ms\n>\n\n Do you not have an index on last_snapshot.domain_id?\n\n--Scott\n\nOn Fri, Oct 23, 2009 at 11:38 AM, Michal J. 
Kubski <[email protected]> wrote:\n\n\nHi,\n\nIs there any way to get the query plan of the query run in the stored\nprocedure?\nI am running the following one and it takes 10 minutes in the procedure\nwhen it is pretty fast standalone.\n\nAny ideas would be welcome!\n\n# EXPLAIN ANALYZE SELECT m.domain_id, nsr_id FROM nsr_meta m, last_snapshot\nl WHERE m.domain_id = l.domain_id;\n\nQUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop  (cost=0.00..562432.32 rows=12227848 width=16) (actual\ntime=1430.034..7476.081 rows=294418 loops=1)\n   ->  Seq Scan on last_snapshot l  (cost=0.00..3983.68 rows=60768 width=8)\n(actual time=0.010..57.304 rows=60641 loops=1)\n   ->  Index Scan using idx_nsr_meta_domain_id on nsr_meta m\n(cost=0.00..6.68 rows=201 width=16) (actual time=0.111..0.115 rows=5\nloops=60641)\n         Index Cond: (m.domain_id = l.domain_id)\n Total runtime: 7635.625 ms\n(5 rows)\n\nTime: 7646.243 ms  Do you not have an index on last_snapshot.domain_id?--Scott", "msg_date": "Fri, 23 Oct 2009 11:49:50 -0400", "msg_from": "Scott Mead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query planning different in plpgsql?" }, { "msg_contents": "On Fri, Oct 23, 2009 at 4:49 PM, Scott Mead <[email protected]>wrote:\n\n>\n>\n> Do you not have an index on last_snapshot.domain_id?\n>\n\nthat, and also try rewriting a query as JOIN. There might be difference in\nperformance/plan.\n\n\n\n-- \nGJ\n\nOn Fri, Oct 23, 2009 at 4:49 PM, Scott Mead <[email protected]> wrote:\n  Do you not have an index on last_snapshot.domain_id?that, and also try rewriting a query as JOIN. There might be difference in performance/plan.\n -- GJ", "msg_date": "Fri, 23 Oct 2009 16:56:36 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query planning different in plpgsql?" }, { "msg_contents": "On Fri, Oct 23, 2009 at 11:38 AM, Michal J. Kubski <[email protected]>wrote:\n>> I am running the following one and it takes 10 minutes in the procedure\n>> when it is pretty fast standalone.\n>> \n>> # EXPLAIN ANALYZE SELECT m.domain_id, nsr_id FROM nsr_meta m, last_snapshot\n>> l WHERE m.domain_id = l.domain_id;\n\nIs it *really* just like that inside the stored procedure? Usually\nthe reason for a difference in plan is that the procedure's query\nreferences some variables of the procedure, which people think act\nlike constants but they don't.\n\nAlso, if you're executing the SELECT as a plpgsql FOR-loop, it will be\nplanned like a cursor, so the thing to compare against is\n\texplain [analyze] declare x cursor for select ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Oct 2009 12:20:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query planning different in plpgsql? " }, { "msg_contents": "\n\r\nHi,\r\n\r\nOn Fri, 23 Oct 2009 16:56:36 +0100, Grzegorz Jaśkiewicz\r\n<[email protected]> wrote:\r\n> On Fri, Oct 23, 2009 at 4:49 PM, Scott Mead\r\n> <[email protected]>wrote:\r\n> \r\n>>\r\n>>\r\n>> Do you not have an index on last_snapshot.domain_id?\r\n>>\r\n> \r\n> that, and also try rewriting a query as JOIN. There might be difference\r\nin\r\n> performance/plan.\r\n> \r\nThanks, it runs better (average 240s, not 700s) with the index. 
Rewriting\r\nqueries\r\nas JOINs does not make any difference.\r\nThe last_snapshot is a temp table created earlier in the procedure\r\nand the query in question is preceded with CREATE TEMPORARY TABLE as well,\r\nnot a cursor. \r\nI still do not get why it performs differently inside the procedure. \r\nIs there any way to see what planning decisions were made?\r\n\r\nBest regards,\r\nMichal\r\n\r\n-- \r\nI hear and I forget. I see and I believe. I do and I understand.\r\n(Confucius)\n\n", "msg_date": "Mon, 26 Oct 2009 10:05:23 +0000", "msg_from": "Michal J. Kubski <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query planning different in =?UTF-8?Q?plpgsql=3F?=" }, { "msg_contents": "On Mon, Oct 26, 2009 at 6:05 AM, Michal J. Kubski <[email protected]> wrote:\n> On Fri, 23 Oct 2009 16:56:36 +0100, Grzegorz Jaśkiewicz\n> <[email protected]> wrote:\n>> On Fri, Oct 23, 2009 at 4:49 PM, Scott Mead\n>> <[email protected]>wrote:\n>>\n>>>\n>>>\n>>>   Do you not have an index on last_snapshot.domain_id?\n>>>\n>>\n>> that, and also try rewriting a query as JOIN. There might be difference\n> in\n>> performance/plan.\n>>\n> Thanks, it runs better (average 240s, not 700s) with the index. Rewriting\n> queries\n> as JOINs does not make any difference.\n> The last_snapshot is a temp table created earlier in the procedure\n> and the query in question is preceded with CREATE TEMPORARY TABLE as well,\n> not a cursor.\n> I still do not get why it performs differently inside the procedure.\n> Is there any way to see what planning decisions were made?\n\nnot directly....can we see the function?\n\nmerlin\n", "msg_date": "Mon, 26 Oct 2009 09:19:26 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query planning different in plpgsql?" }, { "msg_contents": "\n\r\nOn Mon, 26 Oct 2009 09:19:26 -0400, Merlin Moncure <[email protected]>\r\nwrote:\r\n> On Mon, Oct 26, 2009 at 6:05 AM, Michal J. Kubski <[email protected]>\r\n> wrote:\r\n>> On Fri, 23 Oct 2009 16:56:36 +0100, Grzegorz Jaśkiewicz\r\n>> <[email protected]> wrote:\r\n>>> On Fri, Oct 23, 2009 at 4:49 PM, Scott Mead\r\n>>> <[email protected]>wrote:\r\n>>>\r\n>>>>\r\n>>>>\r\n>>>> Do you not have an index on last_snapshot.domain_id?\r\n>>>>\r\n>>>\r\n>>> that, and also try rewriting a query as JOIN. 
There might be difference\r\n>> in\r\n>>> performance/plan.\r\n>>>\r\n>> Thanks, it runs better (average 240s, not 700s) with the index.\r\n> Rewriting\r\n>> queries\r\n>> as JOINs does not make any difference.\r\n>> The last_snapshot is a temp table created earlier in the procedure\r\n>> and the query in question is preceded with CREATE TEMPORARY TABLE as\r\n> well,\r\n>> not a cursor.\r\n>> I still do not get why it performs differently inside the procedure.\r\n>> Is there any way to see what planning decisions were made?\r\n> \r\n> not directly....can we see the function?\r\n> \r\n> merlin\r\n\r\nIt looks like that (I stripped off some fields in result_rs record, to make\r\nit more brief\r\nand leave the relevant part) \r\n\r\nCREATE OR REPLACE FUNCTION build_list() RETURNS SETOF result_rs AS $$\r\nDECLARE \r\n start_time TIMESTAMP;\r\n rec result_rs;\r\nBEGIN\r\n start_time := timeofday()::timestamp;\r\n\r\n CREATE TEMPORARY TABLE last_snapshot AS SELECT * FROM last_take; --\r\nlast_take is a view\r\n CREATE INDEX last_snapshot_idx ON last_snapshot USING btree(domain_id)\r\nWITH (fillfactor=100);\r\n\r\n CREATE TEMPORARY TABLE tmp_lm AS SELECT m.domain_id, nsr_id FROM\r\nnsrs_meta m JOIN last_snapshot l ON m.domain_id = l.domain_id;\r\n CREATE INDEX tmp_lm_idx ON tmp_lm USING btree(nsr_id) WITH\r\n(fillfactor=100);\r\n\r\n CREATE TEMPORARY TABLE tmp_ns_bl_matching_domains AS SELECT DISTINCT\r\nlm.domain_id FROM tmp_lm lm JOIN nsrs n ON lm.nsr_id = n.id JOIN ns_bl b ON\r\nn.ip_id = b.ip_id;\r\n CREATE INDEX tmp_bls_0 ON tmp_ns_bl_matching_domains USING\r\nbtree(domain_id) WITH (fillfactor=100);\r\n DROP TABLE tmp_lm;\r\n\r\n CREATE TEMPORARY TABLE temp_result AS\r\n SELECT\r\n t.domain_id,\r\n t.name,\r\n (CASE WHEN b.domain_id IS NULL THEN 0 ELSE 1 END) AS is_bl,\r\n (CASE WHEN f.is_s IS NULL THEN 0 ELSE f.is_s::INTEGER END) AS\r\nis_s,\r\n FROM last_snapshot t \r\n LEFT JOIN tmp_ns_bl_matching_domains b ON\r\nb.domain_id=t.domain_id\r\n LEFT JOIN (SELECT DISTINCT ON (domain_id) * FROM domain_flags\r\nf) f ON t.domain_id=f.domain_id;\r\n\r\n FOR rec IN SELECT\r\n UTC_NOW(),\r\n name,\r\n is_bl,\r\n is_s\r\n FROM temp_result t\r\n LOOP\r\n RETURN NEXT rec;\r\n END LOOP;\r\n\r\n DROP TABLE temp_result;\r\n DROP TABLE tmp_ns_bl_matching_domains;\r\n\r\n PERFORM time_log('BUILD', get_elapsed_time(start_time));\r\n\r\nEND;\r\n$$ LANGUAGE plpgsql;\r\n\r\nThanks,\r\nMichal\r\n\r\n\r\n-- \r\nI hear and I forget. I see and I believe. I do and I understand.\r\n(Confucius)\n\n", "msg_date": "Mon, 26 Oct 2009 13:50:00 +0000", "msg_from": "Michal J. Kubski <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query planning different in =?UTF-8?Q?plpgsql=3F?=" }, { "msg_contents": "\"Michal J. Kubski\" <[email protected]> writes:\n> [ function that creates a bunch of temporary tables and immediately\n> joins them ]\n\nIt'd probably be a good idea to insert an ANALYZE on the temp tables\nafter you fill them. The way you've got this set up, there is no chance\nof auto-analyze correcting that oversight for you, so the planner will\nbe planning the join \"blind\" without any stats. 
Good results would only\ncome by pure luck.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Oct 2009 14:09:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query planning different in =?UTF-8?Q?plpgsql=3F?= " }, { "msg_contents": "\n\n\n\n\nTry to force a unique plan, like that:\n\nSELECT field, field2 ...\nFROM table1\nWHERE field3 = 'xxx'\nAND field4 = 'yyy'\nAND field5 = 'zzz'\n\nso, in that example, I need the planner to use my field4 index, but the\nplanner insists to use the field5, so I rewrite the query like this:\n\nSELECT field, field2 ...\nFROM table1\nWHERE trim(field3) = 'xxx'\nAND field4 = 'yyy'\nAND trim(field5) = 'zzz'\n\nI  didn´t give any option to the planner, so I get what plan I want.\n\nWaldomiro\n\n\nTom Lane escreveu:\n\n\"Michal J. Kubski\" <[email protected]> writes:\n \n\n[ function that creates a bunch of temporary tables and immediately\njoins them ]\n \n\n\nIt'd probably be a good idea to insert an ANALYZE on the temp tables\nafter you fill them. The way you've got this set up, there is no chance\nof auto-analyze correcting that oversight for you, so the planner will\nbe planning the join \"blind\" without any stats. Good results would only\ncome by pure luck.\n\n\t\t\tregards, tom lane\n\n \n\n\n\n\n", "msg_date": "Mon, 26 Oct 2009 17:56:12 -0200", "msg_from": "Waldomiro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query planning different in plpgsql?" }, { "msg_contents": "\nOn Mon, 26 Oct 2009 11:52:22 -0400, Merlin Moncure <[email protected]>\r\nwrote:\r\n>>>>>>   Do you not have an index on last_snapshot.domain_id?\r\n>>>>>>\r\n>>>>> that, and also try rewriting a query as JOIN. There might be\r\n>>>>> difference in performance/plan.\r\n>>>>>\r\n>>>> Thanks, it runs better (average 240s, not 700s) with the index.\r\n>>> Rewriting\r\n>>>> queries\r\n>>>> as JOINs does not make any difference.\r\n>>>> The last_snapshot is a temp table created earlier in the procedure\r\n>>>> and the query in question is preceded with CREATE TEMPORARY TABLE as\r\nwell,\r\n>>>> not a cursor.\r\n>>>> I still do not get why it performs differently inside the procedure.\r\n>>>> Is there any way to see what planning decisions were made?\r\n>>>\r\n>>> not directly....can we see the function?\r\n>>>\r\n>>> merlin\r\n>>\r\n>> It looks like that (I stripped off some fields in result_rs record, to\r\n>> make\r\n>> it more brief\r\n>> and leave the relevant part)\r\n>>\r\n\r\n>> [..function cut off..]\r\n\r\n> hm. what version of postgres are you using? I have some version\r\n> dependent suggestions. Also, is it ok to respond to the list quoting\r\n> any/all of your function? (I'd perfer to keep the discussion public if\r\n> possible).\r\n> \r\n\r\nHi,\r\n\r\nApologies for late response. It is 8.3.7. Tom Lane's suggestion to add\r\nANALYZE seem to help it,\r\nthough I still sometimes get long query runs.\r\n\r\nThanks,\r\nMichal\r\n\r\n-- \r\nI hear and I forget. I see and I believe. I do and I understand.\r\n(Confucius)\n", "msg_date": "Thu, 29 Oct 2009 14:28:07 +0000", "msg_from": "\"Michal J. Kubski\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query planning different in =?UTF-8?Q?plpgsql=3F?=" }, { "msg_contents": "\nOn Mon, 26 Oct 2009 14:09:49 -0400, Tom Lane <[email protected]> wrote:\r\n> \"Michal J. 
Kubski\" <[email protected]> writes:\r\n>> [ function that creates a bunch of temporary tables and immediately\r\n>> joins them ]\r\n> \r\n> It'd probably be a good idea to insert an ANALYZE on the temp tables\r\n> after you fill them. The way you've got this set up, there is no chance\r\n> of auto-analyze correcting that oversight for you, so the planner will\r\n> be planning the join \"blind\" without any stats. Good results would only\r\n> come by pure luck.\r\n> \r\n> \t\t\tregards, tom lane\r\n\r\nHi,\r\n\r\nApologies for late response. Thanks a lot: ANALYZE seem to help it! I\r\nstill sometimes\r\nget long query runs though. As far as I understand using index over\r\nsequential scan\r\non joins should be faster. Could it be possible that the query planner\r\ndecides\r\nto use seqscan instead of index scan on some random occasions? \r\n\r\nThanks,\r\nMichal\r\n\r\n-- \r\nI hear and I forget. I see and I believe. I do and I understand.\r\n(Confucius)\n", "msg_date": "Thu, 29 Oct 2009 14:32:36 +0000", "msg_from": "\"Michal J. Kubski\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query planning different in =?UTF-8?Q?plpgsql=3F?=" }, { "msg_contents": "On 10/23/09 8:38 AM, \"Michal J.Kubski\" <[email protected]> wrote:\n\n> \n> \n> \n> \n> Hi,\n> \n> \n> \n> Is there any way to get the query plan of the query run in the stored\n> \n> procedure?\n> \n> I am running the following one and it takes 10 minutes in the procedure\n> \n> when it is pretty fast standalone.\n> \n> \n> \n> Any ideas would be welcome!\n> \n> \n\nIf your query is \nSELECT field, field2 FROM table1 WHERE field3 = 'xxx' AND field4 = 'yyy'\n\nAnd you want to test what the planner will do without the knowledge of the\nexact values 'xxx' and 'yyy', you can prepare a statement:\n\n#PREPARE foo() AS SELECT field, field2 FROM table1 WHERE field3 = $1 AND\nfield4 = $2;\n\n#EXPLAIN execute foo('xxx', 'yyy');\n\nIf field3 and field4 don't have unique indexes, the plan might differ. It\nwill most likely differ if 'xxx' or 'yyy' is a very common value in the\ntable and the table is not tiny.\n\n", "msg_date": "Thu, 29 Oct 2009 09:43:56 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query planning different in plpgsql?" } ]
[ { "msg_contents": "Hi\n\nIt seems to me that the row estimates on a ts_vector search is a bit on\nthe low side for terms that is not in th MCV-list in pg_stats:\n\nftstest=# explain select id from ftstest where ftstest_body_fts @@\nto_tsquery('nonexistingterm') order by id limit 10;\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------\n Limit (cost=221.93..221.95 rows=10 width=4)\n -> Sort (cost=221.93..222.01 rows=33 width=4)\n Sort Key: id\n -> Bitmap Heap Scan on ftstest (cost=154.91..221.22 rows=33\nwidth=4)\n Recheck Cond: (ftstest_body_fts @@\nto_tsquery('nonexistingterm'::text))\n -> Bitmap Index Scan on ftstest_tfs_idx\n(cost=0.00..154.90 rows=33 width=0)\n Index Cond: (ftstest_body_fts @@\nto_tsquery('nonexistingterm'::text))\n(7 rows)\n\nThen I have been reading:\nhttp://www.postgresql.org/docs/8.4/static/row-estimation-examples.html\nand trying to reproduce the selectivity number for this query:\n\nselectivity = (1 - sum(mvf))/(num_distinct - num_mcv)\n\nnum_distinct is around 10m.\nftstest=# SELECT\nattname,array_dims(most_common_vals),array_dims(most_common_freqs) FROM\npg_stats WHERE tablename='ftstest' AND\nattname='ftstest_body_fts';\n attname | array_dims | array_dims\n------------------+------------+------------\n ftstest_body_fts | [1:2088] | [1:2090]\n(1 row)\n\nftstest=# select tablename,attname,freq from (select tablename,attname,\nsum(freq) as freq from (SELECT\ntablename,attname,unnest(most_common_freqs) as freq FROM pg_stats) as\nfoo group by tablename,attname) as foo2 where freq > 1;\n tablename | attname | freq\n-----------+------------------+---------\n ftstest | ftstest_body_fts | 120.967\n(1 row)\n\nthen the selectivity is\n(1-120.967)/(10000000 - 2088) = -.00001199920543409463\n\nWhich seem .. well wrong.\n\nThe algorithm makes the assumption that if a record is matching one of\nthe MCV's then it is not in the matching a rare-term. The above\nalgorithm doesnt give me the 33 rows about, so can anyone shortly\ndescribe the changes for this algorithm when using ts_vectors?\n\nThanks.\n\n-- \nJesper\n", "msg_date": "Fri, 23 Oct 2009 20:38:56 +0200", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": true, "msg_subject": "Calculating selectivity for the query-planner on ts_vector colums. " }, { "msg_contents": "Jesper Krogh <[email protected]> writes:\n> It seems to me that the row estimates on a ts_vector search is a bit on\n> the low side for terms that is not in th MCV-list in pg_stats:\n\ntsvector has its own selectivity estimator that's not like plain scalar\nequality. Look into src/backend/tsearch/ts_selfuncs.c if you want to\nsee the rules.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Oct 2009 15:11:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Calculating selectivity for the query-planner on ts_vector\n\tcolums." }, { "msg_contents": "Tom Lane wrote:\n> Jesper Krogh <[email protected]> writes:\n>> It seems to me that the row estimates on a ts_vector search is a bit on\n>> the low side for terms that is not in th MCV-list in pg_stats:\n> \n> tsvector has its own selectivity estimator that's not like plain scalar\n> equality. 
Look into src/backend/tsearch/ts_selfuncs.c if you want to\n> see the rules.\n\nThanks.\n\nleast_common_frequency / 2\nWhich also gives 33 in my situation.\n\n-- \nJesper\n\n\n\n", "msg_date": "Fri, 23 Oct 2009 21:29:45 +0200", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Calculating selectivity for the query-planner on ts_vector\n colums." } ]
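To see the rule from ts_selfuncs.c that the thread above lands on, a quick check against the same test table works (the exact numbers depend on the column's statistics target and on what ANALYZE happened to sample): compare the planner's estimate for a lexeme that is absent from the stored most-common-element list with the true count.

    -- planner's guess for a lexeme outside the stored common elements:
    -- roughly (minimum stored element frequency / 2) * estimated row count,
    -- which is where the 33-row estimate above comes from
    EXPLAIN SELECT id FROM ftstest
     WHERE ftstest_body_fts @@ to_tsquery('nonexistingterm');

    -- ground truth for comparison
    SELECT count(*) FROM ftstest
     WHERE ftstest_body_fts @@ to_tsquery('nonexistingterm');

If that fallback guess is too high for a given workload, raising the column's statistics target (ALTER TABLE ftstest ALTER COLUMN ftstest_body_fts SET STATISTICS ...) stores more element frequencies and so usually lowers the minimum that the divide-by-two rule is applied to; how much that helps in a particular case is an assumption to verify, not something established in the thread.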
[ { "msg_contents": "Hi.\n\nI'm currently trying to figure out why the tsearch performance seems to\nvary a lot between different queryplans. I have created a sample dataset\nthat sort of resembles the data I have to work on.\n\nThe script that builds the dataset is at:\nhttp://krogh.cc/~jesper/build-test.pl\nand http://krogh.cc/~jesper/words.txt is needed for it to run.\n\nTest system.. average desktop, 1 SATA drive and 1.5GB memory with pg 8.4.1.\n\nThe dataset consists of words randomized, but .. all records contains\n\"commonterm\", around 80% contains commonterm80 and so on..\n\n\tmy $rand = rand();\n\tpush @doc,\"commonterm\" if $commonpos == $j;\n\tpush @doc,\"commonterm80\" if $commonpos == $j && $rand < 0.8;\n\nResults are run multiple times after each other so they should be\nreproducible:\n\nftstest=# explain analyze select id from ftstest where body_fts @@\nto_tsquery('commonterm80');\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------\n Seq Scan on ftstest (cost=0.00..10750.00 rows=40188 width=4) (actual\ntime=0.102..1792.215 rows=40082 loops=1)\n Filter: (body_fts @@ to_tsquery('commonterm80'::text))\n Total runtime: 1809.437 ms\n(3 rows)\n\nftstest=# set enable_seqscan=off;\nSET\nftstest=# explain analyze select id from ftstest where body_fts @@\nto_tsquery('commonterm80');\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on ftstest (cost=115389.14..125991.96 rows=40188\nwidth=4) (actual time=17.445..197.356 rows=40082 loops=1)\n Recheck Cond: (body_fts @@ to_tsquery('commonterm80'::text))\n -> Bitmap Index Scan on ftstest_gin_idx (cost=0.00..115379.09\nrows=40188 width=0) (actual time=13.370..13.370 rows=40082 loops=1)\n Index Cond: (body_fts @@ to_tsquery('commonterm80'::text))\n Total runtime: 204.201 ms\n(5 rows)\n\nGiven that the seq-scan have to visit 50K row to create the result and\nthe bitmap heap scan only have to visit 40K (but search the index) we\nwould expect the seq-scan to be at most 25% more expensive than the\nbitmap-heap scan.. e.g. less than 300ms.\n\nJesper\n-- \nJesper\n", "msg_date": "Mon, 26 Oct 2009 21:02:57 +0100", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": true, "msg_subject": "bitmap heap scan way cheaper than seq scan on the same amount of\n\ttuples (fts-search)." }, { "msg_contents": "On Mon, 2009-10-26 at 21:02 +0100, Jesper Krogh wrote:\n\n> Test system.. average desktop, 1 SATA drive and 1.5GB memory with pg 8.4.1.\n> \n> The dataset consists of words randomized, but .. all records contains\n> \"commonterm\", around 80% contains commonterm80 and so on..\n> \n> \tmy $rand = rand();\n> \tpush @doc,\"commonterm\" if $commonpos == $j;\n> \tpush @doc,\"commonterm80\" if $commonpos == $j && $rand < 0.8;\n\nYou should probably re-generate your random value for each call rather\nthan store it. Currently, every document with commonterm20 is guaranteed\nto also have commonterm40, commonterm60, etc, which probably isn't very\nrealistic, and also makes doc size correlate with word rarity.\n\n> Given that the seq-scan have to visit 50K row to create the result and\n> the bitmap heap scan only have to visit 40K (but search the index) we\n> would expect the seq-scan to be at most 25% more expensive than the\n> bitmap-heap scan.. e.g. less than 300ms.\n\nI suspect table bloat. 
Try VACUUMing your table and trying again.\n\nIn this sort of test it's often a good idea to TRUNCATE the table before\npopulating it with a newly generated data set. That helps avoid any\nresidual effects from table bloat etc from lingering between test runs.\n\n--\nCraig Ringer\n\n", "msg_date": "Tue, 27 Oct 2009 12:57:05 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bitmap heap scan way cheaper than seq scan on the\n\tsame amount of tuples (fts-search)." }, { "msg_contents": "Craig Ringer wrote:\n> On Mon, 2009-10-26 at 21:02 +0100, Jesper Krogh wrote:\n> \n>> Test system.. average desktop, 1 SATA drive and 1.5GB memory with pg 8.4.1.\n>>\n>> The dataset consists of words randomized, but .. all records contains\n>> \"commonterm\", around 80% contains commonterm80 and so on..\n>>\n>> \tmy $rand = rand();\n>> \tpush @doc,\"commonterm\" if $commonpos == $j;\n>> \tpush @doc,\"commonterm80\" if $commonpos == $j && $rand < 0.8;\n> \n> You should probably re-generate your random value for each call rather\n> than store it. Currently, every document with commonterm20 is guaranteed\n> to also have commonterm40, commonterm60, etc, which probably isn't very\n> realistic, and also makes doc size correlate with word rarity.\n\nI had that in the first version, but I wanted to have the gaurantee that\na commonterm60 was indeed a subset of commonterm80, so that why its\nsturctured like that. I know its not realistic, but it gives measureable\nresults since I know my queries will hit the same tuples.\n\nI fail to see how this should have any direct effect on query time?\n\n>> Given that the seq-scan have to visit 50K row to create the result and\n>> the bitmap heap scan only have to visit 40K (but search the index) we\n>> would expect the seq-scan to be at most 25% more expensive than the\n>> bitmap-heap scan.. e.g. less than 300ms.\n> \n> I suspect table bloat. 
Try VACUUMing your table and trying again.\n\nNo bloat here:\nftstest=# VACUUM FULL VERBOSE ftstest;\nINFO: vacuuming \"public.ftstest\"\nINFO: \"ftstest\": found 0 removable, 50000 nonremovable row versions in\n10000 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 1352 to 1652 bytes long.\nThere were 0 unused item pointers.\nTotal free space (including removable row versions) is 6859832 bytes.\n0 pages are or will become empty, including 0 at the end of the table.\n536 pages containing 456072 free bytes are potential move destinations.\nCPU 0.03s/0.03u sec elapsed 0.06 sec.\nINFO: index \"ftstest_id_key\" now contains 50000 row versions in 139 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.13 sec.\nINFO: index \"ftstest_gin_idx\" now contains 50000 row versions in 35792\npages\nDETAIL: 0 index pages have been deleted, 25022 are currently reusable.\nCPU 0.46s/0.11u sec elapsed 11.16 sec.\nINFO: \"ftstest\": moved 0 row versions, truncated 10000 to 10000 pages\nDETAIL: CPU 0.00s/0.00u sec elapsed 0.01 sec.\nINFO: vacuuming \"pg_toast.pg_toast_908525\"\nINFO: \"pg_toast_908525\": found 0 removable, 100000 nonremovable row\nversions in 16710 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 270 to 2032 bytes long.\nThere were 0 unused item pointers.\nTotal free space (including removable row versions) is 3695712 bytes.\n0 pages are or will become empty, including 0 at the end of the table.\n5063 pages containing 1918692 free bytes are potential move destinations.\nCPU 0.38s/0.17u sec elapsed 2.64 sec.\nINFO: index \"pg_toast_908525_index\" now contains 100000 row versions in\n276 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.28 sec.\nINFO: \"pg_toast_908525\": moved 0 row versions, truncated 16710 to 16710\npages\nDETAIL: CPU 0.00s/0.00u sec elapsed 0.00 sec.\nVACUUM\nftstest=#\n\n\n> In this sort of test it's often a good idea to TRUNCATE the table before\n> populating it with a newly generated data set. That helps avoid any\n> residual effects from table bloat etc from lingering between test runs.\n\nAs you could see in the scripts, the table is dropped just before its\nrecreated and filled with data.\n\nDid you try to re-run the test?\n\nJesper\n-- \nJesper\n", "msg_date": "Tue, 27 Oct 2009 06:08:41 +0100", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: bitmap heap scan way cheaper than seq scan on the same\n\tamount of tuples (fts-search)." }, { "msg_contents": "On Tue, 2009-10-27 at 06:08 +0100, Jesper Krogh wrote:\n\n> > You should probably re-generate your random value for each call rather\n> > than store it. Currently, every document with commonterm20 is guaranteed\n> > to also have commonterm40, commonterm60, etc, which probably isn't very\n> > realistic, and also makes doc size correlate with word rarity.\n> \n> I had that in the first version, but I wanted to have the gaurantee that\n> a commonterm60 was indeed a subset of commonterm80, so that why its\n> sturctured like that. 
I know its not realistic, but it gives measureable\n> results since I know my queries will hit the same tuples.\n> \n> I fail to see how this should have any direct effect on query time?\n\nProbably not, in truth, but with the statistics-based planner I'm\noccasionally surprised by what can happen.\n\n> \n> > In this sort of test it's often a good idea to TRUNCATE the table before\n> > populating it with a newly generated data set. That helps avoid any\n> > residual effects from table bloat etc from lingering between test runs.\n> \n> As you could see in the scripts, the table is dropped just before its\n> recreated and filled with data.\n> \n> Did you try to re-run the test?\n\nNo, I didn't. I thought it worth checking if bloat might be the result\nfirst, though I should've read the scripts to confirm you weren't\nalready handling that possibility.\n\nAnyway, I've done a run to generate your data set and run a test. After\nexecuting the test statement twice (once with and once without\nenable_seqscan) to make sure all data is in cache and not being read\nfrom disk, when I run the tests here are my results:\n\n\ntest=> set enable_seqscan=on;\nSET\ntest=> explain analyze select id from ftstest where body_fts @@\nto_tsquery('commonterm80');\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on ftstest (cost=36.96..227.10 rows=50 width=4) (actual time=15.830..134.194 rows=40061 loops=1)\n Recheck Cond: (body_fts @@ to_tsquery('commonterm80'::text))\n -> Bitmap Index Scan on ftstest_gin_idx (cost=0.00..36.95 rows=50 width=0) (actual time=11.905..11.905 rows=40061 loops=1)\n Index Cond: (body_fts @@ to_tsquery('commonterm80'::text))\n Total runtime: 148.477 ms\n(5 rows)\n\ntest=> set enable_seqscan=off;\nSET\ntest=> explain analyze select id from ftstest where body_fts @@\nto_tsquery('commonterm80');\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on ftstest (cost=36.96..227.10 rows=50 width=4) (actual time=15.427..134.156 rows=40061 loops=1)\n Recheck Cond: (body_fts @@ to_tsquery('commonterm80'::text))\n -> Bitmap Index Scan on ftstest_gin_idx (cost=0.00..36.95 rows=50 width=0) (actual time=11.739..11.739 rows=40061 loops=1)\n Index Cond: (body_fts @@ to_tsquery('commonterm80'::text))\n Total runtime: 148.583 ms\n(5 rows)\n\n\n\nAny chance your disk cache was cold on the first test run, so Pg was\nhaving to read the table from disk during the seqscan, and could just\nuse shared_buffers when you repeated the test for the index scan?\n\n\n\n--\nCraig Ringer\n\n", "msg_date": "Tue, 27 Oct 2009 13:33:54 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bitmap heap scan way cheaper than seq scan on the\n\tsame amount of tuples (fts-search)." }, { "msg_contents": "Craig Ringer wrote:\n> On Tue, 2009-10-27 at 06:08 +0100, Jesper Krogh wrote:\n> \n>>> You should probably re-generate your random value for each call rather\n>>> than store it. Currently, every document with commonterm20 is guaranteed\n>>> to also have commonterm40, commonterm60, etc, which probably isn't very\n>>> realistic, and also makes doc size correlate with word rarity.\n>> I had that in the first version, but I wanted to have the gaurantee that\n>> a commonterm60 was indeed a subset of commonterm80, so that why its\n>> sturctured like that. 
I know its not realistic, but it gives measureable\n>> results since I know my queries will hit the same tuples.\n>>\n>> I fail to see how this should have any direct effect on query time?\n> \n> Probably not, in truth, but with the statistics-based planner I'm\n> occasionally surprised by what can happen.\n> \n>>> In this sort of test it's often a good idea to TRUNCATE the table before\n>>> populating it with a newly generated data set. That helps avoid any\n>>> residual effects from table bloat etc from lingering between test runs.\n>> As you could see in the scripts, the table is dropped just before its\n>> recreated and filled with data.\n>>\n>> Did you try to re-run the test?\n> \n> No, I didn't. I thought it worth checking if bloat might be the result\n> first, though I should've read the scripts to confirm you weren't\n> already handling that possibility.\n> \n> Anyway, I've done a run to generate your data set and run a test. After\n> executing the test statement twice (once with and once without\n> enable_seqscan) to make sure all data is in cache and not being read\n> from disk, when I run the tests here are my results:\n> \n> \n> test=> set enable_seqscan=on;\n> SET\n> test=> explain analyze select id from ftstest where body_fts @@\n> to_tsquery('commonterm80');\n\nHere you should search for \"commonterm\" not \"commonterm80\", commonterm\nwill go into a seq-scan. You're not testing the same thing as I did.\n\n> Any chance your disk cache was cold on the first test run, so Pg was\n> having to read the table from disk during the seqscan, and could just\n> use shared_buffers when you repeated the test for the index scan?\n\nthey were run repeatedly.\n", "msg_date": "Tue, 27 Oct 2009 06:44:35 +0100", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: bitmap heap scan way cheaper than seq scan on the same\n\tamount of tuples (fts-search)." }, { "msg_contents": "On Tue, 2009-10-27 at 06:44 +0100, Jesper Krogh wrote:\n\n> Here you should search for \"commonterm\" not \"commonterm80\", commonterm\n> will go into a seq-scan. You're not testing the same thing as I did.\n\nPoint taken. I ran the same commands as you, but as the planner picked\ndifferent plans it wasn't much use. The fact that I didn't notice that\nis a bit worrying, as it suggests and even worse than normal degree of\nbrain-fade. Sorry for the waste of time.\n\nAnyway, testing more usefully:\n\nOn 8.4 on a different system Pg uses the seq scan by preference, with a\nruntime of 1148ms. It doesn't seem to want to do a bitmap heap scan when\nsearching for `commonterm' even when enable_seqscan is set to `off'. A\nsearch for `commonterm80' also uses a seq scan (1067ms), but if\nenable_seqscan is set to off it'll use a bitmap heap scan at 237ms.\n\nOn my 8.3 Pg isn't using a seqscan even for `commonterm', which is ...\nodd. If I force it not to use a bitmap heap scan it'll use an index\nscan. Preventing that too results in a seq scan with a runtime of\n1500ms vs the 161ms of the bitmap heap scan. I agree that it seems like\na pretty strange result on face value.\n\n\nSo, on both 8.3 and 8.4 the sequential scan is indeed taking a LOT\nlonger than the bitmap heap scan, though similar numbers of tuples are\nbeing read by both. \n\nI see the same results when actually reading the results rather than\njust doing an `explain analyze'. 
With psql set to send output\nto /dev/null and with \\timing enabled:\n\ntest=> \\o /dev/null\ntest=> set enable_seqscan = on;\nTime: 0.282 ms\ntest=> select id from ftstest where body_fts @@\nto_tsquery('commonterm80');\nTime: 988.880 ms\ntest=> set enable_seqscan = off;\nTime: 0.286 ms\ntest=> select id from ftstest where body_fts @@\nto_tsquery('commonterm80');\nTime: 159.167 ms\n\nso - nearly 1s vs 0.15s is a big difference between what I previously\nconfirmed to be bitmap heap scan and seq scan respectively for the same\nquery. The same number of records are being returned in both cases.\n\nIf I \"select *\" rather than just reading the `id' field, the runtimes\nare much more similar - 4130ms seq scan, and 3285 bitmap heap scan (both\nreading data to /dev/null), a difference of ~800. `EXPLAIN ANALYZE'\nresults are still quite different, though, at 1020ms seq scan vs 233ms\nbitmap heap, suggesting that the similarity is created only by the time\ntaken to actually transfer the data to the client. The time difference\nbetween the two is much the same.\n\nSo - for some reason the seq scan takes 800ms or so longer than the\nbitmap heap scan. I can see why you're puzzled. I can reproduce it on\ntwo different machines with two different Pg versions, and using two\nslightly different methods for loading the data as well. So, I can\nconfirm your test results now that I'm actually testing properly.\n\n\ntest=> explain analyze select * from ftstest where body_fts @@\nto_tsquery('commonterm80');\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on ftstest (cost=25836.66..36432.95 rows=39753\nwidth=54) (actual time=27.452..175.481 rows=39852 loops=1)\n Recheck Cond: (body_fts @@ to_tsquery('commonterm80'::text))\n -> Bitmap Index Scan on ftstest_gin_idx (cost=0.00..25826.72\nrows=39753 width=0) (actual time=25.186..25.186 rows=39852 loops=1)\n Index Cond: (body_fts @@ to_tsquery('commonterm80'::text))\n Total runtime: 233.473 ms\n(5 rows)\n\ntest=> set enable_seqscan = on;\nSET\ntest=> explain analyze select * from ftstest where body_fts @@\nto_tsquery('commonterm80');\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------\n Seq Scan on ftstest (cost=0.00..10750.00 rows=39753 width=54) (actual\ntime=0.141..956.496 rows=39852 loops=1)\n Filter: (body_fts @@ to_tsquery('commonterm80'::text))\n Total runtime: 1020.936 ms\n(3 rows)\n\n\n\n\nBy the way, for the 8.4 test I modifed the loader script so it wouldn't\ntake quite so painfully long to run second time 'round. I turned\nautocommit off, wrapped all the inserts up in a single transaction, and\nmoved the fts index creation to after all the data has been inserted.\nIt's a *LOT* faster, and the test results match yours.\n\n> they were run repeatedly.\n\nYeah, just saw that in your original mail. Sorry.\n\n--\nCraig Ringer\n\n", "msg_date": "Tue, 27 Oct 2009 14:14:37 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bitmap heap scan way cheaper than seq scan on the\n\tsame amount of tuples (fts-search)." }, { "msg_contents": "Craig Ringer wrote:\n> On 8.4 on a different system Pg uses the seq scan by preference, with a\n> runtime of 1148ms. It doesn't seem to want to do a bitmap heap scan when\n> searching for `commonterm' even when enable_seqscan is set to `off'. 
A\n> search for `commonterm80' also uses a seq scan (1067ms), but if\n> enable_seqscan is set to off it'll use a bitmap heap scan at 237ms.\n\nOk, thats excactly as my number.\n\n> On my 8.3 Pg isn't using a seqscan even for `commonterm', which is ...\n> odd. If I force it not to use a bitmap heap scan it'll use an index\n> scan. Preventing that too results in a seq scan with a runtime of\n> 1500ms vs the 161ms of the bitmap heap scan. I agree that it seems like\n> a pretty strange result on face value.\n\nPG 8.3 doesnt have statistics data available for gin-indexes so that may\nbe why the query-planner can do otherwise on 8.3. It also means that it\nis a regression since in these cases 8.4 will perform worse than 8.3\ndid. (allthough the statistics makes a lot other cases way better).\n\n> So, on both 8.3 and 8.4 the sequential scan is indeed taking a LOT\n> longer than the bitmap heap scan, though similar numbers of tuples are\n> being read by both. \n>\n> I see the same results when actually reading the results rather than\n> just doing an `explain analyze'. With psql set to send output\n> to /dev/null and with \\timing enabled:\n> \n> test=> \\o /dev/null\n> test=> set enable_seqscan = on;\n> Time: 0.282 ms\n> test=> select id from ftstest where body_fts @@\n> to_tsquery('commonterm80');\n> Time: 988.880 ms\n> test=> set enable_seqscan = off;\n> Time: 0.286 ms\n> test=> select id from ftstest where body_fts @@\n> to_tsquery('commonterm80');\n> Time: 159.167 ms\n> \n> so - nearly 1s vs 0.15s is a big difference between what I previously\n> confirmed to be bitmap heap scan and seq scan respectively for the same\n> query. The same number of records are being returned in both cases.\n> \n> If I \"select *\" rather than just reading the `id' field, the runtimes\n> are much more similar - 4130ms seq scan, and 3285 bitmap heap scan (both\n> reading data to /dev/null), a difference of ~800. `EXPLAIN ANALYZE'\n> results are still quite different, though, at 1020ms seq scan vs 233ms\n> bitmap heap, suggesting that the similarity is created only by the time\n> taken to actually transfer the data to the client. The time difference\n> between the two is much the same.\n> \n> So - for some reason the seq scan takes 800ms or so longer than the\n> bitmap heap scan. I can see why you're puzzled. I can reproduce it on\n> two different machines with two different Pg versions, and using two\n> slightly different methods for loading the data as well. 
So, I can\n> confirm your test results now that I'm actually testing properly.\n\nThanks a lot.\n\n> test=> explain analyze select * from ftstest where body_fts @@\n> to_tsquery('commonterm80');\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------\n> Bitmap Heap Scan on ftstest (cost=25836.66..36432.95 rows=39753\n> width=54) (actual time=27.452..175.481 rows=39852 loops=1)\n> Recheck Cond: (body_fts @@ to_tsquery('commonterm80'::text))\n> -> Bitmap Index Scan on ftstest_gin_idx (cost=0.00..25826.72\n> rows=39753 width=0) (actual time=25.186..25.186 rows=39852 loops=1)\n> Index Cond: (body_fts @@ to_tsquery('commonterm80'::text))\n> Total runtime: 233.473 ms\n> (5 rows)\n> \n> test=> set enable_seqscan = on;\n> SET\n> test=> explain analyze select * from ftstest where body_fts @@\n> to_tsquery('commonterm80');\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------\n> Seq Scan on ftstest (cost=0.00..10750.00 rows=39753 width=54) (actual\n> time=0.141..956.496 rows=39852 loops=1)\n> Filter: (body_fts @@ to_tsquery('commonterm80'::text))\n> Total runtime: 1020.936 ms\n> (3 rows)\n\nMy systems seems more to prefer bitmap-scans a bit more, but given the\nactual number it seems to be preferrablem. Thats about query-planning,\nmy main reason for posting was the actual run time.\n\n> By the way, for the 8.4 test I modifed the loader script so it wouldn't\n> take quite so painfully long to run second time 'round. I turned\n> autocommit off, wrapped all the inserts up in a single transaction, and\n> moved the fts index creation to after all the data has been inserted.\n> It's a *LOT* faster, and the test results match yours.\n\nI'll make that change if I have to work a bit more with it.\n\nThanks for speding time confirming my findings. (the I know its not just\n me getting blind at some problem).\n\nJesper\n-- \nJesper\n", "msg_date": "Tue, 27 Oct 2009 07:42:00 +0100", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: bitmap heap scan way cheaper than seq scan on the same\n\tamount of tuples (fts-search)." }, { "msg_contents": "On Mon, Oct 26, 2009 at 4:02 PM, Jesper Krogh <[email protected]> wrote:\n> Hi.\n>\n> I'm currently trying to figure out why the tsearch performance seems to\n> vary a lot between different queryplans. I have created a sample dataset\n> that sort of resembles the data I have to work on.\n>\n> The script that builds the dataset is at:\n> http://krogh.cc/~jesper/build-test.pl\n> and http://krogh.cc/~jesper/words.txt is needed for it to run.\n>\n> Test system.. average desktop, 1 SATA drive and 1.5GB memory with pg 8.4.1.\n>\n> The dataset consists of words randomized, but .. 
all records contains\n> \"commonterm\", around 80% contains commonterm80 and so on..\n>\n>        my $rand = rand();\n>        push @doc,\"commonterm\" if $commonpos == $j;\n>        push @doc,\"commonterm80\" if $commonpos == $j && $rand < 0.8;\n>\n> Results are run multiple times after each other so they should be\n> reproducible:\n>\n> ftstest=# explain analyze select id from ftstest where body_fts @@\n> to_tsquery('commonterm80');\n>                                                   QUERY PLAN\n>\n> ----------------------------------------------------------------------------------------------------------------\n>  Seq Scan on ftstest  (cost=0.00..10750.00 rows=40188 width=4) (actual\n> time=0.102..1792.215 rows=40082 loops=1)\n>   Filter: (body_fts @@ to_tsquery('commonterm80'::text))\n>  Total runtime: 1809.437 ms\n> (3 rows)\n>\n> ftstest=# set enable_seqscan=off;\n> SET\n> ftstest=# explain analyze select id from ftstest where body_fts @@\n> to_tsquery('commonterm80');\n>                                                              QUERY PLAN\n>\n> ---------------------------------------------------------------------------------------------------------------------------------------\n>  Bitmap Heap Scan on ftstest  (cost=115389.14..125991.96 rows=40188\n> width=4) (actual time=17.445..197.356 rows=40082 loops=1)\n>   Recheck Cond: (body_fts @@ to_tsquery('commonterm80'::text))\n>   ->  Bitmap Index Scan on ftstest_gin_idx  (cost=0.00..115379.09\n> rows=40188 width=0) (actual time=13.370..13.370 rows=40082 loops=1)\n>         Index Cond: (body_fts @@ to_tsquery('commonterm80'::text))\n>  Total runtime: 204.201 ms\n> (5 rows)\n>\n> Given that the seq-scan have to visit 50K row to create the result and\n> the bitmap heap scan only have to visit 40K (but search the index) we\n> would expect the seq-scan to be at most 25% more expensive than the\n> bitmap-heap scan.. e.g. less than 300ms.\n\nI've seen behavior similar to this in the past with a plain old B-tree\nindex. As in your case, a bitmap index scan was significantly faster\nthan a sequential scan even though essentially all the heap pages had\nto be scanned, but the planner expected the opposite to be true. The\nplanner's expectation is that the dominent cost will be fetching the\npages, and it furthermore thinks that fetching things in sequential\norder is much better than skipping around randomly. However, if all\nthe pages are memory-resident - possibly even in L2 or L3 CPU cache -\nfetching the pages is nearly free, so the dominant cost becomes the\nCPU time to process the tuples.\n\nMy best guess is that in cases like this index cond is cheaper to\nevaluate than the recheck cond/filter, so the index scan wins not by\nreading fewer pages but by avoiding the need to examine some of the\ntuples on each page. I might be all wet, though.\n\nIf your whole database fits in RAM, you could try changing your\nseq_page_cost and random_page_cost variables from the default values\nof 1 and 4 to something around 0.05, or maybe even 0.01, and see\nwhether that helps. But if it's just this query that is in cache and\nyou have lots of other things that are going to disk, that's harder to\ntune. 
You can probably still lower the default values somewhat, but\nif you go crazy with it you'll start to have problems in the other\ndirection.\n\n...Robert\n", "msg_date": "Tue, 27 Oct 2009 10:48:16 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bitmap heap scan way cheaper than seq scan on the same\n\tamount of tuples (fts-search)." }, { "msg_contents": "> On Mon, Oct 26, 2009 at 4:02 PM, Jesper Krogh <[email protected]> wrote:\n>> Given that the seq-scan have to visit 50K row to create the result and\n>> the bitmap heap scan only have to visit 40K (but search the index) we\n>> would expect the seq-scan to be at most 25% more expensive than the\n>> bitmap-heap scan.. e.g. less than 300ms.\n>\n> I've seen behavior similar to this in the past with a plain old B-tree\n> index. As in your case, a bitmap index scan was significantly faster\n> than a sequential scan even though essentially all the heap pages had\n> to be scanned, but the planner expected the opposite to be true. The\n> planner's expectation is that the dominent cost will be fetching the\n> pages, and it furthermore thinks that fetching things in sequential\n> order is much better than skipping around randomly. However, if all\n> the pages are memory-resident - possibly even in L2 or L3 CPU cache -\n> fetching the pages is nearly free, so the dominant cost becomes the\n> CPU time to process the tuples.\n\nWell, no. This topic is not at all about the query-planner. It is about\nthe actual run-time of the two \"allmost\" identical queries. It may be\nthat we're seeing the results because one fits better into L2 or L3 cache,\nbut the complete dataset is memory resident and run multiple times in\na row to eliminate disk-access.\n\n> My best guess is that in cases like this index cond is cheaper to\n> evaluate than the recheck cond/filter, so the index scan wins not by\n> reading fewer pages but by avoiding the need to examine some of the\n> tuples on each page. I might be all wet, though.\n\nIn my example the seq-scan evaulates 50K tuples and the heap-scan 40K.\nThe question is why does the \"per-tuple\" evaluation become that much more\nexpensive (x7.5)[1] on the seq-scan than on the index-scan, when the\ncomplete dataset indeed is in memory?\n\n> If your whole database fits in RAM, you could try changing your\n> seq_page_cost and random_page_cost variables from the default values\n> of 1 and 4 to something around 0.05, or maybe even 0.01, and see\n> whether that helps.\n\nThis is about planning the query. We're talking actual runtimes here.\n\n[1] 50K tuples in 1.800ms vs. 40K tuples in 200ms\n\n-- \nJesper\n\n", "msg_date": "Tue, 27 Oct 2009 16:08:08 +0100 (CET)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: bitmap heap scan way cheaper than seq scan on the\n\tsame amount of tuples (fts-search)." }, { "msg_contents": "On Tue, Oct 27, 2009 at 11:08 AM, <[email protected]> wrote:\n> In my example the seq-scan evaulates 50K tuples and the heap-scan 40K.\n> The question is why does the \"per-tuple\" evaluation become that much more\n> expensive (x7.5)[1] on the seq-scan than on the index-scan, when the\n> complete dataset indeed is in memory?\n\n[ ... thinks a little more ... ]\n\nThe bitmap index scan returns a TID bitmap. From a quick look at\nnodeBitmapHeapScan.c, it appears that the recheck cond only gets\nevaluated for those portions of the TID bitmap that are lossy. 
So I'm\nguessing what may be happening here is that although the bitmap heap\nscan is returning 40K rows, it's doing very few (possibly no) qual\nevaluations, and mostly just checking tuple visibility.\n\n>> If your whole database fits in RAM, you could try changing your\n>> seq_page_cost and random_page_cost variables from the default values\n>> of 1 and 4 to something around 0.05, or maybe even 0.01, and see\n>> whether that helps.\n>\n> This is about planning the query. We're talking actual runtimes here.\n\nSorry, I assumed you were trying to get the planner to pick the faster\nplan. If not, never mind.\n\n...Robert\n", "msg_date": "Tue, 27 Oct 2009 21:24:49 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bitmap heap scan way cheaper than seq scan on the same\n\tamount of tuples (fts-search)." }, { "msg_contents": "Jesper Krogh <[email protected]> writes:\n> I'm currently trying to figure out why the tsearch performance seems to\n> vary a lot between different queryplans. I have created a sample dataset\n> that sort of resembles the data I have to work on.\n\n> The script that builds the dataset is at:\n> http://krogh.cc/~jesper/build-test.pl\n> and http://krogh.cc/~jesper/words.txt is needed for it to run.\n\nI got around to looking at this example finally, and I can reproduce\nyour results pretty closely. I think there are two things going on:\n\n1. The cost estimates for to_tsquery and ts_match_vq don't reflect the\nactually-rather-high costs of those functions. Since the seqscan plan\nexecutes these functions many more times than the indexscan plan, that\nresults in a relative cost error. There's already been some discussion\nof changing the default costs for the tsearch functions, but nothing's\nbeen done yet. However, that seems to be a relatively small problem\ncompared to...\n\n2. The planner is estimating that most of the GIN index has to be\nexamined --- specifically, it estimates (pretty accurately) that\n40188 out of 50000 table rows will match, and the default assumption\nis that that means 40188/50000 of the index blocks will have to be\nread. On my machine the index amounts to 39076 blocks, so we\nestimate 31407 index blocks have to be read, and that's why the cost\nestimate for the indexscan is so huge. The *actual* number of index\nblocks read for the query, according to the stats collector, is 66.\n\nSo it appears that genericcostestimate() is just completely\ninappropriate for GIN indexes, at least when considering common terms.\nI guess that's not so astonishing when you remember that GIN isn't built\naround per-heap-tuple entries the way the other index types are.\nOleg, Teodor, can you suggest any better cost metric to use for GIN?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Oct 2009 11:22:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bitmap heap scan way cheaper than seq scan on the same amount of\n\ttuples (fts-search)." } ]
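Robert's page-cost suggestion in the thread above is only described in prose; what follows is a minimal psql sketch of it, assuming the ftstest table and ftstest_gin_idx index from the build-test.pl script and the 0.05 value he mentions for a fully cached database:

-- session-level settings only, so nothing else on the server is affected
SET seq_page_cost = 0.05;
SET random_page_cost = 0.05;
EXPLAIN ANALYZE SELECT id FROM ftstest WHERE body_fts @@ to_tsquery('commonterm80');
-- with page fetches this cheap the planner should prefer the bitmap heap scan
-- on ftstest_gin_idx without having to set enable_seqscan = off
RESET seq_page_cost;
RESET random_page_cost;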
[ { "msg_contents": "Dear all,\n\nI need to optimize a database used by approx 10 people, I don't need to\nhave the perfect config, simply to avoid stupid bottle necks and follow\nthe best practices...\n\nThe database is used from a web interface the whole work day with\n\"normal\" requests (nothing very special).\n\nAnd each morning huge tables are DELETED and all data is INSERTed new\nfrom a script. (Well, \"huge\" is very relative, it's only 400'000 records)\n\nFor now, we only planned a VACUUM ANALYSE eacha night.\n\nBut the database complained about checkpoint_segments (currently = 3)\n\nWhat should be changed first to improve speed ?\n* memory ?\n *???\nThanks a lot for any advice (I know there are plenty of archived\ndiscussions on this subject but it's always difficult to know what very\nimportant, and what's general as opposed to specific solutions)\n\nHave a nice day !\n\nDenis\n", "msg_date": "Wed, 28 Oct 2009 13:11:28 +0100", "msg_from": "Denis BUCHER <[email protected]>", "msg_from_op": true, "msg_subject": "Postgresql optimisation" }, { "msg_contents": "On Wed, Oct 28, 2009 at 12:11 PM, Denis BUCHER <[email protected]>wrote:\n\n> Dear all,\n>\n> I need to optimize a database used by approx 10 people, I don't need to\n> have the perfect config, simply to avoid stupid bottle necks and follow\n> the best practices...\n>\n> The database is used from a web interface the whole work day with\n> \"normal\" requests (nothing very special).\n>\n> And each morning huge tables are DELETED and all data is INSERTed new\n> from a script. (Well, \"huge\" is very relative, it's only 400'000 records)\n>\nuse truncate, to clear the tables.\n\n\n>\n> For now, we only planned a VACUUM ANALYSE eacha night.\n>\nif it is 8.3+, don't , as autovacuum takes care of that.\n\n\n>\n> But the database complained about checkpoint_segments (currently = 3)\n>\ndepending on traffic, that's pretty low. You should increment it, beyond 12\nif possible.\n\n\n\n>\n> What should be changed first to improve speed ?\n> * memory ?\n> *???\n> Thanks a lot for any advice (I know there are plenty of archived\n> discussions on this subject but it's always difficult to know what very\n> important, and what's general as opposed to specific solutions)\n>\n\nagain, if it is 8.3+ (and everyone here would advice you to run at least\nthat version), try using pg_tune script to get best performance settings.\n\n\n\n-- \nGJ\n\nOn Wed, Oct 28, 2009 at 12:11 PM, Denis BUCHER <[email protected]> wrote:\nDear all,\n\nI need to optimize a database used by approx 10 people, I don't need to\nhave the perfect config, simply to avoid stupid bottle necks and follow\nthe best practices...\n\nThe database is used from a web interface the whole work day with\n\"normal\" requests (nothing very special).\n\nAnd each morning huge tables are DELETED and all data is INSERTed new\nfrom a script. (Well, \"huge\" is very relative, it's only 400'000 records)use truncate, to clear the tables. \n\nFor now, we only planned a VACUUM ANALYSE eacha night.if it is 8.3+, don't , as autovacuum takes care of that. \n\nBut the database complained about checkpoint_segments (currently = 3)depending on traffic, that's pretty low. You should increment it, beyond 12 if possible. 
\n\nWhat should be changed first to improve speed ?\n* memory ?\n *???\nThanks a lot for any advice (I know there are plenty of archived\ndiscussions on this subject but it's always difficult to know what very\nimportant, and what's general as opposed to specific solutions)\nagain, if it is 8.3+ (and everyone here would advice you to run at least that version), try using pg_tune script to get best performance settings. -- GJ", "msg_date": "Wed, 28 Oct 2009 12:26:29 +0000", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql optimisation" }, { "msg_contents": "Grzegorz Jaśkiewicz a écrit :\n> \n> \n> On Wed, Oct 28, 2009 at 12:11 PM, Denis BUCHER <[email protected]\n> <mailto:[email protected]>> wrote:\n> \n> Dear all,\n> \n> I need to optimize a database used by approx 10 people, I don't need to\n> have the perfect config, simply to avoid stupid bottle necks and follow\n> the best practices...\n> \n> The database is used from a web interface the whole work day with\n> \"normal\" requests (nothing very special).\n> \n> And each morning huge tables are DELETED and all data is INSERTed new\n> from a script. (Well, \"huge\" is very relative, it's only 400'000\n> records)\n> \n> use truncate, to clear the tables.\n\nOh yes, instead of DELETE FROM table; ? Ok thanks for the tip\n\n> For now, we only planned a VACUUM ANALYSE eacha night.\n> \n> if it is 8.3+, don't , as autovacuum takes care of that.\n\n8.1.17\n\n> But the database complained about checkpoint_segments (currently = 3)\n> \n> depending on traffic, that's pretty low. You should increment it, beyond\n> 12 if possible.\n\nOk no problem in increasing this value, to, let's say... 50 ?\n\n> What should be changed first to improve speed ?\n> * memory ?\n> *???\n> Thanks a lot for any advice (I know there are plenty of archived\n> discussions on this subject but it's always difficult to know what very\n> important, and what's general as opposed to specific solutions)\n> \n> \n> again, if it is 8.3+ (and everyone here would advice you to run at least\n> that version), try using pg_tune script to get best performance settings.\n\nOk, we will soon move it to a new server, it will be 8.3 then :-)\nAnd I will use pg_tune...\n\nThanks a lot for your advices !\n\nDenis\n", "msg_date": "Wed, 28 Oct 2009 14:48:56 +0100", "msg_from": "Denis BUCHER <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgresql optimisation" }, { "msg_contents": "2009/10/28 Denis BUCHER <[email protected]>\n\n> Grzegorz Jaśkiewicz a écrit :\n> >\n> >\n> > On Wed, Oct 28, 2009 at 12:11 PM, Denis BUCHER <[email protected]\n> > <mailto:[email protected]>> wrote:\n> >\n> > Dear all,\n> >\n> > I need to optimize a database used by approx 10 people, I don't need\n> to\n> > have the perfect config, simply to avoid stupid bottle necks and\n> follow\n> > the best practices...\n> >\n> > The database is used from a web interface the whole work day with\n> > \"normal\" requests (nothing very special).\n> >\n> > And each morning huge tables are DELETED and all data is INSERTed new\n> > from a script. (Well, \"huge\" is very relative, it's only 400'000\n> > records)\n> >\n> > use truncate, to clear the tables.\n>\n> Oh yes, instead of DELETE FROM table; ? 
Ok thanks for the tip\n>\n> > For now, we only planned a VACUUM ANALYSE eacha night.\n> >\n> > if it is 8.3+, don't , as autovacuum takes care of that.\n>\n> 8.1.17\n>\n> > But the database complained about checkpoint_segments (currently = 3)\n> >\n> > depending on traffic, that's pretty low. You should increment it, beyond\n> > 12 if possible.\n>\n> Ok no problem in increasing this value, to, let's say... 50 ?\n>\n\nyes. This simply means, that in case of any failure (power outage, etc) -\ndata log could be slightly older, but if you have busy DB on the other hand\n- low number here, means a lot of checkpoints written - which slows down\nperformance. So it is a trade-off.\n8.1 is pretty old. Go for 8.3 if you want something old enough (as in,\nstable-and-old-but-not-too-old). Or 8.4 if you are interested in newest\nfeatures.\n\n\n\n-- \nGJ\n\n2009/10/28 Denis BUCHER <[email protected]>\nGrzegorz Jaśkiewicz a écrit :\n>\n>\n> On Wed, Oct 28, 2009 at 12:11 PM, Denis BUCHER <[email protected]\n> <mailto:[email protected]>> wrote:\n>\n>     Dear all,\n>\n>     I need to optimize a database used by approx 10 people, I don't need to\n>     have the perfect config, simply to avoid stupid bottle necks and follow\n>     the best practices...\n>\n>     The database is used from a web interface the whole work day with\n>     \"normal\" requests (nothing very special).\n>\n>     And each morning huge tables are DELETED and all data is INSERTed new\n>     from a script. (Well, \"huge\" is very relative, it's only 400'000\n>     records)\n>\n> use truncate, to clear the tables.\n\nOh yes, instead of DELETE FROM table; ? Ok thanks for the tip\n\n>     For now, we only planned a VACUUM ANALYSE eacha night.\n>\n> if it is 8.3+, don't , as autovacuum takes care of that.\n\n8.1.17\n\n>     But the database complained about checkpoint_segments (currently = 3)\n>\n> depending on traffic, that's pretty low. You should increment it, beyond\n> 12 if possible.\n\nOk no problem in increasing this value, to, let's say... 50 ?\nyes. This simply means, that in case of any failure (power outage, etc) - data log could be slightly older, but if you have busy DB on the other hand - low number here, means a lot of checkpoints written - which slows down performance. So it is a  trade-off. \n8.1 is pretty old. Go for 8.3 if you want something old enough (as in, stable-and-old-but-not-too-old). Or 8.4 if you are interested in newest features.-- GJ", "msg_date": "Wed, 28 Oct 2009 14:17:16 +0000", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql optimisation" }, { "msg_contents": "Denis BUCHER <[email protected]> wrote:\n \n> And each morning ... all data is INSERTed new\n \nI recommend VACUUM ANALYZE of the table(s) after this step. Without\nthat, the first query to read each tuple sets its hint bits and\nrewrites it, causing a surprising delay at unpredictable times\n(although heavier near the start of the day).\n \n-Kevin\n", "msg_date": "Wed, 28 Oct 2009 09:39:13 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql optimisation" }, { "msg_contents": "Kevin Grittner a �crit :\n>> And each morning ... all data is INSERTed new\n> \n> I recommend VACUUM ANALYZE of the table(s) after this step. 
Without\n> that, the first query to read each tuple sets its hint bits and\n> rewrites it, causing a surprising delay at unpredictable times\n> (although heavier near the start of the day).\n\nOk great, thanks for the advice, I added it at the end of the process...\n\nDenis\n", "msg_date": "Wed, 28 Oct 2009 17:08:29 +0100", "msg_from": "Denis BUCHER <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgresql optimisation" }, { "msg_contents": "On Wed, 28 Oct 2009, Denis BUCHER wrote:\n\n> For now, we only planned a VACUUM ANALYSE eacha night.\n\nYou really want to be on a later release than 8.1 for an app that is \nheavily deleting things every day. The answer to most VACUUM problems is \n\"VACUUM more often, preferrably with autovacuum\", and using 8.1 puts you \ninto a position where that's not really practical. Also, 8.3 and 8.4 are \nmuch faster anyway.\n\n8.4 in particular has a fix for a problem you're very likely to run into \nwith this sort of workload (running out of max_fsm_pages when running \nVACUUM), so if you're going to upgrade I would highly recommend targeting \n8.4 instead of an earlier version.\n\n> But the database complained about checkpoint_segments (currently = 3)\n> What should be changed first to improve speed ?\n\nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server covers this \nparameter and some of the others you should be considering. If your goal \nis just to nail the major bottlenecks and get the configuration in the \nright neighborhood, you probably only need to consider the setting down to \nthe work_mem section; the ones after that are more advanced than you \nprobably need.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 28 Oct 2009 12:20:11 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql optimisation" }, { "msg_contents": "> -----Original Message-----\n> From: Denis BUCHER\n> \n> And each morning huge tables are DELETED and all data is \n> INSERTed new from a script. (Well, \"huge\" is very relative, \n> it's only 400'000 records)\n\nIf you are deleting ALL rows in the tables, then I would suggest using\nTRUNCATE instead of DELETE. Truncate will be faster deleting and it will\nnot accumulate dead tuples.\n\nAlso if you switch to truncate then you should ANALYSE the tables after you\nfinish inserting. Note that VACUUM ANALYSE is not necessary after a\ntruncate/insert because there should be no dead tuples to vacuum.\n\nDave\n\n\n\n", "msg_date": "Wed, 28 Oct 2009 11:30:39 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql optimisation" }, { "msg_contents": "On Wed, 28 Oct 2009, Dave Dutcher wrote:\n> Also if you switch to truncate then you should ANALYSE the tables after you\n> finish inserting. Note that VACUUM ANALYSE is not necessary after a\n> truncate/insert because there should be no dead tuples to vacuum.\n\nPerhaps reading the other replies in the thread before replying yourself \nmight be advisable, because this previous reply directly contradicts you:\n\nOn Wed, 28 Oct 2009, Kevin Grittner wrote:\n> I recommend VACUUM ANALYZE of the table(s) after this step. 
Without\n> that, the first query to read each tuple sets its hint bits and\n> rewrites it, causing a surprising delay at unpredictable times\n> (although heavier near the start of the day).\n\nThere *is* a benefit of running VACUUM ANALYSE rather than just ANALYSE.\n\nMatthew\n\n-- \n I suppose some of you have done a Continuous Maths course. Yes? Continuous\n Maths? <menacing stares from audience> Whoah, it was like that, was it!\n -- Computer Science Lecturer\n", "msg_date": "Wed, 28 Oct 2009 16:36:52 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql optimisation" }, { "msg_contents": "> From: Matthew Wakeling\n> \n> Perhaps reading the other replies in the thread before \n> replying yourself might be advisable, because this previous \n> reply directly contradicts you:\n> \n> On Wed, 28 Oct 2009, Kevin Grittner wrote:\n> > I recommend VACUUM ANALYZE of the table(s) after this step. Without \n> > that, the first query to read each tuple sets its hint bits and \n> > rewrites it, causing a surprising delay at unpredictable times \n> > (although heavier near the start of the day).\n> \n> There *is* a benefit of running VACUUM ANALYSE rather than \n> just ANALYSE.\n> \n> Matthew\n\nI did read the other replies first, I guess I just missed Kevin Grittner's\nsomehow. I noticed several people were worried the OP had problems with\nbloat, which is why I suggested TRUNCATE if possible. That was my main\npoint. I guess I made the other comment because I feel beginners with\npostgres quite often don't understand the difference between VACUUM and\nANALYSE, and for large tables an ANALYSE alone can take much less time. I\ndidn't think about hint bits because I've never noticed a big impact from\nthem, but that is probably just because of my particular situation. Now\nthat it has been pointed out to me I agree it is good advise for the OP to\nuse VACUUM ANALSE.\n\nDave\n\n\n", "msg_date": "Wed, 28 Oct 2009 12:23:18 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql optimisation" }, { "msg_contents": "Hello Greg,\n\nGreg Smith a �crit :\n> On Wed, 28 Oct 2009, Denis BUCHER wrote:\n> \n>> For now, we only planned a VACUUM ANALYSE eacha night.\n> \n> You really want to be on a later release than 8.1 for an app that is\n> heavily deleting things every day. The answer to most VACUUM problems\n> is \"VACUUM more often, preferrably with autovacuum\", and using 8.1 puts\n> you into a position where that's not really practical. Also, 8.3 and\n> 8.4 are much faster anyway.\n\nOk as the new server will be Debian and the latest stbale is 8.3 we'll\nbe on 8.3 soon :-)\n\n> 8.4 in particular has a fix for a problem you're very likely to run into\n> with this sort of workload (running out of max_fsm_pages when running\n> VACUUM), so if you're going to upgrade I would highly recommend\n> targeting 8.4 instead of an earlier version.\n\nI got this problem already on 8.1, I just increased max_fsm_pages, is\nthat OK ?\n\n>> But the database complained about checkpoint_segments (currently = 3)\n>> What should be changed first to improve speed ?\n> \n> http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server covers\n> this parameter and some of the others you should be considering. 
If\n> your goal is just to nail the major bottlenecks and get the\n> configuration in the right neighborhood, you probably only need to\n> consider the setting down to the work_mem section; the ones after that\n> are more advanced than you probably need.\n\nOk I tried to change some parameters, we'll see what happens ;-)\n\nThanks a lot for all your tips :-)\n\nHave a nice evening !\n\n\nDenis\n", "msg_date": "Thu, 29 Oct 2009 15:32:19 +0100", "msg_from": "Denis BUCHER <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgresql optimisation" } ]
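A minimal sketch of the nightly reload once the advice in this thread is applied (TRUNCATE instead of DELETE, then VACUUM ANALYZE so hint bits and planner statistics are set before the first user query); big_table is only a placeholder for the poster's actual tables, and checkpoint_segments is raised in postgresql.conf rather than in SQL:

BEGIN;
TRUNCATE big_table;          -- instead of DELETE FROM big_table; leaves no dead tuples behind
-- bulk INSERT / COPY of the day's ~400'000 rows goes here
COMMIT;
VACUUM ANALYZE big_table;    -- run outside the transaction block; sets hint bits
                             -- and refreshes planner statistics in one pass

-- postgresql.conf, per the thread: checkpoint_segments = 12 or higher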
[ { "msg_contents": "Postgres consistently does a sequential scan on the child partitions\nfor this query\n\nselect * from partitioned_table\nwhere partitioned_column > current_timestamp - interval 8 days\nwhere x in (select yy from z where colname like 'aaa%')\n\nIf I replace the query with\n\nselect * from partitioned_table\nwhere partitioned_column > current_timestamp - interval 8 days\nwhere x in (hardcode_value)\n\nThe results are in line with expectation (very fast and uses a Bitmap\nIndex Scan on the column X)\n", "msg_date": "Wed, 28 Oct 2009 11:13:42 -0700", "msg_from": "Anj Adu <[email protected]>", "msg_from_op": true, "msg_subject": "sub-select in IN clause results in sequential scan" }, { "msg_contents": "On Wed, Oct 28, 2009 at 6:13 PM, Anj Adu <[email protected]> wrote:\n\n> Postgres consistently does a sequential scan on the child partitions\n> for this query\n>\n> select * from partitioned_table\n> where partitioned_column > current_timestamp - interval 8 days\n> where x in (select yy from z where colname like 'aaa%')\n>\n> If I replace the query with\n>\n> select * from partitioned_table\n> where partitioned_column > current_timestamp - interval 8 days\n> where x in (hardcode_value)\n>\n> The results are in line with expectation (very fast and uses a Bitmap\n> Index Scan on the column X)\n> \\\n\n\nuse JOIN luke..\n\n\n-- \nGJ\n
", "msg_date": "Thu, 29 Oct 2009 09:54:02 +0000", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sub-select in IN clause results in sequential scan" }, { "msg_contents": "2009/10/29 Grzegorz Jaśkiewicz <[email protected]>\n\n>\n>\n> On Wed, Oct 28, 2009 at 6:13 PM, Anj Adu <[email protected]> wrote:\n>\n>> Postgres consistently does a sequential scan on the child partitions\n>> for this query\n>>\n>> select * from partitioned_table\n>> where partitioned_column > current_timestamp - interval 8 days\n>> where x in (select yy from z where colname like 'aaa%')\n>>\n>> If I replace the query with\n>>\n>> select * from partitioned_table\n>> where partitioned_column > current_timestamp - interval 8 days\n>> where x in (hardcode_value)\n>>\n>> The results are in line with expectation (very fast and uses a Bitmap\n>> Index Scan on the column X)\n>> \\\n>\n>\n> use JOIN luke..\n>\n>\n> --\n> GJ\n>\n\nYes you try by using Join\n\nJAK\n", "msg_date": "Thu, 29 Oct 2009 16:02:38 +0530", "msg_from": "Angayarkanni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sub-select in IN clause results in sequential scan" }, { "msg_contents": "Join did not help. A sequential scan is still being done. The\nhardcoded value in the IN clause performs the best. The time\ndifference is more than an order of magnitude.\n\n2009/10/29 Angayarkanni <[email protected]>:\n>\n> 2009/10/29 Grzegorz Jaśkiewicz <[email protected]>\n>>\n>>\n>> On Wed, Oct 28, 2009 at 6:13 PM, Anj Adu <[email protected]> wrote:\n>>>\n>>> Postgres consistently does a sequential scan on the child partitions\n>>> for this query\n>>>\n>>> select * from partitioned_table\n>>> where partitioned_column > current_timestamp - interval 8 days\n>>> where x in (select yy from z where colname like 'aaa%')\n>>>\n>>> If I replace the query with\n>>>\n>>> select * from partitioned_table\n>>> where partitioned_column > current_timestamp - interval 8 days\n>>> where x in (hardcode_value)\n>>>\n>>> The results are in line with expectation (very fast and uses a Bitmap\n>>> Index Scan on the column X)\n>>> \\\n>>\n>> use JOIN luke..\n>>\n>> --\n>> GJ\n>\n> Yes you try by using Join\n>\n> JAK\n>\n", "msg_date": "Thu, 29 Oct 2009 07:10:24 -0700", "msg_from": "Anj Adu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sub-select in IN clause results in sequential scan" }, { "msg_contents": "Try replacing the 'current_timestamp - interval 8 days' portion with explicit values (e.g. partitioned_column < '2009-10-21'::date ) and see if that works. 
I think the query planner can only use explicit values to determine if it should go straight to partitioned tables.\n\nBob\n\n--- On Thu, 10/29/09, Anj Adu <[email protected]> wrote:\n\n> From: Anj Adu <[email protected]>\n> Subject: Re: [PERFORM] sub-select in IN clause results in sequential scan\n> To: \"Angayarkanni\" <[email protected]>\n> Cc: \"Grzegorz Jaśkiewicz\" <[email protected]>, [email protected]\n> Date: Thursday, October 29, 2009, 10:10 AM\n> Join did not help. A sequential scan\n> is still being done. The\n> hardcoded value in the IN clause performs the best. The\n> time\n> difference is more than an order of magnitude.\n> \n> 2009/10/29 Angayarkanni <[email protected]>:\n> >\n> > 2009/10/29 Grzegorz Jaśkiewicz <[email protected]>\n> >>\n> >>\n> >> On Wed, Oct 28, 2009 at 6:13 PM, Anj Adu <[email protected]>\n> wrote:\n> >>>\n> >>> Postgres consistently does a sequential scan\n> on the child partitions\n> >>> for this query\n> >>>\n> >>> select * from partitioned_table\n> >>> where partitioned_column >\n> current_timestamp - interval 8 days\n> >>> where x in (select yy from z where colname\n> like 'aaa%')\n> >>>\n> >>> If I replace the query with\n> >>>\n> >>> select * from partitioned_table\n> >>> where partitioned_column >\n> current_timestamp - interval 8 days\n> >>> where x in (hardcode_value)\n> >>>\n> >>> The results are in line with expectation (very\n> fast and uses a Bitmap\n> >>> Index Scan on the column X)\n> >>> \\\n> >>\n> >> use JOIN luke..\n> >>\n> >> --\n> >> GJ\n> >\n> > Yes you try by using Join\n> >\n> > JAK\n> >\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\n \n", "msg_date": "Thu, 29 Oct 2009 08:24:17 -0700 (PDT)", "msg_from": "Bob Lunney <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sub-select in IN clause results in sequential scan" }, { "msg_contents": "On Thu, Oct 29, 2009 at 10:10 AM, Anj Adu <[email protected]> wrote:\n> Join did not help. A sequential scan is still being done. The\n> hardcoded value in the IN clause performs the best. 
The time\n> difference is more than an order of magnitude.\n\nIf you want help debugging a performance problem, you need to post\nyour EXPLAIN ANALYZE results.\n\nhttp://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n\n...Robert\n", "msg_date": "Thu, 29 Oct 2009 11:35:29 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sub-select in IN clause results in sequential scan" }, { "msg_contents": "I had posted this on another thread..but did not get a response..here\nit is again\n\nexplain analyze select thedate,sent.watch as wat, nod.point as fwl,\nact.acttype, intf.pointofcontact, func.getNum(snum) as sss,\nfunc.getNum(dnum) as ddd, dddport,\n\naaa.aaacol,szone.ssszn as ssszone, dzone.dddzn as\ndddzone,snippets,timea,total from (select date_trunc('day',thedate) as\n\nthedate,watch_id,point_id,acttype_id,pointofcontact_id,snum,dnum,dddport,aaacol_id,ssszone_id,dddzone_id,sum(snippets)\nas snippets,sum(timea) as timea,sum(summcount) as total\n\nfrom realdev_date_facet where thedate between '2009-10-17' and\n'2009-10-24' and watch_id in (3) group by\n\ndate_trunc('day',thedate),watch_id,point_id,acttype_id,pointofcontact_id,snum,dnum,dddport,aaacol_id,ssszone_id,dddzone_id,snippets,timea\norder by 1) a left outer join\n\nrealdev_pointofcontact intf on a.pointofcontact_id =\nintf.pointofcontact_id left outer join realdev_ssszn szone on\na.ssszone_id = szone.ssszn_id left outer join realdev_dddzn\n\ndzone on a.dddzone_id = dzone.dddzn_id left outer join realdev_aaacol\naaa on a.aaacol_id = aaa.aaacol_id, realdev_watch sent, realdev_point\nnod, realdev_acttype act where\n\na.watch_id = sent.watch_id and a.point_id = nod.point_id and\na.acttype_id = act.acttype_id\n\n\n\n\nSlow Query (with IN clause sub-select)\n--------------------------------------------------------------------------------------\n Hash Join (cost=2436528.60..2493232.81 rows=310708 width=996)\n(actual time=144303.550..144609.576 rows=7294 loops=1)\n Hash Cond: (\"outer\".watch_id = \"inner\".watch_id)\n -> Hash Join (cost=2436513.10..2487003.15 rows=310708 width=854)\n(actual time=144222.468..144287.330 rows=7294 loops=1)\n Hash Cond: (\"outer\".point_id = \"inner\".point_id)\n -> Hash Join (cost=2436497.60..2482327.03 rows=310708\nwidth=712) (actual time=144222.358..144281.371 rows=7294 loops=1)\n Hash Cond: (\"outer\".acttype_id = \"inner\".acttype_id)\n -> Hash Left Join (cost=2436477.97..2477646.78\nrows=310708 width=648) (actual time=144222.319..144275.382 rows=7294\nloops=1)\n Hash Cond: (\"outer\".aaacol_id = \"inner\".aaacol_id)\n -> Hash Left Join (cost=2436457.35..2472965.54\nrows=310708 width=594) (actual time=144222.267..144269.326 rows=7294\nloops=1)\n Hash Cond: (\"outer\".dddzone_id = \"inner\".dddzn_id)\n -> Hash Left Join\n(cost=2436440.85..2468288.42 rows=310708 width=480) (actual\ntime=144222.153..144263.530 rows=7294 loops=1)\n Hash Cond: (\"outer\".ssszone_id =\n\"inner\".ssszn_id)\n -> Hash Left Join\n(cost=2436426.97..2463613.92 rows=310708 width=266) (actual\ntime=144222.009..144257.037 rows=7294 loops=1)\n Hash Cond:\n(\"outer\".pointofcontact_id = \"inner\".pointofcontact_id)\n -> GroupAggregate\n(cost=2436410.47..2455829.72 rows=310708 width=80) (actual\ntime=144221.980..144252.195 rows=7294 loops=1)\n -> Sort\n(cost=2436410.47..2437187.24 rows=310708 width=80) (actual\ntime=144221.950..144224.805 rows=10248 loops=1)\n Sort Key:\ndate_trunc('day'::text, 
public.realdev_date_facet.thedate),\npublic.realdev_date_facet.watch_id,\n\npublic.realdev_date_facet.point_id,\npublic.realdev_date_facet.acttype_id,\npublic.realdev_date_facet.pointofcontact_id,\npublic.realdev_date_facet.snum,\n\npublic.realdev_date_facet.dnum, public.realdev_date_facet.dddport,\npublic.realdev_date_facet.aaacol_id,\npublic.realdev_date_facet.ssszone_id,\n\npublic.realdev_date_facet.dddzone_id,\npublic.realdev_date_facet.snippets, public.realdev_date_facet.timea\n -> Hash IN Join\n(cost=15.51..2408065.83 rows=310708 width=80) (actual\ntime=73.279..144105.862 rows=10248 loops=1)\n Hash Cond:\n(\"outer\".watch_id = \"inner\".watch_id)\n -> Append\n(cost=0.00..2062387.41 rows=68355812 width=80) (actual\ntime=8.161..-17465745.684 rows=68355711 loops=1)\n ->\nIndex Scan using realdev_dy_dim_idx1 on realdev_date_facet\n(cost=0.00..3.25 rows=1 width=80) (actual\n\ntime=0.040..0.040 rows=0 loops=1)\n\nIndex Cond: ((thedate >= '2009-10-17 00:00:00'::timestamp without time\nzone) AND (thedate <= '2009-10-24\n\n00:00:00'::timestamp without time zone))\n -> Seq\nScan on realdev_date_facet_2009_10_17 realdev_date_facet\n(cost=0.00..216426.39 rows=7166959 width=80)\n\n(actual time=8.119..11012.923 rows=7166717 loops=1)\n\nFilter: ((thedate >= '2009-10-17 00:00:00'::timestamp without time\nzone) AND (thedate <= '2009-10-24\n\n00:00:00'::timestamp without time zone))\n -> Seq\nScan on realdev_date_facet_2009_10_18 realdev_date_facet\n(cost=0.00..250263.65 rows=8291577 width=80)\n\n(actual time=7.419..18751.080 rows=8291095 loops=1)\n\nFilter: ((thedate >= '2009-10-17 00:00:00'::timestamp without time\nzone) AND (thedate <= '2009-10-24\n\n00:00:00'::timestamp without time zone))\n -> Seq\nScan on realdev_date_facet_2009_10_19 realdev_date_facet\n(cost=0.00..289231.36 rows=9589091 width=80)\n\n(actual time=0.027..19666.968 rows=9589432 loops=1)\n\nFilter: ((thedate >= '2009-10-17 00:00:00'::timestamp without time\nzone) AND (thedate <= '2009-10-24\n\n00:00:00'::timestamp without time zone))\n -> Seq\nScan on realdev_date_facet_2009_10_20 realdev_date_facet\n(cost=0.00..288674.88 rows=9572392 width=80)\n\n(actual time=0.029..12557.198 rows=9572601 loops=1)\n\nFilter: ((thedate >= '2009-10-17 00:00:00'::timestamp without time\nzone) AND (thedate <= '2009-10-24\n\n00:00:00'::timestamp without time zone))\n -> Seq\nScan on realdev_date_facet_2009_10_21 realdev_date_facet\n(cost=0.00..269963.64 rows=8949976 width=80)\n\n(actual time=0.036..9544.469 rows=8950605 loops=1)\n\nFilter: ((thedate >= '2009-10-17 00:00:00'::timestamp without time\nzone) AND (thedate <= '2009-10-24\n\n00:00:00'::timestamp without time zone))\n -> Seq\nScan on realdev_date_facet_2009_10_22 realdev_date_facet\n(cost=0.00..274093.95 rows=9089330 width=80)\n\n(actual time=0.027..26397891.108 rows=9088813 loops=1)\n\nFilter: ((thedate >= '2009-10-17 00:00:00'::timestamp without time\nzone) AND (thedate <= '2009-10-24\n\n00:00:00'::timestamp without time zone))\n -> Seq\nScan on realdev_date_facet_2009_10_23 realdev_date_facet\n(cost=0.00..253855.74 rows=8417049 width=80)\n\n(actual time=0.027..9165.289 rows=8417659 loops=1)\n\nFilter: ((thedate >= '2009-10-17 00:00:00'::timestamp without time\nzone) AND (thedate <= '2009-10-24\n\n00:00:00'::timestamp without time zone))\n -> Seq\nScan on realdev_date_facet_2009_10_24 realdev_date_facet\n(cost=0.00..219874.55 rows=7279437 width=80)\n\n(actual time=0.035..13203440.555 rows=7278789 loops=1)\n\nFilter: ((thedate >= '2009-10-17 00:00:00'::timestamp without time\nzone) AND 
(thedate <= '2009-10-24\n\n00:00:00'::timestamp without time zone))\n -> Hash\n(cost=15.50..15.50 rows=2 width=4) (actual time=0.025..0.025 rows=1\nloops=1)\n -> Seq\nScan on realdev_watch (cost=0.00..15.50 rows=2 width=4) (actual\ntime=0.012..0.020 rows=1 loops=1)\n\nFilter: ((watch)::text ~~ 'searchtext%'::text)\n -> Hash (cost=15.20..15.20\nrows=520 width=122) (actual time=0.002..0.002 rows=0 loops=1)\n -> Seq Scan on\nrealdev_pointofcontact intf (cost=0.00..15.20 rows=520 width=122)\n(actual time=0.001..0.001 rows=0 loops=1)\n -> Hash (cost=13.10..13.10 rows=310\nwidth=222) (actual time=0.131..0.131 rows=100 loops=1)\n -> Seq Scan on realdev_ssszn\nszone (cost=0.00..13.10 rows=310 width=222) (actual time=0.028..0.076\nrows=100 loops=1)\n -> Hash (cost=15.20..15.20 rows=520\nwidth=122) (actual time=0.103..0.103 rows=85 loops=1)\n -> Seq Scan on realdev_dddzn dzone\n(cost=0.00..15.20 rows=520 width=122) (actual time=0.016..0.054\nrows=85 loops=1)\n -> Hash (cost=18.50..18.50 rows=850 width=62)\n(actual time=0.038..0.038 rows=7 loops=1)\n -> Seq Scan on realdev_aaacol aaa\n(cost=0.00..18.50 rows=850 width=62) (actual time=0.028..0.031 rows=7\nloops=1)\n -> Hash (cost=17.70..17.70 rows=770 width=72) (actual\ntime=0.027..0.027 rows=6 loops=1)\n -> Seq Scan on realdev_acttype act\n(cost=0.00..17.70 rows=770 width=72) (actual time=0.018..0.020 rows=6\nloops=1)\n -> Hash (cost=14.40..14.40 rows=440 width=150) (actual\ntime=0.099..0.099 rows=69 loops=1)\n -> Seq Scan on realdev_point nod (cost=0.00..14.40\nrows=440 width=150) (actual time=0.024..0.064 rows=69 loops=1)\n -> Hash (cost=14.40..14.40 rows=440 width=150) (actual\ntime=0.055..0.055 rows=30 loops=1)\n -> Seq Scan on realdev_watch sent (cost=0.00..14.40\nrows=440 width=150) (actual time=0.021..0.037 rows=30 loops=1)\n Total runtime: 144613.558 ms\n\n\n\n\n\n\n=================================================================================================================================================================================\n\nFAST Query (With hardcode IN value)\n\nHash Join (cost=1222737.69..1277695.72 rows=448637 width=996) (actual\ntime=37125.501..37783.520 rows=7294 loops=1)\n Hash Cond: (\"outer\".watch_id = \"inner\".watch_id)\n -> Hash Join (cost=1222722.19..1268707.48 rows=448637 width=854)\n(actual time=37122.482..37166.714 rows=7294 loops=1)\n Hash Cond: (\"outer\".point_id = \"inner\".point_id)\n -> Hash Join (cost=1222706.69..1261962.43 rows=448637\nwidth=712) (actual time=37122.389..37160.697 rows=7294 loops=1)\n Hash Cond: (\"outer\".acttype_id = \"inner\".acttype_id)\n -> Hash Left Join (cost=1222687.07..1255213.25\nrows=448637 width=648) (actual time=37122.335..37154.030 rows=7294\nloops=1)\n Hash Cond: (\"outer\".aaacol_id = \"inner\".aaacol_id)\n -> Hash Left Join (cost=1222666.44..1248463.07\nrows=448637 width=594) (actual time=37122.306..37147.818 rows=7294\nloops=1)\n Hash Cond: (\"outer\".dddzone_id = \"inner\".dddzn_id)\n -> Hash Left Join\n(cost=1222649.94..1241717.01 rows=448637 width=480) (actual\ntime=37122.194..37140.144 rows=7294 loops=1)\n Hash Cond: (\"outer\".ssszone_id =\n\"inner\".ssszn_id)\n -> Hash Left Join\n(cost=1222636.07..1234973.58 rows=448637 width=266) (actual\ntime=37122.076..37133.362 rows=7294 loops=1)\n Hash Cond:\n(\"outer\".pointofcontact_id = \"inner\".pointofcontact_id)\n -> Sort\n(cost=1222619.57..1223741.16 rows=448637 width=80) (actual\ntime=37122.054..37124.857 rows=7294 loops=1)\n Sort Key:\ndate_trunc('day'::text, public.realdev_date_facet.thedate)\n -> 
HashAggregate\n(cost=1171530.60..1180503.34 rows=448637 width=80) (actual\ntime=37098.239..37108.376 rows=7294 loops=1)\n -> Result\n(cost=0.00..1120386.08 rows=1278613 width=80) (actual\ntime=8010.438..37052.063 rows=10248 loops=1)\n -> Append\n(cost=0.00..1117189.55 rows=1278613 width=80) (actual\ntime=8010.420..37032.106 rows=10248 loops=1)\n ->\nIndex Scan using realdev_dy_dim_idx1 on realdev_date_facet\n(cost=0.00..2.69 rows=1 width=80) (actual\n\ntime=0.027..0.027 rows=0 loops=1)\n\nIndex Cond: ((thedate >= '2009-10-17 00:00:00'::timestamp without time\nzone) AND (thedate <= '2009-10-24\n\n00:00:00'::timestamp without time zone) AND (watch_id = 3))\n ->\nBitmap Heap Scan on realdev_date_facet_2009_10_17 realdev_date_facet\n(cost=1175.35..116994.25 rows=184386\n\nwidth=80) (actual time=8010.391..8027.057 rows=1025 loops=1)\n\nRecheck Cond: (watch_id = 3)\n\nFilter: ((thedate >= '2009-10-17 00:00:00'::timestamp without time\nzone) AND (thedate <= '2009-10-24\n\n00:00:00'::timestamp without time zone))\n\n-> Bitmap Index Scan on realdev_date_facet_2009_10_17_watch_id\n(cost=0.00..1175.35 rows=184386 width=0)\n\n(actual time=8010.057..8010.057 rows=1025 loops=1)\n\n Index Cond: (watch_id = 3)\n ->\nBitmap Heap Scan on realdev_date_facet_2009_10_18 realdev_date_facet\n(cost=898.09..135163.52 rows=141169\n\nwidth=80) (actual time=7926.811..7941.851 rows=985 loops=1)\n\nRecheck Cond: (watch_id = 3)\n\nFilter: ((thedate >= '2009-10-17 00:00:00'::timestamp without time\nzone) AND (thedate <= '2009-10-24\n\n00:00:00'::timestamp without time zone))\n\n-> Bitmap Index Scan on realdev_date_facet_2009_10_18_watch_id\n(cost=0.00..898.09 rows=141169 width=0)\n\n(actual time=7926.583..7926.583 rows=985 loops=1)\n\n Index Cond: (watch_id = 3)\n ->\nBitmap Heap Scan on realdev_date_facet_2009_10_19 realdev_date_facet\n(cost=1068.33..156545.18 rows=167809\n\nwidth=80) (actual time=210.303..230.478 rows=1277 loops=1)\n\nRecheck Cond: (watch_id = 3)\n\nFilter: ((thedate >= '2009-10-17 00:00:00'::timestamp without time\nzone) AND (thedate <= '2009-10-24\n\n00:00:00'::timestamp without time zone))\n\n-> Bitmap Index Scan on realdev_date_facet_2009_10_19_watch_id\n(cost=0.00..1068.33 rows=167809 width=0)\n\n(actual time=209.980..209.980 rows=1277 loops=1)\n\n Index Cond: (watch_id = 3)\n ->\nBitmap Heap Scan on realdev_date_facet_2009_10_20 realdev_date_facet\n(cost=1076.96..156331.90 rows=168846\n\nwidth=80) (actual time=3388.336..3475.603 rows=1508 loops=1)\n\nRecheck Cond: (watch_id = 3)\n\nFilter: ((thedate >= '2009-10-17 00:00:00'::timestamp without time\nzone) AND (thedate <= '2009-10-24\n\n00:00:00'::timestamp without time zone))\n\n-> Bitmap Index Scan on realdev_date_facet_2009_10_20_watch_id\n(cost=0.00..1076.96 rows=168846 width=0)\n\n(actual time=3387.985..3387.985 rows=1508 loops=1)\n\n Index Cond: (watch_id = 3)\n ->\nBitmap Heap Scan on realdev_date_facet_2009_10_21 realdev_date_facet\n(cost=959.30..145554.92 rows=150658\n\nwidth=80) (actual time=9787.370..9807.884 rows=1383 loops=1)\n\nRecheck Cond: (watch_id = 3)\n\nFilter: ((thedate >= '2009-10-17 00:00:00'::timestamp without time\nzone) AND (thedate <= '2009-10-24\n\n00:00:00'::timestamp without time zone))\n\n-> Bitmap Index Scan on realdev_date_facet_2009_10_21_watch_id\n(cost=0.00..959.30 rows=150658 width=0)\n\n(actual time=9787.039..9787.039 rows=1383 loops=1)\n\n Index Cond: (watch_id = 3)\n ->\nBitmap Heap Scan on realdev_date_facet_2009_10_22 realdev_date_facet\n(cost=1165.49..149480.07 rows=182999\n\nwidth=80) (actual 
time=6884.397..6970.130 rows=1625 loops=1)\n\nRecheck Cond: (watch_id = 3)\n\nFilter: ((thedate >= '2009-10-17 00:00:00'::timestamp without time\nzone) AND (thedate <= '2009-10-24\n\n00:00:00'::timestamp without time zone))\n\n-> Bitmap Index Scan on realdev_date_facet_2009_10_22_watch_id\n(cost=0.00..1165.49 rows=182999 width=0)\n\n(actual time=6884.011..6884.011 rows=1625 loops=1)\n\n Index Cond: (watch_id = 3)\n ->\nBitmap Heap Scan on realdev_date_facet_2009_10_23 realdev_date_facet\n(cost=984.91..137907.55 rows=154546\n\nwidth=80) (actual time=307.460..333.678 rows=1395 loops=1)\n\nRecheck Cond: (watch_id = 3)\n\nFilter: ((thedate >= '2009-10-17 00:00:00'::timestamp without time\nzone) AND (thedate <= '2009-10-24\n\n00:00:00'::timestamp without time zone))\n\n-> Bitmap Index Scan on realdev_date_facet_2009_10_23_watch_id\n(cost=0.00..984.91 rows=154546 width=0)\n\n(actual time=307.150..307.150 rows=1395 loops=1)\n\n Index Cond: (watch_id = 3)\n ->\nBitmap Heap Scan on realdev_date_facet_2009_10_24 realdev_date_facet\n(cost=816.70..119209.47 rows=128199\n\nwidth=80) (actual time=214.640..239.955 rows=1050 loops=1)\n\nRecheck Cond: (watch_id = 3)\n\nFilter: ((thedate >= '2009-10-17 00:00:00'::timestamp without time\nzone) AND (thedate <= '2009-10-24\n\n00:00:00'::timestamp without time zone))\n\n-> Bitmap Index Scan on realdev_date_facet_2009_10_24_watch_id\n(cost=0.00..816.70 rows=128199 width=0)\n\n(actual time=214.276..214.276 rows=1050 loops=1)\n\n Index Cond: (watch_id = 3)\n -> Hash (cost=15.20..15.20\nrows=520 width=122) (actual time=0.003..0.003 rows=0 loops=1)\n -> Seq Scan on\nrealdev_pointofcontact intf (cost=0.00..15.20 rows=520 width=122)\n(actual time=0.002..0.002 rows=0 loops=1)\n -> Hash (cost=13.10..13.10 rows=310\nwidth=222) (actual time=0.111..0.111 rows=100 loops=1)\n -> Seq Scan on realdev_ssszn\nszone (cost=0.00..13.10 rows=310 width=222) (actual time=0.011..0.065\nrows=100 loops=1)\n -> Hash (cost=15.20..15.20 rows=520\nwidth=122) (actual time=0.096..0.096 rows=85 loops=1)\n -> Seq Scan on realdev_dddzn dzone\n(cost=0.00..15.20 rows=520 width=122) (actual time=0.006..0.049\nrows=85 loops=1)\n -> Hash (cost=18.50..18.50 rows=850 width=62)\n(actual time=0.016..0.016 rows=7 loops=1)\n -> Seq Scan on realdev_aaacol aaa\n(cost=0.00..18.50 rows=850 width=62) (actual time=0.006..0.009 rows=7\nloops=1)\n -> Hash (cost=17.70..17.70 rows=770 width=72) (actual\ntime=0.041..0.041 rows=6 loops=1)\n -> Seq Scan on realdev_acttype act\n(cost=0.00..17.70 rows=770 width=72) (actual time=0.032..0.035 rows=6\nloops=1)\n -> Hash (cost=14.40..14.40 rows=440 width=150) (actual\ntime=0.080..0.080 rows=69 loops=1)\n -> Seq Scan on realdev_point nod (cost=0.00..14.40\nrows=440 width=150) (actual time=0.007..0.040 rows=69 loops=1)\n -> Hash (cost=14.40..14.40 rows=440 width=150) (actual\ntime=0.055..0.055 rows=30 loops=1)\n -> Seq Scan on realdev_watch sent (cost=0.00..14.40\nrows=440 width=150) (actual time=0.020..0.039 rows=30 loops=1)\n Total runtime: 37790.144 ms\n\nOn Thu, Oct 29, 2009 at 8:35 AM, Robert Haas <[email protected]> wrote:\n> On Thu, Oct 29, 2009 at 10:10 AM, Anj Adu <[email protected]> wrote:\n>> Join did not help. A sequential scan is still being done. The\n>> hardcoded value in the IN clause performs the best. 
The time\n>> difference is more than an order of magnitude.\n>\n> If you want help debugging a performance problem, you need to post\n> your EXPLAIN ANALYZE results.\n>\n> http://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n>\n> ...Robert\n>\n", "msg_date": "Thu, 29 Oct 2009 08:40:20 -0700", "msg_from": "Anj Adu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sub-select in IN clause results in sequential scan" }, { "msg_contents": "for explains, use http://explain.depesz.com/\nbesides, why are you using left join ?\n\nequivlent of IN () is just JOIN, not LEFT JOIN.\n\nAnd please, format your query so it readable without twisting eyeballs\nbefore sending.\n", "msg_date": "Fri, 30 Oct 2009 12:35:18 +0000", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sub-select in IN clause results in sequential scan" }, { "msg_contents": "2009/10/30 Grzegorz Jaśkiewicz <[email protected]>:\n> for explains, use http://explain.depesz.com/\n> besides, why are you using left join ?\n> equivlent of IN () is just JOIN, not LEFT JOIN.\n> And please, format your query so it readable without twisting eyeballs\n> before sending.\n\nI prefer to have things posted to the list rather than some pastebin\nthat may go away, but I agree that the email was unreadable as posted.\n Maybe an attachment?\n\n...Robert\n", "msg_date": "Fri, 30 Oct 2009 11:03:47 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sub-select in IN clause results in sequential scan" } ]
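A note on the two plans in this thread: in the fast plan the literal watch_id = 3 lets the planner run a bitmap index scan on the watch_id index of each daily realdev_date_facet child, while in the slow plan the IN (sub-select) is executed as a Hash IN Join over an Append of sequential scans filtered only by thedate, and that accounts for most of the runtime difference. One common workaround, sketched below as a hedged example that reuses the table and column names from the thread (with count(*) standing in for the real column list), is to resolve the id list in a separate query and feed the result back as literals so the values are known at plan time:

  -- first round trip: resolve the ids (realdev_watch and the LIKE pattern are from the thread)
  SELECT watch_id FROM realdev_watch WHERE watch LIKE 'searchtext%';

  -- second round trip: pass the result back as a literal list
  SELECT count(*)
  FROM realdev_date_facet
  WHERE thedate BETWEEN '2009-10-17' AND '2009-10-24'
    AND watch_id IN (3);   -- literals let each partition use its watch_id index

This trades an extra round trip for the per-partition index scans seen in the fast plan; newer PostgreSQL releases may plan the sub-select form better, so it is worth re-testing there before settling on the two-step approach.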
[ { "msg_contents": "Hi All,\n\nI use postgresql 8.3.7 as a huge queue. There is a very simple table\nwith six columns and two indices, and about 6 million records are\nwritten into it in every day continously commited every 10 seconds from\n8 clients. The table stores approximately 120 million records, because a\ncron job daily deletes those ones are older than 20 day. Autovacuum is\non and every settings is the factory default except some unrelated ones\n(listen address, authorization). But my database is growing,\ncharacteristically ~600MByte/day, but sometimes much slower (eg. 10MB,\nor even 0!!!).\n\nI've also tried a test on another server running the same postgresql,\nwhere 300 million record was loaded into a freshly created database,\nand 25 million was deleted with single DELETE command. The 'vacuum\nverbose phaseangle;' command seems to be running forever for hours:\n\nphasor=# vacuum VERBOSE phaseangle;\nINFO: vacuuming \"public.phaseangle\"\nINFO: scanned index \"i\" to remove 2796006 row versions\nDETAIL: CPU 9.49s/120.30u sec elapsed 224.20 sec.\nINFO: scanned index \"t\" to remove 2796006 row versions\nDETAIL: CPU 13.57s/105.70u sec elapsed 192.71 sec.\nINFO: \"phaseangle\": removed 2796006 row versions in 24748 pages\nDETAIL: CPU 0.65s/0.30u sec elapsed 39.97 sec.\nINFO: scanned index \"i\" to remove 2795924 row versions\nDETAIL: CPU 9.58s/121.63u sec elapsed 239.06 sec.\nINFO: scanned index \"t\" to remove 2795924 row versions\nDETAIL: CPU 13.10s/103.59u sec elapsed 190.84 sec.\nINFO: \"phaseangle\": removed 2795924 row versions in 24743 pages\nDETAIL: CPU 0.68s/0.28u sec elapsed 40.21 sec.\nINFO: scanned index \"i\" to remove 2796014 row versions\nDETAIL: CPU 9.65s/117.28u sec elapsed 231.92 sec.\nINFO: scanned index \"t\" to remove 2796014 row versions\nDETAIL: CPU 13.48s/103.59u sec elapsed 194.49 sec.\nINFO: \"phaseangle\": removed 2796014 row versions in 24774 pages\nDETAIL: CPU 0.69s/0.28u sec elapsed 40.26 sec.\nINFO: scanned index \"i\" to remove 2795935 row versions\nDETAIL: CPU 9.55s/119.02u sec elapsed 226.85 sec.\nINFO: scanned index \"t\" to remove 2795935 row versions\nDETAIL: CPU 13.09s/102.84u sec elapsed 194.74 sec.\nINFO: \"phaseangle\": removed 2795935 row versions in 25097 pages\nDETAIL: CPU 0.67s/0.28u sec elapsed 41.21 sec.\n\nstill running...\n\nThese are the very same problems?\nShould I delete mor frequently in smaller chunks? It seems to have a\nlimit...\n\nThanks \n\nPeter\n\n-- \n", "msg_date": "Thu, 29 Oct 2009 15:44:05 +0100", "msg_from": "Peter Meszaros <[email protected]>", "msg_from_op": true, "msg_subject": "database size growing continously" }, { "msg_contents": "On Thu, 2009-10-29 at 15:44 +0100, Peter Meszaros wrote:\n> Hi All,\n> \n> I use postgresql 8.3.7 as a huge queue. There is a very simple table\n> with six columns and two indices, and about 6 million records are\n> written into it in every day continously commited every 10 seconds from\n> 8 clients. The table stores approximately 120 million records, because a\n> cron job daily deletes those ones are older than 20 day. Autovacuum is\n> on and every settings is the factory default except some unrelated ones\n> (listen address, authorization). But my database is growing,\n> characteristically ~600MByte/day, but sometimes much slower (eg. 10MB,\n> or even 0!!!).\n\nDo you ever \"vacuum full\" to reclaim empty record space?\n\n-- \nP.J. 
\"Josh\" Rovero Vice President Sonalysts, Inc.\nEmail: [email protected] www.sonalysts.com 215 Parkway North\nWork: (860)326-3671 Waterford, CT 06385\n\n\n", "msg_date": "Thu, 29 Oct 2009 11:33:25 -0400", "msg_from": "Josh Rovero <[email protected]>", "msg_from_op": false, "msg_subject": "Re: database size growing continously" }, { "msg_contents": "I would recomend increasing fsm max_fsm_pages and shared_buffers\nThis changes did speed up vacuum full on my database.\nWith shared_buffers remember to increase max shm in your OS.\n\nLudwik\n\n2009/10/29 Peter Meszaros <[email protected]>\n\n> Hi All,\n>\n> I use postgresql 8.3.7 as a huge queue. There is a very simple table\n> with six columns and two indices, and about 6 million records are\n> written into it in every day continously commited every 10 seconds from\n> 8 clients. The table stores approximately 120 million records, because a\n> cron job daily deletes those ones are older than 20 day. Autovacuum is\n> on and every settings is the factory default except some unrelated ones\n> (listen address, authorization). But my database is growing,\n> characteristically ~600MByte/day, but sometimes much slower (eg. 10MB,\n> or even 0!!!).\n>\n> I've also tried a test on another server running the same postgresql,\n> where 300 million record was loaded into a freshly created database,\n> and 25 million was deleted with single DELETE command. The 'vacuum\n> verbose phaseangle;' command seems to be running forever for hours:\n>\n> phasor=# vacuum VERBOSE phaseangle;\n> INFO: vacuuming \"public.phaseangle\"\n> INFO: scanned index \"i\" to remove 2796006 row versions\n> DETAIL: CPU 9.49s/120.30u sec elapsed 224.20 sec.\n> INFO: scanned index \"t\" to remove 2796006 row versions\n> DETAIL: CPU 13.57s/105.70u sec elapsed 192.71 sec.\n> INFO: \"phaseangle\": removed 2796006 row versions in 24748 pages\n> DETAIL: CPU 0.65s/0.30u sec elapsed 39.97 sec.\n> INFO: scanned index \"i\" to remove 2795924 row versions\n> DETAIL: CPU 9.58s/121.63u sec elapsed 239.06 sec.\n> INFO: scanned index \"t\" to remove 2795924 row versions\n> DETAIL: CPU 13.10s/103.59u sec elapsed 190.84 sec.\n> INFO: \"phaseangle\": removed 2795924 row versions in 24743 pages\n> DETAIL: CPU 0.68s/0.28u sec elapsed 40.21 sec.\n> INFO: scanned index \"i\" to remove 2796014 row versions\n> DETAIL: CPU 9.65s/117.28u sec elapsed 231.92 sec.\n> INFO: scanned index \"t\" to remove 2796014 row versions\n> DETAIL: CPU 13.48s/103.59u sec elapsed 194.49 sec.\n> INFO: \"phaseangle\": removed 2796014 row versions in 24774 pages\n> DETAIL: CPU 0.69s/0.28u sec elapsed 40.26 sec.\n> INFO: scanned index \"i\" to remove 2795935 row versions\n> DETAIL: CPU 9.55s/119.02u sec elapsed 226.85 sec.\n> INFO: scanned index \"t\" to remove 2795935 row versions\n> DETAIL: CPU 13.09s/102.84u sec elapsed 194.74 sec.\n> INFO: \"phaseangle\": removed 2795935 row versions in 25097 pages\n> DETAIL: CPU 0.67s/0.28u sec elapsed 41.21 sec.\n>\n> still running...\n>\n> These are the very same problems?\n> Should I delete mor frequently in smaller chunks? 
It seems to have a\n> limit...\n>\n> Thanks\n>\n> Peter\n>\n> --\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nLudwik Dyląg\n", "msg_date": "Thu, 29 Oct 2009 16:33:50 +0100", "msg_from": "Ludwik Dylag <[email protected]>", "msg_from_op": false, "msg_subject": "Re: database size growing continously" }, { "msg_contents": "Hi Peter,\n\nSounds like you're experiencing index bloat and vacuums do nothing to\nhelp that. 
You can do one of 2 thing to remedy this:\n\n1) The fastest and simplest (but most disruptive) way is to use REINDEX.\n But this will exclusively lock the table while rebuilding the indexes:\n\n REINDEX TABLE phaseangle;\n\n2) The slower but less disruptive way is to do a concurrent build of\neach index and then drop the old ones. For example, to rebuild the \"i\"\nindex:\n\n CREATE INDEX CONCURRENTLY i_new ON phaseangle (<indexed columns>);\n DROP INDEX i;\n ALTER INDEX i_new RENAME TO i;\n ANALYZE phaseangle (<indexed columns>);\n\nDo this regularly to keep the index sizes in check.\n\n\t- Chris\n\nPeter Meszaros wrote:\n> Hi All,\n> \n> I use postgresql 8.3.7 as a huge queue. There is a very simple table\n> with six columns and two indices, and about 6 million records are\n> written into it in every day continously commited every 10 seconds from\n> 8 clients. The table stores approximately 120 million records, because a\n> cron job daily deletes those ones are older than 20 day. Autovacuum is\n> on and every settings is the factory default except some unrelated ones\n> (listen address, authorization). But my database is growing,\n> characteristically ~600MByte/day, but sometimes much slower (eg. 10MB,\n> or even 0!!!).\n> \n> I've also tried a test on another server running the same postgresql,\n> where 300 million record was loaded into a freshly created database,\n> and 25 million was deleted with single DELETE command. The 'vacuum\n> verbose phaseangle;' command seems to be running forever for hours:\n> \n> phasor=# vacuum VERBOSE phaseangle;\n> INFO: vacuuming \"public.phaseangle\"\n> INFO: scanned index \"i\" to remove 2796006 row versions\n> DETAIL: CPU 9.49s/120.30u sec elapsed 224.20 sec.\n> INFO: scanned index \"t\" to remove 2796006 row versions\n> DETAIL: CPU 13.57s/105.70u sec elapsed 192.71 sec.\n> INFO: \"phaseangle\": removed 2796006 row versions in 24748 pages\n> DETAIL: CPU 0.65s/0.30u sec elapsed 39.97 sec.\n> INFO: scanned index \"i\" to remove 2795924 row versions\n> DETAIL: CPU 9.58s/121.63u sec elapsed 239.06 sec.\n> INFO: scanned index \"t\" to remove 2795924 row versions\n> DETAIL: CPU 13.10s/103.59u sec elapsed 190.84 sec.\n> INFO: \"phaseangle\": removed 2795924 row versions in 24743 pages\n> DETAIL: CPU 0.68s/0.28u sec elapsed 40.21 sec.\n> INFO: scanned index \"i\" to remove 2796014 row versions\n> DETAIL: CPU 9.65s/117.28u sec elapsed 231.92 sec.\n> INFO: scanned index \"t\" to remove 2796014 row versions\n> DETAIL: CPU 13.48s/103.59u sec elapsed 194.49 sec.\n> INFO: \"phaseangle\": removed 2796014 row versions in 24774 pages\n> DETAIL: CPU 0.69s/0.28u sec elapsed 40.26 sec.\n> INFO: scanned index \"i\" to remove 2795935 row versions\n> DETAIL: CPU 9.55s/119.02u sec elapsed 226.85 sec.\n> INFO: scanned index \"t\" to remove 2795935 row versions\n> DETAIL: CPU 13.09s/102.84u sec elapsed 194.74 sec.\n> INFO: \"phaseangle\": removed 2795935 row versions in 25097 pages\n> DETAIL: CPU 0.67s/0.28u sec elapsed 41.21 sec.\n> \n> still running...\n> \n> These are the very same problems?\n> Should I delete mor frequently in smaller chunks? 
It seems to have a\n> limit...\n> \n> Thanks \n> \n> Peter\n> \n\n", "msg_date": "Thu, 29 Oct 2009 09:58:36 -0600", "msg_from": "Chris Ernst <[email protected]>", "msg_from_op": false, "msg_subject": "Re: database size growing continously" }, { "msg_contents": "On Thu, 29 Oct 2009, Josh Rovero wrote:\n> Do you ever \"vacuum full\" to reclaim empty record space?\n\nUnless you expect the size of the database to permanently decrease by a \nsignificant amount, that is a waste of time, and may cause bloat in \nindexes. In this case, since the space will be used again fairly soon, it \nis better to just VACUUM, or autovacuum. Just make sure the free space map \ncan cope with it.\n\nMatthew\n\n-- \n import oz.wizards.Magic;\n if (Magic.guessRight())... -- Computer Science Lecturer\n", "msg_date": "Thu, 29 Oct 2009 16:00:18 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: database size growing continously" }, { "msg_contents": "2009/10/29 Peter Meszaros <[email protected]>\n\n> Hi All,\n>\n> I use postgresql 8.3.7 as a huge queue. There is a very simple table\n> with six columns and two indices, and about 6 million records are\n> written into it in every day continously commited every 10 seconds from\n> 8 clients. The table stores approximately 120 million records, because a\n> cron job daily deletes those ones are older than 20 day. Autovacuum is\n> on and every settings is the factory default except some unrelated ones\n> (listen address, authorization). But my database is growing,\n> characteristically ~600MByte/day, but sometimes much slower (eg. 10MB,\n> or even 0!!!).\n>\n> I've also tried a test on another server running the same postgresql,\n> where 300 million record was loaded into a freshly created database,\n> and 25 million was deleted with single DELETE command. 
The 'vacuum\n> verbose phaseangle;' command seems to be running forever for hours:\n>\n\nTry increasing max_fsm_pages and shared_buffers\nThese changes did speed up vacuum full on my database.\nWith shared_buffers remember to increase max shm in your OS.\n\nLudwik\n\n\n>\n> phasor=# vacuum VERBOSE phaseangle;\n> INFO: vacuuming \"public.phaseangle\"\n> INFO: scanned index \"i\" to remove 2796006 row versions\n> DETAIL: CPU 9.49s/120.30u sec elapsed 224.20 sec.\n> INFO: scanned index \"t\" to remove 2796006 row versions\n> DETAIL: CPU 13.57s/105.70u sec elapsed 192.71 sec.\n> INFO: \"phaseangle\": removed 2796006 row versions in 24748 pages\n> DETAIL: CPU 0.65s/0.30u sec elapsed 39.97 sec.\n> INFO: scanned index \"i\" to remove 2795924 row versions\n> DETAIL: CPU 9.58s/121.63u sec elapsed 239.06 sec.\n> INFO: scanned index \"t\" to remove 2795924 row versions\n> DETAIL: CPU 13.10s/103.59u sec elapsed 190.84 sec.\n> INFO: \"phaseangle\": removed 2795924 row versions in 24743 pages\n> DETAIL: CPU 0.68s/0.28u sec elapsed 40.21 sec.\n> INFO: scanned index \"i\" to remove 2796014 row versions\n> DETAIL: CPU 9.65s/117.28u sec elapsed 231.92 sec.\n> INFO: scanned index \"t\" to remove 2796014 row versions\n> DETAIL: CPU 13.48s/103.59u sec elapsed 194.49 sec.\n> INFO: \"phaseangle\": removed 2796014 row versions in 24774 pages\n> DETAIL: CPU 0.69s/0.28u sec elapsed 40.26 sec.\n> INFO: scanned index \"i\" to remove 2795935 row versions\n> DETAIL: CPU 9.55s/119.02u sec elapsed 226.85 sec.\n> INFO: scanned index \"t\" to remove 2795935 row versions\n> DETAIL: CPU 13.09s/102.84u sec elapsed 194.74 sec.\n> INFO: \"phaseangle\": removed 2795935 row versions in 25097 pages\n> DETAIL: CPU 0.67s/0.28u sec elapsed 41.21 sec.\n>\n> still running...\n>\n> These are the very same problems?\n> Should I delete mor frequently in smaller chunks? It seems to have a\n> limit...\n>\n> Thanks\n>\n> Peter\n>\n> --\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nLudwik Dyląg\n\n2009/10/29 Peter Meszaros <[email protected]>\nHi All,\n\nI use postgresql 8.3.7 as a huge queue. There is a very simple table\nwith six columns and two indices, and about 6 million records are\nwritten into it in every day continously commited every 10 seconds from\n8 clients. The table stores approximately 120 million records, because a\ncron job daily deletes those ones are older than 20 day. Autovacuum is\non and every settings is the factory default except some unrelated ones\n(listen address, authorization). But my database is growing,\ncharacteristically ~600MByte/day, but sometimes much slower (eg. 10MB,\nor even 0!!!).\n\nI've also tried a test on another server running the same postgresql,\nwhere 300 million record was loaded into a freshly created database,\nand 25 million was deleted with single DELETE command.  
The 'vacuum\nverbose phaseangle;' command seems to be running forever for hours:Try increasing max_fsm_pages and shared_buffersThese changes did speed up vacuum full on my database.\nWith shared_buffers remember to increase max shm in your OS.Ludwik \n\nphasor=# vacuum VERBOSE phaseangle;\nINFO:  vacuuming \"public.phaseangle\"\nINFO:  scanned index \"i\" to remove 2796006 row versions\nDETAIL:  CPU 9.49s/120.30u sec elapsed 224.20 sec.\nINFO:  scanned index \"t\" to remove 2796006 row versions\nDETAIL:  CPU 13.57s/105.70u sec elapsed 192.71 sec.\nINFO:  \"phaseangle\": removed 2796006 row versions in 24748 pages\nDETAIL:  CPU 0.65s/0.30u sec elapsed 39.97 sec.\nINFO:  scanned index \"i\" to remove 2795924 row versions\nDETAIL:  CPU 9.58s/121.63u sec elapsed 239.06 sec.\nINFO:  scanned index \"t\" to remove 2795924 row versions\nDETAIL:  CPU 13.10s/103.59u sec elapsed 190.84 sec.\nINFO:  \"phaseangle\": removed 2795924 row versions in 24743 pages\nDETAIL:  CPU 0.68s/0.28u sec elapsed 40.21 sec.\nINFO:  scanned index \"i\" to remove 2796014 row versions\nDETAIL:  CPU 9.65s/117.28u sec elapsed 231.92 sec.\nINFO:  scanned index \"t\" to remove 2796014 row versions\nDETAIL:  CPU 13.48s/103.59u sec elapsed 194.49 sec.\nINFO:  \"phaseangle\": removed 2796014 row versions in 24774 pages\nDETAIL:  CPU 0.69s/0.28u sec elapsed 40.26 sec.\nINFO:  scanned index \"i\" to remove 2795935 row versions\nDETAIL:  CPU 9.55s/119.02u sec elapsed 226.85 sec.\nINFO:  scanned index \"t\" to remove 2795935 row versions\nDETAIL:  CPU 13.09s/102.84u sec elapsed 194.74 sec.\nINFO:  \"phaseangle\": removed 2795935 row versions in 25097 pages\nDETAIL:  CPU 0.67s/0.28u sec elapsed 41.21 sec.\n\nstill running...\n\nThese are the very same problems?\nShould I delete mor frequently in smaller chunks? It seems to have a\nlimit...\n\nThanks\n\nPeter\n\n--\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- Ludwik Dyląg", "msg_date": "Thu, 29 Oct 2009 17:00:57 +0100", "msg_from": "Ludwik Dylag <[email protected]>", "msg_from_op": false, "msg_subject": "Re: database size growing continously" }, { "msg_contents": "On Thu, 2009-10-29 at 17:00 +0100, Ludwik Dylag wrote:\n> 2009/10/29 Peter Meszaros <[email protected]>\n> Hi All,\n> \n> I use postgresql 8.3.7 as a huge queue. There is a very simple\n> table\n> with six columns and two indices, and about 6 million records\n> are\n> written into it in every day continously commited every 10\n> seconds from\n> 8 clients. The table stores approximately 120 million records,\n> because a\n> cron job daily deletes those ones are older than 20 day.\n> Autovacuum is\n> on and every settings is the factory default except some\n> unrelated ones\n> (listen address, authorization). But my database is growing,\n> characteristically ~600MByte/day, but sometimes much slower\n> (eg. 10MB,\n> or even 0!!!).\n> \n> I've also tried a test on another server running the same\n> postgresql,\n> where 300 million record was loaded into a freshly created\n> database,\n> and 25 million was deleted with single DELETE command. 
The\n> 'vacuum\n> verbose phaseangle;' command seems to be running forever for\n> hours:\n> \n> \n> Try increasing max_fsm_pages and shared_buffers\n> These changes did speed up vacuum full on my database.\n> With shared_buffers remember to increase max shm in your OS.\n\nIf you overran your max_fsm_pages you are going to have indexes that are\nnot properly cleaned up, even after a vacuum full. You will need to\ncluster or reindex.\n\nJoshua D. Drake\n\n\n\n\n\n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564\nConsulting, Training, Support, Custom Development, Engineering\nIf the world pushes look it in the eye and GRR. Then push back harder. - Salamander\n\n", "msg_date": "Thu, 29 Oct 2009 09:14:50 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: database size growing continously" }, { "msg_contents": "Peter Meszaros wrote:\n> Hi All,\n>\n> I use postgresql 8.3.7 as a huge queue. There is a very simple table\n> with six columns and two indices, and about 6 million records are\n> written into it in every day continously commited every 10 seconds from\n> 8 clients. The table stores approximately 120 million records, because a\n> cron job daily deletes those ones are older than 20 day. Autovacuum is\n> on and every settings is the factory default except some unrelated ones\n> (listen address, authorization). But my database is growing,\n> characteristically ~600MByte/day, but sometimes much slower (eg. 10MB,\n> or even 0!!!)...\n\nCan you try running against 8.4.1? I believe there are a number of \nimprovements that should help in your case. For one thing, the \nmax_fsm_pages and max_fsm_relation \"knobs\" are gone - it happens \nautomagically. I believe there are some substantial improvements in \nspace reuse along with numerous improvements not directly related to \nyour question.\n\nCheers,\nSteve\n\n", "msg_date": "Thu, 29 Oct 2009 09:21:35 -0700", "msg_from": "Steve Crawford <[email protected]>", "msg_from_op": false, "msg_subject": "Re: database size growing continously" }, { "msg_contents": "Peter Meszaros wrote:\n> Hi All,\n>\n> I use postgresql 8.3.7 as a huge queue. There is a very simple table\n> with six columns and two indices, and about 6 million records are\n> written into it in every day continously commited every 10 seconds from\n> 8 clients. The table stores approximately 120 million records, because a\n> cron job daily deletes those ones are older than 20 day.\nYou may be an ideal candidate for table partitioning - this is \nfrequently used for rotating log table maintenance.\n\nUse a parent table and 20 child tables. Create a new child every day and \ndrop the 20-day-old table. Table drops are far faster and lower-impact \nthan delete-from a 120-million row table. Index-bloat is limited to \none-day of inserts and will be eliminated in 20-days. No deletes means \nno vacuum requirement on the affected tables. Single tables are limited \nto about 6-million records. 
A clever backup scheme can ignore \nprior-days' static child-tables (and you could keep \nhistorical-data-dumps off-line for later use if desired).\n\nRead up on it here: \nhttp://www.postgresql.org/docs/8.4/interactive/ddl-partitioning.html\n\nCheers,\nSteve\n\n", "msg_date": "Thu, 29 Oct 2009 09:40:01 -0700", "msg_from": "Steve Crawford <[email protected]>", "msg_from_op": false, "msg_subject": "Re: database size growing continously" }, { "msg_contents": "On Thu, Oct 29, 2009 at 8:44 AM, Peter Meszaros <[email protected]> wrote:\n> Hi All,\n>\n> I use postgresql 8.3.7 as a huge queue. There is a very simple table\n> with six columns and two indices, and about 6 million records are\n> written into it in every day continously commited every 10 seconds from\n> 8 clients. The table stores approximately 120 million records, because a\n> cron job daily deletes those ones are older than 20 day. Autovacuum is\n> on and every settings is the factory default except some unrelated ones\n> (listen address, authorization). But my database is growing,\n> characteristically ~600MByte/day, but sometimes much slower (eg. 10MB,\n> or even 0!!!).\n\nSounds like you're blowing out your free space map. Things to try:\n\n1: delete your rows in smaller batches. Like every hour delete\neverything over 20 days so you don't delete them all at once one time\na day.\n2: crank up max fsm pages large enough to hold all the dead tuples.\n3: lower the autovacuum cost delay\n4: get faster hard drives so that vacuum can keep up without causing\nyour system to slow to a crawl while vacuum is running.\n", "msg_date": "Thu, 29 Oct 2009 10:59:48 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: database size growing continously" }, { "msg_contents": "On Thu, Oct 29, 2009 at 11:40 AM, Steve Crawford\n<[email protected]> wrote:\n> Peter Meszaros wrote:\n>>\n>> Hi All,\n>>\n>> I use postgresql 8.3.7 as a huge queue. There is a very simple table\n>> with six columns and two indices, and about 6 million records are\n>> written into it in every day continously commited every 10 seconds from\n>> 8 clients. The table stores approximately 120 million records, because a\n>> cron job daily deletes those ones are older than 20 day.\n>\n> You may be an ideal candidate for table partitioning - this is frequently\n> used for rotating log table maintenance.\n>\n> Use a parent table and 20 child tables. Create a new child every day and\n> drop the 20-day-old table. Table drops are far faster and lower-impact than\n> delete-from a 120-million row table. Index-bloat is limited to one-day of\n> inserts and will be eliminated in 20-days. No deletes means no vacuum\n> requirement on the affected tables. Single tables are limited to about\n> 6-million records. 
A clever backup scheme can ignore prior-days' static\n> child-tables (and you could keep historical-data-dumps off-line for later\n> use if desired).\n>\n> Read up on it here:\n> http://www.postgresql.org/docs/8.4/interactive/ddl-partitioning.html\n\n From a performance point of view, this is going to be the best option.\n It might push some complexity though into his queries to invoke\nconstraint exclusion or deal directly with the child partitions.\n\nmerlin\n", "msg_date": "Fri, 30 Oct 2009 07:43:20 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: database size growing continously" }, { "msg_contents": "On 10/30/2009 12:43 PM, Merlin Moncure wrote:\n> On Thu, Oct 29, 2009 at 11:40 AM, Steve Crawford\n> <[email protected]> wrote:\n>> Use a parent table and 20 child tables. Create a new child every day and\n>> drop the 20-day-old table. Table drops are far faster and lower-impact than\n>> delete-from a 120-million row table. Index-bloat is limited to one-day of\n>> inserts and will be eliminated in 20-days.\n[...]\n>> Read up on it here:\n>> http://www.postgresql.org/docs/8.4/interactive/ddl-partitioning.html\n>\n> From a performance point of view, this is going to be the best option.\n> It might push some complexity though into his queries to invoke\n> constraint exclusion or deal directly with the child partitions.\n\nSeeking to understand.... is the use of partitions and constraint-exclusion\npretty much a hack to get around poor performance, which really ought\nto be done invisibly and automatically by a DBMS?\n\nMuch as indexes per se are, in the SQL/Codd worldview?\n\nOr, is there more to it?\n\n\nI appreciate the \"Simple Matter Of Programming\" problem.\n\nThanks,\n Jeremy\n", "msg_date": "Fri, 30 Oct 2009 18:57:04 +0000", "msg_from": "Jeremy Harris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: database size growing continously" }, { "msg_contents": "Any relational database worth its salt has partitioning for a reason.\n\n1. Maintenance. You will need to delete data at some\npoint.(cleanup)...Partitions are the only way to do it effectively.\n2. Performance. Partitioning offer a way to query smaller slices of\ndata automatically (i.e the query optimizer will choose the partition\nfor you) ...very large tables are a no-no in any relational\ndatabase.(sheer size has limitations)\n\n\nOn Fri, Oct 30, 2009 at 11:57 AM, Jeremy Harris <[email protected]> wrote:\n> On 10/30/2009 12:43 PM, Merlin Moncure wrote:\n>>\n>> On Thu, Oct 29, 2009 at 11:40 AM, Steve Crawford\n>> <[email protected]>  wrote:\n>>>\n>>> Use a parent table and 20 child tables. Create a new child every day and\n>>> drop the 20-day-old table. Table drops are far faster and lower-impact\n>>> than\n>>> delete-from a 120-million row table. Index-bloat is limited to one-day of\n>>> inserts and will be eliminated in 20-days.\n>\n> [...]\n>>>\n>>> Read up on it here:\n>>> http://www.postgresql.org/docs/8.4/interactive/ddl-partitioning.html\n>>\n>>  From a performance point of view, this is going to be the best option.\n>>  It might push some complexity though into his queries to invoke\n>> constraint exclusion or deal directly with the child partitions.\n>\n> Seeking to understand.... 
is the use of partitions and constraint-exclusion\n> pretty much a hack to get around poor performance, which really ought\n> to be done invisibly and automatically by a DBMS?\n>\n> Much as indexes per se are, in the SQL/Codd worldview?\n>\n> Or, is there more to it?\n>\n>\n> I appreciate the \"Simple Matter Of Programming\" problem.\n>\n> Thanks,\n>    Jeremy\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Fri, 30 Oct 2009 12:53:00 -0700", "msg_from": "Anj Adu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: database size growing continously" }, { "msg_contents": "On Fri, Oct 30, 2009 at 12:53 PM, Anj Adu <[email protected]> wrote:\n> Any relational database worth its salt has partitioning for a reason.\n>\n> 1. Maintenance.  You will need to delete data at some\n> point.(cleanup)...Partitions are the only way to do it effectively.\n\nThis is true and it's unavoidably a manual process. The database will\nnot know what segments of the data you intend to load and unload en\nmasse.\n\n> 2. Performance.  Partitioning offer a way to query smaller slices of\n> data automatically (i.e the query optimizer will choose the partition\n> for you) ...very large tables are a no-no in any relational\n> database.(sheer size has limitations)\n\nThis I dispute. Databases are designed to be scalable and very large\ntables should perform just as well as smaller tables.\n\nWhere partitions win for performance is when you know something about\nhow your data is accessed and you can optimize the access by\npartitioning along the same keys. For example if you're doing a\nsequential scan of just one partition or doing a merge join of two\nequivalently partitioned tables and the partitions can be sorted in\nmemory.\n\nHowever in these cases it is possible the database will become more\nintelligent and be able to achieve the same performance gains\nautomatically. Bitmap index scans should perform comparably to the\nsequential scan of individual partitions for example.\n\n-- \ngreg\n", "msg_date": "Fri, 30 Oct 2009 13:01:31 -0700", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: database size growing continously" }, { "msg_contents": "Database are designed to handle very large tables..but effectiveness\nis always at question. A full table scan on a partitioned table is\nalways preferable to a FTS on a super large table. The nature of the\nquery will of-course dictate performance..but you run into definite\nlimitations with very large tables.\n\nOn Fri, Oct 30, 2009 at 1:01 PM, Greg Stark <[email protected]> wrote:\n> On Fri, Oct 30, 2009 at 12:53 PM, Anj Adu <[email protected]> wrote:\n>> Any relational database worth its salt has partitioning for a reason.\n>>\n>> 1. Maintenance.  You will need to delete data at some\n>> point.(cleanup)...Partitions are the only way to do it effectively.\n>\n> This is true and it's unavoidably a manual process. The database will\n> not know what segments of the data you intend to load and unload en\n> masse.\n>\n>> 2. Performance.  Partitioning offer a way to query smaller slices of\n>> data automatically (i.e the query optimizer will choose the partition\n>> for you) ...very large tables are a no-no in any relational\n>> database.(sheer size has limitations)\n>\n> This I dispute. 
Databases are designed to be scalable and very large\n> tables should perform just as well as smaller tables.\n>\n> Where partitions win for performance is when you know something about\n> how your data is accessed and you can optimize the access by\n> partitioning along the same keys. For example if you're doing a\n> sequential scan of just one partition or doing a merge join of two\n> equivalently partitioned tables and the partitions can be sorted in\n> memory.\n>\n> However in these cases it is possible the database will become more\n> intelligent and be able to achieve the same performance gains\n> automatically. Bitmap index scans should perform comparably to the\n> sequential scan of individual partitions for example.\n>\n> --\n> greg\n>\n", "msg_date": "Fri, 30 Oct 2009 13:11:06 -0700", "msg_from": "Anj Adu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: database size growing continously" }, { "msg_contents": "On 10/30/2009 08:01 PM, Greg Stark wrote:\n> On Fri, Oct 30, 2009 at 12:53 PM, Anj Adu<[email protected]> wrote:\n>> Any relational database worth its salt has partitioning for a reason.\n>>\n>> 1. Maintenance. You will need to delete data at some\n>> point.(cleanup)...Partitions are the only way to do it effectively.\n>\n> This is true and it's unavoidably a manual process. The database will\n> not know what segments of the data you intend to load and unload en\n> masse.\n>\n>> 2. Performance. Partitioning offer a way to query smaller slices of\n>> data automatically (i.e the query optimizer will choose the partition\n>> for you) ...very large tables are a no-no in any relational\n>> database.(sheer size has limitations)\n>\n> This I dispute. Databases are designed to be scalable and very large\n> tables should perform just as well as smaller tables.\n>\n> Where partitions win for performance is when you know something about\n> how your data is accessed and you can optimize the access by\n> partitioning along the same keys. For example if you're doing a\n> sequential scan of just one partition or doing a merge join of two\n> equivalently partitioned tables and the partitions can be sorted in\n> memory.\n>\n> However in these cases it is possible the database will become more\n> intelligent and be able to achieve the same performance gains\n> automatically. Bitmap index scans should perform comparably to the\n> sequential scan of individual partitions for example.\n>\n\nSo, on the becoming more intelligent front: PostgreSQL already does\nsome operations as background maintenance (autovacuum). Extending\nthis to de-bloat indices does not seem conceptually impossible, nor for\nthe collection of table-data statistics for planner guidance (also, why\ncould a full-table-scan not collect stats as a side-effect?). Further out,\nhow about the gathering of statistics on queries to guide the automatic\ncreation of indices? Or to set up a partitioning scheme on a previously\nmonolithic table?\n\n- Jeremy\n\n", "msg_date": "Fri, 30 Oct 2009 20:18:45 +0000", "msg_from": "Jeremy Harris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: database size growing continously" }, { "msg_contents": "On Fri, Oct 30, 2009 at 1:18 PM, Jeremy Harris <[email protected]> wrote:\n> So, on the becoming more intelligent front:  PostgreSQL already does\n> some operations as background maintenance (autovacuum).  
Extending\n> this to de-bloat indices does not seem conceptually impossible\n\nIt could be done but it's not easy because there will be people\nconcurrently scanning the index. Vacuum is limited to operations it\ncan do without blocking other jobs.\n\n>, nor for the collection of table-data statistics for planner guidance\n\nWell autovacuum already does this.\n\n\n> (also, why\n> could a full-table-scan not collect stats as a side-effect?).\n\nThat's a good idea but there are difficulties with it. The full table\nscan might not run to completion meaning you may have done a lot of\nwork for nothing. Also gathering and processing that data is fairly\nexpensive, especially for higher stats targets. It requires sorting\nthe data by each column which takes some cpu time which we wouldn't\nwant to make sql queries wait for.\n\n>  Further out, how about the gathering of statistics on queries to guide the automatic\n> creation of indices?\n\nI think we do need more run-time stats. How to make use of them would\nbe a complex question. We could automatically tune the cost\nparameters, we could automatically try other plans and see if they run\nfaster, we could even automatically build indexes. Not all of these\nwould be appropriate in every environment though.\n\n>  Or to set up a partitioning scheme on a previously monolithic table?\n\nWell that involves moving data that other users might be busy\naccessing. Again we wouldn't want an automatic background job blocking\nuser queries.\n\n-- \ngreg\n", "msg_date": "Fri, 30 Oct 2009 15:16:44 -0700", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: database size growing continously" }, { "msg_contents": "Thank you all for the fast responses!\n\nI changed the delete's schedule from daily to hourly and I will let you\nknow the result. This seems to be the most promising step.\n\nThe next one is tuning 'max_fsm_pages'.\nIncreasing max_fsm_pages can be also helpful, but I've read that\n'vacuum verbose ...' will issue warnings if max_fsm_pages is too small.\nI've never seen such messag, this command is either run and finish or\ngoes to an endless loop as it was written in my initial e-mail.\n\n\nOn Thu, Oct 29, 2009 at 10:59:48AM -0600, Scott Marlowe wrote:\n> On Thu, Oct 29, 2009 at 8:44 AM, Peter Meszaros <[email protected]> wrote:\n> > Hi All,\n> >\n> > I use postgresql 8.3.7 as a huge queue. There is a very simple table\n> > with six columns and two indices, and about 6 million records are\n> > written into it in every day continously commited every 10 seconds from\n> > 8 clients. The table stores approximately 120 million records, because a\n> > cron job daily deletes those ones are older than 20 day. Autovacuum is\n> > on and every settings is the factory default except some unrelated ones\n> > (listen address, authorization). But my database is growing,\n> > characteristically ~600MByte/day, but sometimes much slower (eg. 10MB,\n> > or even 0!!!).\n> \n> Sounds like you're blowing out your free space map. Things to try:\n> \n> 1: delete your rows in smaller batches. 
Like every hour delete\n> everything over 20 days so you don't delete them all at once one time\n> a day.\n> 2: crank up max fsm pages large enough to hold all the dead tuples.\n> 3: lower the autovacuum cost delay\n> 4: get faster hard drives so that vacuum can keep up without causing\n> your system to slow to a crawl while vacuum is running.\n\n-- \nE-mail: pmeATprolanDOThu\nPhone: +36-20-954-3100/8139\nMobile: +36-20-9543139\nFax: +36-26-540420\nhttp://www.prolan.hu\nMon Nov 2 13:20:39 CET 2009\n", "msg_date": "Mon, 2 Nov 2009 13:50:54 +0100", "msg_from": "Peter Meszaros <[email protected]>", "msg_from_op": true, "msg_subject": "Re: database size growing continously" }, { "msg_contents": "I would recommend (if at all possible) to partition the table and drop\nthe old partitions when not needed. This will guarantee the space\nfree-up without VACUUM overhead. Deletes will kill you at some point\nand you dont want too much of the VACUUM IO overhead impacting your\nperformance.\n\nOn Mon, Nov 2, 2009 at 4:50 AM, Peter Meszaros <[email protected]> wrote:\n> Thank you all for the fast responses!\n>\n> I changed the delete's schedule from daily to hourly and I will let you\n> know the result. This seems to be the most promising step.\n>\n> The next one is tuning 'max_fsm_pages'.\n> Increasing max_fsm_pages can be also helpful, but I've read that\n> 'vacuum verbose ...' will issue warnings if max_fsm_pages is too small.\n> I've never seen such messag, this command is either run and finish or\n> goes to an endless loop as it was written in my initial e-mail.\n>\n>\n> On Thu, Oct 29, 2009 at 10:59:48AM -0600, Scott Marlowe wrote:\n>> On Thu, Oct 29, 2009 at 8:44 AM, Peter Meszaros <[email protected]> wrote:\n>> > Hi All,\n>> >\n>> > I use postgresql 8.3.7 as a huge queue. There is a very simple table\n>> > with six columns and two indices, and about 6 million records are\n>> > written into it in every day continously commited every 10 seconds from\n>> > 8 clients. The table stores approximately 120 million records, because a\n>> > cron job daily deletes those ones are older than 20 day. Autovacuum is\n>> > on and every settings is the factory default except some unrelated ones\n>> > (listen address, authorization). But my database is growing,\n>> > characteristically ~600MByte/day, but sometimes much slower (eg. 10MB,\n>> > or even 0!!!).\n>>\n>> Sounds like you're blowing out your free space map.  Things to try:\n>>\n>> 1: delete your rows in smaller batches.  Like every hour delete\n>> everything over 20 days so you don't delete them all at once one time\n>> a day.\n>> 2: crank up max fsm pages large enough to hold all the dead tuples.\n>> 3: lower the autovacuum cost delay\n>> 4: get faster hard drives so that vacuum can keep up without causing\n>> your system to slow to a crawl while vacuum is running.\n>\n> --\n> E-mail: pmeATprolanDOThu\n> Phone: +36-20-954-3100/8139\n> Mobile: +36-20-9543139\n> Fax: +36-26-540420\n> http://www.prolan.hu\n> Mon Nov  2 13:20:39 CET 2009\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Mon, 2 Nov 2009 08:14:44 -0800", "msg_from": "Anj Adu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: database size growing continously" }, { "msg_contents": "On Mon, Nov 2, 2009 at 7:50 AM, Peter Meszaros <[email protected]> wrote:\n> Increasing max_fsm_pages can be also helpful, but I've read that\n> 'vacuum verbose ...' 
will issue warnings if max_fsm_pages is too small.\n> I've never seen such messag, this command is either run and finish or\n> goes to an endless loop as it was written in my initial e-mail.\n\nI don't think it goes into an endless loop. I think it runs for a\nreally long time because your database is bloated, in need of\nvacuuming, and probably has blown out the free space map. But since\nyou haven't let it run to completion you haven't seen the message.\n\n...Robert\n", "msg_date": "Mon, 2 Nov 2009 15:34:27 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: database size growing continously" } ]
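Several replies in this thread converge on the same structural fix for a rolling 20-day queue: partition phaseangle by day and drop the oldest child instead of running a bulk DELETE and then vacuuming away the bloat. The thread never shows the table's column names, so the sketch below assumes a timestamp column called ts and invented child and index names; it only illustrates the shape of the scheme Steve Crawford describes and is not a drop-in script:

  -- the parent stays empty; each day gets its own child with a CHECK constraint
  CREATE TABLE phaseangle_20091102 (
      CHECK (ts >= DATE '2009-11-02' AND ts < DATE '2009-11-03')
  ) INHERITS (phaseangle);

  -- children do not inherit indexes, so recreate the table's two indexes on each child
  CREATE INDEX phaseangle_20091102_ts ON phaseangle_20091102 (ts);

  -- clients insert into the current child (directly or through a routing trigger);
  -- retiring a day becomes a cheap catalog operation instead of a mass DELETE:
  DROP TABLE phaseangle_20091013;

With constraint_exclusion enabled, queries that filter on ts only visit the relevant children, and any index bloat disappears along with each dropped child.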
[ { "msg_contents": "Hi everyone,\n\nI want to model the following scenario for an online marketing application:\n\nUsers can create mailings. The list of recipients can be uploaded as \nspreadsheets with arbitrary columns (each row is a recipient). I expect \nthe following maximum quantities the DB will contain:\n\n* up to 5000 mailings\n* 0-10'000 recipients per mailing, average maybe 2000\n* approx. 20 columns per spreadsheet\n\nI see basically two approaches to store the recipients:\n\nA) A single table with a fixed number of generic columns. If the \nspreadsheet has less columns than the table, the values will be null.\n\nCREATE TABLE recipient (\n mailing integer,\n row integer,\n col_1 text,\n …\n col_50 text,\n PRIMARY KEY (mailing, row),\n FOREIGN KEY mailing REFERENCES mailing(id)\n);\n\n\nB) Two tables, one for the recipients and one for the values:\n\nCREATE TABLE recipient (\n mailing integer,\n row integer,\n PRIMARY KEY (mailing, row),\n FOREIGN KEY mailing REFERENCES mailing(id)\n);\n\nCREATE TABLE recipient_value (\n mailing integer,\n row integer,\n column integer,\n value text,\n PRIMARY KEY (mailing, row, column),\n FOREIGN KEY mailing REFERENCES mailing(id),\n FOREIGN KEY row REFERENCES recipient(row)\n);\n\n\nI have the feeling that the second approach is cleaner. But since the \nrecipient_value table will contain approx. 20 times more rows than the \nrecipient table in approach A, I'd expect a performance degradation.\n\nIs there a limit to the number of rows that should be stored in a table? \nWith approach B the maximum number of rows could be about 200'000'000, \nwhich sounds quite a lot …\n\nThanks a lot in advance for any suggestions!\n\nBest regards,\nAndreas\n\n\n\n-- \nAndreas Hartmann, CTO\nBeCompany GmbH\nhttp://www.becompany.ch\nTel.: +41 (0) 43 818 57 01\n\n", "msg_date": "Thu, 29 Oct 2009 21:52:33 +0100", "msg_from": "Andreas Hartmann <[email protected]>", "msg_from_op": true, "msg_subject": "Modeling a table with arbitrary columns" }, { "msg_contents": "Andreas Hartmann wrote on 29.10.2009 21:52:\n> Hi everyone,\n> \n> I want to model the following scenario for an online marketing application:\n> \n> Users can create mailings. The list of recipients can be uploaded as \n> spreadsheets with arbitrary columns (each row is a recipient). I expect \n> the following maximum quantities the DB will contain:\n> \n> * up to 5000 mailings\n> * 0-10'000 recipients per mailing, average maybe 2000\n> * approx. 20 columns per spreadsheet\n> \n[...]\n> \n> I have the feeling that the second approach is cleaner. But since the \n> recipient_value table will contain approx. 20 times more rows than the \n> recipient table in approach A, I'd expect a performance degradation.\n> \n> Is there a limit to the number of rows that should be stored in a table? \n> With approach B the maximum number of rows could be about 200'000'000, \n> which sounds quite a lot …\n> \n\nI don't think the number of rows is that critical (it sure doesn't hit any \"limits\". The question is how you want to access that information and how quick that has to be. If you need sub-second response time for aggregates over that, you'll probably have to throw quite some hardware at it. 
\n\nYou could also check out the hstore contrib module which lets you store key value pairs in a single column, which might actually be what you are looking for (note that I have never used it, so I cannot tell how fast it acutally is)\n\nhttp://www.postgresql.org/docs/current/static/hstore.html\n\nSo something like\n\nCREATE TABLE recipient (\n mailing integer NOT NULL,\n row integer NOT NULL,\n recipient_values hstore,\n PRIMARY KEY (mailing, row), \n FOREIGN KEY (mailing) REFERENCES mailing (id)\n)\n\nBtw: I would get rid of a column named \"row\", this more a \"recipient_id\", but that is just personal taste.\n\nRegards\nThomas\n\n", "msg_date": "Thu, 29 Oct 2009 22:24:26 +0100", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Modeling a table with arbitrary columns" }, { "msg_contents": "2009/10/29 Andreas Hartmann <[email protected]>\n\n> Hi everyone,\n>\n> I want to model the following scenario for an online marketing application:\n>\n> Users can create mailings. The list of recipients can be uploaded as\n> spreadsheets with arbitrary columns (each row is a recipient). I expect the\n> following maximum quantities the DB will contain:\n>\n> I see basically two approaches to store the recipients:\n>\n> A) A single table with a fixed number of generic columns. If the\n> spreadsheet has less columns than the table, the values will be null.\n>\n> B) Two tables, one for the recipients and one for the values:\n>\n\nOne more option is to use arrays (and single table).\n\n2009/10/29 Andreas Hartmann <[email protected]>\nHi everyone,\n\nI want to model the following scenario for an online marketing application:\n\nUsers can create mailings. The list of recipients can be uploaded as spreadsheets with arbitrary columns (each row is a recipient). I expect the following maximum quantities the DB will contain:\n\nI see basically two approaches to store the recipients:\n\nA) A single table with a fixed number of generic columns. If the spreadsheet has less columns than the table, the values will be null.\n\nB) Two tables, one for the recipients and one for the values:\nOne more option is to use arrays (and single table).", "msg_date": "Fri, 30 Oct 2009 12:18:26 +0200", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Modeling a table with arbitrary columns" }, { "msg_contents": "On Thu, Oct 29, 2009 at 4:52 PM, Andreas Hartmann <[email protected]> wrote:\n> Hi everyone,\n>\n> I want to model the following scenario for an online marketing application:\n>\n> Users can create mailings. The list of recipients can be uploaded as\n> spreadsheets with arbitrary columns (each row is a recipient). I expect the\n> following maximum quantities the DB will contain:\n>\n> * up to 5000 mailings\n> * 0-10'000 recipients per mailing, average maybe 2000\n> * approx. 20 columns per spreadsheet\n>\n> I see basically two approaches to store the recipients:\n>\n> A) A single table with a fixed number of generic columns. 
If the spreadsheet\n> has less columns than the table, the values will be null.\n>\n> CREATE TABLE recipient (\n>  mailing integer,\n>  row integer,\n>  col_1 text,\n>  …\n>  col_50 text,\n>  PRIMARY KEY (mailing, row),\n>  FOREIGN KEY mailing REFERENCES mailing(id)\n> );\n>\n>\n> B) Two tables, one for the recipients and one for the values:\n>\n> CREATE TABLE recipient (\n>  mailing integer,\n>  row integer,\n>  PRIMARY KEY (mailing, row),\n>  FOREIGN KEY mailing REFERENCES mailing(id)\n> );\n>\n> CREATE TABLE recipient_value (\n>  mailing integer,\n>  row integer,\n>  column integer,\n>  value text,\n>  PRIMARY KEY (mailing, row, column),\n>  FOREIGN KEY mailing REFERENCES mailing(id),\n>  FOREIGN KEY row REFERENCES recipient(row)\n> );\n>\n>\n> I have the feeling that the second approach is cleaner. But since the\n> recipient_value table will contain approx. 20 times more rows than the\n> recipient table in approach A, I'd expect a performance degradation.\n>\n> Is there a limit to the number of rows that should be stored in a table?\n> With approach B the maximum number of rows could be about 200'000'000, which\n> sounds quite a lot …\n>\n> Thanks a lot in advance for any suggestions!\n\nAnother possibility would be to create a table for each upload based\non the columns that are actually present. Just have your upload\nscript read the spreadsheet, determine the format, and create an\nappropriate table for that particular upload.\n\nBut a lot of it depends on what kinds of queries you want to write.\n\n...Robert\n", "msg_date": "Sat, 31 Oct 2009 09:03:30 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Modeling a table with arbitrary columns" } ]
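If the hstore route suggested above were taken, usage might look roughly like the sketch below. The key names and sample values are invented for illustration; on 8.4-era servers hstore is installed from contrib rather than with CREATE EXTENSION, and the containment query at the end assumes a reasonably recent hstore with the @> operator:

-- Store one uploaded spreadsheet row as key/value pairs.
INSERT INTO recipient (mailing, "row", recipient_values)
VALUES (1, 1, 'email => "[email protected]", firstname => "Ann"'::hstore);

-- Read a single "column" back; -> simply yields NULL for keys that were never uploaded.
SELECT "row", recipient_values -> 'email' AS email
FROM recipient
WHERE mailing = 1;

-- Filter on a key/value pair; a GiST or GIN index on recipient_values can support this.
SELECT "row"
FROM recipient
WHERE mailing = 1
  AND recipient_values @> 'firstname => "Ann"'::hstore;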
[ { "msg_contents": "Hi,\n\nI have several long text fields in my DB that I would love to compress\n(descriptions, URLs etc). Can you tell me what options exists in PG\n(+pointers please), typical effect on space and run time?\n\nThanks,\n\n-- Shaul\n\nHi,I have several long text fields in my DB that I would love to compress (descriptions, URLs etc). Can you tell me what options exists in PG (+pointers please), typical effect on space and run time?\nThanks,-- Shaul", "msg_date": "Sun, 1 Nov 2009 16:51:53 +0200", "msg_from": "Shaul Dar <[email protected]>", "msg_from_op": true, "msg_subject": "Compression in PG" }, { "msg_contents": "2009/11/1 Shaul Dar <[email protected]>:\n> Hi,\n>\n> I have several long text fields in my DB that I would love to compress\n> (descriptions, URLs etc). Can you tell me what options exists in PG\n> (+pointers please), typical effect on space and run time?\n\nHello\n\nYou can do nothing. PostgreSQL compresses data automatically\n\nhttp://www.postgresql.org/docs/8.4/interactive/storage-toast.html\n\nRegards\nPavel Stehule\n>\n> Thanks,\n>\n> -- Shaul\n>\n", "msg_date": "Sun, 1 Nov 2009 15:56:17 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compression in PG" }, { "msg_contents": "Shaul Dar wrote:\n> Hi,\n> \n> I have several long text fields in my DB that I would love to compress\n> (descriptions, URLs etc). Can you tell me what options exists in PG\n> (+pointers please), typical effect on space and run time?\n\nvariable length text fields .. e.g TEXT will automatically be stored in\na TOAST table and compressed. Search the manual for toast.\n\n-- \nJesper\n", "msg_date": "Sun, 01 Nov 2009 16:45:35 +0100", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compression in PG" }, { "msg_contents": "On Sun, Nov 1, 2009 at 7:56 AM, Pavel Stehule <[email protected]> wrote:\n> 2009/11/1 Shaul Dar <[email protected]>:\n>> Hi,\n>>\n>> I have several long text fields in my DB that I would love to compress\n>> (descriptions, URLs etc). Can you tell me what options exists in PG\n>> (+pointers please), typical effect on space and run time?\n>\n> Hello\n>\n> You can do nothing. PostgreSQL compresses data automatically\n>\n> http://www.postgresql.org/docs/8.4/interactive/storage-toast.html\n\nWell you can pick a strategy. But yeah, there's not much for the\naverage user to do really.\n", "msg_date": "Sun, 1 Nov 2009 08:57:42 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compression in PG" }, { "msg_contents": "Guys,\n\nI am aware of the TOAST mechanism (actually complained about it in this\nforum...). The text fields I have are below the limits that trigger this\nmechanism, and also I may want to compress *specific* fields, not all of\nthem. And also I have performance concerns as TOAST splits tables and can\npotentially cause a performance hit on queries.\n\nMy question is if PG can compress smaller text fields e.g 0.5-1KB, or must I\ndo this outside PG?\n\n-- Shaul\n\nOn Sun, Nov 1, 2009 at 5:57 PM, Scott Marlowe <[email protected]>wrote:\n\n> On Sun, Nov 1, 2009 at 7:56 AM, Pavel Stehule <[email protected]>\n> wrote:\n> > 2009/11/1 Shaul Dar <[email protected]>:\n> >> Hi,\n> >>\n> >> I have several long text fields in my DB that I would love to compress\n> >> (descriptions, URLs etc). 
Can you tell me what options exists in PG\n> >> (+pointers please), typical effect on space and run time?\n> >\n> > Hello\n> >\n> > You can do nothing. PostgreSQL compresses data automatically\n> >\n> > http://www.postgresql.org/docs/8.4/interactive/storage-toast.html\n>\n> Well you can pick a strategy. But yeah, there's not much for the\n> average user to do really.\n>\n\nGuys,I am aware of the TOAST mechanism (actually complained about it in this forum...). The text fields I have are below the limits that trigger this mechanism, and also I may want to compress specific fields, not all of them. And also I have performance concerns as TOAST splits tables and can potentially cause a performance hit on queries.\nMy question is if PG can compress smaller text fields e.g 0.5-1KB, or must I do this outside PG?-- Shaul\nOn Sun, Nov 1, 2009 at 5:57 PM, Scott Marlowe <[email protected]> wrote:\nOn Sun, Nov 1, 2009 at 7:56 AM, Pavel Stehule <[email protected]> wrote:\n> 2009/11/1 Shaul Dar <[email protected]>:\n>> Hi,\n>>\n>> I have several long text fields in my DB that I would love to compress\n>> (descriptions, URLs etc). Can you tell me what options exists in PG\n>> (+pointers please), typical effect on space and run time?\n>\n> Hello\n>\n> You can do nothing. PostgreSQL compresses data automatically\n>\n> http://www.postgresql.org/docs/8.4/interactive/storage-toast.html\n\nWell you can pick a strategy.  But yeah, there's not much for the\naverage user to do really.", "msg_date": "Sun, 1 Nov 2009 18:53:44 +0200", "msg_from": "Shaul Dar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Compression in PG" }, { "msg_contents": "On Sun, 2009-11-01 at 18:53 +0200, Shaul Dar wrote:\n> I am aware of the TOAST mechanism (actually complained about it in\n> this forum...). The text fields I have are below the limits that\n> trigger this mechanism,\n\nHave you proved somehow that compressing tiny values has any value?\n\n> and also I may want to compress *specific* fields, not all of them.\n\nYou can do that.\n\n“ALTER TABLE table ALTER COLUMN comment SET STORAGE mechanism;\"\n\nFor example:\n\nALTER TABLE job_history_info ALTER COLUMN comment SET STORAGE\nEXTERNAL;\n\nWhere mechanism is -\n\n<quote source=\"WMOGAG\"\nurl=\"http://docs.opengroupware.org/Members/whitemice/wmogag/file_view\">\n* Extended – With the extended TOAST strategy the long value, once it\nexceeds the TOASTing threshold will be compressed. If the compression\nreduced the length to below the TOAST threshold the value will be\nstored, compressed, in the original table. If compression does not\nreduce the value to below the TOAST threshold the value will be stored\nuncompressed in the table's TOAST table. Because the value is stored\ncompressed it most be uncompressed in order to perform value\ncomparisons; for large tables with many compressed values this can\nresult in spikes of processor utilization. On the other hand this\nstorage mechanism conserves disk space and reduces the need to perform\nseek-and-read operations on the TOAST table. 
Extended is the default,\nand usually recommended, TOAST storage mechanism.\n* External – With the external TOAST strategy a long value is\nimmediately migrated to the TOAST table, compression is disabled.\nDisabling compressions can increase the performance for substring\nsearches on long text values at the cost of increasing seeks in the\nTOAST table as well as disk consumption.\n* Main – Main enables compression and uses any means available to avoid\nmigrating the value to the TOAST table.\n</quote>\n\nAs I recall all the above is in the PostgreSQL TOAST documentation; you\nshould go look at that.\n\n> And also I have performance concerns as TOAST splits tables and can\n> potentially cause a performance hit on queries.\n\nThen change your TOAST mechanism to \"MAIN\". \n\nBut benchmarking [aka: knowing] is always preferable to having\n\"concerns\". I'd wager your biggest bottlenecks will be elsewhere.\n\n> My question is if PG can compress smaller text fields e.g 0.5-1KB, or\n> must I do this outside PG?\n\nI just think compressing small documents seems pointless.\n-- \nOpenGroupware developer: [email protected]\n<http://whitemiceconsulting.blogspot.com/>\nOpenGroupare & Cyrus IMAPd documenation @\n<http://docs.opengroupware.org/Members/whitemice/wmogag/file_view>\n\n", "msg_date": "Sun, 01 Nov 2009 12:23:32 -0500", "msg_from": "Adam Tauno Williams <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compression in PG" }, { "msg_contents": "On Sun, Nov 1, 2009 at 9:53 AM, Shaul Dar <[email protected]> wrote:\n> Guys,\n>\n> I am aware of the TOAST mechanism (actually complained about it in this\n> forum...). The text fields I have are below the limits that trigger this\n> mechanism, and also I may want to compress specific fields, not all of them.\n> And also I have performance concerns as TOAST splits tables and can\n> potentially cause a performance hit on queries.\n\nDid you even read the link posted?\n", "msg_date": "Sun, 1 Nov 2009 11:27:13 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compression in PG" }, { "msg_contents": "On Sun, Nov 1, 2009 at 11:53 AM, Shaul Dar <[email protected]> wrote:\n> I am aware of the TOAST mechanism (actually complained about it in this\n> forum...). The text fields I have are below the limits that trigger this\n> mechanism, and also I may want to compress specific fields, not all of them.\n> And also I have performance concerns as TOAST splits tables and can\n> potentially cause a performance hit on queries.\n>\n> My question is if PG can compress smaller text fields e.g 0.5-1KB, or must I\n> do this outside PG?\n\nThe manual explains the behavior of the system fairly clearly. You\nshould read it through carefully, since it doesn't seem that you aware\nof all the ins and outs (for example, compression and external storage\ncan be configured independently of one another, and on a per-column\nbasis). If it doesn't seem like the behavior you want, then it is\nlikely that you are trying to solve a different problem than the one\nthat TOAST is designed to solve. In that case, you should tell us\nmore about what you're trying to do.\n\nThe only reason I can think of for wanting to compress very small\ndatums is if you have a gajillion of them, they're highly\ncompressible, and you have extra CPU time coming out of your ears. In\nthat case - yeah, you might want to think about pre-compressing them\noutside of Pg. 
If you're doing this for some other reason you could\nprobably get some better advice if you explain what it is...\n\n...Robert\n", "msg_date": "Sun, 1 Nov 2009 23:24:47 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compression in PG" }, { "msg_contents": "At 05:24 02/11/2009, you wrote:\n\n>The only reason I can think of for wanting to compress very small\n>datums is if you have a gajillion of them, they're highly\n>compressible, and you have extra CPU time coming out of your ears. In\n>that case - yeah, you might want to think about pre-compressing them\n>outside of Pg. If you're doing this for some other reason you could\n>probably get some better advice if you explain what it is...\n>\n>...Robert\n\nThere is another reason. If you compress (lossless) all the small text datums with the same algorithm, you get unique and smaller representations of the text that can be used as primary unique keys. You can see it like a variable length hashing algorithm.\n\nDepending the compression method (f.ex. static huffman) you can compare 2 texts using only the compressed versions or sort them faster. This cannot be done with the actual LZ algorithm.\n\n\n--------------------------------\nEduardo Morrás González\nDept. I+D+i e-Crime Vigilancia Digital\nS21sec Labs\nTlf: +34 902 222 521\nMóvil: +34 555 555 555 \nwww.s21sec.com, blog.s21sec.com \n\n\nSalvo que se indique lo contrario, esta información es CONFIDENCIAL y\ncontiene datos de carácter personal que han de ser tratados conforme a la\nlegislación vigente en materia de protección de datos. Si usted no es\ndestinatario original de este mensaje, le comunicamos que no está autorizado\na revisar, reenviar, distribuir, copiar o imprimir la información en él\ncontenida y le rogamos que proceda a borrarlo de sus sistemas.\n\nKontrakoa adierazi ezean, posta elektroniko honen barruan doana ISILPEKO\ninformazioa da eta izaera pertsonaleko datuak dituenez, indarrean dagoen\ndatu pertsonalak babesteko legediaren arabera tratatu beharrekoa. Posta\nhonen hartzaile ez zaren kasuan, jakinarazten dizugu baimenik ez duzula\nbertan dagoen informazioa aztertu, igorri, banatu, kopiatu edo inprimatzeko.\nHortaz, erregutzen dizugu posta hau zure sistemetatik berehala ezabatzea. \n\nAntes de imprimir este mensaje valora si verdaderamente es necesario. De\nesta forma contribuimos a la preservación del Medio Ambiente. \n\n", "msg_date": "Mon, 02 Nov 2009 10:51:06 +0100", "msg_from": "Eduardo Morras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compression in PG" } ]
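A small sketch related to the thread above: one way to check whether TOAST compression is actually happening for a column, and how the per-column storage strategy is switched. The table and column names (documents, body, url) are assumptions, and note that values well under the roughly 2 kB TOAST threshold are normally left alone, which matches the behaviour reported for 0.5-1 kB fields:

-- If pg_column_size() is much smaller than octet_length(), the value is stored compressed.
SELECT octet_length(body)   AS raw_bytes,
       pg_column_size(body) AS stored_bytes
FROM documents
ORDER BY raw_bytes DESC
LIMIT 10;

-- Per-column storage strategy; affects rows written after the change.
ALTER TABLE documents ALTER COLUMN body SET STORAGE MAIN;      -- prefer inline, compression allowed
ALTER TABLE documents ALTER COLUMN url  SET STORAGE EXTERNAL;  -- out of line, no compression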
[ { "msg_contents": "\nHi all,\n\nI have now readed many many forums and tried many different solutions and I\nam not getting good performance to database. My server is Debian linux, with\n4gb ram, there is also java application and I am giving to that 512mb\n(JAVA_OPTS) memory. In database there is now like 4milj rows. What should I\ndo better.\nNow .conf is:\n\nmax_connections = 80\nshared_buffers = 512MB\ntemp_buffers = 8MB\nwork_mem = 20MB\nmaintenance_work_mem = 384MB\nwal_buffers = 8MB\ncheckpoint_segments = 128MB\neffective_cache_size = 2304MB\ncpu_tuple_cost = 0.0030\ncpu_index_tuple_cost = 0.0010\ncpu_operator_cost = 0.0005\nfsync = off\ncheckpoint_timeout = 1h\n\nand I am giving kernels like:\n\nsysctl -w kernel.shmmax=1073741824\nsysctl -w kernel.shmall=2097152\n\nbtw, what file I should modify to give this kernels as defaults ?\n\nThank you very much ! Hope you can clear my problem !\n-- \nView this message in context: http://old.nabble.com/Problem-with-database-performance%2C-Debian-4gb-ram---tp26157096p26157096.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Mon, 2 Nov 2009 05:58:43 -0800 (PST)", "msg_from": "Massan <[email protected]>", "msg_from_op": true, "msg_subject": "Problem with database performance, Debian 4gb ram ?" }, { "msg_contents": "Massan wrote:\n> Hi all,\n> \n> I have now readed many many forums and tried many different solutions and I\n> am not getting good performance to database. \n\nYou don't give nearly enough information. To begin with, why do you think performance is poor? What exactly are you measuring? What does your application do, at least in broad terms? Have you identified any particular SQL queries that are slow, and if so, have you run EXPLAIN ANALYZE on them? What is the output of EXPlAIN ANALYZE?\n\nWhat about your hardware, do you have a single disk, a huge RAID10 array, or something in between? How much memory, how many processors?\n\nThe more information you provide, the more likely you will get a good answer.\n\nCraig\n\n> My server is Debian linux, with\n> 4gb ram, there is also java application and I am giving to that 512mb\n> (JAVA_OPTS) memory. In database there is now like 4milj rows. What should I\n> do better.\n> Now .conf is:\n> \n> max_connections = 80\n> shared_buffers = 512MB\n> temp_buffers = 8MB\n> work_mem = 20MB\n> maintenance_work_mem = 384MB\n> wal_buffers = 8MB\n> checkpoint_segments = 128MB\n> effective_cache_size = 2304MB\n> cpu_tuple_cost = 0.0030\n> cpu_index_tuple_cost = 0.0010\n> cpu_operator_cost = 0.0005\n> fsync = off\n> checkpoint_timeout = 1h\n> \n> and I am giving kernels like:\n> \n> sysctl -w kernel.shmmax=1073741824\n> sysctl -w kernel.shmall=2097152\n> \n> btw, what file I should modify to give this kernels as defaults ?\n> \n> Thank you very much ! Hope you can clear my problem !\n\n", "msg_date": "Fri, 06 Nov 2009 06:37:38 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with database performance, Debian 4gb ram ?" 
}, { "msg_contents": "Massan escribi�:\n\n> and I am giving kernels like:\n> \n> sysctl -w kernel.shmmax=1073741824\n> sysctl -w kernel.shmall=2097152\n> \n> btw, what file I should modify to give this kernels as defaults ?\n\n/etc/sysctl.conf\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Fri, 6 Nov 2009 12:39:27 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with database performance, Debian 4gb ram ?" } ]
[ { "msg_contents": "Hi Hi all,\n\nI have now readed many many forums and tried many different solutions and I\nam not getting good performance to database. My server is Debian linux, with\n4gb ram, there is also java application and I am giving to that 512mb\n(JAVA_OPTS) memory. In database there is now like 4milj rows. What should I\ndo better.\nNow .conf is:\n\nmax_connections = 80\nshared_buffers = 512MB\ntemp_buffers = 8MB\nwork_mem = 20MB\nmaintenance_work_mem = 384MB\nwal_buffers = 8MB\ncheckpoint_segments = 128MB\neffective_cache_size = 2304MB\ncpu_tuple_cost = 0.0030\ncpu_index_tuple_cost = 0.0010\ncpu_operator_cost = 0.0005\nfsync = off\ncheckpoint_timeout = 1h\n\nand I am giving kernels like:\n\nsysctl -w kernel.shmmax=1073741824\nsysctl -w kernel.shmall=2097152\n\nbtw, what file I should modify to give this kernels as defaults ?\n\nThank you very much ! Hope you can clear my problem !\n\nHi Hi all,\nI have now readed many many forums and tried many different\nsolutions and I am not getting good performance to database. My server\nis Debian linux, with 4gb ram, there is also java application and I am\ngiving to that 512mb (JAVA_OPTS) memory. In database there is now like\n4milj rows. What should I do better.\nNow .conf is:\nmax_connections = 80\nshared_buffers = 512MB\ntemp_buffers = 8MB\nwork_mem = 20MB\nmaintenance_work_mem = 384MB\nwal_buffers = 8MB\ncheckpoint_segments = 128MB\neffective_cache_size = 2304MB\ncpu_tuple_cost = 0.0030\ncpu_index_tuple_cost = 0.0010\ncpu_operator_cost = 0.0005\nfsync = off\ncheckpoint_timeout = 1h\nand I am giving kernels like:\nsysctl -w kernel.shmmax=1073741824\nsysctl -w kernel.shmall=2097152\nbtw, what file I should modify to give this kernels as defaults ?\nThank you very much ! Hope you can clear my problem !", "msg_date": "Mon, 2 Nov 2009 16:16:15 +0200", "msg_from": "Grant Masan <[email protected]>", "msg_from_op": true, "msg_subject": "Problem with database performance, Debian 4gb ram ?" }, { "msg_contents": "On Mon, Nov 2, 2009 at 2:16 PM, Grant Masan <[email protected]> wrote:\n\n> Hi Hi all,\n>\n> I have now readed many many forums and tried many different solutions and I\n> am not getting good performance to database. My server is Debian linux, with\n> 4gb ram, there is also java application and I am giving to that 512mb\n> (JAVA_OPTS) memory. In database there is now like 4milj rows. What should I\n> do better.\n>\n>\nI would rather start to look at queries performance.\n\nWhat's the size of db ?\nselect pg_size_pretty(pg_database_size('yourDbName'));\n\nstart logging queries, with time of execution, to see which ones are causing\nproblems.\n\n\n\n-- \nGJ\n\nOn Mon, Nov 2, 2009 at 2:16 PM, Grant Masan <[email protected]> wrote:\nHi Hi all,\nI have now readed many many forums and tried many different\nsolutions and I am not getting good performance to database. My server\nis Debian linux, with 4gb ram, there is also java application and I am\ngiving to that 512mb (JAVA_OPTS) memory. In database there is now like\n4milj rows. What should I do better.\nI would rather start to look at queries performance. What's the size of db ? select pg_size_pretty(pg_database_size('yourDbName')); \nstart logging queries, with time of execution, to see which ones are causing problems. -- GJ", "msg_date": "Mon, 2 Nov 2009 14:19:47 +0000", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with database performance, Debian 4gb ram ?" 
}, { "msg_contents": "Grant Masan wrote:\n> Hi Hi all,\n> \n> I have now readed many many forums and tried many different solutions \n> and I am not getting good performance to database. My server is Debian \n> linux, with 4gb ram, there is also java application and I am giving to \n> that 512mb (JAVA_OPTS) memory. In database there is now like 4milj rows. \n> What should I do better.\n> Now .conf is:\n> \n> max_connections = 80\n> shared_buffers = 512MB\n> temp_buffers = 8MB\n> work_mem = 20MB\n> maintenance_work_mem = 384MB\n> wal_buffers = 8MB\n> checkpoint_segments = 128MB\n> effective_cache_size = 2304MB\n> cpu_tuple_cost = 0.0030\n> cpu_index_tuple_cost = 0.0010\n> cpu_operator_cost = 0.0005\n> fsync = off\n> checkpoint_timeout = 1h\n> \n> and I am giving kernels like:\n> \n> sysctl -w kernel.shmmax=1073741824\n> sysctl -w kernel.shmall=2097152\n> \n> btw, what file I should modify to give this kernels as defaults ?\n> \n> Thank you very much ! Hope you can clear my problem !\n\nYou have given almost no information that can be used to help you. In \nparticular, you seem to be mixing up Java performance and database \nperformance (JAVA_OPTS has nothing to do with pg performance).\n\nHow do you know your performance is low? What is your hardware, what \nperformance do you get and what do you expect?\n\nIf after this you are still convinced the problem is database-related, \nyou will probably need to run a tool like \nhttp://pqa.projects.postgresql.org/ to search which queries are slow and \nthen start analyzing each particular query.\n\nIn short, there is no magical answer to your question :)\n\n", "msg_date": "Mon, 02 Nov 2009 15:22:03 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with database performance, Debian 4gb ram ?" }, { "msg_contents": "2009/11/2 Grant Masan <[email protected]>\n\n> Size is \"6154 MB\". I have checked all queries, and those are as good as\n> they can be in this situation. You think that this confs doesn't make really\n> no difference at all ?\n>\n> you gotta hit 'reply all' next time ;)\nconfiguration makes difference, but you need to know what is your issue\nbefore you start changing stuff randomly.\n\n\ntry using pg_tune , that should give you good config for your hardware.\n\n\n-- \nGJ\n\n2009/11/2 Grant Masan <[email protected]>\nSize is \"6154 MB\". I have checked all queries, and those are as good as they can be in this situation. You think that this confs doesn't make really no difference at all ?you gotta hit 'reply all' next time ;)\nconfiguration makes difference, but you need to know what is your issue before you start changing stuff randomly.try using pg_tune , that should give you good config for your hardware. \n-- GJ", "msg_date": "Mon, 2 Nov 2009 14:33:52 +0000", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Problem with database performance, Debian 4gb ram ?" }, { "msg_contents": "Grant Masan <[email protected]> wrote:\n \n> max_connections = 80\n> shared_buffers = 512MB\n> temp_buffers = 8MB\n> work_mem = 20MB\n> maintenance_work_mem = 384MB\n> wal_buffers = 8MB\n> checkpoint_segments = 128MB\n> effective_cache_size = 2304MB\n> checkpoint_timeout = 1h\n \nPending further information, these seem sane to me.\n \n> cpu_tuple_cost = 0.0030\n> cpu_index_tuple_cost = 0.0010\n> cpu_operator_cost = 0.0005\n \nWhy did you make these adjustments? 
I usually have to change the\nratio between page and cpu costs toward the other direction. Unless\nyou have carefully measured performance with and without these changes\nand found a clear win with these, I would recommend going back to the\ndefaults for these three and tuning from there.\n \n> fsync = off\n \nOnly use this if you can afford to lose all data in the database. \n(There are some situations where this is actually OK, but they are\nunusual.)\n \nAs others have stated, though, we'd need more information to really\ngive much useful advice. An EXPLAIN ANALYZE of a query which isn't\nperforming to expectations would be helpful, especially if you include\nthe table definitions (with indexes) of all tables involved in the\nquery. Output from vmstat or iostat with a fairly small interval (I\nusually use 1) while the query is running would be useful, too.\n \nKnowing the exact version of PostgreSQL (like from SELECT version();)\nwould be useful, as well as knowing more about you disk array and\ncontroller(s).\n \n-Kevin\n", "msg_date": "Tue, 03 Nov 2009 09:13:43 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with database performance, Debian 4gb\n\t ram ?" }, { "msg_contents": "On Tue, Nov 3, 2009 at 7:13 AM, Kevin Grittner\n<[email protected]> wrote:\n> Grant Masan <[email protected]> wrote:\n>\n>\n>> cpu_tuple_cost = 0.0030\n>> cpu_index_tuple_cost = 0.0010\n>> cpu_operator_cost = 0.0005\n>\n> Why did you make these adjustments? I usually have to change the\n> ratio between page and cpu costs toward the other direction.\n\nIs that because the database is mostly cached in memory? If I take the\ndocumented descriptions of the costs parameters at face value, I find\nthat cpu_tuple_cost should be even lower yet.\n\n\nCheer,\n\nJeff\n", "msg_date": "Tue, 3 Nov 2009 09:23:33 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with database performance, Debian 4gb ram ?" }, { "msg_contents": "Jeff Janes <[email protected]> wrote:\n> On Tue, Nov 3, 2009 at 7:13 AM, Kevin Grittner\n> <[email protected]> wrote:\n>> Grant Masan <[email protected]> wrote:\n>>\n>>\n>>> cpu_tuple_cost = 0.0030\n>>> cpu_index_tuple_cost = 0.0010\n>>> cpu_operator_cost = 0.0005\n>>\n>> Why did you make these adjustments? I usually have to change the\n>> ratio between page and cpu costs toward the other direction.\n> \n> Is that because the database is mostly cached in memory? If I take\n> the documented descriptions of the costs parameters at face value, I\n> find that cpu_tuple_cost should be even lower yet.\n \nRight, the optimizer doesn't model caching effects very well, so I\nfind that in practice I have to fudge these from their putative\nmeanings to allow for typical caching. Even with only a small\nfraction of the database cached, the heavily accessed indexes tend to\nbe fairly well cached, so overall performance improves markedly by\ndropping random_page_cost to about 2, even in our lowest-percentage-\ncached databases.\n \nI've occasionally tried using the defaults for that GUC, which has\nalways resulted in user complaints about unacceptable performance of\nimportant queries. 
While I tend to reduce the random_page_cost and\nseq_page_cost to tweak things, raising the cpu_*_cost settings would\naccomplish the same thing, so reducing them as show above would tend\nto push things into sequential scans where indexed access might be\nfaster.\n \n-Kevin\n", "msg_date": "Tue, 03 Nov 2009 11:54:14 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with database performance, Debian 4gb\n\t ram ?" }, { "msg_contents": "Please keep the list copied.\n \nGrant Masan <[email protected]> wrote:\n \n> CREATE FUNCTION ... RETURNS SETOF ...\n \n> FOR ... IN SELECT ... LOOP\n \n> FOR ... IN SELECT ... LOOP\n \n> FOR ... IN SELECT ... LOOP\n> \n> RETURN NEXT text_output;\n> \n> END LOOP;\n> END LOOP;\n> END LOOP;\n \nI don't have time to work through the logic of all this to try to\ndiscern what your goal is; but in my experience, such procedural code\ncan usually be rewritten as a single query. The results are typically\norders of magnitude better.\n \n> SELECT * FROM info_tool(linest,date,date)\n \n> \"Function Scan on info_tool (cost=0.00..260.00 rows=1000 width=108)\n> (actual time=437712.611..437712.629 rows=14 loops=1)\"\n> \"Total runtime: 437712.686 ms\"\n \nTo get useful information you need EXPLAIN ANALYZE from statements\ninside the function, not of the execution of the function.\n \n-Kevin\n", "msg_date": "Fri, 06 Nov 2009 09:26:27 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with database performance, Debian 4gb\n\t ram ?" } ]
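Regarding the advice above that EXPLAIN ANALYZE is needed for the statements inside the function rather than for the function call itself: if the server is 8.4 or newer, the contrib module auto_explain can log the plans of nested statements without editing the function. A sketch, with the settings chosen only for illustration and the info_tool() call copied as written in the thread (LOAD may require superuser on older releases):

LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;        -- log every statement in this session
SET auto_explain.log_analyze = on;            -- include actual times and row counts
SET auto_explain.log_nested_statements = on;  -- statements executed inside PL/pgSQL

-- The plans of the SELECTs inside the function end up in the server log.
SELECT * FROM info_tool(linest,date,date);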
[ { "msg_contents": "Tom Lane <[email protected]> wrote:\n \n> Any sane text search application is going to try to filter out\n> common words as stopwords; it's only the failure to do that that's\n> making this run slow.\n \nImagine a large table with a GIN index on a tsvector. The user wants\na particular document, and is sure four words are in it. One of them\nonly appears in 100 documents. The other three each appear in about\na third of the documents. Is it more sane to require the user to\nwait for a table scan or to make them wade through 100 rows rather\nthan four?\n \nI'd rather have the index used for the selective test, and apply the\nremaining tests to the rows retrieved from the heap.\n \n-Kevin\n", "msg_date": "Mon, 02 Nov 2009 16:06:00 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queryplan within FTS/GIN index -search." }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Tom Lane <[email protected]> wrote:\n>> Any sane text search application is going to try to filter out\n>> common words as stopwords; it's only the failure to do that that's\n>> making this run slow.\n \n> I'd rather have the index used for the selective test, and apply the\n> remaining tests to the rows retrieved from the heap.\n\nUh, that was exactly my point. Indexing common words is a waste.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Nov 2009 19:18:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queryplan within FTS/GIN index -search. " }, { "msg_contents": "Tom Lane <[email protected]> wrote:\n> \"Kevin Grittner\" <[email protected]> writes:\n>> Tom Lane <[email protected]> wrote:\n>>> Any sane text search application is going to try to filter out\n>>> common words as stopwords; it's only the failure to do that that's\n>>> making this run slow.\n> \n>> I'd rather have the index used for the selective test, and apply\n>> the remaining tests to the rows retrieved from the heap.\n> \n> Uh, that was exactly my point. Indexing common words is a waste.\n \nPerhaps I'm missing something. My point was that there are words\nwhich are too common to be useful for index searches, yet uncommon\nenough to usefully limit the results. These words could typically\nbenefit from tsearch2 style parsing and dictionaries; so declaring\nthem as stop words would be bad from a functional perspective, yet\nsearching an index for them would be bad from a performance\nperspective.\n \nOne solution would be for the users to rigorously identify all of\nthese words, include them on one stop word list but not another,\ninclude *two* tsvector columns in the table (with and without the\n\"iffy\" words), index only the one with the larger stop word list, and\ngenerate two tsquery values to search the two different columns. Best\nof both worlds. Sort of. The staff time to create and maintain such\na list would obviously be costly and writing the queries would be\nerror-prone.\n \nSecond best would be to somehow recognize the \"iffy\" words and exclude\nthem from the index and the index search phase, but apply the check\nwhen the row is retrieved from the heap. I really have a hard time\nseeing how the conditional exclusion from the index could be\naccomplished, though. Next best would be to let them fall into the\nindex, but exclude top level ANDed values from the index search,\napplying them only to the recheck when the row is read from the heap. 
\nThe seems, at least conceptually, like it could be done.\n \n-Kevin\n", "msg_date": "Tue, 03 Nov 2009 08:56:53 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queryplan within FTS/GIN index -search." }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Perhaps I'm missing something. My point was that there are words\n> which are too common to be useful for index searches, yet uncommon\n> enough to usefully limit the results. These words could typically\n> benefit from tsearch2 style parsing and dictionaries; so declaring\n> them as stop words would be bad from a functional perspective, yet\n> searching an index for them would be bad from a performance\n> perspective.\n\nRight, but the original complaint in this thread was that a GIN index is\nslow about searching for very common terms. The answer to that clearly\nis to not index common terms, rather than worry about making the case\na bit faster.\n\nIt may well be that Jesper's identified a place where the GIN code could\nbe improved --- it seems like having the top-level search logic be more\naware of the AND/OR structure of queries would be useful. But the\nparticular example shown here doesn't make a very good case for that,\nbecause it's hard to tell how much of a penalty would be taken in more\nrealistic examples.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Nov 2009 10:35:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queryplan within FTS/GIN index -search. " }, { "msg_contents": "Tom Lane <[email protected]> wrote:\n \n> The answer to that clearly is to not index common terms\n \nMy understanding is that we don't currently get statistics on how\ncommon the terms in a tsvector column are until we ANALYZE the *index*\ncreated from it. Seems like sort of a Catch 22. Also, if we exclude\nwords which are in the tsvector from the index on the tsvector, we\nneed to know what words were excluded so we know not to search on them\nas well as forcing the recheck of the full tsquery (unless this always\nhappens already?).\n \n> It may well be that Jesper's identified a place where the GIN code\n> could be improved\n \nMy naive assumption has been that it would be possible to get an\nimprovement without touching the index logic, by changing this part of\nthe query plan:\n \n Index Cond: (ftsbody_body_fts @@ to_tsquery\n('TERM1 & TERM2 & TERM3 & TERM4 & TERM5'::text))\n \nto something like this:\n \n Index Cond: (ftsbody_body_fts @@ to_tsquery\n('TERM1'::text))\n \nand count on this doing the rest:\n \n Recheck Cond: (ftsbody_body_fts @@ to_tsquery\n('TERM1 & TERM2 & TERM3 & TERM4 & TERM5'::text))\n \nI'm wondering if anyone has ever confirmed that probing for the more\nfrequent term through the index is *ever* a win, versus using the\nindex for the most common of the top level AND conditions and doing\nthe rest on recheck. That seems like a dangerous assumption from\nwhich to start.\n \n> But the particular example shown here doesn't make a very good case\n> for that, because it's hard to tell how much of a penalty would be\n> taken in more realistic examples.\n \nFair enough. We're in the early stages of moving to tsearch2 and I\nhaven't run across this yet in practice. If I do, I'll follow up.\n \n-Kevin\n", "msg_date": "Tue, 03 Nov 2009 10:49:31 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queryplan within FTS/GIN index -search." 
}, { "msg_contents": "\"Kevin Grittner\" <[email protected]> wrote:\n \n> I'm wondering if anyone has ever confirmed that probing for the more\n> frequent term through the index is *ever* a win, versus using the\n> index for the most common of the top level AND conditions and doing\n> the rest on recheck.\n \ns/most/least/\n \n-Kevin\n", "msg_date": "Tue, 03 Nov 2009 11:03:43 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queryplan within FTS/GIN index -search." }, { "msg_contents": "Tom Lane wrote:\n> It may well be that Jesper's identified a place where the GIN code could\n> be improved --- it seems like having the top-level search logic be more\n> aware of the AND/OR structure of queries would be useful. But the\n> particular example shown here doesn't make a very good case for that,\n> because it's hard to tell how much of a penalty would be taken in more\n> realistic examples.\n\nWith a term sitting in:\n80% of the docs the penalty is: x23\n60% of the docs the penalty is: x17\n40% of the docs the penalty is: x13\nof doing\nvectorcol @@ ts_query('term & commonterm')\ncompared to\nvectorcol @@ ts_query('term) and vectorcol @@ ts_query('commonterm');\nwhere term is non-existing (or rare).\n\n(in query execution performance on a fully memory recident dataset,\ndoing test with \"drop_caches\" and restart pg to simulate a dead disk the\nnumbers are a bit higher).\n\nhttp://article.gmane.org/gmane.comp.db.postgresql.performance/22496/match=\n\nWould you ever quantify a term sitting in 60-80% as a stop-word candidate?\n\nI dont know if x13 in execution performance is worth hunting or there\nare lower hanging fruits sitting in the fts-search-system.\n\nThis is essentially the penalty the user will get for adding a terms to\ntheir search that rarely restricts the results.\n\nIn term of the usual \"set theory\" that databases work in, a search for a\nstop-word translated into the full set. This is just not the case in\nwhere it throws a warning and returns the empty set. This warning can be\ncaught by application code to produce the \"correct\" result to the users,\nbut just slightly more complex queries dont do this:\n\nftstest=# select id from ftstest where body_fts @@ to_tsquery('random |\nthe') limit 10;\n id\n----\n(0 rows)\n\nHere I would have expected the same error.. 
I basically have to hook in\nthe complete stop-word dictionary in a FTS-preparser to give the user\nthe expected results or have I missed a feature somwhere?\n\nMy reason for not pushing \"commonterms\" into the stopword list is that\nthey actually perform excellent in PG.\n\nSame body as usual, but commonterm99 is sitting in 99% of the documents.\n\nftstest=# set enable_seqscan=off;\nSET\nftstest=# explain analyze select id from ftstest where body_fts @@\nto_tsquery('commonterm99');\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on ftstest (cost=1051476.74..1107666.07 rows=197887\nwidth=4) (actual time=51.036..121.348 rows=197951 loops=1)\n Recheck Cond: (body_fts @@ to_tsquery('commonterm99'::text))\n -> Bitmap Index Scan on ftstest_gin_idx (cost=0.00..1051427.26\nrows=197887 width=0) (actual time=49.602..49.602 rows=197951 loops=1)\n Index Cond: (body_fts @@ to_tsquery('commonterm99'::text))\n Total runtime: 147.350 ms\n(5 rows)\n\nftstest=# set enable_seqscan=on;\nSET\nftstest=# explain analyze select id from ftstest where body_fts @@\nto_tsquery('commonterm99');\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------\n Seq Scan on ftstest (cost=0.00..56744.00 rows=197887 width=4) (actual\ntime=0.086..7134.384 rows=197951 loops=1)\n Filter: (body_fts @@ to_tsquery('commonterm99'::text))\n Total runtime: 7194.182 ms\n(3 rows)\n\n\n\nSo in order to get the result with a speedup of more than x50 I simply\ncannot add these terms to the stop-words because then the first query\nwould resolve to an error and getting results would then be up to the\nsecond query.\n\nMy bet is that doing a seq_scan will \"never\" be beneficial for this type\nof query.\n\nAs far as I can see the only consequence of simply not remove stop-words\nat all is a (fairly small) increase in index-size. It seems to me that\nstop-words were invented when it was hard to get more than 2GB of memory\ninto a computer to get the index-size reduced to a size that better\ncould fit into memory. But nowadays it seems like the downsides are hard\nto see?\n\nJesper\n-- \nJesper\n", "msg_date": "Tue, 03 Nov 2009 19:36:13 +0100", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queryplan within FTS/GIN index -search." }, { "msg_contents": "I wrote:\n> Tom Lane <[email protected]> wrote:\n \n>> But the particular example shown here doesn't make a very good case\n>> for that, because it's hard to tell how much of a penalty would be\n>> taken in more realistic examples.\n> \n> Fair enough. We're in the early stages of moving to tsearch2 and I\n> haven't run across this yet in practice. If I do, I'll follow up.\n \nWe have a staging database which allowed some limited testing quickly.\nWhile it's real production data, we haven't been gathering this type\nof data long, so it's got relatively few rows; therefore, it wasn't\nfeasible to try any tests which would be disk-bound, so I primed the\ncache for all of these, and they are all totally served from cache. 
\nFor various reasons which I'll omit unless asked, we do our text\nsearches through functions which take a \"selection string\", turn it\ninto a tsquery with a little extra massaging on our part, run the\nquery with a minimum ranking to return, and return a set of records\nordered by the ranking in descending sequence.\n \nUnder these conditions there is a slight performance gain in adding an\nadditional test which matches 1356 out of 1691 rows. Not surprisingly\nfor a fully cached query set, timings were very consistent from run to\nrun. While undoubtedly a little unusual in approach, this is\nproduction software run against real-world data. I confirmed that it\nis using the GIN index on the tsvector for these runs.\n \nBy the way, the tsearch2 features have been received very well so\nfar. One of the first reactions from most users is surprise at how\nfast it is. :-) Anyway, our production results don't confirm the\nissue shown with the artificial test data.\n \n \nscca=> select count(*) from \"DocThumbnail\" where \"text\" is not null;\n count\n-------\n 1691\n(1 row)\n\nTime: 0.619 ms\n\n\nscca=> select count(*) from (select \"DocThumbnail_text_rank\"('guardian\nad litem', 0.1)) x;\n count\n-------\n 41\n(1 row)\n\nTime: 19.394 ms\n\n\nscca=> select count(*) from (select \"DocThumbnail_text_rank\"('guardian\nad litem attorney', 0.1)) x;\n count\n-------\n 4\n(1 row)\n\nTime: 16.434 ms\n\n\nscca=> select count(*) from (select\n\"DocThumbnail_text_rank\"('attorney', 0.1)) x;\n count\n-------\n 1356\n(1 row)\n\nTime: 415.056 ms\n\n\nscca=> select count(*) from (select \"DocThumbnail_text_rank\"('guardian\nad litem party', 0.1)) x;\n count\n-------\n 2\n(1 row)\n\nTime: 16.290 ms\n\n\nscca=> select count(*) from (select \"DocThumbnail_text_rank\"('party',\n0.1)) x;\n count\n-------\n 935\n(1 row)\n\nTime: 386.941 ms\n\n\n-Kevin\n", "msg_date": "Tue, 03 Nov 2009 15:50:58 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queryplan within FTS/GIN index -search." } ]
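The rewrite benchmarked above can be expressed directly against the ftstest table from the thread; commonterm is the term used there, while 'rareterm' is an invented stand-in for a selective term. A sketch for comparing the two forms:

-- Whole tsquery pushed into the GIN index search.
EXPLAIN ANALYZE
SELECT id FROM ftstest
WHERE body_fts @@ to_tsquery('rareterm & commonterm');

-- Split form from the thread: gives the planner the option of driving the
-- index scan with the selective term and filtering on the common one afterwards.
EXPLAIN ANALYZE
SELECT id FROM ftstest
WHERE body_fts @@ to_tsquery('rareterm')
  AND body_fts @@ to_tsquery('commonterm');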
[ { "msg_contents": "Does/is it possible for the PG optimizer come up with differnet plans when \nyou're using bind variables vs when you send static values?\n\nlike if my query was\n\nselect * from users (add a bunch of complex joins) where username = 'dave'\nvs\nselect * from users (add a bunch of complex joins) where username = '?'\n\nIn oracle they are frequently different.\n\nif it's possible for the plan to be different how can i generate an\nxplan for the bind version?\n\nThanks!\n\nDave\n", "msg_date": "Tue, 3 Nov 2009 09:47:29 -0800", "msg_from": "David Kerr <[email protected]>", "msg_from_op": true, "msg_subject": "Optimizer + bind variables" }, { "msg_contents": "David Kerr wrote:\n> Does/is it possible for the PG optimizer come up with differnet plans when \n> you're using bind variables vs when you send static values?\n\nYes, if the bind variable form causes your DB access driver to use a\nserver-side prepared statement. Pg can't use its statistics to improve\nits query planning if it doesn't have a value for a parameter when it's\nbuilding the query plan.\n\nWhether a server-side prepared statement is used or not depends on how\nyou're connecting to the database - ie your DB access driver and\nversion. If you're using JDBC, I *think* the JDBC driver does parameter\nplacement client-side unless you're using a JDBC prepared statement and\nthe JDBC prepared statement is re-used several times, at which point it\nsets up a server-side prepared statement. AFAIK otherwise it uses\nclient-side (or Pg protocol level) parameter placement.\n\n> if it's possible for the plan to be different how can i generate an\n> xplan for the bind version?\n\nxplan = explain? If so:\n\nUse PREPARE to prepare a statement with the params, then use:\n\nEXPLAIN EXECUTE prepared_statement_name(params);\n\neg:\n\nx=> PREPARE blah AS SELECT * FROM generate_series(1,100);\nPREPARE\nx=> EXPLAIN EXECUTE blah;\n QUERY PLAN\n------------------------------------------------------------------------\n Function Scan on generate_series (cost=0.00..12.50 rows=1000 width=4)\n(1 row)\n\n--\nCraig Ringer\n", "msg_date": "Wed, 04 Nov 2009 07:43:16 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer + bind variables" }, { "msg_contents": "On Wed, Nov 04, 2009 at 07:43:16AM +0800, Craig Ringer wrote:\n- David Kerr wrote:\n- > Does/is it possible for the PG optimizer come up with differnet plans when \n- > you're using bind variables vs when you send static values?\n- \n- Yes, if the bind variable form causes your DB access driver to use a\n- server-side prepared statement. Pg can't use its statistics to improve\n- its query planning if it doesn't have a value for a parameter when it's\n- building the query plan.\n\nhmm, that's a little unclear to me.\n\nlet's assume that the application is using prepare:\n\nAssuming the database hasn't changed, would:\nPREPARE bla1 as SELECT * from users where username = '$1';\nexplain execute bla1\n\ngive the same output as\nexplain select * from users where username = 'dave';\n\n?\n\n- Whether a server-side prepared statement is used or not depends on how\n- you're connecting to the database - ie your DB access driver and\n- version. If you're using JDBC, I *think* the JDBC driver does parameter\n- placement client-side unless you're using a JDBC prepared statement and\n- the JDBC prepared statement is re-used several times, at which point it\n- sets up a server-side prepared statement. 
AFAIK otherwise it uses\n- client-side (or Pg protocol level) parameter placement.\nthat's interesting, i'll need to find out which mine are using, probably\na mix of both.\n\n- > if it's possible for the plan to be different how can i generate an\n- > xplan for the bind version?\n- \n- xplan = explain? If so:\nyeah, sorry.\n\n- Use PREPARE to prepare a statement with the params, then use:\n- \n- EXPLAIN EXECUTE prepared_statement_name(params);\n- \n- eg:\n- \n- x=> PREPARE blah AS SELECT * FROM generate_series(1,100);\n- PREPARE\n- x=> EXPLAIN EXECUTE blah;\n- QUERY PLAN\n- ------------------------------------------------------------------------\n- Function Scan on generate_series (cost=0.00..12.50 rows=1000 width=4)\n- (1 row)\n\ngreat thanks!\n\nDave\n", "msg_date": "Tue, 3 Nov 2009 15:52:46 -0800", "msg_from": "David Kerr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimizer + bind variables" }, { "msg_contents": "David Kerr wrote:\n> On Wed, Nov 04, 2009 at 07:43:16AM +0800, Craig Ringer wrote:\n> - David Kerr wrote:\n> - > Does/is it possible for the PG optimizer come up with differnet plans when \n> - > you're using bind variables vs when you send static values?\n> - \n> - Yes, if the bind variable form causes your DB access driver to use a\n> - server-side prepared statement. Pg can't use its statistics to improve\n> - its query planning if it doesn't have a value for a parameter when it's\n> - building the query plan.\n> \n> hmm, that's a little unclear to me.\n> \n> let's assume that the application is using prepare:\n> \n> Assuming the database hasn't changed, would:\n> PREPARE bla1 as SELECT * from users where username = '$1';\n> explain execute bla1\n> \n> give the same output as\n> explain select * from users where username = 'dave';\n> \n> ?\n\nNo.\n\nThis is explained in the notes here:\n\nhttp://www.postgresql.org/docs/current/static/sql-prepare.html\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n\n", "msg_date": "Wed, 04 Nov 2009 11:02:22 +1100", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer + bind variables" }, { "msg_contents": "On Wed, Nov 04, 2009 at 11:02:22AM +1100, Chris wrote:\n- David Kerr wrote:\n- >On Wed, Nov 04, 2009 at 07:43:16AM +0800, Craig Ringer wrote:\n- >- David Kerr wrote:\n- No.\n- \n- This is explained in the notes here:\n- \n- http://www.postgresql.org/docs/current/static/sql-prepare.html\n\n<sigh> and i've read that before too.\n\nOn the upside, then it behaves like I would expect it to, which is\ngood.\n\nThanks\n\nDave\n", "msg_date": "Tue, 3 Nov 2009 16:18:25 -0800", "msg_from": "David Kerr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimizer + bind variables" }, { "msg_contents": "\n\n\nOn 11/3/09 4:18 PM, \"David Kerr\" <[email protected]> wrote:\n\n> On Wed, Nov 04, 2009 at 11:02:22AM +1100, Chris wrote:\n> - David Kerr wrote:\n> - >On Wed, Nov 04, 2009 at 07:43:16AM +0800, Craig Ringer wrote:\n> - >- David Kerr wrote:\n> - No.\n> -\n> - This is explained in the notes here:\n> -\n> - http://www.postgresql.org/docs/current/static/sql-prepare.html\n> \n> <sigh> and i've read that before too.\n> \n> On the upside, then it behaves like I would expect it to, which is\n> good.\n> \n> Thanks\n> \n> Dave\n\nNote that the query plan can often be the same for the example here.\n\nIt depends on whether the knowledge of the exact value makes a difference.\n\nThe most common case is an identifier column.\nIf the column is unique and indexed, and the 
parameter is an exact = match\nin the where clause to that column, the plans won't differ.\n\n\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Wed, 4 Nov 2009 21:47:01 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer + bind variables" } ]
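To see the behaviour described above for oneself, the prepared and literal forms can be compared side by side; a sketch reusing the hypothetical users/username example from the thread:

-- Plan built without a known value: generic selectivity estimates are used.
PREPARE by_name(text) AS
  SELECT * FROM users WHERE username = $1;
EXPLAIN EXECUTE by_name('dave');

-- Plan built with the literal: the planner can use the column statistics for 'dave'.
EXPLAIN SELECT * FROM users WHERE username = 'dave';

DEALLOCATE by_name;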
[ { "msg_contents": "All,\n \nI'm trying to understand the free memory usage and why it falls below\n17G sometimes and what could be causing it. Any pointers would be\nappreciated.\n \nroot@prod1 # prtconf\nSystem Configuration: Sun Microsystems sun4u\nMemory size: 32768 Megabytes\n \n[postgres@prod1 ~]$ vmstat 5 10\n kthr memory page disk faults\ncpu\n r b w swap free re mf pi po fr de sr 1m 1m 1m m1 in sy cs us\nsy id\n 0 0 0 51713480 21130304 58 185 325 104 104 0 0 23 3 7 1 488 604 573 1\n2 97\n 0 0 0 51048768 18523456 6 10 0 192 192 0 0 4 0 3 0 527 753 807 2\n1 97\n 0 0 0 51713480 21130304 58 185 325 104 104 0 0 1 23 3 7 488 604 573 1\n2 97\n 0 0 0 51067112 18538472 0 1 0 171 171 0 0 4 8 0 4 522 573 740 2\n1 97\n 0 0 0 51072744 18542992 0 0 0 187 187 0 0 0 22 0 7 532 657 780 2\n1 97\n 0 0 0 51069944 18540736 146 1729 3 174 174 0 0 0 9 0 3 526 3227 944 4\n5 91\n 0 0 0 51065728 18537360 32 33 0 192 192 0 0 0 20 0 3 522 1147 927 3\n2 95\n 0 0 0 51065728 18537336 0 0 0 190 190 0 0 0 26 0 3 517 628 789 2\n1 97\n 0 0 0 51065728 18537336 0 0 0 168 168 0 0 0 25 0 11 517 668 810 2\n2 96\n 0 0 0 51062960 18535152 0 165 2 190 190 0 0 14 29 0 4 552 732 808 2\n1 97\n \nprstat -am\n \n NPROC USERNAME SWAP RSS MEMORY TIME CPU\n 21 postgres 8312M 8300M 25% 112:24:15 2.1%\n 53 root 347M 236M 0.7% 130:52:02 0.1%\n 7 daemon 708M 714M 2.2% 21:53:05 0.0%\n 4 mot 5552K 15M 0.0% 0:00:00 0.0%\n 1 smmsp 1384K 5480K 0.0% 0:00:59 0.0%\n\n\n[postgres@prod1]$ ps -eaf | grep postgres | wc -l\n24\n \nmax_connections = 600\nshared_buffers = 8000MB \ntemp_buffers = 8MB\nwork_mem = 2MB \nmaintenance_work_mem = 256MB \nmax_fsm_pages = 2048000\nmax_fsm_relations = 2000\neffective_cache_size = 4000MB\n \nThanks,\nStalin\n\n \n \n\n\n\n\n\nAll,\n \nI'm trying to \nunderstand the free memory usage and why it falls below 17G sometimes and what \ncould be causing it. 
Any pointers would be appreciated.\n \nroot@prod1 # prtconfSystem Configuration:  \nSun Microsystems  sun4uMemory size: 32768 Megabytes\n \n[postgres@prod1 ~]$ \nvmstat 5 10 kthr      \nmemory            \npage            \ndisk          \nfaults      cpu r b w   swap  \nfree  re  mf pi po fr de sr 1m 1m 1m m1   in   \nsy   cs us sy id 0 0 0 51713480 21130304 58 185 325 104 104 0 \n0 23 3 7 1 488 604  573  1  2 97 0 0 0 51048768 18523456 \n6 10 0 192 192 0 0 4  0  3  0  527  753  807  \n2  1 97 0 0 0 51713480 21130304 58 185 325 104 104 0 0 1 23 3 7 \n488 604  573  1  2 97 0 0 0 51067112 18538472 0 1 0 171 \n171 0 0  4  8  0  4  522  573  740  \n2  1 97 0 0 0 51072744 18542992 0 0 0 187 187 0 0  0 22  \n0  7  532  657  780  2  1 97 0 0 0 \n51069944 18540736 146 1729 3 174 174 0 0 0 9 0 3 526 3227  944  \n4  5 91 0 0 0 51065728 18537360 32 33 0 192 192 0 0 0 20 0  \n3  522 1147  927  3  2 95 0 0 0 51065728 18537336 0 \n0 0 190 190 0 0  0 26  0  3  517  628  789  \n2  1 97 0 0 0 51065728 18537336 0 0 0 168 168 0 0  0 25  \n0 11  517  668  810  2  2 96 0 0 0 51062960 \n18535152 0 165 2 190 190 0 0 14 29 0 4  552  732  808  \n2  1 97\n \nprstat \n-am\n \n NPROC \nUSERNAME  SWAP   RSS MEMORY      \nTIME  CPU    21 postgres 8312M 8300M    \n25% 112:24:15 2.1%    53 root      \n347M  236M   0.7% 130:52:02 0.1%     7 \ndaemon    708M  714M   2.2%  21:53:05 \n0.0%     4 mot      \n5552K   15M   0.0%   0:00:00 \n0.0%     1 smmsp    1384K \n5480K   0.0%   0:00:59 0.0%\n[postgres@prod1]$ ps -eaf | grep postgres | wc \n-l24\n \nmax_connections = \n600shared_buffers = 8000MB temp_buffers = 8MBwork_mem = \n2MB                          \nmaintenance_work_mem = \n256MB            \nmax_fsm_pages = 2048000max_fsm_relations = 2000effective_cache_size \n= 4000MB\n \nThanks,\nStalin", "msg_date": "Tue, 3 Nov 2009 14:16:26 -0500", "msg_from": "\"Subbiah Stalin-XCGF84\" <[email protected]>", "msg_from_op": true, "msg_subject": "Free memory usage Sol10, 8.2.9" }, { "msg_contents": "On 11/03/2009 07:16 PM, Subbiah Stalin-XCGF84 wrote:\n> All,\n>\n> I'm trying to understand the free memory usage and why it falls below\n> 17G sometimes and what could be causing it. Any pointers would be\n> appreciated.\n>\n> root@prod1 # prtconf\n> System Configuration: Sun Microsystems sun4u\n> Memory size: 32768 Megabytes\n>\n> [postgres@prod1 ~]$ vmstat 5 10\n> kthr memory page disk faults\n> cpu\n> r b w swap free re mf pi po fr de sr 1m 1m 1m m1 in sy cs us\n> sy id\n> 0 0 0 51713480 21130304 58 185 325 104 104 0 0 23 3 7 1 488 604 573 1\n> 2 97\n> 0 0 0 51048768 18523456 6 10 0 192 192 0 0 4 0 3 0 527 753 807 2\n> 1 97\n\nMemory used by the OS for caching files is no longer free.\nFree memory is wasted memory.\n-J\n", "msg_date": "Tue, 03 Nov 2009 20:42:56 +0000", "msg_from": "Jeremy Harris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Free memory usage Sol10, 8.2.9" }, { "msg_contents": "Jeremy Harris wrote:\n> On 11/03/2009 07:16 PM, Subbiah Stalin-XCGF84 wrote:\n>> All,\n>>\n>> I'm trying to understand the free memory usage and why it falls below\n>> 17G sometimes and what could be causing it. 
Any pointers would be\n>> appreciated.\n>>\n>> root@prod1 # prtconf\n>> System Configuration: Sun Microsystems sun4u\n>> Memory size: 32768 Megabytes\n>>\n>> [postgres@prod1 ~]$ vmstat 5 10\n>> kthr memory page disk faults\n>> cpu\n>> r b w swap free re mf pi po fr de sr 1m 1m 1m m1 in sy cs us\n>> sy id\n>> 0 0 0 51713480 21130304 58 185 325 104 104 0 0 23 3 7 1 488 604 573 1\n>> 2 97\n>> 0 0 0 51048768 18523456 6 10 0 192 192 0 0 4 0 3 0 527 753 807 2\n>> 1 97\n> \n> Memory used by the OS for caching files is no longer free.\n> Free memory is wasted memory.\n\nTo finish the thought: memory used by OS for caching files will be \nautomatically given to applications that need more memory so it is \"kind \nof\" free memory.\n\nIn your case, you really do have 17-18G unused memory which is \npractically wasted.\n\n", "msg_date": "Wed, 04 Nov 2009 11:35:23 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Free memory usage Sol10, 8.2.9" } ]
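A practical follow-up to the replies above, for a box like the one posted (32GB RAM, shared_buffers = 8000MB, effective_cache_size = 4000MB). This is only an illustrative sketch, not something stated in the thread: the 20GB figure assumes the host is mostly dedicated to PostgreSQL. effective_cache_size allocates nothing; it only tells the planner how much data is likely to be cached in shared_buffers plus the OS page cache, so the memory vmstat reports as "free" or file cache is not treated as unavailable when costing index scans.

    -- Settings quoted in the original post:
    SHOW shared_buffers;          -- 8000MB
    SHOW effective_cache_size;    -- 4000MB, low for a 32GB machine

    -- In postgresql.conf (then reload), something closer to the memory
    -- actually available as cache; the exact value is an assumption:
    --   effective_cache_size = 20GB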
[ { "msg_contents": "Hello All --\n\nI have a simple queuing application written on top of postgres which \nI'm trying to squeeze some more performance out of.\n\nThe setup is relatively simple: there is a central queue table in \npostgres. Worker daemons do a bounded, ordered, limited SELECT to \ngrab a row, which they lock by setting a value in the queue.status \ncolumn. When the task is complete, results are written back to the \nrow. The system is designed to allow multiple concurrent daemons to \naccess a queue. At any one time, we expect 1-5M active items on the \nqueue.\n\nNow this design is never going to win any performance awards against a \ntrue queuing system like Active/Rabbit/Zero MQ, but it's tolerably \nfast for our applications. Fetch/mark times are about 1ms, \nindependent of the number of items on the queue. This is acceptable \nconsidering that our tasks take ~50ms to run.\n\nHowever, the writing of results back to the row takes ~5ms, which is \nslower than I'd like. It seems that this is because I need to to do \nan index scan on the queue table to find the row I just fetched.\n\nMy question is this: is there some way that I can keep a cursor / \npointer / reference / whatever to the row I fetched originally, so \nthat I don't have to search for it again when I'm ready to write \nresults?\n\nThanks in advance for any pointers you can provide.\n\nBrian\n", "msg_date": "Tue, 3 Nov 2009 12:30:15 -0800", "msg_from": "Brian Karlak <[email protected]>", "msg_from_op": true, "msg_subject": "maintaining a reference to a fetched row" }, { "msg_contents": "Brian Karlak wrote:\n\n> The setup is relatively simple: there is a central queue table in\n> postgres. Worker daemons do a bounded, ordered, limited SELECT to grab\n> a row, which they lock by setting a value in the queue.status column. \n\nYou can probably do an UPDATE ... RETURNING to turn that into one\noperation - but that won't work with a cursor :-(\n\n> My question is this: is there some way that I can keep a cursor /\n> pointer / reference / whatever to the row I fetched originally, so that\n> I don't have to search for it again when I'm ready to write results?\n\nYou could use a cursor, but it won't work if you're locking rows by\ntesting a 'status' flag, because that requires the worker to commit the\ntransaction (so others can see the status flag) before starting work. A\ncursor only exists within a transaction.\n\nBEGIN;\nDECLARE curs CURSOR FOR SELECT * FROM queue ORDER BY queue_id LIMIT 1;\nFETCH NEXT FROM curs;\n--\n-- Set the status - but nobody else can see the change yet because we\n-- haven't committed! We'll have a Pg row lock on the record due to the\n-- UPDATE, preventing other UPDATEs but not other SELECTs.\n--\n-- We can't start work until the transaction commits, but committing\n-- will close the cursor.\n--\nUPDATE queue SET status = 1 WHERE CURRENT OF curs;\n\n\nI don't have a good answer for you there. Perhaps using Pg's locking to\ndo your queueing, rather than updating a status flag, might let you use\na cursor? Have a look at the list archives - there's been a fair bit of\ndiscussion of queuing mechanisms.\n\n--\nCraig Ringer\n", "msg_date": "Wed, 04 Nov 2009 08:03:48 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: maintaining a reference to a fetched row" }, { "msg_contents": "\nOn Nov 3, 2009, at 4:03 PM, Craig Ringer wrote:\n\n> I don't have a good answer for you there. 
Perhaps using Pg's locking \n> to\n> do your queueing, rather than updating a status flag, might let you \n> use\n> a cursor? Have a look at the list archives - there's been a fair bit \n> of\n> discussion of queuing mechanisms.\n\nThis is an interesting idea. I'll see what I can find in the \narchives. It will likely take a bit of refactoring, but such is \nlife ...\n\nThanks!\nBrian\n", "msg_date": "Tue, 3 Nov 2009 16:12:44 -0800", "msg_from": "Brian Karlak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: maintaining a reference to a fetched row" }, { "msg_contents": "Brian Karlak <[email protected]> writes:\n> My question is this: is there some way that I can keep a cursor / \n> pointer / reference / whatever to the row I fetched originally, so \n> that I don't have to search for it again when I'm ready to write \n> results?\n\nIf you don't expect any updates to the row meanwhile, ctid might serve.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 04 Nov 2009 00:31:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: maintaining a reference to a fetched row " }, { "msg_contents": "On Tue, Nov 3, 2009 at 12:30 PM, Brian Karlak <[email protected]> wrote:\n> Hello All --\n>\n> I have a simple queuing application written on top of postgres which I'm\n> trying to squeeze some more performance out of.\n>\n> The setup is relatively simple: there is a central queue table in postgres.\n> Worker daemons do a bounded, ordered, limited SELECT to grab a row, which\n> they lock by setting a value in the queue.status column.\n\nSo you do a select, and then an update?\n\n> When the task is\n> complete, results are written back to the row. The system is designed to\n> allow multiple concurrent daemons to access a queue. At any one time, we\n> expect 1-5M active items on the queue.\n>\n> Now this design is never going to win any performance awards against a true\n> queuing system like Active/Rabbit/Zero MQ, but it's tolerably fast for our\n> applications. Fetch/mark times are about 1ms, independent of the number of\n> items on the queue. This is acceptable considering that our tasks take\n> ~50ms to run.\n>\n> However, the writing of results back to the row takes ~5ms, which is slower\n> than I'd like.\n\nIt seems you have an select, and update, and another update. Where in\nthis process do you commit? Are you using fsync=off or\nsynchronous_commit=off?\n\n> It seems that this is because I need to to do an index scan\n> on the queue table to find the row I just fetched.\n\nWhy would the index scan take 1 ms two of the times it is done but 5ms\nthe third time? Isn't it the same index scan each time? Or does the\nchange in queue.status change the plan?\n\nCheers,\n\nJeff\n", "msg_date": "Wed, 4 Nov 2009 08:47:10 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: maintaining a reference to a fetched row" }, { "msg_contents": "\nOn Nov 3, 2009, at 9:31 PM, Tom Lane wrote:\n\n> Brian Karlak <[email protected]> writes:\n>> My question is this: is there some way that I can keep a cursor /\n>> pointer / reference / whatever to the row I fetched originally, so\n>> that I don't have to search for it again when I'm ready to write\n>> results?\n>\n> If you don't expect any updates to the row meanwhile, ctid might \n> serve.\n\nAhhh ... that's the magic I'm looking for. 
Thanks!\n\nBrian\n", "msg_date": "Wed, 4 Nov 2009 09:25:27 -0800", "msg_from": "Brian Karlak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: maintaining a reference to a fetched row " }, { "msg_contents": "On Nov 4, 2009, at 8:47 AM, Jeff Janes wrote:\n\n>> Worker daemons do a bounded, ordered, limited SELECT to grab a row, \n>> which\n>> they lock by setting a value in the queue.status column.\n>\n> So you do a select, and then an update?\n\nI do a select for update in a stored proc:\n\nFOR queue_item IN\n SELECT * FROM queue\n WHERE status IS NULL AND id >= low_bound_id\n ORDER BY id LIMIT batch_size\n FOR UPDATE\nLOOP\n UPDATE queue_proc set status = 'proc' where id = queue_item.id ;\n\nThe daemons keep track of their last position in the queue with \nlow_bound_id. Also, as you probably notice, I also fetch a batch of \n(100) items at a time. In practice, it's pretty fast. The job I'm \nrunning now is showing an average fetch time of 30ms per 100 actions, \nwhich ain't bad.\n\n>> However, the writing of results back to the row takes ~5ms, which \n>> is slower\n>> than I'd like.\n>\n> It seems you have an select, and update, and another update. Where in\n> this process do you commit? Are you using fsync=off or\n> synchronous_commit=off?\n\nFirst commit occurs after the stored proc to select/update a batch of \nitems is complete. Second commit occurs on the writing of results \nback for each particular action. Two commits are required because the \ntime it takes to complete the intervening action can vary wildly: \nanywhere between 20ms and 45min.\n\n>> It seems that this is because I need to to do an index scan\n>> on the queue table to find the row I just fetched.\n>\n> Why would the index scan take 1 ms two of the times it is done but 5ms\n> the third time? Isn't it the same index scan each time? Or does the\n> change in queue.status change the plan?\n\nThe final update is a different query -- just a plain old update by ID:\n\nUPDATE queue_proc set status = 'proc' where id = %s ;\n\nThis update by ID takes ~2.5ms, which means it's where the framework \nis spending most of its overhead.\n\nBrian\n\nOn Nov 4, 2009, at 8:47 AM, Jeff Janes wrote: Worker daemons do a bounded, ordered, limited SELECT to grab a row, whichthey lock by setting a value in the queue.status column.So you do a select, and then an update?I do a select for update in a stored proc:FOR queue_item IN    SELECT *  FROM queue   WHERE status IS NULL AND id >= low_bound_id   ORDER BY id LIMIT batch_size     FOR UPDATELOOP  UPDATE queue_proc set status = 'proc' where id = queue_item.id ;The daemons keep track of their last position in the queue with low_bound_id.  Also, as you probably notice, I also fetch a batch of (100) items at a time.  In practice, it's pretty fast.  The job I'm running now is showing an average fetch time of 30ms per 100 actions, which ain't bad.However, the writing of results back to the row takes ~5ms, which is slowerthan I'd like.It seems you have an select, and update, and another update.  Where inthis process do you commit?  Are you using fsync=off orsynchronous_commit=off?First commit occurs after the stored proc to select/update a batch of items is complete.  Second commit occurs on the writing of results back for each particular action.  
Two commits are required because the time it takes to complete the intervening action can vary wildly: anywhere between 20ms and 45min.It seems that this is because I need to to do an index scanon the queue table to find the row I just fetched.Why would the index scan take 1 ms two of the times it is done but 5msthe third time?  Isn't it the same index scan each time?  Or does thechange in queue.status change the plan?The final update is a different query -- just a plain old update by ID:UPDATE queue_proc set status = 'proc' where id = %s ;This update by ID takes ~2.5ms, which means it's where the framework is spending most of its overhead.Brian", "msg_date": "Wed, 4 Nov 2009 09:41:54 -0800", "msg_from": "Brian Karlak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: maintaining a reference to a fetched row" }, { "msg_contents": "Brian Karlak <[email protected]> writes:\n> On Nov 4, 2009, at 8:47 AM, Jeff Janes wrote:\n>> Why would the index scan take 1 ms two of the times it is done but 5ms\n>> the third time? Isn't it the same index scan each time? Or does the\n>> change in queue.status change the plan?\n\n> The final update is a different query -- just a plain old update by ID:\n> UPDATE queue_proc set status = 'proc' where id = %s ;\n> This update by ID takes ~2.5ms, which means it's where the framework \n> is spending most of its overhead.\n\nWell, if SELECT FROM queue_proc where id = %s takes 1ms and the update\ntakes 2.5ms, then you've got 1.5ms going into updating the row, which\nmeans it's not going to get a whole lot faster by switching to some\nother WHERE condition. Maybe you should look at cutting back on indexes\nand/or triggers attached to this table.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 04 Nov 2009 15:43:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: maintaining a reference to a fetched row " }, { "msg_contents": "On Wed, Nov 4, 2009 at 9:41 AM, Brian Karlak <[email protected]> wrote:\n>\n> I do a select for update in a stored proc:\n>\n> FOR queue_item IN\n>\n>   SELECT *  FROM queue\n>    WHERE status IS NULL AND id >= low_bound_id\n>    ORDER BY id LIMIT batch_size\n>      FOR UPDATE\n>\n> LOOP\n>\n>   UPDATE queue_proc set status = 'proc' where id = queue_item.id ;\n>\n> The daemons keep track of their last position in the queue with\n> low_bound_id.  Also, as you probably notice, I also fetch a batch of (100)\n> items at a time.  In practice, it's pretty fast.  The job I'm running now is\n> showing an average fetch time of 30ms per 100 actions, which ain't bad.\n>\n> However, the writing of results back to the row takes ~5ms, which is slower\n> than I'd like.\n\n5 ms per each of the 100 actions? With one commit per action?\n\n> > It seems you have an select, and update, and another update.  Where in\n> > this process do you commit?  
Are you using fsync=off or\n> > synchronous_commit=off?\n>\n> First commit occurs after the stored proc to select/update a batch of items\n> is complete.\n\nSo one commit per 100 items?\n\n> Second commit occurs on the writing of results back for each\n> particular action.\n\nSo one commit per 1 item?\nIf so, this completely explains the difference in speed, I think.\n\n> Two commits are required because the time it takes to\n> complete the intervening action can vary wildly: anywhere between 20ms and\n> 45min.\n\nIs there any way of knowing/approximating ahead of time how long it will take?\n\nThe 45 min monsters must be exceedingly rare, or else the average\ncould not be ~50ms.\n\n>> Why would the index scan take 1 ms two of the times it is done but 5ms\n>> the third time?  Isn't it the same index scan each time?  Or does the\n>> change in queue.status change the plan?\n>\n> The final update is a different query -- just a plain old update by ID:\n>\n> UPDATE queue_proc set status = 'proc' where id = %s ;\n\nThat looks very much like the other UPDATE you showed. The difference\nit seems is that you commit after every one, rather than after every\n100. Right?\n\n> This update by ID takes ~2.5ms, which means it's where the framework is\n> spending most of its overhead.\n\nYou said the computation task can take anywhere from 20ms to 45min, so\nit seems that this update overhead is at most 1/8 of the irreducible\ntime. That doesn't seem like it is enough to worry about, to me.\n\nJeff\n", "msg_date": "Wed, 4 Nov 2009 20:27:27 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: maintaining a reference to a fetched row" }, { "msg_contents": "Brian Karlak <[email protected]> writes:\n> I have a simple queuing application written on top of postgres which I'm\n> trying to squeeze some more performance out of.\n\nHave you tried to write a custom PGQ consumer yet?\n http://wiki.postgresql.org/wiki/PGQ_Tutorial\n\nRegards,\n-- \ndim\n", "msg_date": "Mon, 09 Nov 2009 15:34:13 +0100", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: maintaining a reference to a fetched row" } ]
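Tom Lane's ctid hint can be fleshed out roughly as below; the queue/id/status names come from the thread, while the literal id and ctid values are only placeholders. RETURNING hands back the ctid of the new row version created by the claiming UPDATE, and the extra id test makes the final write match nothing (rather than touch the wrong row) if some other session has updated the tuple in the meantime and moved it.

    -- Claim the item and remember where its current version physically lives.
    UPDATE queue
       SET status = 'proc'
     WHERE id = 42
    RETURNING ctid;                 -- e.g. (8191,3)

    -- ... run the task ...

    -- Write the result back with a tid scan instead of another index descent;
    -- fall back to the plain "WHERE id = 42" form if this updates zero rows.
    UPDATE queue
       SET status = 'done'
     WHERE ctid = '(8191,3)'
       AND id = 42;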
[ { "msg_contents": "Hi:\n\nI have an application wherein a process needs to read data from a stream and store the records for further analysis and reporting. The data in the stream is in the form of variable length records with clearly defined fields - so it can be stored in a database or in a file. The only caveat is that the rate of records coming in the stream could be several 1000 records a second.\n\nThe design choice I am faced with currently is whether to use a postgres database or a flat file for this purpose. My application already maintains a postgres (8.3.4) database for other reasons - so it seemed like the straightforward thing to do. However I am concerned about the performance overhead of writing several 1000 records a second to the database. The same database is being used simultaneously for other activities as well and I do not want those to be adversely affected by this operation (especially the query times). The advantage of running complex queries to mine the data in various different ways is very appealing but the performance concerns are making me wonder if just using a flat file to store the data would be a better approach.\n\nAnybody have any experience in high frequency writes to a postgres database?\n\n- Jay\n\n\n\n\n\n\n\n\n\n\nHi:\n \nI have an application\nwherein a process needs to read data from a stream and store the records for\nfurther analysis and reporting. The data in the stream is in the form of\nvariable length records with clearly defined fields – so it can be stored\nin a database or in a file. The only caveat is that the rate of records coming\nin the stream could be several 1000 records a second. \n \nThe design choice I am\nfaced with currently is whether to use a postgres database or a flat file for this\npurpose. My application already maintains a postgres (8.3.4) database for other\nreasons – so it seemed like the straightforward thing to do. However I am\nconcerned about the performance overhead of writing several 1000 records a\nsecond to the database. The same database is being used simultaneously for\nother activities as well and I do not want those to be adversely affected by\nthis operation (especially the query times). The advantage of running complex\nqueries to mine the data in various different ways is very appealing but the\nperformance concerns are making me wonder if just using a flat file to store\nthe data would be a better approach. \n \nAnybody have any\nexperience in high frequency writes to a postgres database?\n \n- Jay", "msg_date": "Tue, 3 Nov 2009 19:02:50 -0800", "msg_from": "Jay Manni <[email protected]>", "msg_from_op": true, "msg_subject": "High Frequency Inserts to Postgres Database vs Writing to a File" } ]
[ { "msg_contents": "Hi:\n\nI have an application wherein a process needs to read data from a stream and store the records for further analysis and reporting. The data in the stream is in the form of variable length records with clearly defined fields - so it can be stored in a database or in a file. The only caveat is that the rate of records coming in the stream could be several 1000 records a second.\n\nThe design choice I am faced with currently is whether to use a postgres database or a flat file for this purpose. My application already maintains a postgres (8.3.4) database for other reasons - so it seemed like the straightforward thing to do. However I am concerned about the performance overhead of writing several 1000 records a second to the database. The same database is being used simultaneously for other activities as well and I do not want those to be adversely affected by this operation (especially the query times). The advantage of running complex queries to mine the data in various different ways is very appealing but the performance concerns are making me wonder if just using a flat file to store the data would be a better approach.\n\nAnybody have any experience in high frequency writes to a postgres database?\n\n- Jay\n\n\n\n\n\n\n\n\n\n\nHi:\n \nI have an application\nwherein a process needs to read data from a stream and store the records for\nfurther analysis and reporting. The data in the stream is in the form of\nvariable length records with clearly defined fields – so it can be stored in a\ndatabase or in a file. The only caveat is that the rate of records coming in\nthe stream could be several 1000 records a second. \n \nThe design choice I am\nfaced with currently is whether to use a postgres database or a flat file for\nthis purpose. My application already maintains a postgres (8.3.4) database for\nother reasons – so it seemed like the straightforward thing to do. However I am\nconcerned about the performance overhead of writing several 1000 records a\nsecond to the database. The same database is being used simultaneously for\nother activities as well and I do not want those to be adversely affected by\nthis operation (especially the query times). The advantage of running complex\nqueries to mine the data in various different ways is very appealing but the\nperformance concerns are making me wonder if just using a flat file to store\nthe data would be a better approach. \n \nAnybody have any\nexperience in high frequency writes to a postgres database?\n \n- Jay", "msg_date": "Tue, 3 Nov 2009 19:12:29 -0800", "msg_from": "Jay Manni <[email protected]>", "msg_from_op": true, "msg_subject": "High Frequency Inserts to Postgres Database vs Writing to a File" }, { "msg_contents": "On Tue, Nov 3, 2009 at 8:12 PM, Jay Manni <[email protected]> wrote:\n> Hi:\n>\n>\n>\n> I have an application wherein a process needs to read data from a stream and\n> store the records for further analysis and reporting. The data in the stream\n> is in the form of variable length records with clearly defined fields – so\n> it can be stored in a database or in a file. The only caveat is that the\n> rate of records coming in the stream could be several 1000 records a second.\n>\n>\n>\n> The design choice I am faced with currently is whether to use a postgres\n> database or a flat file for this purpose. 
My application already maintains a\n\nA common approach is to store them in flat files, then insert the flat\nfiles at a later time so that if the db falls behind no data is lost.\n", "msg_date": "Tue, 3 Nov 2009 20:35:19 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High Frequency Inserts to Postgres Database vs Writing to a File" }, { "msg_contents": "\"could be several 1000 records a second.\"\n\nSo, are there periods when there are no/few records coming in? Do the records/data/files really need to be persisted? \n\nThe following statement makes me think you should go the flat file route:\n\n\"The advantage of running complex queries to mine the data in various different ways is very appealing\"\n\nPlease don't be offended, but that sounds a little like feature creep. I've found that it's best to keep it simple and don't do a bunch of work now for what might be requested in the future.\n\nI know it's not exactly what you were looking for... Just food for thought.\n\nBest of luck!\n\nDavid\n\n\n\n________________________________\nFrom: Jay Manni <[email protected]>\nTo: \"[email protected]\" <[email protected]>\nSent: Tue, November 3, 2009 7:12:29 PM\nSubject: [PERFORM] High Frequency Inserts to Postgres Database vs Writing to a File\n\n \nHi:\n \nI have an application\nwherein a process needs to read data from a stream and store the records for\nfurther analysis and reporting. The data in the stream is in the form of\nvariable length records with clearly defined fields – so it can be stored in a\ndatabase or in a file. The only caveat is that the rate of records coming in\nthe stream could be several 1000 records a second. \n \nThe design choice I am\nfaced with currently is whether to use a postgres database or a flat file for\nthis purpose. My application already maintains a postgres (8.3.4) database for\nother reasons – so it seemed like the straightforward thing to do. However I am\nconcerned about the performance overhead of writing several 1000 records a\nsecond to the database. The same database is being used simultaneously for\nother activities as well and I do not want those to be adversely affected by\nthis operation (especially the query times). The advantage of running complex\nqueries to mine the data in various different ways is very appealing but the\nperformance concerns are making me wonder if just using a flat file to store\nthe data would be a better approach. \n \nAnybody have any\nexperience in high frequency writes to a postgres database?\n \n- Jay\n\"could be several 1000 records a second.\"So, are there periods when there are no/few records coming in?  Do the records/data/files really need to be persisted?  The\n following statement makes me think you should go the flat file route:\"The advantage of running complex queries to mine the data in various different ways is very appealing\"Please don't be offended, but that sounds a little like feature creep.  I've found that it's best to keep it simple and don't do a bunch of work now for what might be requested in the future.I know it's not exactly what you were looking for...  Just food for thought.Best of luck!DavidFrom: Jay Manni <[email protected]>To: \"[email protected]\" <[email protected]>Sent: Tue, November 3, 2009 7:12:29 PMSubject: [PERFORM] High Frequency Inserts to Postgres Database vs Writing to a File\n\n\nHi:\n  \nI have an application\nwherein a process needs to read data from a stream and store the records for\nfurther analysis and reporting. 
The data in the stream is in the form of\nvariable length records with clearly defined fields – so it can be stored in a\ndatabase or in a file. The only caveat is that the rate of records coming in\nthe stream could be several 1000 records a second. \n  \nThe design choice I am\nfaced with currently is whether to use a postgres database or a flat file for\nthis purpose. My application already maintains a postgres (8.3.4) database for\nother reasons – so it seemed like the straightforward thing to do. However I am\nconcerned about the performance overhead of writing several 1000 records a\nsecond to the database. The same database is being used simultaneously for\nother activities as well and I do not want those to be adversely affected by\nthis operation (especially the query times). The advantage of running complex\nqueries to mine the data in various different ways is very appealing but the\nperformance concerns are making me wonder if just using a flat file to store\nthe data would be a better approach. \n  \nAnybody have any\nexperience in high frequency writes to a postgres database?\n  \n- Jay", "msg_date": "Tue, 3 Nov 2009 19:42:08 -0800 (PST)", "msg_from": "David Saracini <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High Frequency Inserts to Postgres Database vs Writing to a File" }, { "msg_contents": "n Tue, Nov 3, 2009 at 10:12 PM, Jay Manni <[email protected]> wrote:\n> Hi:\n>\n> I have an application wherein a process needs to read data from a stream and\n> store the records for further analysis and reporting. The data in the stream\n> is in the form of variable length records with clearly defined fields – so\n> it can be stored in a database or in a file. The only caveat is that the\n> rate of records coming in the stream could be several 1000 records a second.\n\nPostgres doing this is going to depend on primarily two things:\n*) Your hardware\n*) The mechanism you use to insert the data into the database\n\nPostgres can handle multiple 1000 insert/sec but your hardware most\nlikely can't handle multiple 1000 transaction/sec if fsync is on. You\ndefinitely want to batch the insert into the database somehow, so that\nsomething accumulates the data (could be a simple file), and flushes\nit in to the database. The 'flush' ideally should use copy but\nmultiple row insert is ok too. Try to avoid inserting one row at a\ntime even if in a transaction.\n\nIf you are bulk inserting 1000+ records/sec all day long, make sure\nyou have provisioned enough storage for this (that's 86M records/day),\nand you should immediately start thinking about partitioning and\nrotating the log table (if you log to the database, partition/rotate\nis basically already baked in anyways).\n\nThe effects on other users of the database are really hard to predict\n-- it's going to depend on how much resources you have (cpu and\nespecially disk) to direct towards the loading and how the database is\nbeing used. I expect it shouldn't be too bad unless your dataase is\nalready i/o loaded. The good news is testing this is relatively easy\nyou can simulate a load test and just run it during typical use and\nsee how it affects other users. Standard o/s tools (iostat, top), and\ndatabase log with min_duration_statement are going to be a big help\nhere. 
If you start seeing big leaps in iowait corresponding with\nunexpectedly lagging queries in your app , you probably should think\nabout scrapping the idea.\n\nmerlin\n", "msg_date": "Wed, 4 Nov 2009 08:59:25 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High Frequency Inserts to Postgres Database vs Writing to a File" }, { "msg_contents": "On Tue, Nov 3, 2009 at 7:12 PM, Jay Manni <[email protected]> wrote:\n> Hi:\n>\n>\n>\n> I have an application wherein a process needs to read data from a stream and\n> store the records for further analysis and reporting.\n\nWhere is the stream coming from? What happens if the process reading\nthe stream fails but the one generating the stream keeps going?\n\n> The data in the stream\n> is in the form of variable length records with clearly defined fields – so\n> it can be stored in a database or in a file. The only caveat is that the\n> rate of records coming in the stream could be several 1000 records a second.\n>\n> The design choice I am faced with currently is whether to use a postgres\n> database or a flat file for this purpose. My application already maintains a\n> postgres (8.3.4) database for other reasons – so it seemed like the\n> straightforward thing to do. However I am concerned about the performance\n> overhead of writing several 1000 records a second to the database. The same\n> database is being used simultaneously for other activities as well and I do\n> not want those to be adversely affected by this operation (especially the\n> query times).\n\nI would not use the database, but just a flat file. You can always load it\nto a database later as long as you keep the files around, if a\ncompelling reason arises.\n\n> The advantage of running complex queries to mine the data in\n> various different ways is very appealing\n\nDo you have concrete plans to do this, or just vague notions?\n\nEven if the loading of 1000s of records per second doesn't adversely\nimpact the performance of other things going on in the server, surely\ndoing complex queries on hundreds of millions of records will. How\nlong to you plan on storing the records in the database, and how to\ndelete them out? Do you already know what indexes, if any, should be\non the table?\n\nJeff\n", "msg_date": "Wed, 4 Nov 2009 08:39:46 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High Frequency Inserts to Postgres Database vs Writing to a File" }, { "msg_contents": "> I have an application wherein a process needs to read data from a stream and\n> store the records for further analysis and reporting. The data in the stream\n> is in the form of variable length records with clearly defined fields – so\n> it can be stored in a database or in a file. The only caveat is that the\n> rate of records coming in the stream could be several 1000 records a second.\n> The design choice I am faced with currently is whether to use a postgres\n> database or a flat file for this purpose. My application already maintains a\n> postgres (8.3.4) database for other reasons – so it seemed like the\n> straightforward thing to do. However I am concerned about the performance\n> overhead of writing several 1000 records a second to the database. The same\n> database is being used simultaneously for other activities as well and I do\n> not want those to be adversely affected by this operation (especially the\n> query times). 
The advantage of running complex queries to mine the data in\n> various different ways is very appealing but the performance concerns are\n> making me wonder if just using a flat file to store the data would be a\n> better approach.\n>\n>\n>\n> Anybody have any experience in high frequency writes to a postgres database?\n\n\nAs mentioned earlier in this thread,,make sure your hardware can\nscale. You may hit a \"monolithic hardware\" wall and may have to\ndistribute your data across multiple boxes and have your application\nmanage the distribution and access. A RAID 10 storage\narchitecture(since fast writes are critical) with a mulitple core box\n(preferably 8) having fast scsi disks (15K rpm) may be a good starting\npoint.\n\nWe have a similar requirement and we scale by distributing the data\nacross multiple boxes. This is key.\n\nIf you need to run complex queries..plan on aggregation strategies\n(processes that aggregate and optimize the data storage to facilitate\nfaster access).\n\nPartitioning is key. You will need to purge old data at some point.\nWithout partitions..you will run into trouble with the time taken to\ndelete old data as well as availability of disk space.\n\nThese are just guidelines for a big warehouse style database.\n", "msg_date": "Wed, 4 Nov 2009 08:58:07 -0800", "msg_from": "Anj Adu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High Frequency Inserts to Postgres Database vs Writing to a File" }, { "msg_contents": "Merlin Moncure wrote:\n\n> Postgres can handle multiple 1000 insert/sec but your hardware most\n> likely can't handle multiple 1000 transaction/sec if fsync is on.\n\ncommit_delay or async commit should help a lot there.\n\nhttp://www.postgresql.org/docs/8.3/static/wal-async-commit.html\nhttp://www.postgresql.org/docs/8.3/static/runtime-config-wal.html\n\nPlease do *not* turn fsync off unless you want to lose your data.\n\n> If you are bulk inserting 1000+ records/sec all day long, make sure\n> you have provisioned enough storage for this (that's 86M records/day),\n\nplus any index storage, room for dead tuples if you ever issue UPDATEs, etc.\n\n--\nCraig Ringer\n", "msg_date": "Thu, 05 Nov 2009 09:12:17 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High Frequency Inserts to Postgres Database vs Writing to a File" }, { "msg_contents": "Thanks to all for the responses. 
Based on all the recommendations, I am going to try a batched commit approach; along with data purging policies so that the data storage does not grow beyond certain thresholds.\n\n- J\n\n-----Original Message-----\nFrom: Craig Ringer [mailto:[email protected]] \nSent: Wednesday, November 04, 2009 5:12 PM\nTo: Merlin Moncure\nCc: Jay Manni; [email protected]\nSubject: Re: [PERFORM] High Frequency Inserts to Postgres Database vs Writing to a File\n\nMerlin Moncure wrote:\n\n> Postgres can handle multiple 1000 insert/sec but your hardware most\n> likely can't handle multiple 1000 transaction/sec if fsync is on.\n\ncommit_delay or async commit should help a lot there.\n\nhttp://www.postgresql.org/docs/8.3/static/wal-async-commit.html\nhttp://www.postgresql.org/docs/8.3/static/runtime-config-wal.html\n\nPlease do *not* turn fsync off unless you want to lose your data.\n\n> If you are bulk inserting 1000+ records/sec all day long, make sure\n> you have provisioned enough storage for this (that's 86M records/day),\n\nplus any index storage, room for dead tuples if you ever issue UPDATEs, etc.\n\n--\nCraig Ringer\n\n-- \nThis message has been scanned for viruses and\ndangerous content by MailScanner, and is\nbelieved to be clean.\n\n", "msg_date": "Thu, 5 Nov 2009 00:01:36 -0800", "msg_from": "Jay Manni <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High Frequency Inserts to Postgres Database vs Writing \tto a File" } ]
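For the purging side of that plan, the partitioning Merlin and Anj mentioned looks roughly like this with 8.3-style inheritance (all table names here are invented for illustration). The point is that dropping an old child table reclaims space almost instantly, instead of a huge DELETE leaving millions of dead rows for VACUUM to chew through.

    CREATE TABLE stream_records (
        recorded_at timestamptz NOT NULL,
        payload     text
    );

    -- One child per day; the CHECK constraint lets the planner skip
    -- irrelevant partitions (with constraint_exclusion = on in 8.3).
    CREATE TABLE stream_records_2009_11_05 (
        CHECK (recorded_at >= '2009-11-05' AND recorded_at < '2009-11-06')
    ) INHERITS (stream_records);

    -- The loader writes straight into the current child table; reports
    -- query the parent.  Purging a whole day later is near-instant:
    DROP TABLE stream_records_2009_10_01;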
[ { "msg_contents": "Jay Manni wrote:\n> The data in the stream is in the form of variable length records with \n> clearly defined fields ? so it can be stored in a database or in a \n> file. The only caveat is that the rate of records coming in the stream \n> could be several 1000 records a second.\nThere's a few limits to be concerned about here, some physical, some \nrelated to your application design. A few thousand records is possible \nwith either the right software design or some hardware assist. The wall \nwhere it gets increasingly difficult to keep up is closer to 10K/second, \npresuming your records aren't particularly wide.\n\nSome background: when you commit a record in PostgreSQL, by default \nthat transaction doesn't complete until data has been physically written \nto disk. If you take a typical 7200 RPM disk, that spins 120 \ntimes/second, meaning that even under the best possible conditions there \ncan only be 120 commits per second to physical disk.\n\nHowever, that physical commit can contain more than one record. Here \nare the common ways to increase the number of records you can insert per \nsecond:\n\n1) Batch up inserts. Turn off any auto-commit behavior in your client, \ninsert a bunch of records, issue one COMMIT. \nTypical popular batch sizes are in the 100-1000 records/commit range. \nIf individual records aren't very wide, you can easily get a huge \nspeedup here. Hard to estimate how much this will help in your case \nwithout knowing more about that width and the speed of your underlying \ndisks; more on that below.\n\n2) Have multiple clients committing at once. Typically I see this give \nat most about a 5X speedup, so on a slow disk with single record commits \nyou might hit 600/s instead of 120/s if you had 10 clients going at once.\n\n3) Use a RAID controller with a battery-backed cache. This will hold \nmultiple disk commits in its cache and dump them onto disk in larger \nchunks transparently, with only a small risk of corruption if there's an \nextended power outage longer than the battery lasts. Typically I'll see \nthis increase commit rate to the 1000-10,000 commits/second range, again \ndepending on underlying disk speed and row size. This approach really \nreduces the worst-case behavior disks can get into, which is where you \nkeep seeking between two spots writing small bits at each one.\n\n4) Turn off synchronous_commit. This lets you adjust the rate at which \nrecords get committed into larger chunks without touching your \napplication or hardware. It does introduce the possibility you might \nlose some records if there's a crash in the middle of loading or \nchanging things. Adjusting the commit period here upwards makes this \ncase look similar to (1), you're basically committing in larger chunks \nbut the app just doesn't know it.\n\nBasically, (2) alone is probably not enough to reach 1,000 per second. \nBut (1) or (3) is, as is (4) if you can take the potential data \nintegrity issues if there's a crash. If your batchs get bigger via any \nof these techniques, what should end up happening is that you push the \nbottleneck to somewhere else, like disk write or seek speed. Which of \nthose you'll run into depends on how interleaved these writes are with \napplication reads and the total disk bandwidth.\n\nTo close, here's a quick example showing the sort of analysis you should \nbe doing to better estimate here. Imagine you're trying to write 10,000 \nrecords/second. Each record is 100 bytes wide. 
That works out to be \nalmost 1MB/s of steady disk writes. In the real world, a single cheap \ndisk can't do much better than this if those writes involve heavy \nseeking around the disk. And that's happens in a database, because at a \nminimum you need to write to both the main database area and the \nwrite-ahead log. If your records are 1,000 records wide instead, you \nmight hit the upper limit of your disk seeking capability at only \n1,000/second.\n\nWhereas if you have an app that's just writing to a file, you wouldn't \nnecessarily expect that to regularly seek elsewhere. That means it's \nmuch likely that you'd hit >10MB/s on writes rather than the 1-2MB/s \nworst-case behavior when seeking. Of course, as you've already noted, \nyou end up paying penalties on reporting instead if you do that. The \nbest you can do here is to try and measure your application and \nestimate/simulate larger volume, then see what happens if you apply one \nor more of these techniques.\n\n--\nGreg Smith [email protected] Baltimore, MD\n", "msg_date": "Tue, 03 Nov 2009 23:42:48 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High Frequency Inserts to Postgres Database vs Writing to a File" } ]
[ { "msg_contents": "Jay Manni wrote:\n\n> The data in the stream is in the form of variable length records with \n> clearly defined fields ? so it can be stored in a database or in a \n> file. The only caveat is that the rate of records coming in the stream \n> could be several 1000 records a second.\n\n\nThere's a few limits to be concerned about here, some physical, some\nrelated to your application design. A few thousand records is possible\nwith either the right software design or some hardware assist. The wall\nwhere it gets increasingly difficult to keep up is closer to 10K/second,\npresuming your records aren't particularly wide.\n\nSome background: when you commit a record in PostgreSQL, by default\nthat transaction doesn't complete until data has been physically written\nto disk. If you take a typical 7200 RPM disk, that spins 120\ntimes/second, meaning that even under the best possible conditions there\ncan only be 120 commits per second to physical disk.\n\nHowever, that physical commit can contain more than one record. Here\nare the common ways to increase the number of records you can insert per\nsecond:\n\n1) Batch up inserts. Turn off any auto-commit behavior in your client,\ninsert a bunch of records, issue one COMMIT.\nTypical popular batch sizes are in the 100-1000 records/commit range.\nIf individual records aren't very wide, you can easily get a huge\nspeedup here. Hard to estimate how much this will help in your case\nwithout knowing more about that width and the speed of your underlying\ndisks; more on that below.\n\n2) Have multiple clients committing at once. Typically I see this give\nat most about a 5X speedup, so on a slow disk with single record commits\nyou might hit 600/s instead of 120/s if you had 10 clients going at once.\n\n3) Use a RAID controller with a battery-backed cache. This will hold\nmultiple disk commits in its cache and dump them onto disk in larger\nchunks transparently, with only a small risk of corruption if there's an\nextended power outage longer than the battery lasts. Typically I'll see\nthis increase commit rate to the 1000-10,000 commits/second range, again\ndepending on underlying disk speed and row size. This approach really\nreduces the worst-case behavior disks can get into, which is where you\nkeep seeking between two spots writing small bits at each one.\n\n4) Turn off synchronous_commit. This lets you adjust the rate at which\nrecords get committed into larger chunks without touching your\napplication or hardware. It does introduce the possibility you might\nlose some records if there's a crash in the middle of loading or\nchanging things. Adjusting the commit period here upwards makes this\ncase look similar to (1), you're basically committing in larger chunks\nbut the app just doesn't know it.\n\nBasically, (2) alone is probably not enough to reach 1,000 per second.\nBut (1) or (3) is, as is (4) if you can take the potential data\nintegrity issues if there's a crash. If your batchs get bigger via any\nof these techniques, what should end up happening is that you push the\nbottleneck to somewhere else, like disk write or seek speed. Which of\nthose you'll run into depends on how interleaved these writes are with\napplication reads and the total disk bandwidth.\n\nTo close, here's a quick example showing the sort of analysis you should\nbe doing to better estimate here. Imagine you're trying to write 10,000\nrecords/second. Each record is 100 bytes wide. That works out to be\nalmost 1MB/s of steady disk writes. 
In the real world, a single cheap\ndisk can't do much better than this if those writes involve heavy\nseeking around the disk. And that's happens in a database, because at a\nminimum you need to write to both the main database area and the\nwrite-ahead log. If your records are 1,000 records wide instead, you\nmight hit the upper limit of your disk seeking capability at only\n1,000/second.\n\nWhereas if you have an app that's just writing to a file, you wouldn't\nnecessarily expect that to regularly seek elsewhere. That means it's\nmuch likely that you'd hit >10MB/s on writes rather than the 1-2MB/s\nworst-case behavior when seeking. Of course, as you've already noted,\nyou end up paying penalties on reporting instead if you do that. The\nbest you can do here is to try and measure your application and\nestimate/simulate larger volume, then see what happens if you apply one\nor more of these techniques.\n\n--\nGreg Smith [email protected] Baltimore, MD\n", "msg_date": "Wed, 04 Nov 2009 09:28:34 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High Frequency Inserts to Postgres Database vs Writing to a File" } ]
[ { "msg_contents": "Hi folks\n\nI had a couple of semi-newbie questions about this, which I couldn't find\nobvious answers to in the archives ... we are using Postgres 8.3, and the\nbehaviour is the same across Windows and Linux.\n\nI am working with an app which, among other things stores XML files (average\nabout 50KB in size) in blobs in Postgres (column type \"text\") which Postgres\nputs in a pg_toast_nnnnn table. The pattern of access is that a group of a\nfew hundred new rows is written to the main table once every few hours, but\nthen the XML documents in that recent batch of rows will be updated about\nonce every 5 minutes each, until the next batch of new rows is created - in\nthat way, the contents of the table are the most recent version of each\ndocument, plus a historical trail of one version every few hours.\n\nI'm not defending the decision to store blobs in a database (it was taken a\nwhile ago, before the need for frequent updates of the XML) and it isn't\nsomething that can be readily changed at short notice, so please no advice\nabout \"don't do that\" :-)\n\nObviously, the app causes high turnover of rows in both the parent table and\nthe toast table, so it relies heavily on vacuum to keep the size down. There\nis no DBA here and no Postgres tuning has been done yet (I plan to have a\npoke, but my DB tuning experience is Oracle with a side of MySQL, I am a\nPostgres newbie).\n\nQuestions:\n\n1. When I run vacuum manually on the parent table with the application\nrunning, it has no effect on either the parent or toast table (as reported\nby the \"pgstattuple\" add-on), even when the table is showing 40-50% dead\ntuples. However, if I disconnect the app, all the dead tuples clean up and\nmoved to the \"free space\" category.\n\nIs this a normal amount of dead space, and if not, what does this mean? My\nbest guess is that (a) it's not normal, and (b) somewhere the app is holding\nopen an old transaction, so Postgres thinks it has to retain all that data.\n\n2. If there is a hanging transaction, what's the best way to trace it from\nthe PG end? Client is classic Java (Spring / Hibernate / Apache DBCP) if\nthat matters.\n\nCheers\nDave\n\nHi folksI had a couple of semi-newbie questions about this, which I couldn't find obvious answers to in the archives ... we are using Postgres 8.3, and the behaviour is the same across Windows and Linux.\nI am working with an app which, among other things stores XML files (average about 50KB in size) in blobs in Postgres (column type \"text\") which Postgres puts in a pg_toast_nnnnn table. The pattern of access is that a group of a few hundred new rows is written to the main table once every few hours, but then the XML documents in that recent batch of rows will be updated about once every 5 minutes each, until the next batch of new rows is created - in that way, the contents of the table are the most recent version of each document, plus a historical trail of one version every few hours.\nI'm not defending the decision to store blobs in a database (it was taken a while ago, before the need for frequent updates of the XML) and it isn't something that can be readily changed at short notice, so please no advice about \"don't do that\" :-)\nObviously, the app causes high turnover of rows in both the parent table and the toast table, so it relies heavily on vacuum to keep the size down. 
There is no DBA here and no Postgres tuning has been done yet (I plan to have a poke, but my DB tuning experience is Oracle with a side of MySQL, I am a Postgres newbie).\nQuestions:1. When I run vacuum manually on the parent table with the application running, it has no effect on either the parent or toast table (as reported by the \"pgstattuple\" add-on), even when the table is showing 40-50% dead tuples. However, if I disconnect the app, all the dead tuples clean up and moved to the \"free space\" category.\nIs this a normal amount of dead space, and if not, what does this mean? My best guess is that (a) it's not normal, and (b) somewhere the app is holding open an old transaction, so Postgres thinks it has to retain all that data.\n2. If there is a hanging transaction, what's the best way to trace it from the PG end? Client is classic Java (Spring / Hibernate / Apache DBCP) if that matters.CheersDave", "msg_date": "Wed, 4 Nov 2009 15:18:46 -0600", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": true, "msg_subject": "vacuum'ing toast crumbs, detecting dangling transactions" }, { "msg_contents": "Dave Crooke <[email protected]> wrote:\n \n> I'm not defending the decision to store blobs in a database (it was\n> taken a while ago, before the need for frequent updates of the XML)\n> and it isn't something that can be readily changed at short notice,\n> so please no advice about \"don't do that\" :-)\n \nI wouldn't sweat 50kB chunks of XML. We store 10MB PDF files. :-)\n \n> is showing 40-50% dead tuples. However, if I disconnect the app, all\n> the dead tuples clean up and moved to the \"free space\" category.\n \nAs you suspected, that sounds like lingering database transactions. \nTry looking at the pg_stat_activity table for transactions \"IDLE in\ntransaction\". If you're having trouble pinning down the cause, look\nthe pg_locks view to see what tables they've been in.\n \n-Kevin\n", "msg_date": "Wed, 04 Nov 2009 15:30:25 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum'ing toast crumbs, detecting dangling\n\t transactions" }, { "msg_contents": "On Wed, Nov 4, 2009 at 2:18 PM, Dave Crooke <[email protected]> wrote:\n\n> 2. If there is a hanging transaction, what's the best way to trace it from\n> the PG end? Client is classic Java (Spring / Hibernate / Apache DBCP) if\n> that matters.\n\nLast place I worked we had the same issue and it was in our jdbc\nsettings or maybe needed an upgraded version. It was some slick trick\nsomeone thought of to do a commit;begin; at the end of each access to\nthe db. It's that begin; that gets in the way, especially if there's\nan occasional select 1 to make sure the connection is alive.\n", "msg_date": "Wed, 4 Nov 2009 14:51:57 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum'ing toast crumbs, detecting dangling\n\ttransactions" } ]
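Kevin's suggestion translates into roughly the following on 8.3 (catalog and column names as they exist in 8.3; this is just a sketch of the kind of query meant):

    -- Sessions sitting idle inside an open transaction, oldest first.
    SELECT procpid, usename, client_addr,
           now() - xact_start AS open_for
      FROM pg_stat_activity
     WHERE current_query = '<IDLE> in transaction'
     ORDER BY xact_start;

    -- Tables those sessions currently hold locks on, to help find the culprit.
    SELECT l.pid, c.relname, l.mode, l.granted
      FROM pg_locks l
      JOIN pg_class c ON c.oid = l.relation
     WHERE l.pid IN (SELECT procpid
                       FROM pg_stat_activity
                      WHERE current_query = '<IDLE> in transaction');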
[ { "msg_contents": "Thanks folks for the quick replies.\n\n1. There is one transaction, connected from the JVM, that is showing\n\"IDLE in transaction\" .... this appears to be a leftover from\nHibernate looking at the schema metadata. It's Apache Jackrabbit, not\nour own code:\n\nhyper9test_1_6=# select c.relname, l.* from pg_class c, pg_locks l\nwhere c.relfilenode=l.relation and l.pid in (select procpid from\npg_stat_activity where current_query='<IDLE> in transaction');\n relname | locktype | database | relation | page |\ntuple | virtualxid | transactionid | classid | objid | objsubid |\nvirtualtransaction | pid | mode | granted\n----------------------------+----------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+------+-----------------+---------\n pg_class_oid_index | relation | 280066 | 2662 | |\n | | | | | | 3/18\n | 8069 | AccessShareLock | t\n pg_class_relname_nsp_index | relation | 280066 | 2663 | |\n | | | | | | 3/18\n | 8069 | AccessShareLock | t\n pg_description_o_c_o_index | relation | 280066 | 2675 | |\n | | | | | | 3/18\n | 8069 | AccessShareLock | t\n pg_namespace_nspname_index | relation | 280066 | 2684 | |\n | | | | | | 3/18\n | 8069 | AccessShareLock | t\n pg_namespace_oid_index | relation | 280066 | 2685 | |\n | | | | | | 3/18\n | 8069 | AccessShareLock | t\n pg_class | relation | 280066 | 1259 | |\n | | | | | | 3/18\n | 8069 | AccessShareLock | t\n pg_description | relation | 280066 | 2609 | |\n | | | | | | 3/18\n | 8069 | AccessShareLock | t\n pg_namespace | relation | 280066 | 2615 | |\n | | | | | | 3/18\n | 8069 | AccessShareLock | t\n version_node | relation | 280066 | 493309 | |\n | | | | | | 3/18\n | 8069 | AccessShareLock | t\n version_node_idx | relation | 280066 | 493315 | |\n | | | | | | 3/18\n | 8069 | AccessShareLock | t\n(10 rows)\n\nSince the Jackrabbit tables are in the same namespace / user / schema\nas ours, am I right in thinking that this is effectively blocking the\nentire auto-vaccum system from doing anything at all?\n\nCheers\nDave\n", "msg_date": "Wed, 4 Nov 2009 17:52:17 -0600", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": true, "msg_subject": "Followup: vacuum'ing toast" }, { "msg_contents": "Dave Crooke wrote:\n> Since the Jackrabbit tables are in the same namespace / user / schema\n> as ours, am I right in thinking that this is effectively blocking the\n> entire auto-vaccum system from doing anything at all?\n> \nYes, but the problem is actually broader than that: it wouldn't matter \nif it was a different user or namespace, the impact would still be the \nsame. PostgreSQL gets rid of needing to hold a bunch of table/row locks \nby using an approach called MVCC: \nhttp://www.postgresql.org/docs/8.4/static/mvcc-intro.html\n\nThe biggest downside of that approach is that if you have an old client \nlingering around, things that happened in the database after it started \ncan't be cleaned up. That client might still be referring to the old \ncopy of that data, so that anything it looks at will be a consistent \nsnapshot that includes the earlier version of the rows, the database is \nparanoid about letting VACUUM clean the things you've deleted up.\n\nIn 8.4 this situation is improved for some common use cases. In the 8.3 \nyou're using, an old transaction will block any VACUUM attempt from \nmoving past that point in time forever. 
You have to figure out how to \nget Hibernate to close the transaction it's leaving open for VACUUM to work.\n\n--\nGreg Smith [email protected] Baltimore, MD\n\n", "msg_date": "Wed, 04 Nov 2009 20:27:17 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Followup: vacuum'ing toast" }, { "msg_contents": "Greg Smith wrote:\n\n> The biggest downside of [MVCC] is that if you have an old client\n> lingering around, things that happened in the database after it started\n> can't be cleaned up.\n\nJust to clarify for readers: Idle clients aren't generally an issue.\nIt's only clients that are idle with an open transaction that tend to\ncause issues.\n\n> In 8.4 this situation is improved for some common use cases. In the 8.3\n> you're using, an old transaction will block any VACUUM attempt from\n> moving past that point in time forever. You have to figure out how to\n> get Hibernate to close the transaction it's leaving open for VACUUM to\n> work.\n\nHibernate is pretty well behaved with transaction management. In fact,\nit's downright nuts about keeping transactions open for as short a\nperiod of time as possible. It even implements its own row-versioning\nbased optimistic locking scheme (oplock) rather than relying on holding\na transaction open with row locks in the database.\n\nIf you have connections left idle in transaction by a Hibernate-based\nJava app, the problem is probably:\n\n1) Unclosed sessions / EntityManagers or explicit transactions in your\nown app code. Check particularly for places where the app may open a\ntransaction without a finally clause on a try block to ensure the\ntransaction (and the Session / EntityManager) are closed when the block\nis exited.\n\n2) Connections being returned to the connection pool with open\ntransactions ( probably due to #1 ). The connection pool should take\ncare of that, but reports suggest that some don't.\n\n3) Autocommit being disabled. At least when using Hibernate via JPA,\nthat'll cause a major mess and would easily explain the issues you're\nseeing. Hibernate manages transactions explicitly when required, and\nexpects autocommit to be off.\n\n3) Your connection pool software doing something crazy like\nintentionally keeping idle connections with transactions open. The\nconnection pool (c3p0 or whatever) that you use is separate from\nHibernate. I'd be surprised to see this except if autocommit was\ndisabled and the pooling software expected/assumed it'd be enabled.\n\n--\nCraig Ringe\n", "msg_date": "Thu, 05 Nov 2009 11:01:26 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Followup: vacuum'ing toast" }, { "msg_contents": "\n>\n>3) Autocommit being disabled. At least when using Hibernate via JPA,\n>that'll cause a major mess and would easily explain the issues you're\n>seeing. Hibernate manages transactions explicitly when required, and\n>expects autocommit to be off.\n\nExcuse me but i don't understand this point. You say that the problem happens if Autocommit is disabled but that hibernates expects that Autocommit is disabled for a correct work. What's better then? Autocommit off for Hibernate or Autocommit On for the original poster problem. Can you explain it more?\n\nThanks\n\n>--\n>Craig Ringe\n\n\n--------------------------------\nEduardo Morrás González\nDept. 
I+D+i e-Crime Vigilancia Digital\nS21sec Labs\nTlf: +34 902 222 521\nMóvil: +34 555 555 555 \nwww.s21sec.com, blog.s21sec.com \n\n\nSalvo que se indique lo contrario, esta información es CONFIDENCIAL y\ncontiene datos de carácter personal que han de ser tratados conforme a la\nlegislación vigente en materia de protección de datos. Si usted no es\ndestinatario original de este mensaje, le comunicamos que no está autorizado\na revisar, reenviar, distribuir, copiar o imprimir la información en él\ncontenida y le rogamos que proceda a borrarlo de sus sistemas.\n\nKontrakoa adierazi ezean, posta elektroniko honen barruan doana ISILPEKO\ninformazioa da eta izaera pertsonaleko datuak dituenez, indarrean dagoen\ndatu pertsonalak babesteko legediaren arabera tratatu beharrekoa. Posta\nhonen hartzaile ez zaren kasuan, jakinarazten dizugu baimenik ez duzula\nbertan dagoen informazioa aztertu, igorri, banatu, kopiatu edo inprimatzeko.\nHortaz, erregutzen dizugu posta hau zure sistemetatik berehala ezabatzea. \n\nAntes de imprimir este mensaje valora si verdaderamente es necesario. De\nesta forma contribuimos a la preservación del Medio Ambiente. \n\n", "msg_date": "Thu, 05 Nov 2009 10:04:05 +0100", "msg_from": "Eduardo Morras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Followup: vacuum'ing toast" }, { "msg_contents": "Eduardo Morras wrote:\n>> 3) Autocommit being disabled. At least when using Hibernate via JPA,\n>> that'll cause a major mess and would easily explain the issues you're\n>> seeing. Hibernate manages transactions explicitly when required, and\n>> expects autocommit to be off.\n> \n> Excuse me but i don't understand this point. You say that the problem happens if Autocommit is disabled but that hibernates expects that Autocommit is disabled for a correct work. What's better then? Autocommit off for Hibernate or Autocommit On for the original poster problem. Can you explain it more?\n\nArgh!\n\nI'm really sorry. I meant that Hibernate expects autocommit to be\n_enabled_. However, when I want back to the documentation to\ndouble-check my understanding:\n\nhttp://docs.jboss.org/hibernate/core/3.3/reference/en/html/session-configuration.html\n\nit reads:\n\n\"hibernate.connection.autocommit: Enables autocommit for JDBC pooled\nconnections (it is not recommended).\ne.g. true | false\"\n\n... so now I'm confused too. I *know* I had issues initially when I\ndisabled autocommit explicitly (particularly with explicit transaction\nmanagement), and that everything works well in my app with autocommit\noff via the hibernate.connection.autocommit param, but that appears to\nconflict with the documentation. Maybe it's different when hibernate is\nused via the JPA APIs as I'm using it?\n\nThanks for checking that. I guess now I'm confused too, but that's\nbetter than \"knowing\" something wrong.\n\n--\nCraig Ringer\n", "msg_date": "Thu, 05 Nov 2009 19:49:07 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Followup: vacuum'ing toast" }, { "msg_contents": "Craig Ringer wrote:\n> Eduardo Morras wrote:\n>>> 3) Autocommit being disabled. At least when using Hibernate via JPA,\n>>> that'll cause a major mess and would easily explain the issues you're\n>>> seeing. Hibernate manages transactions explicitly when required, and\n>>> expects autocommit to be off.\n>> Excuse me but i don't understand this point. You say that the problem happens if Autocommit is disabled but that hibernates expects that Autocommit is disabled for a correct work. 
What's better then? Autocommit off for Hibernate or Autocommit On for the original poster problem. Can you explain it more?\n> \n> Argh!\n> \n> I'm really sorry. I meant that Hibernate expects autocommit to be\n> _enabled_.\n\nSome searching suggests that, indeed, the issue is that when using\nHibernate via JPA (Hibernate EntityManager) autocommit needs to be left\nenabled.\n\nFor some reason there doesn't seem to be any explicit reference to\nautocommit in the Hibernate EntityManager docs or the EJB3 spec. I can't\nfind anything but (numerous) forum posts and the like on this, so don't\ntake it as definitive.\n\n--\nCraig Ringer\n", "msg_date": "Thu, 05 Nov 2009 20:09:10 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Followup: vacuum'ing toast" }, { "msg_contents": "Craig Ringer wrote:\n> Hibernate is pretty well behaved with transaction management. In fact,\n> it's downright nuts about keeping transactions open for as short a\n> period of time as possible. It even implements its own row-versioning\n> based optimistic locking scheme (oplock) rather than relying on holding\n> a transaction open with row locks in the database.\n> \nIt's probably more nuts than it needs to be with PostgreSQL as the \nbacking store, since MVCC prevents some of the common sources of row \nlocks from being needed. But since Hibernate is database-agnostic and \nit worried about locally cached copies of things too, it ends up needing \nto do this extra work regardless.\n\n> 3) Autocommit being disabled. At least when using Hibernate via JPA,\n> that'll cause a major mess and would easily explain the issues you're\n> seeing. Hibernate manages transactions explicitly when required, and\n> expects autocommit to be off.\n> \nDownthread it suggests there's still some confusion here, but everyone \nshould be clear about one thing: turning autocommit on is the first \nstep down a road that usually leads to bad batch performance. If your \nproblems go away by enabling it, which they sometimes do, that is a sign \nthere's a problem to be investigated, not a true solution. One day \nyou're going to find yourself wanting transactions to be explicitly \ncommitted only when required, both for atomicity and performance \nreasons, and you won't be able to rely on autocommit as a crutch at that \npoint. Better to never get used to be there in the first place.\n\n--\nGreg Smith [email protected] Baltimore, MD\n", "msg_date": "Thu, 05 Nov 2009 09:28:15 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Followup: vacuum'ing toast" }, { "msg_contents": "On 5/11/2009 10:28 PM, Greg Smith wrote:\n> Craig Ringer wrote:\n>> Hibernate is pretty well behaved with transaction management. In fact,\n>> it's downright nuts about keeping transactions open for as short a\n>> period of time as possible. It even implements its own row-versioning\n>> based optimistic locking scheme (oplock) rather than relying on holding\n>> a transaction open with row locks in the database.\n>> \n> It's probably more nuts than it needs to be with PostgreSQL as the\n> backing store, since MVCC prevents some of the common sources of row\n> locks from being needed.\n\nI'm not sure about that personally. Much of the work it does is to avoid\nholding an update lock on a row during \"user think time\". 
Instead of\nstopping another transaction from jumping in between reading a record\nand writing an updated copy, it detects when another transaction has got\nin the way and aborts the loser of the race, which will usually retry in\nsome way. This issue applies just as much to PostgreSQL as any other\ndatabase, and is very hard to avoid if your problem forces you to write\ncode that reads a record, updates it in memory, then writes it back to\nthe DB instead of doing an in-place read-and-update.\n\nThat means that, as in SERIALIZABLE transactions, UPDATEs with hibernate\ncan fail and may need to be retried. On the other hand, it means that\ntransactions aren't blocked by a lock held by another transaction during\nlong periods of user inactivity.\n\nIt's the difference between:\n\nBEGIN;\nSELECT val1, val2 FROM blah WHERE id = 1 FOR UPDATE;\n-- User ponders for half an hour before applying a change\n-- Meanwhile, another transaction that has to update the same record\n-- is blocked, and can't continue on to do other work. As it also holds\n-- update locks on other records, if you're unlucky or the app's data\n-- is highly interdependent then half the app lands up waiting for the\n-- user to get back from lunch.\nUPDATE blah SET val1 = something, val2 = somethingelse WHERE id = 1;\nCOMMIT;\n\nand:\n\nBEGIN;\nSELECT val1, val2, version FROM blah WHERE id = 1;\nCOMMIT;\n-- User ponders for half an hour before applying a change. Meanwhile,\n-- someone else who hasn't gone for lunch updates the record,\n-- incrementing the `version' field as well as tweaking the data fields.\nBEGIN;\nUPDATE blah SET val1 = something, val2 = somethingelse\nWHERE id = 1, version = oldversion;\n-- As rows matched, Hibernate knows the record has been deleted\n-- or someone else updated it in the mean time. It aborts the\n-- change by the until recently out-to-lunch user and the app informs\n-- the user that\n-- someone else has altered the record, so they'll have to check\n-- if they still need to make their changes and possibly re-apply them.\n-- (Or, if appropriate, the app it merges the two change sets and\n-- auto-retries).\nROLLBACK;\n\n\nGetting these two strategies to play well together in a DB used by\n\"optimistic locking\" row-versioned users like Hibernate as well as apps\nusing conventional SQL DB locking isn't hard, by the way. I wrote\nsomething up on it recently:\n\nhttp://wiki.postgresql.org/wiki/Hibernate_oplocks\n\n> Downthread it suggests there's still some confusion here, but everyone\n> should be clear about one thing: turning autocommit on is the first\n> step down a road that usually leads to bad batch performance.\n\nNormally I'd be in complete agreement with you. Batching things into\ntransactions not only improves performance, but it's necessary for\ncorrectness unless much of what you're doing is pretty trivial.\n\nThe distinction here is that the ORM framework expects to manage\nautocommit settings on the JDBC connection its self. In the case of use\nof Hibernate via JPA, Hibernate will almost always have autocommit\ndisabled when doing work. It's just that the JPA implementation appears\nto expect to receive connections with autocommit initially enabled, and\ngets somewhat confused if that's not the case.\n\n--\nCraig Ringer\n", "msg_date": "Fri, 06 Nov 2009 16:22:32 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Followup: vacuum'ing toast" } ]
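A minimal monitoring query in the spirit of the lock query at the top of this thread can flag the lingering sessions discussed here and show how long they have been sitting idle. This is only a sketch, and it assumes the 8.3-era pg_stat_activity column names already used above (procpid, current_query, xact_start); later releases renamed some of these:

    SELECT procpid,
           usename,
           now() - xact_start AS open_for,
           current_query
      FROM pg_stat_activity
     WHERE current_query = '<IDLE> in transaction'
     ORDER BY xact_start;

Any backend that keeps showing up here with a growing open_for value is a candidate for the kind of forgotten Hibernate transaction that stops VACUUM from cleaning up.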
[ { "msg_contents": "Hi Developers and Tuners,\n Is there any way to run some query in low priority and some query\nin higher priority in pg. The main reason for this is i need my main\napplication(high priority) to be undisturbed by the sub application(low\npriority) which is running on same DB. Is there anyother good way to operate\nthis?\n\nArvind S\n*\n\n\"Many of lifes failure are people who did not realize how close they were to\nsuccess when they gave up.\"\n-Thomas Edison*\n\nHi Developers and Tuners,         Is there any way to run some query in low priority and some query in higher priority in pg. The main reason for this is i need my main application(high priority) to be undisturbed by the sub application(low priority) which is running on same DB. Is there anyother good way to operate this?\nArvind S\"Many of lifes failure are people who did not realize how close they were to success when they gave up.\"-Thomas Edison", "msg_date": "Thu, 5 Nov 2009 15:06:31 +0530", "msg_from": "S Arvind <[email protected]>", "msg_from_op": true, "msg_subject": "Running some query in low priority" }, { "msg_contents": "On Thu, Nov 5, 2009 at 9:36 AM, S Arvind <[email protected]> wrote:\n\n> Hi Developers and Tuners,\n> Is there any way to run some query in low priority and some query\n> in higher priority in pg. The main reason for this is i need my main\n> application(high priority) to be undisturbed by the sub application(low\n> priority) which is running on same DB. Is there anyother good way to operate\n> this?\n>\n\nother than manually re-nicing back end, no.\n\n\n\n\n-- \nGJ\n\nOn Thu, Nov 5, 2009 at 9:36 AM, S Arvind <[email protected]> wrote:\nHi Developers and Tuners,         Is there any way to run some query in low priority and some query in higher priority in pg. The main reason for this is i need my main application(high priority) to be undisturbed by the sub application(low priority) which is running on same DB. Is there anyother good way to operate this?\nother than manually re-nicing back end, no. -- GJ", "msg_date": "Thu, 5 Nov 2009 09:38:55 +0000", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running some query in low priority" }, { "msg_contents": "2009/11/5 Grzegorz Jaśkiewicz <[email protected]>:\n>\n>\n> On Thu, Nov 5, 2009 at 9:36 AM, S Arvind <[email protected]> wrote:\n>>\n>> Hi Developers and Tuners,\n>>          Is there any way to run some query in low priority and some query\n>> in higher priority in pg. The main reason for this is i need my main\n>> application(high priority) to be undisturbed by the sub application(low\n>> priority) which is running on same DB. Is there anyother good way to operate\n>> this?\n>\n> other than manually re-nicing back end, no.\n\nAnd unfortunately this doesn't really work very well. renicing only\naffects cpu priority and usually it's i/o priority you want to adjust.\nEven if you can adjust i/o priority per process on your operating\nsystem the database often does i/o work for one process in another\nprocess or has times when a process is waiting on another process to\nfinish i/o. So lowering the i/o priority of the low priority process\nmight not have the desired effect of speeding up other processes.\n\nUsually this isn't a problem unless you have a large batch load or\nsomething like that happening which consumes all available i/o. 
In\nthat case you can sometimes reduce the i/o demand by just throttling\nthe rate at which you send data to or read data from the server.\n\n\n-- \ngreg\n", "msg_date": "Thu, 5 Nov 2009 11:32:22 +0000", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running some query in low priority" }, { "msg_contents": "Thank Greg and Grzegorz,\n As told i have large batch load to the postgres which i need to be\nrun in low priority. Is it really throttling the data will help to lower the\npostgres workload for some queries?\n\n-Arvind S\n\n\n\n2009/11/5 Greg Stark <[email protected]>\n\n> 2009/11/5 Grzegorz Jaśkiewicz <[email protected]>:\n> >\n> >\n> > On Thu, Nov 5, 2009 at 9:36 AM, S Arvind <[email protected]> wrote:\n> >>\n> >> Hi Developers and Tuners,\n> >> Is there any way to run some query in low priority and some\n> query\n> >> in higher priority in pg. The main reason for this is i need my main\n> >> application(high priority) to be undisturbed by the sub application(low\n> >> priority) which is running on same DB. Is there anyother good way to\n> operate\n> >> this?\n> >\n> > other than manually re-nicing back end, no.\n>\n> And unfortunately this doesn't really work very well. renicing only\n> affects cpu priority and usually it's i/o priority you want to adjust.\n> Even if you can adjust i/o priority per process on your operating\n> system the database often does i/o work for one process in another\n> process or has times when a process is waiting on another process to\n> finish i/o. So lowering the i/o priority of the low priority process\n> might not have the desired effect of speeding up other processes.\n>\n> Usually this isn't a problem unless you have a large batch load or\n> something like that happening which consumes all available i/o. In\n> that case you can sometimes reduce the i/o demand by just throttling\n> the rate at which you send data to or read data from the server.\n>\n>\n> --\n> greg\n>\n\nThank Greg and Grzegorz,       As told i have large batch load to the postgres which i need to be run in low priority. Is it really throttling the data will help to lower the postgres workload for some queries?\n\n-Arvind S2009/11/5 Greg Stark <[email protected]>\n\n2009/11/5 Grzegorz Jaśkiewicz <[email protected]>:\n>\n>\n> On Thu, Nov 5, 2009 at 9:36 AM, S Arvind <[email protected]> wrote:\n>>\n>> Hi Developers and Tuners,\n>>          Is there any way to run some query in low priority and some query\n>> in higher priority in pg. The main reason for this is i need my main\n>> application(high priority) to be undisturbed by the sub application(low\n>> priority) which is running on same DB. Is there anyother good way to operate\n>> this?\n>\n> other than manually re-nicing back end, no.\n\nAnd unfortunately this doesn't really work very well. renicing only\naffects cpu priority and usually it's i/o priority you want to adjust.\nEven if you can adjust i/o priority per process on your operating\nsystem the database often does i/o work for one process in another\nprocess or has times when a process is waiting on another process to\nfinish i/o. So lowering the i/o priority of the low priority process\nmight not have the desired effect of speeding up other processes.\n\nUsually this isn't a problem unless you have a large batch load or\nsomething like that happening which consumes all available i/o. 
In\nthat case you can sometimes reduce the i/o demand by just throttling\nthe rate at which you send data to or read data from the server.\n\n\n--\ngreg", "msg_date": "Thu, 5 Nov 2009 18:50:19 +0530", "msg_from": "S Arvind <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Running some query in low priority" }, { "msg_contents": "On Thu, Nov 5, 2009 at 1:20 PM, S Arvind <[email protected]> wrote:\n\n> Thank Greg and Grzegorz,\n> As told i have large batch load to the postgres which i need to be\n> run in low priority. Is it really throttling the data will help to lower the\n> postgres workload for some queries?\n>\ndepends on what you are actually trying to achieve.\n\nIf it is an insert of some sort, than divide it up. If it is a query that\nruns over data, use limits, and do it in small batches. Overall, divide in\nconquer approach works in these scenarios.\n\n\n\n-- \nGJ\n\nOn Thu, Nov 5, 2009 at 1:20 PM, S Arvind <[email protected]> wrote:\nThank Greg and Grzegorz,       As told i have large batch load to the postgres which i need to be run in low priority. Is it really throttling the data will help to lower the postgres workload for some queries?\ndepends on what you are actually trying to achieve.If it is an insert of some sort, than divide it up. If it is a query that runs over data, use limits, and do it in small batches. Overall, divide in conquer approach works in these scenarios.\n-- GJ", "msg_date": "Thu, 5 Nov 2009 13:25:20 +0000", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running some query in low priority" }, { "msg_contents": "On Thu, 5 Nov 2009, Grzegorz Jaśkiewicz wrote:\n> If it is an insert of some sort, than divide it up. If it is a query that runs over data,\n> use limits, and do it in small batches. Overall, divide in conquer approach works in\n> these scenarios.\n\nUnfortunately, dividing the work up can cause a much greater load, which \nwould make things worse. If you are inserting in smaller chunks and \ncommitting more frequently that can reduce performance. If you split up \nqueries with limit and offset, that will just multiply the number of times \nthe query has to be run. Each time, the query will be evaluated, the first \n<offset> rows thrown away, and the next <limit> rows returned, which will \nwaste a huge amount of time.\n\nIf you are inserting data, then use a COPY from stdin, and then you can \nthrottle the data stream. When you are querying, declare a cursor, and \nfetch from it at a throttled rate.\n\nMatthew\n\n-- \n Bashir: The point is, if you lie all the time, nobody will believe you, even\n when you're telling the truth. (RE: The boy who cried wolf)\n Garak: Are you sure that's the point, Doctor?\n Bashir: What else could it be? -- Star Trek DS9\n Garak: That you should never tell the same lie twice. -- Improbable Cause", "msg_date": "Thu, 5 Nov 2009 13:39:35 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running some query in low priority" }, { "msg_contents": "2009/11/5 Matthew Wakeling <[email protected]>\n\n> On Thu, 5 Nov 2009, Grzegorz Jaśkiewicz wrote:\n>\n>> If it is an insert of some sort, than divide it up. If it is a query that\n>> runs over data,\n>> use limits, and do it in small batches. Overall, divide in conquer\n>> approach works in\n>> these scenarios.\n>>\n>\n> Unfortunately, dividing the work up can cause a much greater load, which\n> would make things worse. 
If you are inserting in smaller chunks and\n> committing more frequently that can reduce performance. If you split up\n> queries with limit and offset, that will just multiply the number of times\n> the query has to be run. Each time, the query will be evaluated, the first\n> <offset> rows thrown away, and the next <limit> rows returned, which will\n> waste a huge amount of time.\n>\n> If you are inserting data, then use a COPY from stdin, and then you can\n> throttle the data stream. When you are querying, declare a cursor, and fetch\n> from it at a throttled rate.\n>\n\nas with everything, you have to find the right balance. I think he is\nlooking for low impact, not speed. So he has to trade one for another. Find\na small enough batch size, but not too small, cos like you said - things\nwill have too much impact otherwise.\n\n\n-- \nGJ\n\n2009/11/5 Matthew Wakeling <[email protected]>\nOn Thu, 5 Nov 2009, Grzegorz Jaśkiewicz wrote:\n\nIf it is an insert of some sort, than divide it up. If it is a query that runs over data,\nuse limits, and do it in small batches. Overall, divide in conquer approach works in\nthese scenarios.\n\n\nUnfortunately, dividing the work up can cause a much greater load, which would make things worse. If you are inserting in smaller chunks and committing more frequently that can reduce performance. If you split up queries with limit and offset, that will just multiply the number of times the query has to be run. Each time, the query will be evaluated, the first <offset> rows thrown away, and the next <limit> rows returned, which will waste a huge amount of time.\n\nIf you are inserting data, then use a COPY from stdin, and then you can throttle the data stream. When you are querying, declare a cursor, and fetch from it at a throttled rate.as with everything, you have to find the right balance. I think he is looking for low impact, not speed. So he has to trade one for another. Find a small enough batch size, but not too small, cos like you said - things will have too much impact otherwise.\n-- GJ", "msg_date": "Thu, 5 Nov 2009 13:42:56 +0000", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running some query in low priority" }, { "msg_contents": "On Thu, Nov 5, 2009 at 2:36 AM, S Arvind <[email protected]> wrote:\n> Hi Developers and Tuners,\n>          Is there any way to run some query in low priority and some query\n> in higher priority in pg. The main reason for this is i need my main\n> application(high priority) to be undisturbed by the sub application(low\n> priority) which is running on same DB. Is there anyother good way to operate\n> this?\n\nAre you IO or CPU bound? If CPU bound get more CPUs. If IO bound see\nabout getting more IO, specifically a fast RAID controller with\nBattery Backed Cache, and a fair number of fast hard drives in a\nRAID-10. Trying to throttle one thing to get the others to run faster\ncan only buy you so much time. As load increases you'll need more CPU\nor IO.\n\nIf the thing you're doing is CPU intensive, and it needs lots of CPUs,\nthen look at some form of replication to other boxes to throw more\nCPUs at the problem.\n", "msg_date": "Thu, 5 Nov 2009 11:02:09 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Running some query in low priority" } ]
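To make the cursor-based throttling suggested above concrete, the low-priority job can declare a cursor and pull modest batches, pausing in the client between fetches. This is a rough sketch only; big_table, the batch size of 1000 and the length of the pause are placeholders to be tuned against the available I/O:

    BEGIN;
    DECLARE throttle_cur CURSOR FOR SELECT * FROM big_table;
    FETCH 1000 FROM throttle_cur;   -- process this batch on the client side
    -- the client sleeps briefly here before asking for the next batch
    FETCH 1000 FROM throttle_cur;
    -- ... repeat until FETCH returns no rows, then:
    CLOSE throttle_cur;
    COMMIT;

Note that the cursor keeps its transaction open for the duration of the run, so this trades I/O pressure for a longer-lived transaction.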
[ { "msg_contents": "I'm facing a problem where running a CREATE TABLE has slowed down\nsignificantly over time.\n\n \n\nThis is problematic because my application needs to routinely create a new\nschema and create 300 tables in each new schema. In total it takes about 3\nminutes, which may not seem like a big deal, but this is time sensitive\nbecause users of our SaaS application are waiting in real-time for the\nschema and 300 the tables to be created.\n\n \n\nIt used to take about 15 seconds to create those 300 tables in a new schema\n(when there were only a few schemas, say about 50). It now takes about 3\nminutes (and now we have about 200 schemas, with more data but not hugely\nso).\n\n \n\nTo debug this problem, I've created a new database in a separate (and dinky)\nlaptop, and running a single test CREATE TABLE command takes about 19 ms.\n\n \n\nBut on the server with 200+ schemas, this single command takes between 200\nand 300 ms.\n\n \n\nMy test command on psql is:\n\nCREATE TABLE <TheSchemaName>.academicsemesters (\n\n id text NOT NULL,\n\n creationdate timestamp with time zone,\n\n academicsemestername text,\n\n academicyearandsemestername text,\n\n startdate timestamp with time zone,\n\n enddate timestamp with time zone,\n\n isplanningmode boolean NOT NULL,\n\n isclosed boolean NOT NULL,\n\n isactive boolean NOT NULL,\n\n status text,\n\n workflowstubid text,\n\n deleted boolean NOT NULL,\n\n academicyearid text\n\n);\n\n \n\n* Any tips anyone can give on what might be the underlying cause of the\nslowing down of the CREATE TABLE command over time?\n\n* Is the problem caused by the increasing number of schemas?\n\n \n\nThanks in advance,\n\nAris\n\n\n\n\n\n\n\n\n\n\n\nI’m facing a problem where running a CREATE TABLE has\nslowed down significantly over time.\n \nThis is problematic because my application needs to routinely\ncreate a new schema and create 300 tables in each new schema. In total it takes\nabout 3 minutes, which may not seem like a big deal, but this is time sensitive\nbecause users of our SaaS application are waiting in real-time for the schema and\n300 the tables to be created.\n \nIt used to take about 15 seconds to create those 300 tables\nin a new schema (when there were only a few schemas, say about 50). 
It now\ntakes about 3 minutes (and now we have about 200 schemas, with more data but\nnot hugely so).\n \nTo debug this problem, I’ve created a new database in\na separate (and dinky) laptop, and running a single test CREATE TABLE command\ntakes about 19 ms.\n \nBut on the server with 200+ schemas, this single command\ntakes between 200 and 300 ms.\n \nMy test command on psql is:\nCREATE TABLE <TheSchemaName>.academicsemesters (\n    id text NOT NULL,\n    creationdate timestamp with time zone,\n    academicsemestername text,\n    academicyearandsemestername text,\n    startdate timestamp with time zone,\n    enddate timestamp with time zone,\n    isplanningmode boolean NOT NULL,\n    isclosed boolean NOT NULL,\n    isactive boolean NOT NULL,\n    status text,\n    workflowstubid text,\n    deleted boolean NOT NULL,\n    academicyearid text\n);\n \n* Any tips anyone can give on what might be the underlying\ncause of the slowing down of the CREATE TABLE command over time?\n* Is the problem caused by the increasing number of schemas?\n \nThanks in advance,\nAris", "msg_date": "Sat, 7 Nov 2009 22:15:40 -0500", "msg_from": "\"Aris Samad-Yahaya\" <[email protected]>", "msg_from_op": true, "msg_subject": "CREATE TABLE slowing down significantly over time" }, { "msg_contents": "\"Aris Samad-Yahaya\" <[email protected]> writes:\n> I'm facing a problem where running a CREATE TABLE has slowed down\n> significantly over time.\n\nSystem catalog bloat maybe? What are your vacuuming practices?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 07 Nov 2009 22:29:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE slowing down significantly over time " }, { "msg_contents": "On 8/11/2009 11:15 AM, Aris Samad-Yahaya wrote:\n\n> It used to take about 15 seconds to create those 300 tables in a new\n> schema (when there were only a few schemas, say about 50). It now takes\n> about 3 minutes (and now we have about 200 schemas, with more data but\n> not hugely so).\n\n200 schemas, 300 tables per schema. That's sixty THOUSAND tables.\n\n> * Is the problem caused by the increasing number of schemas?\n\nand increasing table count, I expect.\n\nYou do batch the table and schema creation into a single transaction,\nright? If not, do that first, rather than creating each table in a\nseparate transaction (ie: relying on autocommit).\n\nIt may also be worth thinking about the app's design. Is a new schema\nand 300 new tables for each user really the best way to tackle what\nyou're doing? (It might be, but it's a question worth asking yourself).\n\n--\nCraig Ringer\n", "msg_date": "Sun, 08 Nov 2009 11:47:39 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE slowing down significantly over time" }, { "msg_contents": "Hi Craig,\n\nYes we do put the creation of the 300 tables into a single transaction. The\ndifference between putting them in a single transaction and individual\ntransactions is about 30 seconds over the 3 minutes.\n\nAs for the creation of 300 individual tables for an account... yes we were\ntrying to think through that issue very hard. It's the SaaS maturity levels\ndiscussion: How much do you separate the databases for each account, vs\nsharing customer information into large tables. 
I hear SalesForce puts most\neverything in giant tables, whereas we've decided to separate customer\naccounts into separate schemas.\n\n-----Original Message-----\nFrom: Craig Ringer [mailto:[email protected]] \nSent: Saturday, November 07, 2009 10:48 PM\nTo: Aris Samad-Yahaya\nCc: [email protected]\nSubject: Re: [PERFORM] CREATE TABLE slowing down significantly over time\n\nOn 8/11/2009 11:15 AM, Aris Samad-Yahaya wrote:\n\n> It used to take about 15 seconds to create those 300 tables in a new\n> schema (when there were only a few schemas, say about 50). It now takes\n> about 3 minutes (and now we have about 200 schemas, with more data but\n> not hugely so).\n\n200 schemas, 300 tables per schema. That's sixty THOUSAND tables.\n\n> * Is the problem caused by the increasing number of schemas?\n\nand increasing table count, I expect.\n\nYou do batch the table and schema creation into a single transaction,\nright? If not, do that first, rather than creating each table in a\nseparate transaction (ie: relying on autocommit).\n\nIt may also be worth thinking about the app's design. Is a new schema\nand 300 new tables for each user really the best way to tackle what\nyou're doing? (It might be, but it's a question worth asking yourself).\n\n--\nCraig Ringer\n\n", "msg_date": "Sat, 7 Nov 2009 23:55:00 -0500", "msg_from": "\"Aris Samad-Yahaya\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CREATE TABLE slowing down significantly over time" }, { "msg_contents": "We vacuum analyze nightly, and vacuum normally ad-hoc (but we're going to\nschedule this weekly moving forward).\n\nInteresting pointer about system catalog bloat. I tried to vacuum full the\nsystem catalog tables (pg_*), and the performance for creating a single\ntable manually improved dramatically (back to what it used to be), but as\nsoon as I created the next schema, the performance went back down to the\nsame level.\n\nSo there's a clue there somewhere. Next I will try to vacuum full the entire\ndatabase.\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Saturday, November 07, 2009 10:29 PM\nTo: Aris Samad-Yahaya\nCc: [email protected]\nSubject: Re: [PERFORM] CREATE TABLE slowing down significantly over time \n\n\"Aris Samad-Yahaya\" <[email protected]> writes:\n> I'm facing a problem where running a CREATE TABLE has slowed down\n> significantly over time.\n\nSystem catalog bloat maybe? What are your vacuuming practices?\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sat, 7 Nov 2009 23:58:03 -0500", "msg_from": "\"Aris Samad-Yahaya\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CREATE TABLE slowing down significantly over time " }, { "msg_contents": "On Sat, Nov 7, 2009 at 9:58 PM, Aris Samad-Yahaya <[email protected]> wrote:\n> We vacuum analyze nightly, and vacuum normally ad-hoc (but we're going to\n> schedule this weekly moving forward).\n>\n> Interesting pointer about system catalog bloat. I tried to vacuum full the\n> system catalog tables (pg_*), and the performance for creating a single\n> table manually improved dramatically (back to what it used to be), but as\n> soon as I created the next schema, the performance went back down to the\n> same level.\n>\n> So there's a clue there somewhere. Next I will try to vacuum full the entire\n> database.\n\nIf you don't run autovac, it's possible you've managed to bloat your\npg_catalog tables... Note that we run similar numbers of tables, as\nwe have 30 tables and about 10 indexes in over 2000 schemas. 
We did\nthe trick Tom posted:\nalter function pg_table_is_visible(oid) cost 10;\nto get faster tab completion and / or \\d performance.\n", "msg_date": "Sat, 7 Nov 2009 22:30:56 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE slowing down significantly over time" }, { "msg_contents": "On Sat, Nov 7, 2009 at 11:58 PM, Aris Samad-Yahaya\n<[email protected]> wrote:\n> We vacuum analyze nightly, and vacuum normally ad-hoc (but we're going to\n> schedule this weekly moving forward).\n>\n> Interesting pointer about system catalog bloat. I tried to vacuum full the\n> system catalog tables (pg_*), and the performance for creating a single\n> table manually improved dramatically (back to what it used to be), but as\n> soon as I created the next schema, the performance went back down to the\n> same level.\n>\n> So there's a clue there somewhere. Next I will try to vacuum full the entire\n> database.\n\nAnd maybe REINDEX, too.\n\n...Robert\n", "msg_date": "Sun, 8 Nov 2009 22:58:49 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE slowing down significantly over time" }, { "msg_contents": "On Mon, Nov 9, 2009 at 3:58 AM, Robert Haas <[email protected]> wrote:\n>\n>\n> And maybe REINDEX, too.\n\n\nyup, nevermind the mess in table, indices are getting fscked much quicker\nthan table it self, because of its structure.\n\n\n\n\n-- \nGJ\n\nOn Mon, Nov 9, 2009 at 3:58 AM, Robert Haas <[email protected]> wrote:\n\n\nAnd maybe REINDEX, too.yup, nevermind the mess in table, indices are getting fscked much quicker than table it self, because of its structure.  \n-- GJ", "msg_date": "Mon, 9 Nov 2009 08:40:30 +0000", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE slowing down significantly over time" }, { "msg_contents": "On Sat, Nov 7, 2009 at 11:58 PM, Aris Samad-Yahaya\n<[email protected]> wrote:\n> We vacuum analyze nightly, and vacuum normally ad-hoc (but we're going to\n> schedule this weekly moving forward).\n>\n> Interesting pointer about system catalog bloat. I tried to vacuum full the\n> system catalog tables (pg_*), and the performance for creating a single\n> table manually improved dramatically (back to what it used to be), but as\n> soon as I created the next schema, the performance went back down to the\n> same level.\n>\n> So there's a clue there somewhere. Next I will try to vacuum full the entire\n> database.\n\nYou should really enable autovacuum. You'll probably have to VACUUM\nFULL and REINDEX to clean everything up, but after that autovacuum\nshould be MUCH more effective than a nightly vacuum run. If you're\nrunning some ancient Pg version where autovacuum is not enabled by\ndefault, you should also consider upgrading. There are a lot of\ngoodies (including performance enhancements) in newer versions.\n\n...Robert\n", "msg_date": "Mon, 9 Nov 2009 07:22:02 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE slowing down significantly over time" }, { "msg_contents": "Why is reindex needed ? Unless most of the key values get deleted\nfrequently..this is not needed. 
(I am assuming postgres 8.x and above)\n\nOn Sun, Nov 8, 2009 at 7:58 PM, Robert Haas <[email protected]> wrote:\n> On Sat, Nov 7, 2009 at 11:58 PM, Aris Samad-Yahaya\n> <[email protected]> wrote:\n>> We vacuum analyze nightly, and vacuum normally ad-hoc (but we're going to\n>> schedule this weekly moving forward).\n>>\n>> Interesting pointer about system catalog bloat. I tried to vacuum full the\n>> system catalog tables (pg_*), and the performance for creating a single\n>> table manually improved dramatically (back to what it used to be), but as\n>> soon as I created the next schema, the performance went back down to the\n>> same level.\n>>\n>> So there's a clue there somewhere. Next I will try to vacuum full the entire\n>> database.\n>\n> And maybe REINDEX, too.\n>\n> ...Robert\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Mon, 9 Nov 2009 06:46:29 -0800", "msg_from": "Anj Adu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE slowing down significantly over time" }, { "msg_contents": "On Mon, Nov 9, 2009 at 5:22 AM, Robert Haas <[email protected]> wrote:\n> On Sat, Nov 7, 2009 at 11:58 PM, Aris Samad-Yahaya\n> <[email protected]> wrote:\n>> We vacuum analyze nightly, and vacuum normally ad-hoc (but we're going to\n>> schedule this weekly moving forward).\n>>\n>> Interesting pointer about system catalog bloat. I tried to vacuum full the\n>> system catalog tables (pg_*), and the performance for creating a single\n>> table manually improved dramatically (back to what it used to be), but as\n>> soon as I created the next schema, the performance went back down to the\n>> same level.\n>>\n>> So there's a clue there somewhere. Next I will try to vacuum full the entire\n>> database.\n>\n> You should really enable autovacuum.  You'll probably have to VACUUM\n> FULL and REINDEX to clean everything up, but after that autovacuum\n> should be MUCH more effective than a nightly vacuum run.  If you're\n> running some ancient Pg version where autovacuum is not enabled by\n> default, you should also consider upgrading.  There are a lot of\n> goodies (including performance enhancements) in newer versions.\n\nAlso note that the argument that autovacuum chews up too much IO is\nmoot now that you can set cost delay to 10 to 20 milliseconds. Unless\nyou're running on the hairy edge of maximum IO at all times, autovac\nshould be pretty much unnoticed.\n", "msg_date": "Mon, 9 Nov 2009 08:33:12 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE slowing down significantly over time" }, { "msg_contents": "On Mon, Nov 9, 2009 at 9:46 AM, Anj Adu <[email protected]> wrote:\n> Why is reindex needed ?\n\nVACUUM FULL does not fix index bloat, only table boat.\n\n...Robert\n", "msg_date": "Mon, 9 Nov 2009 10:49:35 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE slowing down significantly over time" }, { "msg_contents": "Scott Marlowe wrote:\n> Also note that the argument that autovacuum chews up too much IO is\n> moot now that you can set cost delay to 10 to 20 milliseconds. Unless\n> you're running on the hairy edge of maximum IO at all times, autovac\n> should be pretty much unnoticed\nAnd if you're running on the hairy edge like that, you really need \nautovacuum whether you think you can afford it or not. 
Badly maintained \ntables are also I/O intensive, and it's easy for someone who thinks \"I'm \ntoo busy to allocate VACUUM time\" to end up wasting more resources than \nit would have taken to just do things right in the first place. I see \nway too many people who suffer from false economy when it comes to \nautovacuum planning.\n\nSome comments on this whole discussion:\n\n1) You don't end up with dead rows [auto]vacuum needs to clean up just \nwhen you delete things. They show up when you UPDATE things, too--the \noriginal row isn't removed until after the new one is written. The \noverhead isn't as bad on UPDATEs in 8.3 or later, but just because you \ndon't delete doesn't mean you don't need VACUUM to clean up dead stuff.\n\n2) Any time you find yourself considering VACUUM FULL to clean things \nup, you're probably making a mistake, because the sort of situations \nit's the only tool to recover from tend to be broader disasters. The \nguidelines in \nhttp://developer.postgresql.org/pgdocs/postgres/routine-vacuuming.html \nspell out my feeling here as a tip: \"the best way is to use CLUSTER or \none of the table-rewriting variants of ALTER TABLE\". If you're fighting \nperformance issues because of some amount of background mismanagement \nwith an unknown amount of table garbage in the past, it's quite possible \nyou'll find the reinvigorated performance you get from CLUSTER worth the \nmaintenance cost of needing an exclusive lock for it to run for a \nwhile. See \nhttp://it.toolbox.com/blogs/database-soup/getting-rid-of-vacuum-full-feedback-needed-33959 \nfor more information. I run CLUSTER all the time, and every time I \nthink I'm saving time by doing VACUUM FULL/REINDEX instead I regret it \nas another false economy. Turn on autovacuum, make it run all the time \nbut at a slower average speed if you're concerned about its overhead, \nand use CLUSTER once to blow away the accumulated bloat from before you \nwere doing the right things.\n\n3) There is a lot of speculation here and no measurements. What I'd be \ndoing in this case is running something like the query at \nhttp://wiki.postgresql.org/wiki/Disk_Usage (but without the lines that \nfilter out pg_catalog because the catalogs are a strong suspect here) \nregularly while debugging the problem here. Measure how big all the \ncatalog tables and indexes are, do your operations that make things \nbetter or worse, then measure again. Turn autovacuum on, repeat the \ntest, see if things are different. This type of problem tends to be \nreally easy to quantify.\n\n--\nGreg Smith [email protected] Baltimore, MD\n\n", "msg_date": "Mon, 09 Nov 2009 12:04:21 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE TABLE slowing down significantly over time" } ]
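Following the advice above to measure rather than speculate, one quick way to watch catalog bloat is to track the size of the pg_catalog relations before and after creating a schema or running maintenance. A sketch (the size functions used here have been available since 8.1):

    SELECT c.relname,
           pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
      FROM pg_class c
      JOIN pg_namespace n ON n.oid = c.relnamespace
     WHERE n.nspname = 'pg_catalog'
       AND c.relkind = 'r'
     ORDER BY pg_total_relation_size(c.oid) DESC
     LIMIT 10;

If pg_class, pg_attribute or their indexes keep growing across runs even with autovacuum enabled, that points at the catalog bloat suspected in this thread.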
[ { "msg_contents": "\n Hi All,\n\nWe have a bigger table with some million rows. Number of index scans is\nhigh, number of seq reads is low. This table if often joined with\nothers... so we want to buy a new SSD drive, create a tablespace on it\nand put this big table on it. Random read speed on SSD is identical to\nseq read. However, I need to tell the optimizer that random_page_cost is\nless for the new tablespace. Is there a way to do it?\n\nThanks,\n\n Laszlo\n\n\n", "msg_date": "Mon, 09 Nov 2009 13:58:47 +0100", "msg_from": "Laszlo Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "random_page_cost for tablespace" }, { "msg_contents": "2009/11/9 Laszlo Nagy <[email protected]>:\n> We have a bigger table with some million rows. Number of index scans is\n> high, number of seq reads is low. This table if often joined with\n> others... so we want to buy a new SSD drive, create a tablespace on it\n> and put this big table on it. Random read speed on SSD is identical to\n> seq read. However, I need to tell the optimizer that random_page_cost is\n> less for the new tablespace. Is there a way to do it?\n\nI happen to be working on a patch for this exact feature. However,\neven assuming it gets in, that means waiting for 8.5.\n\n...Robert\n", "msg_date": "Mon, 9 Nov 2009 08:48:55 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random_page_cost for tablespace" }, { "msg_contents": "Robert Haas ďż˝rta:\n> 2009/11/9 Laszlo Nagy <[email protected]>:\n> \n>> We have a bigger table with some million rows. Number of index scans is\n>> high, number of seq reads is low. This table if often joined with\n>> others... so we want to buy a new SSD drive, create a tablespace on it\n>> and put this big table on it. Random read speed on SSD is identical to\n>> seq read. However, I need to tell the optimizer that random_page_cost is\n>> less for the new tablespace. Is there a way to do it?\n>> \n>\n> I happen to be working on a patch for this exact feature. However,\n> even assuming it gets in, that means waiting for 8.5.\n> \nThat will be a very nice feature. Thank you! :-)\n\n", "msg_date": "Mon, 09 Nov 2009 17:04:04 +0100", "msg_from": "Laszlo Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: random_page_cost for tablespace" } ]
[ { "msg_contents": "Hi !\nWe recently had a problem with wal archiving badly impacting the\nperformance of our postgresql master.\nAnd i discovered \"cstream\", that can limite the bandwidth of pipe stream.\n\nHere is our new archive command, FYI, that limit the IO bandwidth to 500KB/s :\narchive_command = '/bin/cat %p | cstream -i \"\" -o \"\" -t -500k | nice\ngzip -9 -c | /usr/bin/ncftpput etc...'\n\n\nPS : While writing that mail, i just found that i could replace :\ncat %p | cstream -i \"\" ...\nwith\ncstream -i %p ...\n*grins*\n\n\n-- \nker2x\nSysadmin & DBA @ http://Www.over-blog.com/\n", "msg_date": "Tue, 10 Nov 2009 12:55:42 +0100", "msg_from": "Laurent Laborde <[email protected]>", "msg_from_op": true, "msg_subject": "limiting performance impact of wal archiving." }, { "msg_contents": "On Tue, Nov 10, 2009 at 12:55:42PM +0100, Laurent Laborde wrote:\n> Hi !\n> We recently had a problem with wal archiving badly impacting the\n> performance of our postgresql master.\n> And i discovered \"cstream\", that can limite the bandwidth of pipe stream.\n> \n> Here is our new archive command, FYI, that limit the IO bandwidth to 500KB/s :\n> archive_command = '/bin/cat %p | cstream -i \"\" -o \"\" -t -500k | nice\n> gzip -9 -c | /usr/bin/ncftpput etc...'\n> \n> \n> PS : While writing that mail, i just found that i could replace :\n> cat %p | cstream -i \"\" ...\n> with\n> cstream -i %p ...\n> *grins*\n> \n\nAnd here is a simple perl program that I have used for a similar\nreason. Obviously, it can be adapted to your specific needs.\n\nRegards,\nKen\n\n----throttle.pl-------\n#!/usr/bin/perl -w\n\nrequire 5.0; # written for perl5, hasta labyebye perl4\n\nuse strict;\nuse Getopt::Std;\n\n#\n# This is an simple program to throttle network traffic to a\n# specified KB/second to allow a restore in the middle of the\n# day over the network.\n#\n\nmy($file, $chunksize, $len, $offset, $written, $rate, $buf );\nmy($options, $blocksize, $speed, %convert, $inv_rate, $verbose);\n\n%convert = ( # conversion factors for $speed,$blocksize\n\t'',\t'1',\n\t'w',\t'2',\n\t'W',\t'2',\n\t'b',\t'512',\n\t'B',\t'512',\n\t'k',\t'1024',\n\t'K',\t'1024',\n);\n\n$options = 'vhs:r:b:f:';\n\n#\n# set defaults\n#\n$speed = '100k';\n$rate = '5';\n$blocksize = '120k'; # Works for the DLT drives under SunOS\n$file = '-';\n$buf = '';\n$verbose = 0; # default to quiet\n\nsub usage {\n my($usage);\n\n $usage = \"Usage: throttle [-s speed][-r rate/sec][-b blksize][-f file][-v][-h]\n (writes data to STDOUT)\n -s speed max data rate in B/s - defaults to 100k \n -r rate writes/sec - defaults to 5\n -b size read blocksize - defaults to 120k\n -f file file to read for input - defaults to STDIN\n -h print this message\n -v print parameters used\n\";\n\n print STDERR $usage;\n exit(1);\n}\n\ngetopts($options) || usage;\n\nif ($::opt_h || $::opt_h) {\n usage;\n}\n\nusage unless $#ARGV < 0;\n\n$speed = $::opt_s if $::opt_s;\n$rate = $::opt_r if $::opt_r;\n$blocksize = $::opt_b if $::opt_b;\n$file = $::opt_f if $::opt_f;\n\n#\n# Convert $speed and $blocksize to bytes for use in the rest of the script\nif ( $speed =~ /^(\\d+)([wWbBkK]*)$/ ) {\n $speed = $1 * $convert{$2};\n}\nif ( $blocksize =~ /^(\\d+)([wWbBkK]*)$/ ) {\n $blocksize = $1 * $convert{$2};\n}\n$inv_rate = 1/$rate;\n$chunksize = int($speed/$rate);\n$chunksize = 1 if $chunksize == 0;\n\nif ($::opt_v || $::opt_v) {\n print STDERR \"speed = $speed B/s\\nrate = $rate/sec\\nblocksize = $blocksize B\\nchunksize = $chunksize B\\n\";\n}\n\n# Return error if unable to open 
file\nopen(FILE, \"<$file\") or die \"Cannot open $file: $!\\n\";\n\n# Read data from stdin and write it to stdout at a rate based\n# on $rate and $speed.\n#\nwhile($len = sysread(FILE, $buf, $blocksize)) {\n #\n # print out in chunks of $speed/$rate size to allow a smoother load\n $offset = 0;\n while ($len) {\n $written = syswrite(STDOUT, $buf, $chunksize, $offset);\n die \"System write error: $!\\n\" unless defined $written;\n $len -= $written;\n $offset += $written;\n #\n # Now wait 1/$rate seconds before doing the next block\n #\n select(undef, undef, undef, $inv_rate);\n }\n}\n\nclose(FILE);\n", "msg_date": "Tue, 10 Nov 2009 07:41:24 -0600", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limiting performance impact of wal archiving." }, { "msg_contents": "Laurent Laborde wrote:\n> Hi !\n> We recently had a problem with wal archiving badly impacting the\n> performance of our postgresql master.\n\nHmmm, do you want to say that copying 16 MB files over the network (and \npresumably you are not doing it absolutely continually - there are \npauses between log shipping - or you wouldn't be able to use bandwidth \nlimiting) in an age when desktop drives easily read 60 MB/s (and besides \nmost of the file should be cached by the OS anyway) is a problem for \nyou? Slow hardware?\n\n(or I've misunderstood the problem...)\n\n", "msg_date": "Tue, 10 Nov 2009 15:05:36 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limiting performance impact of wal archiving." }, { "msg_contents": "On Tue, Nov 10, 2009 at 3:05 PM, Ivan Voras <[email protected]> wrote:\n> Laurent Laborde wrote:\n>>\n>> Hi !\n>> We recently had a problem with wal archiving badly impacting the\n>> performance of our postgresql master.\n>\n> Hmmm, do you want to say that copying 16 MB files over the network (and\n> presumably you are not doing it absolutely continually - there are pauses\n> between log shipping - or you wouldn't be able to use bandwidth limiting) in\n> an age when desktop drives easily read 60 MB/s (and besides most of the file\n> should be cached by the OS anyway) is a problem for you? Slow hardware?\n>\n> (or I've misunderstood the problem...)\n\nDesktop drive can easily do 60MB/s in *sequential* read/write.\nWe use high performance array of 15.000rpm SAS disk on an octocore\n32GB and IO is always a problem.\n\nI explain the problem :\n\nThis server (doing wal archiving) is the master node of the\nover-blog's server farm.\nhundreds of GB of data, tens of millions of articles and comments,\nmillions of user, ...\n~250 read/write sql requests per seconds for the master\n~500 read sql request per slave.\n\nAwefully random access overload our array at 10MB/s at best.\nOf course, when doing sequential read it goes to +250MB/s :)\n\nWaiting for \"cheap\" memory to be cheap enough to have 512Go of ram per server ;)\n\nWe tought about SSD.\nBut interleaved read/write kill any SSD performance and is not better\nthan SSD. Just more expensive with an unknown behaviour over age.\n\n-- \nker2x\nsysadmin & DBA @ http://www.over-blog.com/\n", "msg_date": "Tue, 10 Nov 2009 16:00:06 +0100", "msg_from": "Laurent Laborde <[email protected]>", "msg_from_op": true, "msg_subject": "Re: limiting performance impact of wal archiving." 
}, { "msg_contents": "Laurent Laborde wrote:\n> On Tue, Nov 10, 2009 at 3:05 PM, Ivan Voras <[email protected]> wrote:\n>> Laurent Laborde wrote:\n>>> Hi !\n>>> We recently had a problem with wal archiving badly impacting the\n>>> performance of our postgresql master.\n>> Hmmm, do you want to say that copying 16 MB files over the network (and\n>> presumably you are not doing it absolutely continually - there are pauses\n>> between log shipping - or you wouldn't be able to use bandwidth limiting) in\n>> an age when desktop drives easily read 60 MB/s (and besides most of the file\n>> should be cached by the OS anyway) is a problem for you? Slow hardware?\n>>\n>> (or I've misunderstood the problem...)\n> \n> Desktop drive can easily do 60MB/s in *sequential* read/write.\n\n... and WAL files are big sequential chunks of data :)\n\n> We use high performance array of 15.000rpm SAS disk on an octocore\n> 32GB and IO is always a problem.\n> \n> I explain the problem :\n> \n> This server (doing wal archiving) is the master node of the\n> over-blog's server farm.\n> hundreds of GB of data, tens of millions of articles and comments,\n> millions of user, ...\n> ~250 read/write sql requests per seconds for the master\n> ~500 read sql request per slave.\n> \n> Awefully random access overload our array at 10MB/s at best.\n\nOk, this explains it. It also means you are probably not getting much \nruntime performance benefits from the logging and should think about \nmoving the logs to different drive(s), among other things because...\n\n> Of course, when doing sequential read it goes to +250MB/s :)\n\n... it means you cannot dedicate 0.064 of second from the array to read \nthrough a single log file without your other transactions suffering.\n\n> Waiting for \"cheap\" memory to be cheap enough to have 512Go of ram per server ;)\n> \n> We tought about SSD.\n> But interleaved read/write kill any SSD performance and is not better\n> than SSD. Just more expensive with an unknown behaviour over age.\n\nYes, this is the current attitude toward them.\n\n", "msg_date": "Tue, 10 Nov 2009 16:11:33 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limiting performance impact of wal archiving." }, { "msg_contents": "On Tue, Nov 10, 2009 at 4:11 PM, Ivan Voras <[email protected]> wrote:\n> Laurent Laborde wrote:\n>\n> Ok, this explains it. It also means you are probably not getting much\n> runtime performance benefits from the logging and should think about moving\n> the logs to different drive(s), among other things because...\n\nIt is on a separate array which does everything but tablespace (on a\nseparate array) and indexspace (another separate array).\n\n>> Of course, when doing sequential read it goes to +250MB/s :)\n>\n> ... it means you cannot dedicate 0.064 of second from the array to read\n> through a single log file without your other transactions suffering.\n\nWell, actually, i also change the configuration to synchronous_commit=off\nIt probably was *THE* problem with checkpoint and archiving :)\n\nBut adding cstream couldn't hurt performance, and i wanted to share\nthis with the list. :)\n\nBTW, if you have any idea to improve IO performance, i'll happily read it.\nWe're 100% IO bound.\n\neg: historically, we use JFS with LVM on linux. from the good old time\nwhen IO wasn't a problem.\ni heard that ext3 is not better for postgresql. what else ? 
xfs ?\n\n*hugs*\n\n-- \nker2x\nSysadmin & DBA @ http://www.over-blog.com/\n", "msg_date": "Tue, 10 Nov 2009 16:29:32 +0100", "msg_from": "Laurent Laborde <[email protected]>", "msg_from_op": true, "msg_subject": "Re: limiting performance impact of wal archiving." }, { "msg_contents": "checkpoint log :\n--------------------\n\n checkpoint starting: time\n checkpoint complete: wrote 1972 buffers (0.8%); 0 transaction log\nfile(s) added, 0 removed, 13 recycled;\n write=179.123 s, sync=26.284 s, total=205.451 s\n\nwith a 10mn timeout.\n\n-- \nker2x\n", "msg_date": "Tue, 10 Nov 2009 16:37:45 +0100", "msg_from": "Laurent Laborde <[email protected]>", "msg_from_op": true, "msg_subject": "Re: limiting performance impact of wal archiving." }, { "msg_contents": "On Tue, Nov 10, 2009 at 8:00 AM, Laurent Laborde <[email protected]> wrote:\n>\n> Desktop drive can easily do 60MB/s in *sequential* read/write.\n> We use high performance array of 15.000rpm SAS disk on an octocore\n> 32GB and IO is always a problem.\n\nHow man drives in the array? Controller? RAID level?\n\n> I explain the problem :\n>\n> This server (doing wal archiving) is the master node of the\n> over-blog's server farm.\n> hundreds of GB of data, tens of millions of articles and comments,\n> millions of user, ...\n> ~250 read/write sql requests per seconds for the master\n> ~500 read sql request per slave.\n\nThat's really not very fast.\n", "msg_date": "Tue, 10 Nov 2009 08:43:34 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limiting performance impact of wal archiving." }, { "msg_contents": "Laurent Laborde <[email protected]> wrote:\n \n> BTW, if you have any idea to improve IO performance, i'll happily\n> read it. We're 100% IO bound.\n \nAt the risk of stating the obvious, you want to make sure you have\nhigh quality RAID adapters with large battery backed cache configured\nto write-back.\n \nIf you haven't already done so, you might want to try\nelevator=deadline.\n \n> xfs ?\n \nIf you use xfs and have the aforementioned BBU cache, be sure to turn\nwrite barriers off.\n \n-Kevin\n", "msg_date": "Tue, 10 Nov 2009 09:48:35 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limiting performance impact of wal archiving." }, { "msg_contents": "On Tue, Nov 10, 2009 at 4:48 PM, Kevin Grittner\n<[email protected]> wrote:\n> Laurent Laborde <[email protected]> wrote:\n>\n>> BTW, if you have any idea to improve IO performance, i'll happily\n>> read it.  
We're 100% IO bound.\n>\n> At the risk of stating the obvious, you want to make sure you have\n> high quality RAID adapters with large battery backed cache configured\n> to write-back.\n\nNot sure how \"high quality\" the 3ware is.\n/c0 Driver Version = 2.26.08.004-2.6.18\n/c0 Model = 9690SA-8I\n/c0 Available Memory = 448MB\n/c0 Firmware Version = FH9X 4.04.00.002\n/c0 Bios Version = BE9X 4.01.00.010\n/c0 Boot Loader Version = BL9X 3.08.00.001\n/c0 Serial Number = L340501A7360026\n/c0 PCB Version = Rev 041\n/c0 PCHIP Version = 2.00\n/c0 ACHIP Version = 1501290C\n/c0 Controller Phys = 8\n/c0 Connections = 8 of 128\n/c0 Drives = 8 of 128\n/c0 Units = 3 of 128\n/c0 Active Drives = 8 of 128\n/c0 Active Units = 3 of 32\n/c0 Max Drives Per Unit = 32\n/c0 Total Optimal Units = 2\n/c0 Not Optimal Units = 1\n/c0 Disk Spinup Policy = 1\n/c0 Spinup Stagger Time Policy (sec) = 1\n/c0 Auto-Carving Policy = off\n/c0 Auto-Carving Size = 2048 GB\n/c0 Auto-Rebuild Policy = on\n/c0 Controller Bus Type = PCIe\n/c0 Controller Bus Width = 8 lanes\n/c0 Controller Bus Speed = 2.5 Gbps/lane\n\n\n> If you haven't already done so, you might want to try\n> elevator=deadline.\n\nThat's what we use.\nAlso tried \"noop\" scheduler without signifiant performance change.\n\n-- \nker2x\n", "msg_date": "Tue, 10 Nov 2009 16:53:49 +0100", "msg_from": "Laurent Laborde <[email protected]>", "msg_from_op": true, "msg_subject": "Re: limiting performance impact of wal archiving." }, { "msg_contents": "Laurent Laborde wrote:\n> It is on a separate array which does everything but tablespace (on a\n> separate array) and indexspace (another separate array).\n> \nOn Linux, the types of writes done to the WAL volume (where writes are \nconstantly being flushed) require the WAL volume not be shared with \nanything else for that to perform well. Typically you'll end up with \nother things being written out too because it can't just selectively \nflush just the WAL data. The whole \"write barriers\" implementation \nshould fix that, but in practice rarely does.\n\nIf you put many drives into one big array, somewhere around 6 or more \ndrives, at that point you might put the WAL on that big volume too and \nbe OK (presuming a battery-backed cache which you have). But if you're \ncarving up array sections so finely for other purposes, it doesn't sound \nlike your WAL data is on a big array. Mixed onto a big shared array or \nsingle dedicated disks (RAID1) are the two WAL setups that work well, \nand if I have a bunch of drives I personally always prefer a dedicated \ndrive mainly because it makes it easy to monitor exactly how much WAL \nactivity is going on by watching that drive.\n\n> Well, actually, i also change the configuration to synchronous_commit=off\n> It probably was *THE* problem with checkpoint and archiving :)\n> \nThis is basically turning off the standard WAL implementation for one \nwhere you'll lose some data if there's a crash. If you're OK with that, \ngreat; if not, expect to lose some number of transactions if the server \never goes down unexpectedly when configured like this.\n\nGenerally if checkpoints and archiving are painful, the first thing to \ndo is to increase checkpoint_segments to a very high amount (>100), \nincrease checkpoint_timeout too, and push shared_buffers up to be a \nlarge chunk of memory. 
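As a rough sketch only -- these numbers are placeholders that would have to be sized against the RAM and I/O actually available, not firm recommendations -- that kind of postgresql.conf change might look like:\n\n  checkpoint_segments = 128             # default is only 3\n  checkpoint_timeout = 30min            # default is 5min\n  checkpoint_completion_target = 0.9    # 8.3+ only; spreads checkpoint writes out\n  shared_buffers = 4GB                  # \"a large chunk of memory\", tune to taste\n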
Disabling synchronous_commit should be a last \nresort if your performance issues are so bad you have no choice but to \nsacrifice some data integrity just to keep things going, while you \nrearchitect to improve things.\n\n> eg: historically, we use JFS with LVM on linux. from the good old time\n> when IO wasn't a problem.\n> i heard that ext3 is not better for postgresql. what else ? xfs ?\n> \nYou never want to use LVM under Linux if you care about performance. It \nadds a bunch of overhead that drops throughput no matter what, and it's \nfilled with limitations. For example, I mentioned write barriers being \none way to interleave WAL writes without other types without having to \nwrite the whole filesystem cache out. Guess what: they don't work at \nall regardless if you're using LVM. Much like using virtual machines, \nLVM is an approach only suitable for low to medium performance systems \nwhere your priority is easier management rather than speed.\n\nGiven the current quality of Linux code, I hesitate to use anything but \next3 because I consider that just barely reliable enough even as the \nmost popular filesystem by far. JFS and XFS have some benefits to them, \nbut none so compelling to make up for how much less testing they get. \nThat said, there seem to be a fair number of people happily running \nhigh-performance PostgreSQL instances on XFS.\n\n--\nGreg Smith [email protected] Baltimore, MD\n", "msg_date": "Tue, 10 Nov 2009 11:35:50 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limiting performance impact of wal archiving." }, { "msg_contents": "On Tue, Nov 10, 2009 at 5:35 PM, Greg Smith <[email protected]> wrote:\n> Laurent Laborde wrote:\n>>\n>> It is on a separate array which does everything but tablespace (on a\n>> separate array) and indexspace (another separate array).\n>>\n>\n> On Linux, the types of writes done to the WAL volume (where writes are\n> constantly being flushed) require the WAL volume not be shared with anything\n> else for that to perform well.  Typically you'll end up with other things\n> being written out too because it can't just selectively flush just the WAL\n> data.  The whole \"write barriers\" implementation should fix that, but in\n> practice rarely does.\n>\n> If you put many drives into one big array, somewhere around 6 or more\n> drives, at that point you might put the WAL on that big volume too and be OK\n> (presuming a battery-backed cache which you have).  But if you're carving up\n> array sections so finely for other purposes, it doesn't sound like your WAL\n> data is on a big array.  Mixed onto a big shared array or single dedicated\n> disks (RAID1) are the two WAL setups that work well, and if I have a bunch\n> of drives I personally always prefer a dedicated drive mainly because it\n> makes it easy to monitor exactly how much WAL activity is going on by\n> watching that drive.\n\nOn the \"new\" slave i have 6 disk in raid-10 and 2 disk in raid-1.\nI tought about doing the same thing with the master.\n\n\n>> Well, actually, i also change the configuration to synchronous_commit=off\n>> It probably was *THE* problem with checkpoint and archiving :)\n>>\n>\n> This is basically turning off the standard WAL implementation for one where\n> you'll lose some data if there's a crash.  
If you're OK with that, great; if\n> not, expect to lose some number of transactions if the server ever goes down\n> unexpectedly when configured like this.\n\nI have 1 spare dedicated to hot standby, doing nothing but waiting for\nthe master to fail.\n+ 2 spare candidate for cluster mastering.\n\nIn theory, i could even disable fsync and all \"safety\" feature on the master.\nIn practice, i'd like to avoid using the slony's failover capabilities\nif i can avoid it :)\n\n> Generally if checkpoints and archiving are painful, the first thing to do is\n> to increase checkpoint_segments to a very high amount (>100), increase\n> checkpoint_timeout too, and push shared_buffers up to be a large chunk of\n> memory.\n\nShared_buffer is 2GB.\nI'll reread domcumentation about checkpoint_segments.\nthx.\n\n> Disabling synchronous_commit should be a last resort if your\n> performance issues are so bad you have no choice but to sacrifice some data\n> integrity just to keep things going, while you rearchitect to improve\n> things.\n>\n>> eg: historically, we use JFS with LVM on linux. from the good old time\n>> when IO wasn't a problem.\n>> i heard that ext3 is not better for postgresql. what else ? xfs ?\n>>\n>\n> You never want to use LVM under Linux if you care about performance.  It\n> adds a bunch of overhead that drops throughput no matter what, and it's\n> filled with limitations.  For example, I mentioned write barriers being one\n> way to interleave WAL writes without other types without having to write the\n> whole filesystem cache out.  Guess what:  they don't work at all regardless\n> if you're using LVM.  Much like using virtual machines, LVM is an approach\n> only suitable for low to medium performance systems where your priority is\n> easier management rather than speed.\n\n*doh* !!\nEverybody told me \"nooo ! LVM is ok, no perceptible overhead, etc ...)\nAre you 100% about LVM ? I'll happily trash it :)\n\n> Given the current quality of Linux code, I hesitate to use anything but ext3\n> because I consider that just barely reliable enough even as the most popular\n> filesystem by far.  JFS and XFS have some benefits to them, but none so\n> compelling to make up for how much less testing they get.  That said, there\n> seem to be a fair number of people happily running high-performance\n> PostgreSQL instances on XFS.\n\nThx for the info :)\n\n-- \nker2x\n", "msg_date": "Tue, 10 Nov 2009 17:52:00 +0100", "msg_from": "Laurent Laborde <[email protected]>", "msg_from_op": true, "msg_subject": "Re: limiting performance impact of wal archiving." }, { "msg_contents": "On Tue, Nov 10, 2009 at 9:52 AM, Laurent Laborde <[email protected]> wrote:\n> On Tue, Nov 10, 2009 at 5:35 PM, Greg Smith <[email protected]> wrote:\n>> disks (RAID1) are the two WAL setups that work well, and if I have a bunch\n>> of drives I personally always prefer a dedicated drive mainly because it\n>> makes it easy to monitor exactly how much WAL activity is going on by\n>> watching that drive.\n\nI do the same thing for the same reasons.\n\n> On the \"new\" slave i have 6 disk in raid-10 and 2 disk in raid-1.\n> I tought about doing the same thing with the master.\n\nIt would be a worthy change to make. 
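Roughly, with the cluster shut down first, the move itself is just a rename plus a symlink -- the paths below are examples only, substitute the real data directory and RAID-1 mount point:\n\n  $ pg_ctl -D /var/lib/pgsql/data stop\n  $ mv /var/lib/pgsql/data/pg_xlog /raid1/pg_xlog\n  $ ln -s /raid1/pg_xlog /var/lib/pgsql/data/pg_xlog\n  $ pg_ctl -D /var/lib/pgsql/data start\n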
As long as there's no heavy log\nwrite load on the RAID-1 put the pg_xlog there.\n\n>> Generally if checkpoints and archiving are painful, the first thing to do is\n>> to increase checkpoint_segments to a very high amount (>100), increase\n>> checkpoint_timeout too, and push shared_buffers up to be a large chunk of\n>> memory.\n>\n> Shared_buffer is 2GB.\n\nOn some busy systems with lots of small transactions large\nshared_buffer can cause it to run slower rather than faster due to\nbackground writer overhead.\n\n> I'll reread domcumentation about checkpoint_segments.\n> thx.\n\nNote that if you've got a slow IO subsystem, a large number of\ncheckpoint segments can result in REALLY long restart times after a\ncrash, as well as really long waits for shutdown and / or bgwriter\nonce you've filled them all up.\n\n>> You never want to use LVM under Linux if you care about performance.  It\n>> adds a bunch of overhead that drops throughput no matter what, and it's\n>> filled with limitations.  For example, I mentioned write barriers being one\n>> way to interleave WAL writes without other types without having to write the\n>> whole filesystem cache out.  Guess what:  they don't work at all regardless\n>> if you're using LVM.  Much like using virtual machines, LVM is an approach\n>> only suitable for low to medium performance systems where your priority is\n>> easier management rather than speed.\n>\n> *doh* !!\n> Everybody told me \"nooo ! LVM is ok, no perceptible overhead, etc ...)\n> Are you 100% about LVM ? I'll happily trash it :)\n\nEveryone who doesn't run databases thinks LVM is plenty fast. Under a\ndatabase it is not so quick. Do your own testing to be sure, but I've\nseen slowdowns of about 1/2 under it for fast RAID arrays.\n\n>> Given the current quality of Linux code, I hesitate to use anything but ext3\n>> because I consider that just barely reliable enough even as the most popular\n>> filesystem by far.  JFS and XFS have some benefits to them, but none so\n>> compelling to make up for how much less testing they get.  That said, there\n>> seem to be a fair number of people happily running high-performance\n>> PostgreSQL instances on XFS.\n>\n> Thx for the info :)\n\nNote that XFS gets a LOT of testing, especially under linux. That\nsaid it's still probably only 1/10th as many dbs (or fewer) as those\nrunning on ext3 on linux. I've used it before and it's a little\nfaster than ext3 at some stuff, especially deleting large files (or in\npg's case lots of 1G files) which can make ext3 crawl.\n", "msg_date": "Tue, 10 Nov 2009 10:01:30 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limiting performance impact of wal archiving." }, { "msg_contents": "On Tue, Nov 10, 2009 at 5:35 PM, Greg Smith <[email protected]> wrote:\n> Given the current quality of Linux code, I hesitate to use anything but ext3\n> because I consider that just barely reliable enough even as the most popular\n> filesystem by far. JFS and XFS have some benefits to them, but none so\n> compelling to make up for how much less testing they get. That said, there\n> seem to be a fair number of people happily running high-performance\n> PostgreSQL instances on XFS.\n\nI thought the common wisdom was to use ext2 for the WAL, since the WAL is a journal system, and ext3 would essentially be journaling the journal. 
Is that not true?\n\nCraig\n", "msg_date": "Tue, 10 Nov 2009 09:07:14 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limiting performance impact of wal archiving." }, { "msg_contents": "On Tue, Nov 10, 2009 at 10:07 AM, Craig James\n<[email protected]> wrote:\n> On Tue, Nov 10, 2009 at 5:35 PM, Greg Smith <[email protected]> wrote:\n>>\n>> Given the current quality of Linux code, I hesitate to use anything but\n>> ext3\n>> because I consider that just barely reliable enough even as the most\n>> popular\n>> filesystem by far.  JFS and XFS have some benefits to them, but none so\n>> compelling to make up for how much less testing they get.  That said,\n>> there\n>> seem to be a fair number of people happily running high-performance\n>> PostgreSQL instances on XFS.\n>\n> I thought the common wisdom was to use ext2 for the WAL, since the WAL is a\n> journal system, and ext3 would essentially be journaling the journal.  Is\n> that not true?\n\nYep, ext2 for pg_xlog is fine.\n", "msg_date": "Tue, 10 Nov 2009 10:10:45 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limiting performance impact of wal archiving." }, { "msg_contents": "Craig James wrote:\n> On Tue, Nov 10, 2009 at 5:35 PM, Greg Smith <[email protected]> wrote:\n>> Given the current quality of Linux code, I hesitate to use anything \n>> but ext3\n>> because I consider that just barely reliable enough even as the most \n>> popular\n>> filesystem by far. JFS and XFS have some benefits to them, but none so\n>> compelling to make up for how much less testing they get. That said, \n>> there\n>> seem to be a fair number of people happily running high-performance\n>> PostgreSQL instances on XFS.\n>\n> I thought the common wisdom was to use ext2 for the WAL, since the WAL \n> is a journal system, and ext3 would essentially be journaling the \n> journal. Is that not true?\nUsing ext2 means that you're still exposed to fsck errors on boot after \na crash, which doesn't lose anything but you have to go out of your way \nto verify you're not going to get stuck with your server down in that \ncase. The state of things on the performance side is nicely benchmarked \nat \nhttp://www.commandprompt.com/blogs/joshua_drake/2008/04/is_that_performance_i_smell_ext2_vs_ext3_on_50_spindles_testing_for_postgresql/ \n\n\nSure, it jumps from 85MB/s to 115MB/s if you use ext2, but if noatime \nhad been used I think even some of that fairly small gap would have \nclosed. My experience is that it's really hard to saturate even a \nsingle disk worth of bandwidth with WAL writes if there's a dedicated \nWAL volume. As such, I'll use ext3 until it's very clear that's the \nactual bottleneck, and only then step back and ask if converting to ext2 \nis worth the performance boost and potential crash recovery mess. I've \nnever actually reached that point in a real-world situation, only in \nsimulated burst write tests.\n\n--\nGreg Smith [email protected] Baltimore, MD\n", "msg_date": "Tue, 10 Nov 2009 12:26:03 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limiting performance impact of wal archiving." 
}, { "msg_contents": "Laurent Laborde wrote:\n> On Tue, Nov 10, 2009 at 5:35 PM, Greg Smith <[email protected]> wrote:\n> \n> I have 1 spare dedicated to hot standby, doing nothing but waiting for\n> the master to fail.\n> + 2 spare candidate for cluster mastering.\n>\n> In theory, i could even disable fsync and all \"safety\" feature on the master.\n> \nThere are two types of safety issues here:\n\n1) Will the database be corrupted if there's a crash? This can happen \nif you turn off fsync, and you'll need to switch to a standby to easily \nget back up again\n\n2) Will you lose transactions that have been reported as committed to a \nclient if there's a crash? This you're exposed to if synchronous_commit \nis off, and whether you have a standby or not doesn't change that fact.\n\n> Everybody told me \"nooo ! LVM is ok, no perceptible overhead, etc ...)\n> Are you 100% about LVM ? I'll happily trash it :)\nBelieving what people told you is how you got into trouble in the first \nplace. You shouldn't believe me either--benchmark yourself and then \nyou'll know. As a rule, any time someone suggests there's a \ntechnological approach that makes it easier to manage disks, that \napproach will also slow performance. LVM vs. straight volumes, SAN vs. \ndirect-attached storage, VM vs. real hardware, it's always the same story.\n\n--\nGreg Smith [email protected] Baltimore, MD\n\n", "msg_date": "Tue, 10 Nov 2009 12:34:29 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limiting performance impact of wal archiving." }, { "msg_contents": "Scott Marlowe wrote:\n> On some busy systems with lots of small transactions large\n> shared_buffer can cause it to run slower rather than faster due to\n> background writer overhead.\n> \nThis is only really true in 8.2 and earlier, where background writer \ncomputations are done as a percentage of shared_buffers. The rewrite I \ndid in 8.3 changes that to where it's proportional to overall system \nactivity (specifically, buffer allocations) and you shouldn't see this \nthere. However, large values for shared_buffers do increase the \npotential for longer checkpoints though, which is similar background \noverhead starting in 8.3. That's why I mention it hand in hand with \ndecreasing the checkpoint frequency, you really need to do that before \nlarge shared_buffers values are viable.\n\nThis is actually a topic I meant to mention to Laurent: if you're not \nrunning at least PG8.3, you really should be considering what it would \ntake to upgrade to 8.4. It's hard to justify the 8.3->8.4 upgrade just \nbased on that version's new performance features (unless you delete \nthings a lot), but the changes from 8.1 to 8.2 to 8.3 make the database \nfaster at a lot of common tasks.\n\n> Note that if you've got a slow IO subsystem, a large number of\n> checkpoint segments can result in REALLY long restart times after a\n> crash, as well as really long waits for shutdown and / or bgwriter\n> once you've filled them all up.\n> \nThe setup here, with a decent number of disks and a 3ware controller, \nshouldn't be that bad here. Ultimately you have to ask yourself whether \nit's OK to suffer from the rare recovery issue this introduces if it \nimproves things a lot all of the rest of the time, which increasing \ncheckpoint_segments does.\n\n> Note that XFS gets a LOT of testing, especially under linux. That\n> said it's still probably only 1/10th as many dbs (or fewer) as those\n> running on ext3 on linux. 
I've used it before and it's a little\n> faster than ext3 at some stuff, especially deleting large files (or in\n> pg's case lots of 1G files) which can make ext3 crawl.\n> \nWhile true, you have to consider whether the things it's better at \nreally happen during a regular day. The whole \"faster at deleting large \nfiles\" thing doesn't matter to me on a production DB server at all, so \nthat slam-dunk win for XFS doesn't even factor into my filesystem \nranking computations in that context.\n\n--\nGreg Smith [email protected] Baltimore, MD\n\n", "msg_date": "Tue, 10 Nov 2009 12:48:46 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limiting performance impact of wal archiving." }, { "msg_contents": "On Tue, Nov 10, 2009 at 10:48 AM, Greg Smith <[email protected]> wrote:\n> Scott Marlowe wrote:\n>>\n>> On some busy systems with lots of small transactions large\n>> shared_buffer can cause it to run slower rather than faster due to\n>> background writer overhead.\n>>\n>\n> This is only really true in 8.2 and earlier, where background writer\n> computations are done as a percentage of shared_buffers.  The rewrite I did\n> in 8.3 changes that to where it's proportional to overall system activity\n> (specifically, buffer allocations) and you shouldn't see this there.\n\nNice to know since we converted to 8.3 a few months ago. I did notice\nthe huge overall performance improvement from 8.2 to 8.3 and I assume\npart of that was the code you wrote for WAL. Thanks!\n\n>  However, large values for shared_buffers do increase the potential for\n> longer checkpoints though, which is similar background overhead starting in\n> 8.3.  That's why I mention it hand in hand with decreasing the checkpoint\n> frequency, you really need to do that before large shared_buffers values are\n> viable.\n\nYeah. We run 64 checkpoint segments and a 30 minute timeout and a\nlower completion target (0.25 to 0.5) on most of our servers with good\nbehaviour in 8.3\n\n> This is actually a topic I meant to mention to Laurent:  if you're not\n> running at least PG8.3, you really should be considering what it would take\n> to upgrade to 8.4.  It's hard to justify the 8.3->8.4 upgrade just based on\n> that version's new performance features (unless you delete things a lot),\n> but the changes from 8.1 to 8.2 to 8.3 make the database faster at a lot of\n> common tasks.\n\nTrue++ 8.3 is the minimum version of pg we run anywhere at work now.\n8.4 isn't compelling yet for us, since we finally got fsm setup right.\n But for someone upgrading from 8.2 or before, I'd think the automatic\nfsm stuff would be a big selling point.\n\n>> Note that if you've got a slow IO subsystem, a large number of\n>> checkpoint segments can result in REALLY long restart times after a\n>> crash, as well as really long waits for shutdown and / or bgwriter\n>> once you've filled them all up.\n>>\n>\n> The setup here, with a decent number of disks and a 3ware controller,\n> shouldn't be that bad here.\n\nIf he were running RAID-5 I'd agree. 
:) That's gonna slow down the\nwrite speeds quite a bit during recovery.\n\n> Ultimately you have to ask yourself whether\n> it's OK to suffer from the rare recovery issue this introduces if it\n> improves things a lot all of the rest of the time, which increasing\n> checkpoint_segments does.\n\nNote that 100% of the time I have to wait for recovery on start it's\nbecause something went wrong with a -m fast shutdown that required\neither hand killing all postgres backends and the postmaster, or a -m\nimmediate. On the machines with 12 disk RAID-10 arrays this takes\nseconds to do. On the slaves with a pair of 7200RPM SATA drives, or\nthe one at the office on RAID-6, and 60 to 100+ WAL segments, it takes\na couple of minutes.\n\n>> Note that XFS gets a LOT of testing, especially under linux.  That\n>> said it's still probably only 1/10th as many dbs (or fewer) as those\n>> running on ext3 on linux.  I've used it before and it's a little\n>> faster than ext3 at some stuff, especially deleting large files (or in\n>> pg's case lots of 1G files) which can make ext3 crawl.\n>\n> While true, you have to consider whether the things it's better at really\n> happen during a regular day.  The whole \"faster at deleting large files\"\n> thing doesn't matter to me on a production DB server at all, so that\n> slam-dunk win for XFS doesn't even factor into my filesystem ranking\n> computations in that context.\n\nahhhh. I store backups on my pgdata directory, so it does start to\nmatter there. Luckily, that's on a slave database so it's not as\nhorrible as it could be. Still running ext3 on it because it just\nworks.\n", "msg_date": "Tue, 10 Nov 2009 11:10:48 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limiting performance impact of wal archiving." }, { "msg_contents": "\nOn Nov 10, 2009, at 10:53 AM, Laurent Laborde wrote:\n\n> On Tue, Nov 10, 2009 at 4:48 PM, Kevin Grittner\n> <[email protected]> wrote:\n>> Laurent Laborde <[email protected]> wrote:\n>>\n>>> BTW, if you have any idea to improve IO performance, i'll happily\n>>> read it. We're 100% IO bound.\n>>\n>> At the risk of stating the obvious, you want to make sure you have\n>> high quality RAID adapters with large battery backed cache configured\n>> to write-back.\n>\n> Not sure how \"high quality\" the 3ware is.\n> /c0 Driver Version = 2.26.08.004-2.6.18\n> /c0 Model = 9690SA-8I\n> /c0 Available Memory = 448MB\n\nI'll note that I've had terrible experience with 3ware controllers and \ngetting a high number of iops using hardware raid mode. If you switch \nit to jbod and do softraid you'll get a large increase in iops - which \nis the key metric for a db. I've posted previously about my problems \nwith 3ware.\n\nas for the ssd comment - I disagree. I've been running ssd's for a \nwhile now (probably closing in on a year by now) with great success. \nA pair of intel x25-e's can get thousands of iops. That being said \nthe key is I'm running the intel ssds - there are plenty of absolutely \nmiserable ssds floating around (I'm looking at you jmicron based disks!)\n\nHave you gone through the normal process of checking your query plans \nto ensure they are sane? 
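For example (table and column names here are made up purely for illustration):\n\n  EXPLAIN ANALYZE\n    SELECT * FROM articles WHERE author_id = 42\n    ORDER BY created_at DESC LIMIT 20;\n\nand compare the estimated row counts against the actual ones in the output.\n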
There is always a possibility a new index can \nvastly reduce IO.\n\n--\nJeff Trout <[email protected]>\nhttp://www.stuarthamm.net/\nhttp://www.dellsmartexitin.com/\n\n\n\n", "msg_date": "Tue, 10 Nov 2009 15:29:32 -0500", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limiting performance impact of wal archiving." }, { "msg_contents": "\n> Using ext2 means that you're still exposed to fsck errors on boot after\n> a crash, which doesn't lose anything but you have to go out of your way\n> to verify you're not going to get stuck with your server down in that\n> case. The state of things on the performance side is nicely benchmarked\n> at\n> http://www.commandprompt.com/blogs/joshua_drake/2008/04/is_that_performance_i_\n> smell_ext2_vs_ext3_on_50_spindles_testing_for_postgresql/\n>\n\nfsck on a filesystem with 1 folder and <checkpoint_segments> files is very\nvery fast. Even if using WAL archiving, there won't be many\nfiles/directories to check. Fsck is not an issue if the partition is\nexclusively for WAL. You can even mount it direct, and avoid having the OS\ncache those pages if you are using a caching raid controller.\n\n\n\n", "msg_date": "Wed, 11 Nov 2009 22:37:02 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limiting performance impact of wal archiving." }, { "msg_contents": "Scott Carey wrote:\n>> Using ext2 means that you're still exposed to fsck errors on boot after\n>> a crash, which doesn't lose anything but you have to go out of your way\n>> to verify you're not going to get stuck with your server down in that\n>> case.\n> fsck on a filesystem with 1 folder and <checkpoint_segments> files is very\n> very fast. Even if using WAL archiving, there won't be many\n> files/directories to check. Fsck is not an issue if the partition is\n> exclusively for WAL. You can even mount it direct, and avoid having the OS\n> cache those pages if you are using a caching raid controller\nRight; that sort of thing--switching to a more direct mount, making sure \nfsck is setup to run automatically rather than dropping to a menu--is \nwhat I was alluding to when I said you had to go out of your way to make \nthat work. It's not complicated, really, but by the time you've set \neverything up and done the proper testing to confirm it all worked as \nexpected you've just spent a modest chunk of time. All I was trying to \nsuggest is that there is a cost and some complexity, and that I feel \nthere's no reason to justify that unless you're not bottlenecked \nspecifically at WAL write volume.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Thu, 12 Nov 2009 01:47:35 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limiting performance impact of wal archiving." }, { "msg_contents": "Hi !\n\nHere is my plan :\n- rebuilding a spare with ext3, raid10, without lvm\n- switch the slony master to this new node.\n\nWe'll see ...\nThx for all the info !!!\n\n-- \nKer2x\n", "msg_date": "Thu, 12 Nov 2009 15:21:25 +0100", "msg_from": "Laurent Laborde <[email protected]>", "msg_from_op": true, "msg_subject": "Re: limiting performance impact of wal archiving." 
}, { "msg_contents": "On Tue, 10 Nov 2009, Greg Smith wrote:\n\n> Laurent Laborde wrote:\n>> It is on a separate array which does everything but tablespace (on a\n>> separate array) and indexspace (another separate array).\n>> \n> On Linux, the types of writes done to the WAL volume (where writes are \n> constantly being flushed) require the WAL volume not be shared with anything \n> else for that to perform well. Typically you'll end up with other things \n> being written out too because it can't just selectively flush just the WAL \n> data. The whole \"write barriers\" implementation should fix that, but in \n> practice rarely does.\n\nI believe that this is more a EXT3 problem than a linux problem.\n\nDavid Lang\n\n", "msg_date": "Sun, 15 Nov 2009 14:54:42 -0800 (PST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: limiting performance impact of wal archiving." }, { "msg_contents": "On Thu, Nov 12, 2009 at 3:21 PM, Laurent Laborde <[email protected]> wrote:\n> Hi !\n>\n> Here is my plan :\n> - rebuilding a spare with ext3, raid10, without lvm\n> - switch the slony master to this new node.\n\nDone 3 days ago : Problem solved ! It totally worked. \\o/\n\n-- \nker2x\nsysadmin & DBA @ http://www.over-blog.com/\n", "msg_date": "Mon, 16 Nov 2009 10:00:56 +0100", "msg_from": "Laurent Laborde <[email protected]>", "msg_from_op": true, "msg_subject": "Re: limiting performance impact of wal archiving." } ]
[ { "msg_contents": "There's two papers published recently at Duke that I just found, both of \nwhich use PostgreSQL as part of their research:\n\nAutomated SQL Tuning through Trial and (Sometimes) Error: \nhttp://www.cs.duke.edu/~shivnath/papers/dbtest09z.pdf\nTuning Database Configuration Parameters with iTuned: \nhttp://www.cs.duke.edu/~shivnath/papers/ituned.pdf\n\nThe second has a number of interesting graphs showing how changing two \npostgresql.conf parameters at a time interact with one another. There's \nalso a set of graphs comparing the default postgresql.conf performance \nwith what you get using the guidelines suggested by an earlier version \nof http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server and \nsome of the documents on that section of the wiki. Check out page 10, \nthe \"M\" column represents that manual tuning against the leftmost \"D\" \nwhich is the stock postgresql.conf settings.\n\nI was a bit confused at first about the environment because of how the \npaper is organized, here's the bit that clarifies it: \"The database \nsize with indexes is around 4GB. The physical memory (RAM) given to the \ndatabase is 1GB to create a realistic scenario where the database is 4x \nthe amount of RAM.\" That RAM limit was constrained with a Solaris \nzone. They multiplied the 1GB x 20% to get a standard \"rule-based\" \nsetting of shared_buffers of 200MB (based on the guidelines on the wiki \nat the time--that suggestion is now 25%).\n\nNote that much of the improvement shown in their better tuned versions \nthere results from increases to shared_buffers (peaking at 40%=400MB) \nand work_mem beyond the recommendations given in the tuning guide. That \nis unsurprising as those are aimed more to be reasonable starting values \nrather than suggested as truly optimal. work_mem is particular is \ndangerous to suggest raising really high without knowing what types of \nqueries are going to be run. There's been plenty of commentary on this \nlist suggesting optimal shared_buffers is closer to 50% of RAM than 25% \nfor some workloads, so their results showing peak performance at 40% fit \nright in the middle of community lore.\n\nI'm now in contact with the authors and asked them to let me know whey \npublish the entire post-optimization postgresql.conf, I'll let the list \nknow when that's available. I'm quite curious to see what the final \nsettings that gave the best results looked like.\n\n--\nGreg Smith [email protected] Baltimore, MD\n\n", "msg_date": "Tue, 10 Nov 2009 19:31:48 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Database tuning at Duke" } ]
[ { "msg_contents": "Hi Chaps,\n\nI'm putting together some new servers, and whilst I've been happy with our current config of Adaptec 5805's with bbu I've noticed these 5805Z cards, apparently the contents of DRAM is copied into onboard flash upon power failure.\n\nJust wondered if anyone had any experience of this sort of technology yet?\n\nSo far my head is telling me to just go with what I know...\n\n\nGlyn\n\n\n \n", "msg_date": "Wed, 11 Nov 2009 12:40:35 +0000 (GMT)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "Adaptec Zero-Maintenance Cache Protection - Anyone using?" }, { "msg_contents": "\nGlyn Astill wrote:\n> Hi Chaps,\n>\n> I'm putting together some new servers, and whilst I've been happy with our current config of Adaptec 5805's with bbu I've noticed these 5805Z cards, apparently the contents of DRAM is copied into onboard flash upon power failure.\n>\n> Just wondered if anyone had any experience of this sort of technology yet?\n>\n> So far my head is telling me to just go with what I know...\n>\n>\n> Glyn\nI just put a 5445Z in my new server but considering I just got it turned \non about 2 hours ago I really can't say much about it. I have several \nAdaptec cards with BBU and the Adaptec people talked me into trying the \nnew Z version. It's a bit more expensive on the front end but \nsupposedly you don't have to replace batteries. If you do go with the \nnew Z version the first thing you'll have to figure out is where to put \nit. I was wondering what the zip ties were for when I opened the \npackage then realized it's to secure the unit to wherever you want in \nthe chassis. Kinda handy in a way but can also be a pain.\n", "msg_date": "Wed, 11 Nov 2009 15:16:48 -0600", "msg_from": "Robert Schnabel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adaptec Zero-Maintenance Cache Protection - Anyone\n using?" } ]
[ { "msg_contents": "Brahma Prakash Tiwari wrote:\n>\n> *Hi all*\n>\n> *Why age (datfrozenxid) in postgres becomes 1073742202 not zero after \n> vacuum of database.*\n>\n> *Thanks in advance *\n>\n> \n>\n> **Brahma Prakash Tiwari**** **\n>\n> DBA\n>\n> ------------------------------------------------------------------------\n>\n> **Think before you print.|Go green**\n>\n> \n>\nThis is not the right list for that.\nSend the message to [email protected] or another appropriate list.\nThis list is for jobs related to PostgreSQL.\n\nRegards\n\n-- \n--\n\"For me, the purpose is, at least partly, to have joy. Programmers often\nfeel joy when they can concentrate on the creative side of programming,\nso Ruby is designed to make programmers happy.\" \nYukihiro Matsumoto (Matz), Creator of the Ruby Language\n\nIng. Marcos Luís Ortíz Valmaseda\nSystem DBA && Rails New User\nCentro de Tecnologías de Almacenamiento y Análisis de Datos (CENTALAD)\nUniversidad de las Ciencias Informáticas\n\nLinux User # 418229\n\nhttp://www.freebsd.org\nhttp://www.postgresql-es.org\nhttp://www.postgresql.org\nhttp://www.planetpostgresql.org\nhttp://www.rubyonrails.org\nhttp://www.ruby-lang.org \n\n", "msg_date": "Thu, 12 Nov 2009 07:04:53 +0100", "msg_from": "=?ISO-8859-1?Q?=22Ing_=2E_Marcos_Lu=EDs_Ort=EDz_Valmaseda?=\n\t=?ISO-8859-1?Q?=22?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why age (datfrozenxid) in postgres becomes 1073742202\n\tnot zero after each vacuum of database." }, { "msg_contents": "Hi all\n\nWhy age (datfrozenxid) in postgres becomes 1073742202 not zero after vacuum\nof database.\n\nThanks in advance \n\n \n\nBrahma Prakash Tiwari \n\nDBA\n\n _____ \n\nThink before you print.|Go green\n", "msg_date": "Thu, 12 Nov 2009 13:48:31 +0530", "msg_from": "\"Brahma Prakash Tiwari\" <[email protected]>", "msg_from_op": false, "msg_subject": "Why age (datfrozenxid) in postgres becomes 1073742202 not zero after\n\teach vacuum of database." }, { "msg_contents": "I am recruiting for a contract Web Architect to work for our client in\nPortland, Oregon.\n\nThis Web Architect will start as a contract opportunity with the possibility\nof leading into a full time job. The Web Architect will be working with the\noutsource providers and internal team members to complete this engagement's\ndeliverables. The project will be involved with moving from a Windows based\nsystem to an Open Source based, web system. This will involve: \n\n. Create Infrastructure Design Document as well as oversight, approval and\ncreation of design configuration guides. The IDD should have sections on\ndifferent environments such as Quality Assurance, Performance Testing, Beta\nand Production. \n. Technical oversight of Deployment through Beta test environment being\nready for start of Beta Test. \n\nThe ideal candidate for this position will have experience designing,\nbuilding and maintaining IT infrastructures for web based applications. 
The\nposition will mostly involve architecting web solutions with .Net and Open\nSource technologies such as Linux, Java, Geronimo and PostgreSQL.\n\nIf you are interested in an opportunity such as this, please feel free to\nsend a copy of your resume to me as soon as possible.\n\nKirk\n\nKirk Baillie\nPrinciple\nMakena Technical Resources\[email protected]\n\n", "msg_date": "Thu, 12 Nov 2009 10:11:44 -0800", "msg_from": "\"kirk Baillie\" <[email protected]>", "msg_from_op": false, "msg_subject": "Contract Web Architect Opportunity in Portland, Oregon" }, { "msg_contents": "[ removing -jobs from cc list as it is not appropriate for this posting ]\n\nOn Thu, Nov 12, 2009 at 3:18 AM, Brahma Prakash Tiwari\n<[email protected]> wrote:\n> Hi all\n>\n> Why age (datfrozenxid) in postgres becomes 1073742202 not zero after vacuum\n> of database.\n>\n> Thanks in advance\n\nI think you're misunderstanding the meaning of the column. As the\nfine manual explains:\n\n\"Similarly, the datfrozenxid column of a database's pg_database row is\na lower bound on the normal XIDs appearing in that database — it is\njust the minimum of the per-table relfrozenxid values within the\ndatabase.\"\n\nhttp://www.postgresql.org/docs/current/static/routine-vacuuming.html\n\n...Robert\n", "msg_date": "Fri, 13 Nov 2009 16:05:38 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why age (datfrozenxid) in postgres becomes 1073742202\n\tnot zero after each vacuum of database." } ]
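A quick way to see the values Robert is describing -- standard catalog queries, shown here only as an illustration -- is:\n\n  SELECT datname, age(datfrozenxid) FROM pg_database ORDER BY 2 DESC;\n  SELECT relname, age(relfrozenxid) FROM pg_class WHERE relkind = 'r' ORDER BY 2 DESC LIMIT 10;\n\nThe first lists the per-database lower bound, the second the per-table relfrozenxid values it is derived from.\n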
[ { "msg_contents": "Hi All,\n\nRunning Pg 8.3RC2, Linux server, w/8GB RAM, OpenSuSE 10.2 OS (yes, I \nknow that's old). I have seen *really* long-running autovacs eating up \nsystem resources. While the below is not an example of *really* long, \nit shows how I killed an autovac which had been running for more than \n10 minutes, then ran a VAC FULL ANALYZE on same exact table in about \n~2 min. Any wisdom here? Attributable to autovac_worker settings? Or \nPg version? Other?\n\nAny insight appreciated.\n\nwb\n\n++++++++++++++++++++++++++\n\n$ psql template1 -c \"SELECT procpid, current_query, to_char (now() - \nbackend_start, 'HH24:MI:SS') AS connected_et, to_char (now() - \nquery_start,'HH24:MI:SS') AS query_et FROM pg_stat_activity WHERE \ndatname='mydb' ORDER BY query_et DESC LIMIT 1\"\n\n procpid | current_query | connected_et \n| query_et\n---------+--------------------------------------------+--------------+----------\n 9064 | autovacuum: VACUUM ANALYZE myschema.mytable | 00:12:07 \n | 00:11:38\n\n\n\n$ kill 9064\n\n\n$ date; psql mydb -c \"VACUUM FULL ANALYZE myschema.mytable\"; date\nWed Nov 11 17:25:41 UTC 2009\nVACUUM\nWed Nov 11 17:27:59 UTC 2009\n", "msg_date": "Thu, 12 Nov 2009 09:33:01 -0500", "msg_from": "Wayne Beaver <[email protected]>", "msg_from_op": true, "msg_subject": "Manual vacs 5x faster than autovacs?" }, { "msg_contents": "Wayne Beaver <[email protected]> writes:\n> Running Pg 8.3RC2, Linux server, w/8GB RAM, OpenSuSE 10.2 OS (yes, I \n> know that's old). I have seen *really* long-running autovacs eating up \n> system resources. While the below is not an example of *really* long, \n> it shows how I killed an autovac which had been running for more than \n> 10 minutes, then ran a VAC FULL ANALYZE on same exact table in about \n> ~2 min. Any wisdom here? Attributable to autovac_worker settings?\n\nautovacuum_vacuum_cost_delay. Is the slow autovac *really* eating\na noticeable amount of system resources? I would think that a full\nspeed manual vacuum would be a lot worse.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 12 Nov 2009 09:49:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Manual vacs 5x faster than autovacs? " }, { "msg_contents": "On Thu, Nov 12, 2009 at 7:33 AM, Wayne Beaver <[email protected]> wrote:\n> Hi All,\n>\n> Running Pg 8.3RC2, Linux server, w/8GB RAM, OpenSuSE 10.2 OS (yes, I know\n> that's old). I have seen *really* long-running autovacs eating up system\n> resources. While the below is not an example of *really* long, it shows how\n> I killed an autovac which had been running for more than 10 minutes, then\n> ran a VAC FULL ANALYZE on same exact table in about ~2 min. Any wisdom here?\n> Attributable to autovac_worker settings? Or Pg version? Other?\n>\n> Any insight appreciated.\n\nAutovac running slow is (generally) a good thing. It reduces the load\non your IO subsystem so that other queries can still run fast. What\nresources are your long running autovacs eating up. If top shows\n500Mres and 499Mshr, then don't worry, it's not actually eating up\nresources.\n", "msg_date": "Thu, 12 Nov 2009 08:14:17 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Manual vacs 5x faster than autovacs?" 
}, { "msg_contents": "Hmm, looks like I've been myth-busted here.\n\n$ top | grep -E '29343|31924|29840|PID'; echo\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n29840 postgres 15 0 2150m 203m 194m S 0 2.5 0:00.59 postmaster\n29343 postgres 15 0 2137m 360m 356m S 1 4.5 0:00.92 postmaster\n31924 postgres 15 0 2135m 73m 70m S 1 0.9 0:00.44 postmaster\n\nSo my claims of resource-usage appear incorrect.\n\nI'd seen autovacs running for hours and had mis-attributed this to \ngrowing query times on those tables - my thought was that \"shrinking\" \nthe tables \"more quickly\" could make them \"more-optimized\", more \noften. Sounds like I could be chasing the wrong symptoms, though.\n\nwb\n\n\n> Quoting Scott Marlowe <[email protected]>:\n>\n> Autovac running slow is (generally) a good thing. It reduces the load\n> on your IO subsystem so that other queries can still run fast. What\n> resources are your long running autovacs eating up. If top shows\n> 500Mres and 499Mshr, then don't worry, it's not actually eating up\n> resources.\n\n\n> Quoting Tom Lane <[email protected]>:\n>\n> autovacuum_vacuum_cost_delay. Is the slow autovac *really* eating\n> a noticeable amount of system resources? I would think that a full\n> speed manual vacuum would be a lot worse.\n\n\n>> Wayne Beaver <[email protected]> writes:\n>>\n>> Running Pg 8.3RC2, Linux server, w/8GB RAM, OpenSuSE 10.2 OS (yes, I\n>> know that's old). I have seen *really* long-running autovacs eating up\n>> system resources. While the below is not an example of *really* long,\n>> it shows how I killed an autovac which had been running for more than\n>> 10 minutes, then ran a VAC FULL ANALYZE on same exact table in about\n>> ~2 min. Any wisdom here? Attributable to autovac_worker settings?\n\n", "msg_date": "Thu, 12 Nov 2009 11:14:42 -0500", "msg_from": "Wayne Beaver <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Manual vacs 5x faster than autovacs?" }, { "msg_contents": "On Thu, Nov 12, 2009 at 9:14 AM, Wayne Beaver <[email protected]> wrote:\n> Hmm, looks like I've been myth-busted here.\n>\n> $ top | grep -E '29343|31924|29840|PID'; echo\n>  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND\n> 29840 postgres  15   0 2150m 203m 194m S    0  2.5   0:00.59 postmaster\n> 29343 postgres  15   0 2137m 360m 356m S    1  4.5   0:00.92 postmaster\n> 31924 postgres  15   0 2135m  73m  70m S    1  0.9   0:00.44 postmaster\n>\n> So my claims of resource-usage appear incorrect.\n>\n> I'd seen autovacs running for hours and had mis-attributed this to growing\n> query times on those tables  - my thought was that \"shrinking\" the tables\n> \"more quickly\" could make them \"more-optimized\", more often. Sounds like I\n> could be chasing the wrong symptoms, though.\n\nNow it is quite possible that a slow autovac is causing your queries\nto run slower. And it's that autovac isn't keeping up. One of the\nverious serious shortcomings of autovac in 8.1 (or was it 8.0? I\nthink it was 8.1 as well) was that it only had one worker thread. 
So,\nif it has a moderate to high cost delay, then it might be able to keep\nup with the job and your tables will become bloated.\n\nThe problem isn't that autovac is stealing too many resources, it's\nthat it's not stealing enough.\n\nThe first quick fix is 8.3 which has more efficient vacuuming code and\nthe ability to run > 1 thread (it defaults to 3) so you can still keep\nit \"detuned\" to stay out of the way, but with enough threads it can\nhopefully keep up.\n\nOf course, eventually you reach the point where as the work load rises\nthe ability of autovac to keep up is lost, and then you need more IO\nperiod. Whether pgsql or any other database, running out of io\nbandwidth is only really solvable by more IO bandwidth.\n\nSo, what does iostat -x 10\n\nsay about utilization?\n", "msg_date": "Thu, 12 Nov 2009 09:33:24 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Manual vacs 5x faster than autovacs?" }, { "msg_contents": "On Thu, Nov 12, 2009 at 9:33 AM, Scott Marlowe <[email protected]> wrote:\n> On Thu, Nov 12, 2009 at 9:14 AM, Wayne Beaver <[email protected]> wrote:\n>> Hmm, looks like I've been myth-busted here.\n>>\n>> $ top | grep -E '29343|31924|29840|PID'; echo\n>>  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND\n>> 29840 postgres  15   0 2150m 203m 194m S    0  2.5   0:00.59 postmaster\n>> 29343 postgres  15   0 2137m 360m 356m S    1  4.5   0:00.92 postmaster\n>> 31924 postgres  15   0 2135m  73m  70m S    1  0.9   0:00.44 postmaster\n>>\n>> So my claims of resource-usage appear incorrect.\n>>\n>> I'd seen autovacs running for hours and had mis-attributed this to growing\n>> query times on those tables  - my thought was that \"shrinking\" the tables\n>> \"more quickly\" could make them \"more-optimized\", more often. Sounds like I\n>> could be chasing the wrong symptoms, though.\n>\n> Now it is quite possible that a slow autovac is causing your queries\n> to run slower.  And it's that autovac isn't keeping up.  One of the\n> verious serious shortcomings of autovac in 8.1 (or was it 8.0?  I\n> think it was 8.1 as well) was that it only had one worker thread.  So,\n> if it has a moderate to high cost delay, then it might be able to keep\n> up with the job and your tables will become bloated.\n\nmight NOT be able to keep up\n\n>\n> The problem isn't that autovac is stealing too many resources, it's\n> that it's not stealing enough.\n>\n> The first quick fix is 8.3 which has more efficient vacuuming code and\n\nWhoops I see you're technically running 8.3, but you're running RC2\nfor some reason? I don't usually run 8.x.0 in production. Let alone\nRCs. You should really update before some nasty bug that's been\nsquashed in later releases bites you.\n", "msg_date": "Thu, 12 Nov 2009 09:38:48 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Manual vacs 5x faster than autovacs?" }, { "msg_contents": "> Quoting Scott Marlowe <[email protected]>:\n>\n>>> On Thu, Nov 12, 2009 at 9:14 AM, Wayne Beaver <[email protected]> wrote:\n>>> I'd seen autovacs running for hours and had mis-attributed this to growing\n>>> query times on those tables �- my thought was that \"shrinking\" the tables\n>>> \"more quickly\" could make them \"more-optimized\", more often. Sounds like I\n>>> could be chasing the wrong symptoms, though.\n>>\n>> Now it is quite possible that a slow autovac is causing your queries\n>> to run slower. 
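A minimal sketch of the knobs being discussed here, in 8.3-era postgresql.conf terms (the values are illustrative assumptions, not a recommendation from this thread):\n\n  autovacuum = on\n  autovacuum_max_workers = 3            # 8.3 default; raise it if workers fall behind\n  autovacuum_vacuum_cost_delay = 20ms   # lower = more aggressive, higher = gentler on I/O\n  autovacuum_naptime = 1min\n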
�So, if it has a moderate to high cost delay, then \n>> it might not be able to keep\n>> up with the job and your tables will become bloated.\n>>\n>> The problem isn't that autovac is stealing too many resources, it's\n>> that it's not stealing enough.\n>>\n>> I see you're technically running 8.3, but you're running RC2\n>> for some reason? I don't usually run 8.x.0 in production. Let alone\n>> RCs. You should really update before some nasty bug that's been\n>> squashed in later releases bites you.\n\n\nHahaha. Yes, 8.3RC2 was latest version at time I implemented related \nclient app. Install is \"production-like\", more so than production - \nnon-mission-critical, but important to some \"VIP-like\" users at \nintervals which are not necessarily predictable. I'm long past my goal \nof migrating to 8.4, actually...\n\nMy autovac settings are all at default values, so sounds like I can at \nleast tinker with _workers and _cost_delay. I've not yet gotten to you \niostat inquiry from your previous response...\n\nwb\n", "msg_date": "Thu, 12 Nov 2009 11:58:52 -0500", "msg_from": "Wayne Beaver <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Manual vacs 5x faster than autovacs?" }, { "msg_contents": "On Thu, Nov 12, 2009 at 9:58 AM, Wayne Beaver <[email protected]> wrote:\n>> Quoting Scott Marlowe <[email protected]>:\n>>\n>>>> On Thu, Nov 12, 2009 at 9:14 AM, Wayne Beaver <[email protected]> wrote:\n>>>> I'd seen autovacs running for hours and had mis-attributed this to\n>>>> growing\n>>>> query times on those tables  - my thought was that \"shrinking\" the\n>>>> tables\n>>>> \"more quickly\" could make them \"more-optimized\", more often. Sounds like\n>>>> I\n>>>> could be chasing the wrong symptoms, though.\n>>>\n>>> Now it is quite possible that a slow autovac is causing your queries\n>>> to run slower.  So, if it has a moderate to high cost delay, then it\n>>> might not be able to keep\n>>> up with the job and your tables will become bloated.\n>>>\n>>> The problem isn't that autovac is stealing too many resources, it's\n>>> that it's not stealing enough.\n>>>\n>>> I see you're technically running 8.3, but you're running RC2\n>>> for some reason?  I don't usually run 8.x.0 in production.  Let alone\n>>> RCs.  You should really update before some nasty bug that's been\n>>> squashed in later releases bites you.\n>\n>\n> Hahaha. Yes, 8.3RC2 was latest version at time I implemented related client\n> app. Install is \"production-like\", more so than production -\n> non-mission-critical,  but important to some \"VIP-like\" users at intervals\n> which are not necessarily predictable. I'm long past my goal of migrating to\n> 8.4, actually...\n\nWorry far more about being out of date on 8.3. Since you're on an rc\nrelease you'll likely need to dump and restore to safely migrate to\n8.3.latest, but once there, simply shutting down, updating and\nstarting up is all that's usually required.\n\n>\n> My autovac settings are all at default values, so sounds like I can at least\n> tinker with _workers and _cost_delay. I've not yet gotten to you iostat\n> inquiry from your previous response...\n\nDon't worry too much, just want to see if your IO system is maxed out.\n", "msg_date": "Thu, 12 Nov 2009 10:36:18 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Manual vacs 5x faster than autovacs?" 
}, { "msg_contents": "The autovac may have done most of the work before you killed it ...\nI'm new to Postgres, but from limited subjective experience, it seems\nit's a lot faster to vaccum ranges of blocks that are were recently\nvacuumed (at minimum, a good chunk of table will have been brought\ninto buffer cache by both Postgres and the OS during the prior pass).\n\nI've found that with very large data tables, the auto-vaccum on\ndefault settings isn't as aggressive as I'd like ... I find running a\nVACUUM ANALYZE isn't at all intrusive, though I prefer to do it once a\nday at 3am.\n\nBeware that VACUUM FULL locks an entire table at a time :-)\n\nCheers\nDave\n\nOn Thu, Nov 12, 2009 at 8:33 AM, Wayne Beaver <[email protected]> wrote:\n> Hi All,\n>\n> Running Pg 8.3RC2, Linux server, w/8GB RAM, OpenSuSE 10.2 OS (yes, I know\n> that's old). I have seen *really* long-running autovacs eating up system\n> resources. While the below is not an example of *really* long, it shows how\n> I killed an autovac which had been running for more than 10 minutes, then\n> ran a VAC FULL ANALYZE on same exact table in about ~2 min. Any wisdom here?\n> Attributable to autovac_worker settings? Or Pg version? Other?\n>\n> Any insight appreciated.\n>\n> wb\n>\n> ++++++++++++++++++++++++++\n>\n> $ psql template1 -c \"SELECT procpid, current_query, to_char (now() -\n> backend_start, 'HH24:MI:SS') AS connected_et, to_char (now() -\n> query_start,'HH24:MI:SS') AS query_et FROM pg_stat_activity WHERE\n> datname='mydb' ORDER BY query_et DESC LIMIT 1\"\n>\n>  procpid |                   current_query            | connected_et |\n> query_et\n> ---------+--------------------------------------------+--------------+----------\n>    9064 | autovacuum: VACUUM ANALYZE myschema.mytable    | 00:12:07     |\n> 00:11:38\n>\n>\n>\n> $ kill 9064\n>\n>\n> $ date; psql mydb -c \"VACUUM FULL ANALYZE myschema.mytable\"; date\n> Wed Nov 11 17:25:41 UTC 2009\n> VACUUM\n> Wed Nov 11 17:27:59 UTC 2009\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Fri, 13 Nov 2009 00:29:35 -0600", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Manual vacs 5x faster than autovacs?" }, { "msg_contents": "On 13/11/2009 2:29 PM, Dave Crooke wrote:\n\n> Beware that VACUUM FULL locks an entire table at a time :-)\n\n... and often bloats its indexes horribly. Use CLUSTER instead if you\nneed to chop a table that's massively bloated down to size; it'll be\nmuch faster, and shouldn't leave the indexes in a mess.\n\nI increasingly wonder what the purpose of VACUUM FULL in its current\nform is.\n\n--\nCraig Ringer\n", "msg_date": "Sat, 14 Nov 2009 11:31:59 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Manual vacs 5x faster than autovacs?" }, { "msg_contents": "On Fri, Nov 13, 2009 at 8:31 PM, Craig Ringer\n<[email protected]> wrote:\n> On 13/11/2009 2:29 PM, Dave Crooke wrote:\n>\n>> Beware that VACUUM FULL locks an entire table at a time :-)\n>\n> ... and often bloats its indexes horribly. Use CLUSTER instead if you\n> need to chop a table that's massively bloated down to size; it'll be\n> much faster, and shouldn't leave the indexes in a mess.\n>\n> I increasingly wonder what the purpose of VACUUM FULL in its current\n> form is.\n\nThere's been talk of removing it. 
It's almost historical in nature\nnow, but there are apparently one or two situations, like when you're\nalmost out of space, that vacuum full can handle that dumping reload\nor cluster or whatnot can't do without more extra space.\n", "msg_date": "Fri, 13 Nov 2009 20:55:12 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Manual vacs 5x faster than autovacs?" }, { "msg_contents": "On 14/11/2009 11:55 AM, Scott Marlowe wrote:\n> On Fri, Nov 13, 2009 at 8:31 PM, Craig Ringer\n> <[email protected]> wrote:\n>> On 13/11/2009 2:29 PM, Dave Crooke wrote:\n>>\n>>> Beware that VACUUM FULL locks an entire table at a time :-)\n>>\n>> ... and often bloats its indexes horribly. Use CLUSTER instead if you\n>> need to chop a table that's massively bloated down to size; it'll be\n>> much faster, and shouldn't leave the indexes in a mess.\n>>\n>> I increasingly wonder what the purpose of VACUUM FULL in its current\n>> form is.\n> \n> There's been talk of removing it. It's almost historical in nature\n> now, but there are apparently one or two situations, like when you're\n> almost out of space, that vacuum full can handle that dumping reload\n> or cluster or whatnot can't do without more extra space.\n\nPerhaps it should drop and re-create indexes as well, then? (Or disable\nthem so they become inconsistent, then REINDEX them - same deal). It'd\nrun a LOT faster, and the index bloat issue would be gone.\n\nThe current form of the command just invites misuse and misapplication.\n\n--\nCraig Ringer\n", "msg_date": "Sat, 14 Nov 2009 12:45:19 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Manual vacs 5x faster than autovacs?" }, { "msg_contents": "On Fri, Nov 13, 2009 at 9:45 PM, Craig Ringer\n<[email protected]> wrote:\n> On 14/11/2009 11:55 AM, Scott Marlowe wrote:\n>> On Fri, Nov 13, 2009 at 8:31 PM, Craig Ringer\n>> <[email protected]> wrote:\n>>> On 13/11/2009 2:29 PM, Dave Crooke wrote:\n>>>\n>>>> Beware that VACUUM FULL locks an entire table at a time :-)\n>>>\n>>> ... and often bloats its indexes horribly. Use CLUSTER instead if you\n>>> need to chop a table that's massively bloated down to size; it'll be\n>>> much faster, and shouldn't leave the indexes in a mess.\n>>>\n>>> I increasingly wonder what the purpose of VACUUM FULL in its current\n>>> form is.\n>>\n>> There's been talk of removing it.  It's almost historical in nature\n>> now, but there are apparently one or two situations, like when you're\n>> almost out of space, that vacuum full can handle that dumping reload\n>> or cluster or whatnot can't do without more extra space.\n>\n> Perhaps it should drop and re-create indexes as well, then? (Or disable\n> them so they become inconsistent, then REINDEX them - same deal). It'd\n> run a LOT faster, and the index bloat issue would be gone.\n>\n> The current form of the command just invites misuse and misapplication.\n\nYeah, it should be a name that when you're typing it you know you\nscrewed up to get where you are. The\nopleasemayihavebackthespaceilostwhilelockingmytablesandbloatingmyindexes\ncommand. No chance you'll run it by mistake either!\n", "msg_date": "Fri, 13 Nov 2009 23:02:17 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Manual vacs 5x faster than autovacs?" 
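To make the CLUSTER suggestion concrete, here is a rough sketch of the one-off de-bloat being recommended instead of VACUUM FULL. The table and index names are placeholders borrowed from the thread; note that CLUSTER takes an exclusive lock and needs roughly enough free disk space for a second copy of the table while it runs.

    -- rewrite the table in index order, which also rebuilds its indexes
    CLUSTER myschema.mytable USING mytable_pkey;
    ANALYZE myschema.mytable;

    -- if only the indexes are bloated, a reindex is enough
    REINDEX TABLE myschema.mytable;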
}, { "msg_contents": "> Quoting Scott Marlowe <[email protected]>:\n>\n>> On Thu, Nov 12, 2009 at 9:58 AM, Wayne Beaver <[email protected]> wrote:\n>>> Quoting Scott Marlowe <[email protected]>:\n>>>\n>>>>> On Thu, Nov 12, 2009 at 9:14 AM, Wayne Beaver <[email protected]> wrote:\n>>>>> I'd seen autovacs running for hours and had mis-attributed this to\n>>>>> growing query times on those tables - my thought was that \n>>>>> \"shrinking\" the tables\n>>>>> \"more quickly\" could make them \"more-optimized\", more often. Sounds like\n>>>>> could be chasing the wrong symptoms, though.\n>>>>\n>>>> Now it is quite possible that a slow autovac is causing your queries\n>>>> to run slower. �So, if it has a moderate to high cost delay, then it\n>>>> might not be able to keep\n>>>> up with the job and your tables will become bloated.\n>>>>\n>>>> The problem isn't that autovac is stealing too many resources, it's\n>>>> that it's not stealing enough.\n>>>>\n>> I've not yet gotten to you iostat inquiry from your previous response...\n>\n> Don't worry too much, just want to see if your IO system is maxed out.\n\n\n$ iostat\nLinux 2.6.18.8-0.9-default (myserver) \t11/16/2009\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 28.11 3.13 6.50 8.71 0.00 53.56\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 153.08 7295.23 3675.59 123127895363 62036043656\\\n", "msg_date": "Mon, 16 Nov 2009 11:13:49 -0500", "msg_from": "Wayne Beaver <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Manual vacs 5x faster than autovacs?" }, { "msg_contents": "On Mon, Nov 16, 2009 at 9:13 AM, Wayne Beaver <[email protected]> wrote:\n>> Quoting Scott Marlowe <[email protected]>:\n>>\n>>> On Thu, Nov 12, 2009 at 9:58 AM, Wayne Beaver <[email protected]> wrote:\n>>>>\n>>>> Quoting Scott Marlowe <[email protected]>:\n>>>>\n>>>>>> On Thu, Nov 12, 2009 at 9:14 AM, Wayne Beaver <[email protected]>\n>>>>>> wrote:\n>>>>>> I'd seen autovacs running for hours and had mis-attributed this to\n>>>>>> growing query times on those tables  - my thought was that \"shrinking\"\n>>>>>> the tables\n>>>>>> \"more quickly\" could make them \"more-optimized\", more often. Sounds\n>>>>>> like\n>>>>>> could be chasing the wrong symptoms, though.\n>>>>>\n>>>>> Now it is quite possible that a slow autovac is causing your queries\n>>>>> to run slower.  So, if it has a moderate to high cost delay, then it\n>>>>> might not be able to keep\n>>>>> up with the job and your tables will become bloated.\n>>>>>\n>>>>> The problem isn't that autovac is stealing too many resources, it's\n>>>>> that it's not stealing enough.\n>>>>>\n>>> I've not yet gotten to you iostat inquiry from your previous response...\n>>\n>> Don't worry too much, just want to see if your IO system is maxed out.\n>\n>\n> $ iostat\n> Linux 2.6.18.8-0.9-default (myserver)   11/16/2009\n>\n> avg-cpu:  %user   %nice %system %iowait  %steal   %idle\n>          28.11    3.13    6.50    8.71    0.00   53.56\n>\n> Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn\n> sda             153.08      7295.23      3675.59 123127895363 62036043656\\\n\nThat's just since the machine was turned on. run it like:\n\niostat -x 10\n\nand see what comes out after the first one.\n", "msg_date": "Mon, 16 Nov 2009 09:39:18 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Manual vacs 5x faster than autovacs?" 
}, { "msg_contents": "Quoting Scott Marlowe <[email protected]>:\n\n> On Mon, Nov 16, 2009 at 9:13 AM, Wayne Beaver <[email protected]> wrote:\n>>> Quoting Scott Marlowe <[email protected]>:\n>>>\n>>>> On Thu, Nov 12, 2009 at 9:58 AM, Wayne Beaver <[email protected]> wrote:\n>>>>>\n>>>>> Quoting Scott Marlowe <[email protected]>:\n>>>>>\n>>>>>>> On Thu, Nov 12, 2009 at 9:14 AM, Wayne Beaver <[email protected]>\n>>>>>>> wrote:\n>>>>>>> I'd seen autovacs running for hours and had mis-attributed this to\n>>>>>>> growing query times on those tables �- my thought was that \"shrinking\"\n>>>>>>> the tables\n>>>>>>> \"more quickly\" could make them \"more-optimized\", more often. Sounds\n>>>>>>> like\n>>>>>>> could be chasing the wrong symptoms, though.\n>>>>>>\n>>>>>> Now it is quite possible that a slow autovac is causing your queries\n>>>>>> to run slower. �So, if it has a moderate to high cost delay, then it\n>>>>>> might not be able to keep\n>>>>>> up with the job and your tables will become bloated.\n>>>>>>\n>>>>>> The problem isn't that autovac is stealing too many resources, it's\n>>>>>> that it's not stealing enough.\n>>>>>>\n>>>> I've not yet gotten to you iostat inquiry from your previous response...\n>>>\n>>> Don't worry too much, just want to see if your IO system is maxed out.\n>>\n>\n> That's just since the machine was turned on. run it like:\n>\n> iostat -x 10\n>\n> and see what comes out after the first one.\n\n\nDuh! Sorry about that...\n\n\n$ iostat -x 10\nLinux 2.6.18.8-0.9-default (myserver) \t11/16/2009\n\navg-cpu: %user %nice %system %iowait %steal %idle\n 28.11 3.13 6.50 8.70 0.00 53.56\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s \navgrq-sz avgqu-sz await svctm %util\nsda 3.20 406.34 100.74 52.33 7293.84 3675.79 3646.92 \n1837.90 71.66 0.07 2.15 0.90 13.71\n", "msg_date": "Mon, 16 Nov 2009 12:02:05 -0500", "msg_from": "Wayne Beaver <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Manual vacs 5x faster than autovacs?" } ]
[ { "msg_contents": "This is just a heads-up for anyone using Ruby on Rails (with\nActiveRecord) on JRuby who sees performance degradation over time. Each\nRuby runtime instance will degrade query performance slightly. You can\nsee this if the minimum and maximum number of active runtimes are not\nconfigured to the same value, or (at a potentially extreme rate) if you\nhave Ruby class caching turned off. Class caching should only be off\nfor development, but we saw this issue in production when someone copied\nthe development directory to production without changing this setting.\n \nThe cause is a bug in the RoR activerecord-jdbc-adapter gem. It\ncreates an instance of the JDBC Driver class and explicitly registers it\nwith Java's DriverManager. This DriverManager.registerDriver method is\nonly supposed to be invoked from a static initializer in the driver\nclass itself, so this is just plain wrong on the part of the gem. The\nregisterDriver method just adds the Driver instance (along with a bit of\nrelated information) in a java.util.Vector list.\n \nThe performance hit comes when you try to connect through\nDriverManager. It sequentially scans through the list from the front,\nchecking whether each driver instance was loaded with the same class\nloader as the requester. The newer driver instances are at the end, so\nonce you accumulate enough driver instances, this search can take quite\na while. (With class caching off in production, we accumulated tens of\nthousands of driver instances, each keeping a zombie class loader alive,\nbefore it started causing failures from resource exhaustion. At that\npoint, the time to find the right driver instance was up to about 15\nseconds.)\n \nIt strikes me that not only should the gem not be explicitly\nregistering the driver instances, but it should be finding and\n*deregistering* any drivers before its Ruby runtime is torn down;\notherwise there is a memory leak as runtimes are created and destroyed,\neven with proper start-up behavior -- just not as fast as with the\nexplicit register (which creates a second driver instance for each\nruntime).\n \nThis problem is not PostgreSQL specific and does *not* reflect any bug\nor flaw in PostgreSQL; but PostgreSQL was initially blamed by our\nprogrammers. I'm sharing the information to save time for any other DBA\nwho may be faced with similar deterioration in performance for a RoR\nenvironment. We will be discussing the issue on the JRuby list and will\nbe attempting to work with them on a proper fix. In the meantime you\ncan work around the issue by making sure that Ruby class caching is on\nand minimum and maximum active runtimes match.\n \n-Kevin\n", "msg_date": "Thu, 12 Nov 2009 10:50:26 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "activerecord-jdbc-adapter bug can affect RoR query performance" } ]
[ { "msg_contents": "Hello,\n\nI'm about to buy SSD drive(s) for a database. For decision making, I \nused this tech report:\n\nhttp://techreport.com/articles.x/16255/9\nhttp://techreport.com/articles.x/16255/10\n\nHere are my concerns:\n\n * I need at least 32GB disk space. So DRAM based SSD is not a real\n option. I would have to buy 8x4GB memory, costs a fortune. And\n then it would still not have redundancy.\n * I could buy two X25-E drives and have 32GB disk space, and some\n redundancy. This would cost about $1600, not counting the RAID\n controller. It is on the edge.\n * I could also buy many cheaper MLC SSD drives. They cost about\n $140. So even with 10 drives, I'm at $1400. I could put them in\n RAID6, have much more disk space (256GB), high redundancy and\n POSSIBLY good read/write speed. Of course then I need to buy a\n good RAID controller.\n\nMy question is about the last option. Are there any good RAID cards that \nare optimized (or can be optimized) for SSD drives? Do any of you have \nexperience in using many cheaper SSD drives? Is it a bad idea?\n\nThank you,\n\n Laszlo\n\n", "msg_date": "Fri, 13 Nov 2009 13:46:15 +0100", "msg_from": "Laszlo Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "SSD + RAID" }, { "msg_contents": "Laszlo Nagy wrote:\n> Hello,\n>\n> I'm about to buy SSD drive(s) for a database. For decision making, I\n> used this tech report:\n>\n> http://techreport.com/articles.x/16255/9\n> http://techreport.com/articles.x/16255/10\n>\n> Here are my concerns:\n>\n> * I need at least 32GB disk space. So DRAM based SSD is not a real\n> option. I would have to buy 8x4GB memory, costs a fortune. And\n> then it would still not have redundancy.\n> * I could buy two X25-E drives and have 32GB disk space, and some\n> redundancy. This would cost about $1600, not counting the RAID\n> controller. It is on the edge.\n> * I could also buy many cheaper MLC SSD drives. They cost about\n> $140. So even with 10 drives, I'm at $1400. I could put them in\n> RAID6, have much more disk space (256GB), high redundancy and\n> POSSIBLY good read/write speed. Of course then I need to buy a\n> good RAID controller.\n>\n> My question is about the last option. Are there any good RAID cards\n> that are optimized (or can be optimized) for SSD drives? Do any of you\n> have experience in using many cheaper SSD drives? Is it a bad idea?\n>\n> Thank you,\n>\n> Laszlo\n>\nNote that some RAID controllers (3Ware in particular) refuse to\nrecognize the MLC drives, in particular, they act as if the OCZ Vertex\nseries do not exist when connected.\n\nI don't know what they're looking for (perhaps some indication that\nactual rotation is happening?) but this is a potential problem.... make\nsure your adapter can talk to these things!\n\nBTW I have done some benchmarking with Postgresql against these drives\nand they are SMOKING fast.\n\n-- Karl", "msg_date": "Fri, 13 Nov 2009 07:02:34 -0600", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "\n> Note that some RAID controllers (3Ware in particular) refuse to\n> recognize the MLC drives, in particular, they act as if the OCZ Vertex\n> series do not exist when connected.\n>\n> I don't know what they're looking for (perhaps some indication that\n> actual rotation is happening?) but this is a potential problem.... 
make\n> sure your adapter can talk to these things!\n>\n> BTW I have done some benchmarking with Postgresql against these drives\n> and they are SMOKING fast.\n> \nI was thinking about ARECA 1320 with 2GB memory + BBU. Unfortunately, I \ncannot find information about using ARECA cards with SSD drives. I'm \nalso not sure how they would work together. I guess the RAID cards are \noptimized for conventional disks. They read/write data in bigger blocks \nand they optimize the order of reading/writing for physical cylinders. I \nknow for sure that this particular areca card has an Intel dual core IO \nprocessor and its own embedded operating system. I guess it could be \ntuned for SSD drives, but I don't know how.\n\nI was hoping that with a RAID 6 setup, write speed (which is slower for \ncheaper flash based SSD drives) would dramatically increase, because \ninformation written simultaneously to 10 drives. With very small block \nsize, it would probably be true. But... what if the RAID card uses \nbigger block sizes, and - say - I want to update much smaller blocks in \nthe database?\n\nMy other option is to buy two SLC SSD drives and use RAID1. It would \ncost about the same, but has less redundancy and less capacity. Which is \nthe faster? 8-10 MLC disks in RAID 6 with a good caching controller, or \ntwo SLC disks in RAID1?\n\nThanks,\n\n Laszlo\n\n", "msg_date": "Fri, 13 Nov 2009 14:57:34 +0100", "msg_from": "Laszlo Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "This is very fast.\nOn IT Toolbox there are many whitepapers about it.\nOn the ERP and DataCenter sections specifically.\n\nWe need that all tests that we do, we can share it on the\nProject Wiki.\n\nRegards\n\nOn Nov 13, 2009, at 7:02 AM, Karl Denninger wrote:\n\n> Laszlo Nagy wrote:\n>> Hello,\n>>\n>> I'm about to buy SSD drive(s) for a database. For decision making, I\n>> used this tech report:\n>>\n>> http://techreport.com/articles.x/16255/9\n>> http://techreport.com/articles.x/16255/10\n>>\n>> Here are my concerns:\n>>\n>> * I need at least 32GB disk space. So DRAM based SSD is not a real\n>> option. I would have to buy 8x4GB memory, costs a fortune. And\n>> then it would still not have redundancy.\n>> * I could buy two X25-E drives and have 32GB disk space, and some\n>> redundancy. This would cost about $1600, not counting the RAID\n>> controller. It is on the edge.\n>> * I could also buy many cheaper MLC SSD drives. They cost about\n>> $140. So even with 10 drives, I'm at $1400. I could put them in\n>> RAID6, have much more disk space (256GB), high redundancy and\n>> POSSIBLY good read/write speed. Of course then I need to buy a\n>> good RAID controller.\n>>\n>> My question is about the last option. Are there any good RAID cards\n>> that are optimized (or can be optimized) for SSD drives? Do any of \n>> you\n>> have experience in using many cheaper SSD drives? Is it a bad idea?\n>>\n>> Thank you,\n>>\n>> Laszlo\n>>\n> Note that some RAID controllers (3Ware in particular) refuse to\n> recognize the MLC drives, in particular, they act as if the OCZ Vertex\n> series do not exist when connected.\n>\n> I don't know what they're looking for (perhaps some indication that\n> actual rotation is happening?) but this is a potential problem.... 
\n> make\n> sure your adapter can talk to these things!\n>\n> BTW I have done some benchmarking with Postgresql against these drives\n> and they are SMOKING fast.\n>\n> -- Karl\n> <karl.vcf>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Fri, 13 Nov 2009 09:07:28 -0500", "msg_from": "Marcos Ortiz Valmaseda <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "2009/11/13 Laszlo Nagy <[email protected]>:\n> Hello,\n>\n> I'm about to buy SSD drive(s) for a database. For decision making, I used\n> this tech report:\n>\n> http://techreport.com/articles.x/16255/9\n> http://techreport.com/articles.x/16255/10\n>\n> Here are my concerns:\n>\n>   * I need at least 32GB disk space. So DRAM based SSD is not a real\n>     option. I would have to buy 8x4GB memory, costs a fortune. And\n>     then it would still not have redundancy.\n>   * I could buy two X25-E drives and have 32GB disk space, and some\n>     redundancy. This would cost about $1600, not counting the RAID\n>     controller. It is on the edge.\n\nI'm not sure a RAID controller brings much of anything to the table with SSDs.\n\n>   * I could also buy many cheaper MLC SSD drives. They cost about\n>     $140. So even with 10 drives, I'm at $1400. I could put them in\n>     RAID6, have much more disk space (256GB), high redundancy and\n\nI think RAID6 is gonna reduce the throughput due to overhead to\nsomething far less than what a software RAID-10 would achieve.\n\n>     POSSIBLY good read/write speed. Of course then I need to buy a\n>     good RAID controller.\n\nI'm guessing that if you spent whatever money you were gonna spend on\nmore SSDs you'd come out ahead, assuming you had somewhere to put\nthem.\n\n> My question is about the last option. Are there any good RAID cards that are\n> optimized (or can be optimized) for SSD drives? Do any of you have\n> experience in using many cheaper SSD drives? Is it a bad idea?\n\nThis I don't know. Some quick googling shows the Areca 1680ix and\nAdaptec 5 Series to be able to handle Samsun SSDs.\n", "msg_date": "Fri, 13 Nov 2009 07:48:05 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "On Fri, Nov 13, 2009 at 9:48 AM, Scott Marlowe <[email protected]> wrote:\n> I think RAID6 is gonna reduce the throughput due to overhead to\n> something far less than what a software RAID-10 would achieve.\n\nI was wondering about this. I think raid 5/6 might be a better fit\nfor SSD than traditional drives arrays. Here's my thinking:\n\n*) flash SSD reads are cheaper than writes. With 6 or more drives,\nless total data has to be written in Raid 5 than Raid 10. The main\ncomponent of raid 5 performance penalty is that for each written\nblock, it has to be read first than written...incurring rotational\nlatency, etc. SSD does not have this problem.\n\n*) flash is much more expensive in terms of storage/$.\n\n*) flash (at least the intel stuff) is so fast relative to what we are\nused to, that the point of using flash in raid is more for fault\ntolerance than performance enhancement. 
I don't have data to support\nthis, but I suspect that even with relatively small amount of the\nslower MLC drives in raid, postgres will become cpu bound for most\napplications.\n\nmerlin\n", "msg_date": "Fri, 13 Nov 2009 10:29:43 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Laszlo Nagy wrote:\n> * I need at least 32GB disk space. So DRAM based SSD is not a real\n> option. I would have to buy 8x4GB memory, costs a fortune. And\n> then it would still not have redundancy.\n\nAt 32GB database size, I'd seriously consider just buying a server with\na regular hard drive or a small RAID array for redundancy, and stuffing\n16 or 32 GB of RAM into it to ensure everything is cached. That's tried\nand tested technology.\n\nI don't know how you came to the 32 GB figure, but keep in mind that\nadministration is a lot easier if you have plenty of extra disk space\nfor things like backups, dumps+restore, temporary files, upgrades etc.\nSo if you think you'd need 32 GB of disk space, I'm guessing that 16 GB\nof RAM would be enough to hold all the hot data in cache. And if you\nchoose a server with enough DIMM slots, you can expand easily if needed.\n\nJust my 2 cents, I'm not really an expert on hardware..\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Fri, 13 Nov 2009 17:36:51 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "2009/11/13 Heikki Linnakangas <[email protected]>:\n> Laszlo Nagy wrote:\n>>    * I need at least 32GB disk space. So DRAM based SSD is not a real\n>>      option. I would have to buy 8x4GB memory, costs a fortune. And\n>>      then it would still not have redundancy.\n>\n> At 32GB database size, I'd seriously consider just buying a server with\n> a regular hard drive or a small RAID array for redundancy, and stuffing\n> 16 or 32 GB of RAM into it to ensure everything is cached. That's tried\n> and tested technology.\n\nlots of ram doesn't help you if:\n*) your database gets written to a lot and you have high performance\nrequirements\n*) your data is important\n\n(if either of the above is not true or even partially true, than your\nadvice is spot on)\n\nmerlin\n", "msg_date": "Fri, 13 Nov 2009 10:59:03 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "In order for a drive to work reliably for database use such as for \nPostgreSQL, it cannot have a volatile write cache. You either need a \nwrite cache with a battery backup (and a UPS doesn't count), or to turn \nthe cache off. The SSD performance figures you've been looking at are \nwith the drive's write cache turned on, which means they're completely \nfictitious and exaggerated upwards for your purposes. In the real \nworld, that will result in database corruption after a crash one day. \nNo one on the drive benchmarking side of the industry seems to have \npicked up on this, so you can't use any of those figures. 
I'm not even \nsure right now whether drives like Intel's will even meet their lifetime \nexpectations if they aren't allowed to use their internal volatile write \ncache.\n\nHere's two links you should read and then reconsider your whole design: \n\nhttp://www.mysqlperformanceblog.com/2009/03/02/ssd-xfs-lvm-fsync-write-cache-barrier-and-lost-transactions/\nhttp://petereisentraut.blogspot.com/2009/07/solid-state-drive-benchmarks-and-write.html\n\nI can't even imagine how bad the situation would be if you decide to \nwander down the \"use a bunch of really cheap SSD drives\" path; these \nthings are barely usable for databases with Intel's hardware. The needs \nof people who want to throw SSD in a laptop and those of the enterprise \ndatabase market are really different, and if you believe doom \nforecasting like the comments at \nhttp://blogs.sun.com/BestPerf/entry/oracle_peoplesoft_payroll_sun_sparc \nthat gap is widening, not shrinking.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Fri, 13 Nov 2009 12:21:28 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "\n\n\nOn 11/13/09 7:29 AM, \"Merlin Moncure\" <[email protected]> wrote:\n\n> On Fri, Nov 13, 2009 at 9:48 AM, Scott Marlowe <[email protected]>\n> wrote:\n>> I think RAID6 is gonna reduce the throughput due to overhead to\n>> something far less than what a software RAID-10 would achieve.\n> \n> I was wondering about this. I think raid 5/6 might be a better fit\n> for SSD than traditional drives arrays. Here's my thinking:\n> \n> *) flash SSD reads are cheaper than writes. With 6 or more drives,\n> less total data has to be written in Raid 5 than Raid 10. The main\n> component of raid 5 performance penalty is that for each written\n> block, it has to be read first than written...incurring rotational\n> latency, etc. SSD does not have this problem.\n> \n\nFor random writes, RAID 5 writes as much as RAID 10 (parity + data), and\nmore if the raid block size is larger than 8k. With RAID 6 it writes 50%\nmore than RAID 10.\n\nFor streaming writes RAID 5 / 6 has an advantage however.\n\nFor SLC drives, there is really not much of a write performance penalty.\n> \n\n", "msg_date": "Fri, 13 Nov 2009 09:22:17 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Greg Smith wrote:\n> In order for a drive to work reliably for database use such as for\n> PostgreSQL, it cannot have a volatile write cache. You either need a\n> write cache with a battery backup (and a UPS doesn't count), or to\n> turn the cache off. The SSD performance figures you've been looking\n> at are with the drive's write cache turned on, which means they're\n> completely fictitious and exaggerated upwards for your purposes. In\n> the real world, that will result in database corruption after a crash\n> one day.\nIf power is \"unexpectedly\" removed from the system, this is true. But\nthe caches on the SSD controllers are BUFFERS. An operating system\ncrash does not disrupt the data in them or cause corruption. An\nunexpected disconnection of the power source from the drive (due to\nunplugging it or a power supply failure for whatever reason) is a\ndifferent matter.\n> No one on the drive benchmarking side of the industry seems to have\n> picked up on this, so you can't use any of those figures. 
I'm not\n> even sure right now whether drives like Intel's will even meet their\n> lifetime expectations if they aren't allowed to use their internal\n> volatile write cache.\n>\n> Here's two links you should read and then reconsider your whole design:\n> http://www.mysqlperformanceblog.com/2009/03/02/ssd-xfs-lvm-fsync-write-cache-barrier-and-lost-transactions/\n>\n> http://petereisentraut.blogspot.com/2009/07/solid-state-drive-benchmarks-and-write.html\n>\n>\n> I can't even imagine how bad the situation would be if you decide to\n> wander down the \"use a bunch of really cheap SSD drives\" path; these\n> things are barely usable for databases with Intel's hardware. The\n> needs of people who want to throw SSD in a laptop and those of the\n> enterprise database market are really different, and if you believe\n> doom forecasting like the comments at\n> http://blogs.sun.com/BestPerf/entry/oracle_peoplesoft_payroll_sun_sparc\n> that gap is widening, not shrinking.\nAgain, it depends.\n\nWith the write cache off on these disks they still are huge wins for\nvery-heavy-read applications, which many are. The issue is (as always)\noperation mix - if you do a lot of inserts and updates then you suffer,\nbut a lot of database applications are in the high 90%+ SELECTs both in\nfrequency and data flow volume. The lack of rotational and seek latency\nin those applications is HUGE.\n\n-- Karl Denninger", "msg_date": "Fri, 13 Nov 2009 11:35:34 -0600", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Karl Denninger wrote:\n> If power is \"unexpectedly\" removed from the system, this is true. But\n> the caches on the SSD controllers are BUFFERS. An operating system\n> crash does not disrupt the data in them or cause corruption. An\n> unexpected disconnection of the power source from the drive (due to\n> unplugging it or a power supply failure for whatever reason) is a\n> different matter.\n> \nAs standard operating procedure, I regularly get something writing heavy \nto the database on hardware I'm suspicious of and power the box off \nhard. If at any time I suffer database corruption from this, the \nhardware is unsuitable for database use; that should never happen. This \nis what I mean when I say something meets the mythical \"enterprise\" \nquality. Companies whose data is worth something can't operate in a \nsituation where money has been exchanged because a database commit was \nrecorded, only to lose that commit just because somebody tripped over \nthe power cord and it was in the buffer rather than on permanent disk. \nThat's just not acceptable, and the even bigger danger of the database \nperhaps not coming up altogether even after such a tiny disaster is also \nvery real with a volatile write cache.\n\n> With the write cache off on these disks they still are huge wins for\n> very-heavy-read applications, which many are.\nVery read-heavy applications would do better to buy a ton of RAM instead \nand just make sure they populate from permanent media (say by reading \neverything in early at sequential rates to prime the cache). 
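A crude way to do that kind of priming (paths and object names here are only examples) is to drag everything through the OS cache at sequential speed right after startup, either at the file level or with a throwaway query:

    # read the cluster's data files once, discarding the output
    find /var/lib/pgsql/data/base -type f -exec cat {} + > /dev/null

    # or warm just the hot table from inside the database
    psql mydb -c 'SELECT count(*) FROM myschema.mytable'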
There is \nan extremely narrow use-case where SSDs are the right technology, and \nit's only in a subset even of read-heavy apps where they make sense.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Fri, 13 Nov 2009 13:07:39 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Greg Smith wrote:\n> Karl Denninger wrote:\n>> If power is \"unexpectedly\" removed from the system, this is true. But\n>> the caches on the SSD controllers are BUFFERS. An operating system\n>> crash does not disrupt the data in them or cause corruption. An\n>> unexpected disconnection of the power source from the drive (due to\n>> unplugging it or a power supply failure for whatever reason) is a\n>> different matter.\n>> \n> As standard operating procedure, I regularly get something writing\n> heavy to the database on hardware I'm suspicious of and power the box\n> off hard. If at any time I suffer database corruption from this, the\n> hardware is unsuitable for database use; that should never happen. \n> This is what I mean when I say something meets the mythical\n> \"enterprise\" quality. Companies whose data is worth something can't\n> operate in a situation where money has been exchanged because a\n> database commit was recorded, only to lose that commit just because\n> somebody tripped over the power cord and it was in the buffer rather\n> than on permanent disk. That's just not acceptable, and the even\n> bigger danger of the database perhaps not coming up altogether even\n> after such a tiny disaster is also very real with a volatile write cache.\nYep. The \"plug test\" is part of my standard \"is this stable enough for\nsomething I care about\" checkout.\n>> With the write cache off on these disks they still are huge wins for\n>> very-heavy-read applications, which many are.\n> Very read-heavy applications would do better to buy a ton of RAM\n> instead and just make sure they populate from permanent media (say by\n> reading everything in early at sequential rates to prime the cache). \n> There is an extremely narrow use-case where SSDs are the right\n> technology, and it's only in a subset even of read-heavy apps where\n> they make sense.\nI don't know about that in the general case - I'd say \"it depends.\"\n\n250GB of SSD for read-nearly-always applications is a LOT cheaper than\n250gb of ECC'd DRAM. The write performance issues can be handled by\nclever use of controller technology as well (that is, turn off the\ndrive's \"write cache\" and use the BBU on the RAID adapter.)\n\nI have a couple of applications where two 250GB SSD disks in a Raid 1\narray with a BBU'd controller, with the disk drive cache off, is all-in\na fraction of the cost of sticking 250GB of volatile storage in a server\nand reading in the data set (plus managing the occasional updates) from\n\"stable storage.\" It is not as fast as stuffing the 250GB of RAM in a\nmachine but it's a hell of a lot faster than a big array of small\nconventional drives in a setup designed for maximum IO-Ops.\n\nOne caution for those thinking of doing this - the incremental\nimprovement of this setup on PostGresql in WRITE SIGNIFICANT environment\nisn't NEARLY as impressive. 
Indeed the performance in THAT case for\nmany workloads may only be 20 or 30% faster than even \"reasonably\npedestrian\" rotating media in a high-performance (lots of spindles and\nthus stripes) configuration and it's more expensive (by a lot.) If you\nstep up to the fast SAS drives on the rotating side there's little\nargument for the SSD at all (again, assuming you don't intend to \"cheat\"\nand risk data loss.)\n\nKnow your application and benchmark it.\n\n-- Karl", "msg_date": "Fri, 13 Nov 2009 12:21:19 -0600", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "On Fri, Nov 13, 2009 at 12:22 PM, Scott Carey\n<[email protected]> > On 11/13/09 7:29 AM, \"Merlin Moncure\"\n<[email protected]> wrote:\n>\n>> On Fri, Nov 13, 2009 at 9:48 AM, Scott Marlowe <[email protected]>\n>> wrote:\n>>> I think RAID6 is gonna reduce the throughput due to overhead to\n>>> something far less than what a software RAID-10 would achieve.\n>>\n>> I was wondering about this.  I think raid 5/6 might be a better fit\n>> for SSD than traditional drives arrays.  Here's my thinking:\n>>\n>> *) flash SSD reads are cheaper than writes.  With 6 or more drives,\n>> less total data has to be written in Raid 5 than Raid 10.  The main\n>> component of raid 5 performance penalty is that for each written\n>> block, it has to be read first than written...incurring rotational\n>> latency, etc.   SSD does not have this problem.\n>>\n>\n> For random writes, RAID 5 writes as much as RAID 10 (parity + data), and\n> more if the raid block size is larger than 8k.  With RAID 6 it writes 50%\n> more than RAID 10.\n\nhow does raid 5 write more if the block size is > 8k? raid 10 is also\nstriped, so has the same problem, right? IOW, if the block size is 8k\nand you need to write 16k sequentially the raid 5 might write out 24k\n(two blocks + parity). raid 10 always writes out 2x your data in\nterms of blocks (raid 5 does only in the worst case). For a SINGLE\nblock, it's always 2x your data for both raid 5 and raid 10, so what i\nsaid above was not quite correct.\n\nraid 6 is not going to outperform raid 10 ever IMO. It's just a\nslightly safer raid 5. I was just wondering out loud if raid 5 might\ngive similar performance to raid 10 on flash based disks since there\nis no rotational latency. even if it did, I probably still wouldn't\nuse it...\n\nmerlin\n", "msg_date": "Fri, 13 Nov 2009 13:31:29 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "2009/11/13 Greg Smith <[email protected]>:\n> In order for a drive to work reliably for database use such as for\n> PostgreSQL, it cannot have a volatile write cache.  You either need a write\n> cache with a battery backup (and a UPS doesn't count), or to turn the cache\n> off.  The SSD performance figures you've been looking at are with the\n> drive's write cache turned on, which means they're completely fictitious and\n> exaggerated upwards for your purposes.  In the real world, that will result\n> in database corruption after a crash one day.  No one on the drive\n> benchmarking side of the industry seems to have picked up on this, so you\n> can't use any of those figures.  I'm not even sure right now whether drives\n> like Intel's will even meet their lifetime expectations if they aren't\n> allowed to use their internal volatile write cache.\n\nhm. 
I never understood why Peter was only able to turn up 400 iops\nwhen others were turning up 4000+ (measured from bonnie). This would\nexplain it.\n\nIs it authoritatively known that the Intel drives true random write\nops is not what they are claiming? If so, then you are right..flash\ndoesn't make sense, at least not without a NV cache on the device.\n\nmerlin\n", "msg_date": "Fri, 13 Nov 2009 13:57:28 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Greg Smith wrote:\n> Karl Denninger wrote:\n>> With the write cache off on these disks they still are huge wins for\n>> very-heavy-read applications, which many are.\n> Very read-heavy applications would do better to buy a ton of RAM \n> instead and just make sure they populate from permanent media (say by \n> reading everything in early at sequential rates to prime the cache). \n> There is an extremely narrow use-case where SSDs are the right \n> technology, and it's only in a subset even of read-heavy apps where \n> they make sense.\n\nOut of curiosity, what are those narrow use cases where you think SSD's \nare the correct technology?\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n", "msg_date": "Fri, 13 Nov 2009 14:24:16 -0500", "msg_from": "Brad Nicholson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Itching to jump in here :-)\n\nThere are a lot of things to trade off when choosing storage for a\ndatabase: performance for different parts of the workload,\nreliability, performance in degraded mode (when a disk dies), backup\nmethodologies, etc. ... the mistake many people make is to overlook\nthe sub-optimal operating conditions, dailure modes and recovery\npaths.\n\nSome thoughts:\n\n- RAID-5 and RAID-6 have poor write performance, and terrible\nperformance in degraded mode - there are a few edge cases, but in\nalmost all cases you should be using RAID-10 for a database.\n\n- Like most apps, the ultimate way to make a databse perform is to\nhave most of it (or at least the working set) in RAM, preferably the\nDB server buffer cache. This is why big banks run Oracle on an HP\nSuperdome with 1TB of RAM ... the $15m Hitachi data array is just\nbacking store :-)\n\n- Personally, I'm an SSD skeptic ... the technology just isn't mature\nenough for the data center. If you apply a typical OLTP workload, they\nare going to die early deaths. The only case in which they will\nmaterially improve performance is where you have a large data set with\nlots of **totally random** reads, i.e. where buffer cache is\nineffective. In the words of TurboTax, \"this is not common\".\n\n- If you're going to use synchronous write with a significant amount\nof small transactions, then you need some reliable RAM (not SSD) to\ncommit log files into, which means a proper battery-backed RAID\ncontroller / external SAN with write-back cache. For many apps though,\na synchronous commit simply isn't necessary: losing a few rows of data\nduring a crash is relatively harmless. 
For these apps, turning off\nsynchronous writes is an often overlooked performance tweak.\n\n\nIn summary, don't get distracted by shiny new objects like SSD and RAID-6 :-)\n\n\n2009/11/13 Brad Nicholson <[email protected]>:\n> Greg Smith wrote:\n>>\n>> Karl Denninger wrote:\n>>>\n>>> With the write cache off on these disks they still are huge wins for\n>>> very-heavy-read applications, which many are.\n>>\n>> Very read-heavy applications would do better to buy a ton of RAM instead\n>> and just make sure they populate from permanent media (say by reading\n>> everything in early at sequential rates to prime the cache).  There is an\n>> extremely narrow use-case where SSDs are the right technology, and it's only\n>> in a subset even of read-heavy apps where they make sense.\n>\n> Out of curiosity, what are those narrow use cases where you think SSD's are\n> the correct technology?\n>\n> --\n> Brad Nicholson  416-673-4106\n> Database Administrator, Afilias Canada Corp.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Fri, 13 Nov 2009 14:22:22 -0600", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": " \n\n> -----Mensaje original-----\n> Laszlo Nagy\n> \n> My question is about the last option. Are there any good RAID \n> cards that are optimized (or can be optimized) for SSD \n> drives? Do any of you have experience in using many cheaper \n> SSD drives? Is it a bad idea?\n> \n> Thank you,\n> \n> Laszlo\n> \n\nNever had a SSD to try yet, still I wonder if software raid + fsync on SSD\nDrives could be regarded as a sound solution?\nShouldn't their write performance be more than a trade-off for fsync?\n\nYou could benchmark this setup yourself before purchasing a RAID card.\n\n", "msg_date": "Fri, 13 Nov 2009 17:57:38 -0300", "msg_from": "\"Fernando Hevia\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Brad Nicholson wrote:\n> Out of curiosity, what are those narrow use cases where you think \n> SSD's are the correct technology?\nDave Crooke did a good summary already, I see things like this:\n\n * You need to have a read-heavy app that's bigger than RAM, but not too \nbig so it can still fit on SSD\n * You need reads to be dominated by random-access and uncached lookups, \nso that system RAM used as a buffer cache doesn't help you much.\n * Writes have to be low to moderate, as the true write speed is much \nlower for database use than you'd expect from benchmarks derived from \nother apps. And it's better if writes are biased toward adding data \nrather than changing existing pages\n\nAs far as what real-world apps have that profile, I like SSDs for small \nto medium web applications that have to be responsive, where the user \nshows up and wants their randomly distributed and uncached data with \nminimal latency. 
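One rough way to check the uncached-lookups part on an existing system is to look at the per-table buffer cache statistics; this only sees PostgreSQL's own shared buffers, not the OS page cache, so a low hit ratio here does not necessarily mean physical disk reads, but it is a cheap first pass:

    SELECT relname, heap_blks_read, heap_blks_hit,
           round(heap_blks_hit::numeric
                 / nullif(heap_blks_hit + heap_blks_read, 0), 3) AS hit_ratio
    FROM pg_statio_user_tables
    ORDER BY heap_blks_read DESC
    LIMIT 10;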
\n\nSSDs can also be used effectively as second-tier targeted storage for \nthings that have a performance-critical but small and random bit as part \nof a larger design that doesn't have those characteristics; putting \nindexes on SSD can work out well for example (and there the write \ndurability stuff isn't quite as critical, as you can always drop an \nindex and rebuild if it gets corrupted).\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Fri, 13 Nov 2009 16:06:20 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "2009/11/13 Greg Smith <[email protected]>:\n> As far as what real-world apps have that profile, I like SSDs for small to\n> medium web applications that have to be responsive, where the user shows up\n> and wants their randomly distributed and uncached data with minimal latency.\n> SSDs can also be used effectively as second-tier targeted storage for things\n> that have a performance-critical but small and random bit as part of a\n> larger design that doesn't have those characteristics; putting indexes on\n> SSD can work out well for example (and there the write durability stuff\n> isn't quite as critical, as you can always drop an index and rebuild if it\n> gets corrupted).\n\n\nHere's a bonnie++ result for Intel showing 14k seeks:\nhttp://www.wlug.org.nz/HarddiskBenchmarks\n\nbonnie++ only writes data back 10% of the time. Why is Peter's\nbenchmark showing only 400 seeks? Is this all attributable to write\nbarrier? I'm not sure I'm buying that...\n\nmerlin\n", "msg_date": "Fri, 13 Nov 2009 16:09:18 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Fernando Hevia wrote:\n> Shouldn't their write performance be more than a trade-off for fsync?\n> \nNot if you have sequential writes that are regularly fsync'd--which is \nexactly how the WAL writes things out in PostgreSQL. I think there's a \npotential for SSD to reach a point where they can give good performance \neven with their write caches turned off. But it will require a more \nrobust software stack, like filesystems that really implement the write \nbarrier concept effectively for this use-case, for that to happen.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Fri, 13 Nov 2009 16:09:59 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "The FusionIO products are a little different. They are card based vs trying to emulate a traditional disk. In terms of volatility, they have an on-board capacitor that allows power to be supplied until all writes drain. They do not have a cache in front of them like a disk-type SSD might. I don't sell these things, I am just a fan. I verified all this with the Fusion IO techs before I replied. Perhaps older versions didn't have this functionality? I am not sure. I have already done some cold power off tests w/o problems, but I could up the workload a bit and retest. I will do a couple of 'pull the cable' tests on monday or tuesday and report back how it goes.\n\nRe the performance #'s... 
Here is my post:\n\nhttp://www.kennygorman.com/wordpress/?p=398\n\n-kg\n\n \n>In order for a drive to work reliably for database use such as for \n>PostgreSQL, it cannot have a volatile write cache.  You either need a \n>write cache with a battery backup (and a UPS doesn't count), or to turn \n>the cache off.  The SSD performance figures you've been looking at are \n>with the drive's write cache turned on, which means they're completely \n>fictitious and exaggerated upwards for your purposes.  In the real \n>world, that will result in database corruption after a crash one day. \n>No one on the drive benchmarking side of the industry seems to have \n>picked up on this, so you can't use any of those figures.  I'm not even \n>sure right now whether drives like Intel's will even meet their lifetime \n>expectations if they aren't allowed to use their internal volatile write \n>cache.\n>\n>Here's two links you should read and then reconsider your whole design: \n>\n>http://www.mysqlperformanceblog.com/2009/03/02/ssd-xfs-lvm-fsync-write-cache-barrier-and-lost-transactions/\n>http://petereisentraut.blogspot.com/2009/07/solid-state-drive-benchmarks-and-write.html\n>\n>I can't even imagine how bad the situation would be if you decide to \n>wander down the \"use a bunch of really cheap SSD drives\" path; these \n>things are barely usable for databases with Intel's hardware.  The needs \n>of people who want to throw SSD in a laptop and those of the enterprise \n>database market are really different, and if you believe doom \n>forecasting like the comments at \n>http://blogs.sun.com/BestPerf/entry/oracle_peoplesoft_payroll_sun_sparc \n>that gap is widening, not shrinking.
", "msg_date": "Fri, 13 Nov 2009 16:35:57 -0500", "msg_from": "\"Kenny Gorman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Laszlo Nagy wrote:\n> Hello,\n>\n> I'm about to buy SSD drive(s) for a database. For decision making, I \n> used this tech report:\n>\n> http://techreport.com/articles.x/16255/9\n> http://techreport.com/articles.x/16255/10\n>\n> Here are my concerns:\n>\n> * I need at least 32GB disk space. So DRAM based SSD is not a real\n> option. I would have to buy 8x4GB memory, costs a fortune. And\n> then it would still not have redundancy.\n> * I could buy two X25-E drives and have 32GB disk space, and some\n> redundancy. This would cost about $1600, not counting the RAID\n> controller. It is on the edge.\nThis was the solution I went with (4 drives in a raid 10 actually). Not \na cheap solution, but the performance is amazing.\n\n> * I could also buy many cheaper MLC SSD drives. They cost about\n> $140. So even with 10 drives, I'm at $1400. I could put them in\n> RAID6, have much more disk space (256GB), high redundancy and\n> POSSIBLY good read/write speed. Of course then I need to buy a\n> good RAID controller.\n>\n> My question is about the last option. Are there any good RAID cards \n> that are optimized (or can be optimized) for SSD drives? Do any of you \n> have experience in using many cheaper SSD drives? Is it a bad idea?\n>\n> Thank you,\n>\n> Laszlo\n>\n>\n\n", "msg_date": "Sat, 14 Nov 2009 01:30:43 -0800", "msg_from": "Lists <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Lists wrote:\n> Laszlo Nagy wrote:\n>> Hello,\n>>\n>> I'm about to buy SSD drive(s) for a database. For decision making, I \n>> used this tech report:\n>>\n>> http://techreport.com/articles.x/16255/9\n>> http://techreport.com/articles.x/16255/10\n>>\n>> Here are my concerns:\n>>\n>> * I need at least 32GB disk space. So DRAM based SSD is not a real\n>> option. I would have to buy 8x4GB memory, costs a fortune. And\n>> then it would still not have redundancy.\n>> * I could buy two X25-E drives and have 32GB disk space, and some\n>> redundancy. This would cost about $1600, not counting the RAID\n>> controller. It is on the edge.\n> This was the solution I went with (4 drives in a raid 10 actually). 
Not \n> a cheap solution, but the performance is amazing.\n\nI've came across this article:\n\nhttp://www.mysqlperformanceblog.com/2009/03/02/ssd-xfs-lvm-fsync-write-cache-barrier-and-lost-transactions/\n\nIt's from a Linux MySQL user so it's a bit confusing but it looks like \nhe has some reservations about performance vs reliability of the Intel \ndrives - apparently they have their own write cache and when it's \ndisabled performance drops sharply.\n\n", "msg_date": "Sat, 14 Nov 2009 11:42:45 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Merlin Moncure wrote:\n> 2009/11/13 Heikki Linnakangas <[email protected]>:\n>> Laszlo Nagy wrote:\n>>> * I need at least 32GB disk space. So DRAM based SSD is not a real\n>>> option. I would have to buy 8x4GB memory, costs a fortune. And\n>>> then it would still not have redundancy.\n>> At 32GB database size, I'd seriously consider just buying a server with\n>> a regular hard drive or a small RAID array for redundancy, and stuffing\n>> 16 or 32 GB of RAM into it to ensure everything is cached. That's tried\n>> and tested technology.\n> \n> lots of ram doesn't help you if:\n> *) your database gets written to a lot and you have high performance\n> requirements\n\nWhen all the (hot) data is cached, all writes are sequential writes to\nthe WAL, with the occasional flushing of the data pages at checkpoint.\nThe sequential write bandwidth of SSDs and HDDs is roughly the same.\n\nI presume the fsync latency is a lot higher with HDDs, so if you're\nrunning a lot of small write transactions, and don't want to risk losing\nany recently committed transactions by setting synchronous_commit=off,\nthe usual solution is to get a RAID controller with a battery-backed up\ncache. With a BBU cache, the fsync latency should be in the same\nballpark as with SDDs.\n\n> *) your data is important\n\nHuh? The data is safely on the hard disk in case of a crash. The RAM is\njust for caching.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Sat, 14 Nov 2009 13:17:37 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "On Sat, Nov 14, 2009 at 6:17 AM, Heikki Linnakangas\n<[email protected]> wrote:\n>> lots of ram doesn't help you if:\n>> *) your database gets written to a lot and you have high performance\n>> requirements\n>\n> When all the (hot) data is cached, all writes are sequential writes to\n> the WAL, with the occasional flushing of the data pages at checkpoint.\n> The sequential write bandwidth of SSDs and HDDs is roughly the same.\n>\n> I presume the fsync latency is a lot higher with HDDs, so if you're\n> running a lot of small write transactions, and don't want to risk losing\n> any recently committed transactions by setting synchronous_commit=off,\n> the usual solution is to get a RAID controller with a battery-backed up\n> cache. With a BBU cache, the fsync latency should be in the same\n> ballpark as with SDDs.\n\nBBU raid controllers might only give better burst performance. If you\nare writing data randomly all over the volume, the cache will overflow\nand performance will degrade. Raid controllers degrade in different\nfashions, at least one (perc 5) halted ALL access to the volume and\nspun out the cache (a bug, IMO).\n\n>> *) your data is important\n>\n> Huh? The data is safely on the hard disk in case of a crash. 
The RAM is\n> just for caching.\n\nI was alluding to not being able to lose any transactions... in this\ncase you can only run fsync, synchronously. You are then bound by the\ncapabilities of the volume to write, ram only buffers reads.\n\nmerlin\n", "msg_date": "Sat, 14 Nov 2009 08:05:25 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Merlin Moncure wrote:\n> On Sat, Nov 14, 2009 at 6:17 AM, Heikki Linnakangas\n> <[email protected]> wrote:\n>>> lots of ram doesn't help you if:\n>>> *) your database gets written to a lot and you have high performance\n>>> requirements\n>> When all the (hot) data is cached, all writes are sequential writes to\n>> the WAL, with the occasional flushing of the data pages at checkpoint.\n>> The sequential write bandwidth of SSDs and HDDs is roughly the same.\n>>\n>> I presume the fsync latency is a lot higher with HDDs, so if you're\n>> running a lot of small write transactions, and don't want to risk losing\n>> any recently committed transactions by setting synchronous_commit=off,\n>> the usual solution is to get a RAID controller with a battery-backed up\n>> cache. With a BBU cache, the fsync latency should be in the same\n>> ballpark as with SDDs.\n> \n> BBU raid controllers might only give better burst performance. If you\n> are writing data randomly all over the volume, the cache will overflow\n> and performance will degrade.\n\nWe're discussing a scenario where all the data fits in RAM. That's what\nthe large amount of RAM is for. The only thing that's being written to\ndisk is the WAL, which is sequential, and the occasional flush of data\npages from the buffer cache at checkpoints, which doesn't happen often\nand will be spread over a period of time.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Sat, 14 Nov 2009 15:47:06 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Heikki Linnakangas wrote:\n> Laszlo Nagy wrote:\n> \n>> * I need at least 32GB disk space. So DRAM based SSD is not a real\n>> option. I would have to buy 8x4GB memory, costs a fortune. And\n>> then it would still not have redundancy.\n>> \n>\n> At 32GB database size, I'd seriously consider just buying a server with\n> a regular hard drive or a small RAID array for redundancy, and stuffing\n> 16 or 32 GB of RAM into it to ensure everything is cached. That's tried\n> and tested technology.\n> \n32GB is for one table only. This server runs other applications, and you \nneed to leave space for sort memory, shared buffers etc. Buying 128GB \nmemory would solve the problem, maybe... but it is too expensive. And it \nis not safe. Power out -> data loss.\n> I don't know how you came to the 32 GB figure, but keep in mind that\n> administration is a lot easier if you have plenty of extra disk space\n> for things like backups, dumps+restore, temporary files, upgrades etc.\n> \nThis disk space would be dedicated for a smaller tablespace, holding one \nor two bigger tables with index scans. Of course I would never use an \nSSD disk for storing database backups. It would be waste of money.\n\n\n L\n\n", "msg_date": "Sat, 14 Nov 2009 19:39:36 +0430", "msg_from": "Laszlo Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "2009/11/14 Laszlo Nagy <[email protected]>:\n> 32GB is for one table only. 
This server runs other applications, and you\n> need to leave space for sort memory, shared buffers etc. Buying 128GB memory\n> would solve the problem, maybe... but it is too expensive. And it is not\n> safe. Power out -> data loss.\n\nHuh?\n\n...Robert\n", "msg_date": "Sat, 14 Nov 2009 15:03:34 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "On Sat, Nov 14, 2009 at 8:47 AM, Heikki Linnakangas\n<[email protected]> wrote:\n> Merlin Moncure wrote:\n>> On Sat, Nov 14, 2009 at 6:17 AM, Heikki Linnakangas\n>> <[email protected]> wrote:\n>>>> lots of ram doesn't help you if:\n>>>> *) your database gets written to a lot and you have high performance\n>>>> requirements\n>>> When all the (hot) data is cached, all writes are sequential writes to\n>>> the WAL, with the occasional flushing of the data pages at checkpoint.\n>>> The sequential write bandwidth of SSDs and HDDs is roughly the same.\n>>>\n>>> I presume the fsync latency is a lot higher with HDDs, so if you're\n>>> running a lot of small write transactions, and don't want to risk losing\n>>> any recently committed transactions by setting synchronous_commit=off,\n>>> the usual solution is to get a RAID controller with a battery-backed up\n>>> cache. With a BBU cache, the fsync latency should be in the same\n>>> ballpark as with SDDs.\n>>\n>> BBU raid controllers might only give better burst performance.  If you\n>> are writing data randomly all over the volume, the cache will overflow\n>> and performance will degrade.\n>\n> We're discussing a scenario where all the data fits in RAM. That's what\n> the large amount of RAM is for. The only thing that's being written to\n> disk is the WAL, which is sequential, and the occasional flush of data\n> pages from the buffer cache at checkpoints, which doesn't happen often\n> and will be spread over a period of time.\n\nWe are basically in agreement, but regardless of the effectiveness of\nyour WAL implementation, raid controller, etc, if you have to write\ndata to what approximates random locations to a disk based volume in a\nsustained manner, you must eventually degrade to whatever the drive\ncan handle plus whatever efficiencies checkpoint, o/s, can gain by\ngrouping writes together. Extra ram mainly helps only because it can\nshave precious iops off the read side so you use them for writing.\n\nmerlin\n", "msg_date": "Sat, 14 Nov 2009 16:30:39 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Robert Haas wrote:\n> 2009/11/14 Laszlo Nagy <[email protected]>:\n> \n>> 32GB is for one table only. This server runs other applications, and you\n>> need to leave space for sort memory, shared buffers etc. Buying 128GB memory\n>> would solve the problem, maybe... but it is too expensive. And it is not\n>> safe. Power out -> data loss.\n>> \nI'm sorry I though he was talking about keeping the database in memory \nwith fsync=off. Now I see he was only talking about the OS disk cache.\n\nMy server has 24GB RAM, and I cannot easily expand it unless I throw out \nsome 2GB modules, and buy more 4GB or 8GB modules. But... buying 4x8GB \nECC RAM (+throwing out 4x2GB RAM) is a lot more expensive than buying \nsome 64GB SSD drives. 95% of the table in question is not modified. Only \nread (mostly with index scan). 
Only 5% is actively updated.\n\nThis is why I think, using SSD in my case would be effective.\n\nSorry for the confusion.\n\n L\n\n", "msg_date": "Sun, 15 Nov 2009 06:39:51 +0430", "msg_from": "Laszlo Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "\n>>>\n>>> * I could buy two X25-E drives and have 32GB disk space, and some\n>>> redundancy. This would cost about $1600, not counting the RAID\n>>> controller. It is on the edge.\n>> This was the solution I went with (4 drives in a raid 10 actually). \n>> Not a cheap solution, but the performance is amazing.\n>\n> I've came across this article:\n>\n> http://www.mysqlperformanceblog.com/2009/03/02/ssd-xfs-lvm-fsync-write-cache-barrier-and-lost-transactions/ \n>\n>\n> It's from a Linux MySQL user so it's a bit confusing but it looks like \n> he has some reservations about performance vs reliability of the Intel \n> drives - apparently they have their own write cache and when it's \n> disabled performance drops sharply.\nOk, I'm getting confused here. There is the WAL, which is written \nsequentially. If the WAL is not corrupted, then it can be replayed on \nnext database startup. Please somebody enlighten me! In my mind, fsync \nis only needed for the WAL. If I could configure postgresql to put the \nWAL on a real hard drive that has BBU and write cache, then I cannot \nloose data. Meanwhile, product table data could be placed on the SSD \ndrive, and I sould be able to turn on write cache safely. Am I wrong?\n\n L\n\n", "msg_date": "Sun, 15 Nov 2009 08:27:06 +0430", "msg_from": "Laszlo Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "\n> A change has been written to the WAL and fsync()'d, so Pg knows it's hit\n> disk. It can now safely apply the change to the tables themselves, and\n> does so, calling fsync() to tell the drive containing the tables to\n> commit those changes to disk.\n>\n> The drive lies, returning success for the fsync when it's just cached\n> the data in volatile memory. Pg carries on, shortly deleting the WAL\n> archive the changes were recorded in or recycling it and overwriting it\n> with new change data. The SSD is still merrily buffering data to write\n> cache, and hasn't got around to writing your particular change yet.\n> \nAll right. I believe you. In the current Pg implementation, I need to \nturn of disk cache.\n\nBut.... I would like to ask some theoretical questions. It is just an \nidea from me, and probably I'm wrong.\nHere is a scenario:\n\n#1. user wants to change something, resulting in a write_to_disk(data) call\n#2. data is written into the WAL and fsync()-ed\n#3. at this point the write_to_disk(data) call CAN RETURN, the user can \ncontinue his work (the WAL is already written, changes cannot be lost)\n#4. Pg can continue writting data onto the disk, and fsync() it.\n#5. Then WAL archive data can be deleted.\n\nNow maybe I'm wrong, but between #3 and #5, the data to be written is \nkept in memory. This is basically a write cache, implemented in OS \nmemory. We could really handle it like a write cache. E.g. everything \nwould remain the same, except that we add some latency. We can wait some \ntime after the last modification of a given block, and then write it out.\n\nIs it possible to do? If so, then can we can turn off write cache for \nall drives, except the one holding the WAL. And still write speed would \nremain the same. 
I don't think that any SSD drive has more than some \nmegabytes of write cache. The same amount of write cache could easily be \nimplemented in OS memory, and then Pg would always know what hit the disk.\n\nThanks,\n\n Laci\n\n", "msg_date": "Sun, 15 Nov 2009 10:35:12 +0430", "msg_from": "Laszlo Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "\n> - Pg doesn't know the erase block sizes or positions. It can't group\n> writes up by erase block except by hoping that, within a given file,\n> writing in page order will get the blocks to the disk in roughly\n> erase-block order. So your write caching isn't going to do anywhere near\n> as good a job as the SSD's can.\n> \nOkay, I see. We cannot query erase block size from an SSD drive. :-(\n>> I don't think that any SSD drive has more than some\n>> megabytes of write cache.\n>> \n>\n> The big, lots-of-$$ ones have HUGE battery backed caches for exactly\n> this reason.\n> \nHeh, this is why they are so expensive. :-)\n>> The same amount of write cache could easily be\n>> implemented in OS memory, and then Pg would always know what hit the disk.\n>> \n>\n> Really? How does Pg know what order the SSD writes things out from its\n> cache?\n> \nI got the point. We cannot implement an efficient write cache without \nmuch more knowledge about how that particular drive works.\n\nSo... the only solution that works well is to have much more RAM for \nread cache, and much more RAM for write cache inside the RAID controller \n(with BBU).\n\nThank you,\n\n Laszlo\n\n", "msg_date": "Sun, 15 Nov 2009 12:45:43 +0430", "msg_from": "Laszlo Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "On 15/11/2009 11:57 AM, Laszlo Nagy wrote:\n\n> Ok, I'm getting confused here. There is the WAL, which is written\n> sequentially. If the WAL is not corrupted, then it can be replayed on\n> next database startup. Please somebody enlighten me! In my mind, fsync\n> is only needed for the WAL. If I could configure postgresql to put the\n> WAL on a real hard drive that has BBU and write cache, then I cannot\n> loose data. Meanwhile, product table data could be placed on the SSD\n> drive, and I sould be able to turn on write cache safely. Am I wrong?\n\nA change has been written to the WAL and fsync()'d, so Pg knows it's hit\ndisk. It can now safely apply the change to the tables themselves, and\ndoes so, calling fsync() to tell the drive containing the tables to\ncommit those changes to disk.\n\nThe drive lies, returning success for the fsync when it's just cached\nthe data in volatile memory. Pg carries on, shortly deleting the WAL\narchive the changes were recorded in or recycling it and overwriting it\nwith new change data. The SSD is still merrily buffering data to write\ncache, and hasn't got around to writing your particular change yet.\n\nThe machine loses power.\n\nOops! A hole just appeared in history. A WAL replay won't re-apply the\nchanges that the database guaranteed had hit disk, but the changes never\nmade it onto the main database storage.\n\nPossible fixes for this are:\n\n- Don't let the drive lie about cache flush operations, ie disable write\nbuffering.\n\n- Give Pg some way to find out, from the drive, when particular write\noperations have actually hit disk. AFAIK there's no such mechanism at\npresent, and I don't think the drives are even capable of reporting this\ndata. 
If they were, Pg would have to be capable of applying entries from\nthe WAL \"sparsely\" to account for the way the drive's write cache\ncommits changes out-of-order, and Pg would have to maintain a map of\ncommitted / uncommitted WAL records. Pg would need another map of\ntablespace blocks to WAL records to know, when a drive write cache\ncommit notice came in, what record in what WAL archive was affected.\nIt'd also require Pg to keep WAL archives for unbounded and possibly\nlong periods of time, making disk space management for WAL much harder.\nSo - \"not easy\" is a bit of an understatement here.\n\nYou still need to turn off write caching.\n\n--\nCraig Ringer\n\n", "msg_date": "Sun, 15 Nov 2009 16:46:56 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "On 15/11/2009 2:05 PM, Laszlo Nagy wrote:\n> \n>> A change has been written to the WAL and fsync()'d, so Pg knows it's hit\n>> disk. It can now safely apply the change to the tables themselves, and\n>> does so, calling fsync() to tell the drive containing the tables to\n>> commit those changes to disk.\n>>\n>> The drive lies, returning success for the fsync when it's just cached\n>> the data in volatile memory. Pg carries on, shortly deleting the WAL\n>> archive the changes were recorded in or recycling it and overwriting it\n>> with new change data. The SSD is still merrily buffering data to write\n>> cache, and hasn't got around to writing your particular change yet.\n>> \n> All right. I believe you. In the current Pg implementation, I need to\n> turn of disk cache.\n\nThat's certainly my understanding. I've been wrong many times before :S\n\n> #1. user wants to change something, resulting in a write_to_disk(data) call\n> #2. data is written into the WAL and fsync()-ed\n> #3. at this point the write_to_disk(data) call CAN RETURN, the user can\n> continue his work (the WAL is already written, changes cannot be lost)\n> #4. Pg can continue writting data onto the disk, and fsync() it.\n> #5. Then WAL archive data can be deleted.\n> \n> Now maybe I'm wrong, but between #3 and #5, the data to be written is\n> kept in memory. This is basically a write cache, implemented in OS\n> memory. We could really handle it like a write cache. E.g. everything\n> would remain the same, except that we add some latency. We can wait some\n> time after the last modification of a given block, and then write it out.\n\nI don't know enough about the whole affair to give you a good\nexplanation ( I tried, and it just showed me how much I didn't know )\nbut here are a few issues:\n\n- Pg doesn't know the erase block sizes or positions. It can't group\nwrites up by erase block except by hoping that, within a given file,\nwriting in page order will get the blocks to the disk in roughly\nerase-block order. So your write caching isn't going to do anywhere near\nas good a job as the SSD's can.\n\n- The only way to make this help the SSD out much would be to use a LOT\nof RAM for write cache and maintain a LOT of WAL archives. That's RAM\nnot being used for caching read data. 
The large number of WAL archives\nmeans incredibly long WAL replay times after a crash.\n\n- You still need a reliable way to tell the SSD \"really flush your cache\nnow\" after you've flushed the changes from your huge chunks of WAL files\nand are getting ready to recycle them.\n\nI was thinking that write ordering would be an issue too, as some\nchanges in the WAL would hit main disk before others that were earlier\nin the WAL. However, I don't think that matters if full_page_writes are\non. If you replay from the start, you'll reapply some changes with older\nversions, but they'll be corrected again by a later WAL record. So\nordering during WAL replay shouldn't be a problem. On the other hand,\nthe INCREDIBLY long WAL replay times during recovery would be a nightmare.\n\n> I don't think that any SSD drive has more than some\n> megabytes of write cache.\n\nThe big, lots-of-$$ ones have HUGE battery backed caches for exactly\nthis reason.\n\n> The same amount of write cache could easily be\n> implemented in OS memory, and then Pg would always know what hit the disk.\n\nReally? How does Pg know what order the SSD writes things out from its\ncache?\n\n--\nCraig Ringer\n", "msg_date": "Sun, 15 Nov 2009 18:17:24 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "I've wondered whether this would work for a read-mostly application: Buy a big RAM machine, like 64GB, with a crappy little single disk. Build the database, then make a really big RAM disk, big enough to hold the DB and the WAL. Then build a duplicate DB on another machine with a decent disk (maybe a 4-disk RAID10), and turn on WAL logging.\n\nThe system would be blazingly fast, and you'd just have to be sure before you shut it off to shut down Postgres and copy the RAM files back to the regular disk. And if you didn't, you could always recover from the backup. Since it's a read-mostly system, the WAL logging bandwidth wouldn't be too high, so even a modest machine would be able to keep up.\n\nAny thoughts?\n\nCraig\n", "msg_date": "Sun, 15 Nov 2009 11:53:22 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Craig James wrote:\n> I've wondered whether this would work for a read-mostly application: Buy\n> a big RAM machine, like 64GB, with a crappy little single disk. Build\n> the database, then make a really big RAM disk, big enough to hold the DB\n> and the WAL. Then build a duplicate DB on another machine with a decent\n> disk (maybe a 4-disk RAID10), and turn on WAL logging.\n> \n> The system would be blazingly fast, and you'd just have to be sure\n> before you shut it off to shut down Postgres and copy the RAM files back\n> to the regular disk. And if you didn't, you could always recover from\n> the backup. 
Since it's a read-mostly system, the WAL logging bandwidth\n> wouldn't be too high, so even a modest machine would be able to keep up.\n\nShould work, but I don't see any advantage over attaching the RAID array\ndirectly to the 1st machine with the RAM and turning synchronous_commit=off.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Sun, 15 Nov 2009 22:42:24 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "2009/11/13 Greg Smith <[email protected]>:\n> As far as what real-world apps have that profile, I like SSDs for small to\n> medium web applications that have to be responsive, where the user shows up\n> and wants their randomly distributed and uncached data with minimal latency.\n> SSDs can also be used effectively as second-tier targeted storage for things\n> that have a performance-critical but small and random bit as part of a\n> larger design that doesn't have those characteristics; putting indexes on\n> SSD can work out well for example (and there the write durability stuff\n> isn't quite as critical, as you can always drop an index and rebuild if it\n> gets corrupted).\n\nI am right now talking to someone on postgresql irc who is measuring\n15k iops from x25-e and no data loss following power plug test. I am\nbecoming increasingly suspicious that peter's results are not\nrepresentative: given that 90% of bonnie++ seeks are read only, the\nmath doesn't add up, and they contradict broadly published tests on\nthe internet. Has anybody independently verified the results?\n\nmerlin\n", "msg_date": "Tue, 17 Nov 2009 11:36:26 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "On Tue, 2009-11-17 at 11:36 -0500, Merlin Moncure wrote:\n> 2009/11/13 Greg Smith <[email protected]>:\n> > As far as what real-world apps have that profile, I like SSDs for small to\n> > medium web applications that have to be responsive, where the user shows up\n> > and wants their randomly distributed and uncached data with minimal latency.\n> > SSDs can also be used effectively as second-tier targeted storage for things\n> > that have a performance-critical but small and random bit as part of a\n> > larger design that doesn't have those characteristics; putting indexes on\n> > SSD can work out well for example (and there the write durability stuff\n> > isn't quite as critical, as you can always drop an index and rebuild if it\n> > gets corrupted).\n> \n> I am right now talking to someone on postgresql irc who is measuring\n> 15k iops from x25-e and no data loss following power plug test. I am\n> becoming increasingly suspicious that peter's results are not\n> representative: given that 90% of bonnie++ seeks are read only, the\n> math doesn't add up, and they contradict broadly published tests on\n> the internet. Has anybody independently verified the results?\n\nHow many times have the run the plug test? 
I've read other reports of\npeople (not on Postgres) losing data on this drive with the write cache\non.\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n\n", "msg_date": "Tue, 17 Nov 2009 11:54:42 -0500", "msg_from": "Brad Nicholson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "On Tue, Nov 17, 2009 at 9:54 AM, Brad Nicholson\n<[email protected]> wrote:\n> On Tue, 2009-11-17 at 11:36 -0500, Merlin Moncure wrote:\n>> 2009/11/13 Greg Smith <[email protected]>:\n>> > As far as what real-world apps have that profile, I like SSDs for small to\n>> > medium web applications that have to be responsive, where the user shows up\n>> > and wants their randomly distributed and uncached data with minimal latency.\n>> > SSDs can also be used effectively as second-tier targeted storage for things\n>> > that have a performance-critical but small and random bit as part of a\n>> > larger design that doesn't have those characteristics; putting indexes on\n>> > SSD can work out well for example (and there the write durability stuff\n>> > isn't quite as critical, as you can always drop an index and rebuild if it\n>> > gets corrupted).\n>>\n>> I am right now talking to someone on postgresql irc who is measuring\n>> 15k iops from x25-e and no data loss following power plug test.  I am\n>> becoming increasingly suspicious that peter's results are not\n>> representative: given that 90% of bonnie++ seeks are read only, the\n>> math doesn't add up, and they contradict broadly published tests on\n>> the internet.  Has anybody independently verified the results?\n>\n> How many times have the run the plug test?  I've read other reports of\n> people (not on Postgres) losing data on this drive with the write cache\n> on.\n\nWhen I run the plug test it's on a pgbench that's as big as possible\n(~4000) and I remove memory if there's a lot in the server so the\nmemory is smaller than the db. I run 100+ concurrent and I set\ncheckoint timeouts to 30 minutes, and make a lots of checkpoint\nsegments (100 or so), and set completion target to 0. Then after\nabout 1/2 checkpoint timeout has passed, I issue a checkpoint from the\ncommand line, take a deep breath and pull the cord.\n", "msg_date": "Tue, 17 Nov 2009 10:04:11 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "On tis, 2009-11-17 at 11:36 -0500, Merlin Moncure wrote:\n> I am right now talking to someone on postgresql irc who is measuring\n> 15k iops from x25-e and no data loss following power plug test. I am\n> becoming increasingly suspicious that peter's results are not\n> representative: given that 90% of bonnie++ seeks are read only, the\n> math doesn't add up, and they contradict broadly published tests on\n> the internet. Has anybody independently verified the results?\n\nNotably, between my two blog posts and this email thread, there have\nbeen claims of\n\n400\n1800\n4000\n7000\n14000\n15000\n35000\n\niops (of some kind) per second.\n\nThat alone should be cause of concern.\n\n", "msg_date": "Tue, 17 Nov 2009 19:30:13 +0200", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Merlin Moncure wrote:\n> I am right now talking to someone on postgresql irc who is measuring\n> 15k iops from x25-e and no data loss following power plug test.\nThe funny thing about Murphy is that he doesn't visit when things are \nquiet. 
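(Spelling out that kind of pull-the-plug run for anyone who wants to repeat it -- the settings and names are examples only:)

    # postgresql.conf for the test run (deliberately extreme):
    #   checkpoint_segments          = 100
    #   checkpoint_timeout           = 30min
    #   checkpoint_completion_target = 0.0
    pgbench -i -s 4000 bench          # build a database bigger than RAM
    pgbench -c 100 -T 1800 bench &    # sustained concurrent write load
    sleep 900                         # roughly half a checkpoint_timeout
    psql -d bench -c "CHECKPOINT"     # force a checkpoint, then pull the cord
    # after power-up: start postgres, let it recover, check the data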
It's quite possible the window for data loss on the drive is \nvery small. Maybe you only see it one out of 10 pulls with a very \naggressive database-oriented write test. Whatever the odd conditions \nare, you can be sure you'll see them when there's a bad outage in actual \nproduction though.\n\nA good test program that is a bit better at introducing and detecting \nthe write cache issue is described at \nhttp://brad.livejournal.com/2116715.html\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Tue, 17 Nov 2009 13:51:13 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "On Tue, Nov 17, 2009 at 1:51 PM, Greg Smith <[email protected]> wrote:\n> Merlin Moncure wrote:\n>>\n>> I am right now talking to someone on postgresql irc who is measuring\n>> 15k iops from x25-e and no data loss following power plug test.\n>\n> The funny thing about Murphy is that he doesn't visit when things are quiet.\n>  It's quite possible the window for data loss on the drive is very small.\n>  Maybe you only see it one out of 10 pulls with a very aggressive\n> database-oriented write test.  Whatever the odd conditions are, you can be\n> sure you'll see them when there's a bad outage in actual production though.\n>\n> A good test program that is a bit better at introducing and detecting the\n> write cache issue is described at http://brad.livejournal.com/2116715.html\n\nSure, not disputing that...I don't have one to test myself, so I can't\nvouch for the data being safe. But what's up with the 400 iops\nmeasured from bonnie++? That's an order of magnitude slower than any\nother published benchmark on the 'net, and I'm dying to get a little\nclarification here.\n\nmerlin\n", "msg_date": "Tue, 17 Nov 2009 14:19:35 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "On 11/17/2009 01:51 PM, Greg Smith wrote:\n> Merlin Moncure wrote:\n>> I am right now talking to someone on postgresql irc who is measuring\n>> 15k iops from x25-e and no data loss following power plug test.\n> The funny thing about Murphy is that he doesn't visit when things are \n> quiet. It's quite possible the window for data loss on the drive is \n> very small. Maybe you only see it one out of 10 pulls with a very \n> aggressive database-oriented write test. Whatever the odd conditions \n> are, you can be sure you'll see them when there's a bad outage in \n> actual production though.\n>\n> A good test program that is a bit better at introducing and detecting \n> the write cache issue is described at \n> http://brad.livejournal.com/2116715.html\n>\n\nI've been following this thread with great interest in your results... \nPlease continue to share...\n\nFor write cache issues - is it possible that the reduced power \nutilization of SSD allows for a capacitor to complete all scheduled \nwrites, even with a large cache? Is it this particular drive you are \nsuggesting that is known to be insufficient or is it really the \ntechnology or maturity of the technology?\n\nCheers,\nmark\n\n-- \nMark Mielke<[email protected]>\n\n", "msg_date": "Tue, 17 Nov 2009 15:09:58 -0500", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Merlin Moncure wrote:\n> But what's up with the 400 iops measured from bonnie++? \nI don't know really. 
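(For reference, the test program linked a couple of messages back, diskchecker.pl, is run roughly like this; host and file names are examples:)

    # on a second machine that stays powered on:
    ./diskchecker.pl -l
    # on the machine whose drive is under test:
    ./diskchecker.pl -s otherhost create test_file 500
    # ...pull the plug while that runs, power back up, then:
    ./diskchecker.pl -s otherhost verify test_file
    # errors from 'verify' mean the drive acknowledged writes that never
    # reached stable storage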
SSD writes are really sensitive to block size and \nthe ability to chunk writes into larger chunks, so it may be that Peter \nhas just found the worst-case behavior and everybody else is seeing \nsomething better than that.\n\nWhen the reports I get back from people I believe are competant--Vadim, \nPeter--show worst-case results that are lucky to beat RAID10, I feel I \nhave to dismiss the higher values reported by people who haven't been so \ncareful. And that's just about everybody else, which leaves me quite \nsuspicious of the true value of the drives. The whole thing really sets \noff my vendor hype reflex, and short of someone loaning me a drive to \ntest I'm not sure how to get past that. The Intel drives are still just \na bit too expensive to buy one on a whim, such that I'll just toss it if \nthe drive doesn't live up to expectations.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Wed, 18 Nov 2009 01:32:01 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "On Wed, 18 Nov 2009, Greg Smith wrote:\n\n> Merlin Moncure wrote:\n>> But what's up with the 400 iops measured from bonnie++? \n> I don't know really. SSD writes are really sensitive to block size and the \n> ability to chunk writes into larger chunks, so it may be that Peter has just \n> found the worst-case behavior and everybody else is seeing something better \n> than that.\n>\n> When the reports I get back from people I believe are competant--Vadim, \n> Peter--show worst-case results that are lucky to beat RAID10, I feel I have \n> to dismiss the higher values reported by people who haven't been so careful. \n> And that's just about everybody else, which leaves me quite suspicious of the \n> true value of the drives. The whole thing really sets off my vendor hype \n> reflex, and short of someone loaning me a drive to test I'm not sure how to \n> get past that. The Intel drives are still just a bit too expensive to buy \n> one on a whim, such that I'll just toss it if the drive doesn't live up to \n> expectations.\n\nkeep in mind that bonnie++ isn't always going to reflect your real \nperformance.\n\nI have run tests on some workloads that were definantly I/O limited where \nbonnie++ results that differed by a factor of 10x made no measurable \ndifference in the application performance, so I can easily believe in \ncases where bonnie++ numbers would not change but application performance \ncould be drasticly different.\n\nas always it can depend heavily on your workload. you really do need to \nfigure out how to get your hands on one for your own testing.\n\nDavid Lang\n", "msg_date": "Tue, 17 Nov 2009 22:58:42 -0800 (PST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "I found a bit of time to play with this.\n\nI started up a test with 20 concurrent processes all inserting into \nthe same table and committing after each insert. The db was achieving \nabout 5000 inserts per second, and I kept it running for about 10 \nminutes. The host was doing about 5MB/s of Physical I/O to the Fusion \nIO drive. I set checkpoint segments very small (10). I observed the \nfollowing message in the log: checkpoints are occurring too frequently \n(16 seconds apart). Then I pulled the cord. On reboot I noticed that \nFusion IO replayed it's log, then the filesystem (vxfs) did the same. 
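(For anyone wanting to reproduce that sort of load, a rough equivalent with pgbench would be something like this -- the database and table names are made up:)

    psql -d testdb -c "CREATE TABLE plugtest (id serial, ts timestamptz DEFAULT now(), payload text)"
    echo "INSERT INTO plugtest (payload) VALUES (md5(random()::text));" > insert.sql
    pgbench -n -c 20 -T 600 -f insert.sql testdb   # 20 clients, one commit per insert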
\nThen I started up the DB and observed the it perform auto-recovery:\n\nNov 18 14:33:53 frutestdb002 postgres[5667]: [6-1] 2009-11-18 14:33:53 \nPSTLOG: database system was not properly shut down; automatic \nrecovery in progress\nNov 18 14:33:53 frutestdb002 postgres[5667]: [7-1] 2009-11-18 14:33:53 \nPSTLOG: redo starts at 2A/55F9D478\nNov 18 14:33:54 frutestdb002 postgres[5667]: [8-1] 2009-11-18 14:33:54 \nPSTLOG: record with zero length at 2A/56692F38\nNov 18 14:33:54 frutestdb002 postgres[5667]: [9-1] 2009-11-18 14:33:54 \nPSTLOG: redo done at 2A/56692F08\nNov 18 14:33:54 frutestdb002 postgres[5667]: [10-1] 2009-11-18 \n14:33:54 PSTLOG: database system is ready\n\nThanks\nKenny\n\nOn Nov 13, 2009, at 1:35 PM, Kenny Gorman wrote:\n\n> The FusionIO products are a little different. They are card based \n> vs trying to emulate a traditional disk. In terms of volatility, \n> they have an on-board capacitor that allows power to be supplied \n> until all writes drain. They do not have a cache in front of them \n> like a disk-type SSD might. I don't sell these things, I am just a \n> fan. I verified all this with the Fusion IO techs before I \n> replied. Perhaps older versions didn't have this functionality? I \n> am not sure. I have already done some cold power off tests w/o \n> problems, but I could up the workload a bit and retest. I will do a \n> couple of 'pull the cable' tests on monday or tuesday and report \n> back how it goes.\n>\n> Re the performance #'s... Here is my post:\n>\n> http://www.kennygorman.com/wordpress/?p=398\n>\n> -kg\n>\n>\n> >In order for a drive to work reliably for database use such as for\n> >PostgreSQL, it cannot have a volatile write cache. You either need a\n> >write cache with a battery backup (and a UPS doesn't count), or to \n> turn\n> >the cache off. The SSD performance figures you've been looking at \n> are\n> >with the drive's write cache turned on, which means they're \n> completely\n> >fictitious and exaggerated upwards for your purposes. In the real\n> >world, that will result in database corruption after a crash one day.\n> >No one on the drive benchmarking side of the industry seems to have\n> >picked up on this, so you can't use any of those figures. I'm not \n> even\n> >sure right now whether drives like Intel's will even meet their \n> lifetime\n> >expectations if they aren't allowed to use their internal volatile \n> write\n> >cache.\n> >\n> >Here's two links you should read and then reconsider your whole \n> design:\n> >\n> >http://www.mysqlperformanceblog.com/2009/03/02/ssd-xfs-lvm-fsync-write-cache-barrier-and-lost-transactions/\n> >http://petereisentraut.blogspot.com/2009/07/solid-state-drive-benchmarks-and-write.html\n> >\n> >I can't even imagine how bad the situation would be if you decide to\n> >wander down the \"use a bunch of really cheap SSD drives\" path; these\n> >things are barely usable for databases with Intel's hardware. 
The \n> needs\n> >of people who want to throw SSD in a laptop and those of the \n> enterprise\n> >database market are really different, and if you believe doom\n> >forecasting like the comments at\n> >http://blogs.sun.com/BestPerf/entry/oracle_peoplesoft_payroll_sun_sparc\n> >that gap is widening, not shrinking.\n>\n>\n\n", "msg_date": "Wed, 18 Nov 2009 14:59:36 -0800", "msg_from": "Kenny Gorman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "\n\n\nOn 11/13/09 10:21 AM, \"Karl Denninger\" <[email protected]> wrote:\n\n> \n> One caution for those thinking of doing this - the incremental\n> improvement of this setup on PostGresql in WRITE SIGNIFICANT environment\n> isn't NEARLY as impressive. Indeed the performance in THAT case for\n> many workloads may only be 20 or 30% faster than even \"reasonably\n> pedestrian\" rotating media in a high-performance (lots of spindles and\n> thus stripes) configuration and it's more expensive (by a lot.) If you\n> step up to the fast SAS drives on the rotating side there's little\n> argument for the SSD at all (again, assuming you don't intend to \"cheat\"\n> and risk data loss.)\n\nFor your database DATA disks, leaving the write cache on is 100% acceptable,\neven with power loss, and without a RAID controller. And even in high write\nenvironments.\n\nThat is what the XLOG is for, isn't it? That is where this behavior is\ncritical. But that has completely different performance requirements and\nneed not bee on the same volume, array, or drive.\n\n> \n> Know your application and benchmark it.\n> \n> -- Karl\n> \n\n", "msg_date": "Wed, 18 Nov 2009 20:06:42 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "\nOn 11/15/09 12:46 AM, \"Craig Ringer\" <[email protected]> wrote:\n> Possible fixes for this are:\n> \n> - Don't let the drive lie about cache flush operations, ie disable write\n> buffering.\n> \n> - Give Pg some way to find out, from the drive, when particular write\n> operations have actually hit disk. AFAIK there's no such mechanism at\n> present, and I don't think the drives are even capable of reporting this\n> data. If they were, Pg would have to be capable of applying entries from\n> the WAL \"sparsely\" to account for the way the drive's write cache\n> commits changes out-of-order, and Pg would have to maintain a map of\n> committed / uncommitted WAL records. Pg would need another map of\n> tablespace blocks to WAL records to know, when a drive write cache\n> commit notice came in, what record in what WAL archive was affected.\n> It'd also require Pg to keep WAL archives for unbounded and possibly\n> long periods of time, making disk space management for WAL much harder.\n> So - \"not easy\" is a bit of an understatement here.\n\n3: Have PG wait a half second (configurable) after the checkpoint fsync()\ncompletes before deleting/ overwriting any WAL segments. This would be a\ntrivial \"feature\" to add to a postgres release, I think. 
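(In practice that delay can be approximated today with WAL archiving, as the next sentence points out. A rough sketch, with made-up paths -- and see the later replies for why the delay by itself still doesn't make a lying drive safe:)

    # postgresql.conf (illustrative):
    #   archive_mode    = on
    #   archive_command = '/usr/local/bin/archive_wal.sh %p %f'
    # /usr/local/bin/archive_wal.sh:
    #!/bin/sh
    cp "$1" /var/lib/pgsql/wal_archive/"$2" || exit 1
    sleep 0.5      # the configurable delay being proposed
    exit 0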
Actually, it\nalready exists!\n\nTurn on log archiving, and have the script that it runs after a checkpoint\nsleep().\n\nBTW, the information I have seen indicates that the write cache is 256K on\nthe Intel drives, the 32MB/64MB of other RAM is working memory for the drive\nblock mapping / wear leveling algorithms (tracking 160GB of 4k blocks takes\nspace).\n\n4: Yet another solution: The drives DO adhere to write barriers properly.\nA filesystem that used these in the process of fsync() would be fine too.\nSo XFS without LVM or MD (or the newer versions of those that don't ignore\nbarriers) would work too.\n\nSo, I think that write caching may not be necessary to turn off for non-xlog\ndisk.\n\n> \n> You still need to turn off write caching.\n> \n> --\n> Craig Ringer\n> \n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Wed, 18 Nov 2009 20:22:29 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Scott Carey <[email protected]> writes:\n> For your database DATA disks, leaving the write cache on is 100% acceptable,\n> even with power loss, and without a RAID controller. And even in high write\n> environments.\n\nReally? How hard have you tested that configuration?\n\n> That is what the XLOG is for, isn't it?\n\nOnce we have fsync'd a data change, we discard the relevant XLOG\nentries. If the disk hasn't actually put the data on stable storage\nbefore it claims the fsync is done, you're screwed.\n\nXLOG only exists to centralize the writes that have to happen before\na transaction can be reported committed (in particular, to avoid a\nlot of random-access writes at commit). It doesn't make any\nfundamental change in the rules of the game: a disk that lies about\nwrite complete will still burn you.\n\nIn a zero-seek-cost environment I suspect that XLOG wouldn't actually\nbe all that useful. I gather from what's been said earlier that SSDs\ndon't fully eliminate random-access penalties, though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 18 Nov 2009 23:24:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID " }, { "msg_contents": "\nOn 11/17/09 10:51 AM, \"Greg Smith\" <[email protected]> wrote:\n\n> Merlin Moncure wrote:\n>> I am right now talking to someone on postgresql irc who is measuring\n>> 15k iops from x25-e and no data loss following power plug test.\n> The funny thing about Murphy is that he doesn't visit when things are\n> quiet. It's quite possible the window for data loss on the drive is\n> very small. Maybe you only see it one out of 10 pulls with a very\n> aggressive database-oriented write test. Whatever the odd conditions\n> are, you can be sure you'll see them when there's a bad outage in actual\n> production though.\n\nYes, but there is nothing fool proof. Murphy visited me recently, and the\nRAID card with BBU cache that the WAL logs were on crapped out. Data was\nfine.\n\nHad to fix up the system without any WAL logs. Luckily, out of 10TB, only\n200GB or so of it could have been in the process of writing (yay!\npartitioning by date!) 
to and we could restore just that part rather than\ninitiating a full restore.\nThen there was fun times in single user mode to fix corrupted system tables\n(about half the system indexes were dead, and the statistics table was\ncorrupt, but that could be truncated safely).\n\nIts all fine now with all data validated.\n\nMoral of the story: Nothing is 100% safe, so sometimes a small bit of KNOWN\nrisk is perfectly fine. There is always UNKNOWN risk. If one risks losing\n256K of cached data on an SSD if you're really unlucky with timing, how\ndangerous is that versus the chance that the raid card or other hardware\nbarfs and takes out your whole WAL?\n\nNothing is safe enough to avoid a full DR plan of action. The individual\ntradeoffs are very application and data dependent.\n\n\n> \n> A good test program that is a bit better at introducing and detecting\n> the write cache issue is described at\n> http://brad.livejournal.com/2116715.html\n> \n> --\n> Greg Smith 2ndQuadrant Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.com\n> \n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Wed, 18 Nov 2009 20:35:02 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "\nOn 11/17/09 10:58 PM, \"[email protected]\" <[email protected]> wrote:\n> \n> keep in mind that bonnie++ isn't always going to reflect your real\n> performance.\n> \n> I have run tests on some workloads that were definantly I/O limited where\n> bonnie++ results that differed by a factor of 10x made no measurable\n> difference in the application performance, so I can easily believe in\n> cases where bonnie++ numbers would not change but application performance\n> could be drasticly different.\n> \n\nWell, that is sort of true for all benchmarks, but I do find that bonnie++\nis the worst of the bunch. I consider it relatively useless compared to\nfio. Its just not a great benchmark for server type load and I find it\nlacking in the ability to simulate real applications.\n\n\n> as always it can depend heavily on your workload. you really do need to\n> figure out how to get your hands on one for your own testing.\n> \n> David Lang\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Wed, 18 Nov 2009 20:39:02 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "On 19/11/2009 12:22 PM, Scott Carey wrote:\n\n> 3: Have PG wait a half second (configurable) after the checkpoint fsync()\n> completes before deleting/ overwriting any WAL segments. This would be a\n> trivial \"feature\" to add to a postgres release, I think.\n\nHow does that help? 
It doesn't provide any guarantee that the data has\nhit main storage - it could lurk in SDD cache for hours.\n\n> 4: Yet another solution: The drives DO adhere to write barriers properly.\n> A filesystem that used these in the process of fsync() would be fine too.\n> So XFS without LVM or MD (or the newer versions of those that don't ignore\n> barriers) would work too.\n\n*if* the WAL is also on the SSD.\n\nIf the WAL is on a separate drive, the write barriers do you no good,\nbecause they won't ensure that the data hits the main drive storage\nbefore the WAL recycling hits the WAL disk storage. The two drives\noperate independently and the write barriers don't interact.\n\nYou'd need some kind of inter-drive write barrier.\n\n--\nCraig Ringer\n", "msg_date": "Thu, 19 Nov 2009 20:29:56 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Scott Carey wrote:\n> For your database DATA disks, leaving the write cache on is 100% acceptable,\n> even with power loss, and without a RAID controller. And even in high write\n> environments.\n>\n> That is what the XLOG is for, isn't it? That is where this behavior is\n> critical. But that has completely different performance requirements and\n> need not bee on the same volume, array, or drive.\n> \nAt checkpoint time, writes to the main data files are done that are \nfollowed by fsync calls to make sure those blocks have been written to \ndisk. Those writes have exactly the same consistency requirements as \nthe more frequent pg_xlog writes. If the drive ACKs the write, but it's \nnot on physical disk yet, it's possible for the checkpoint to finish and \nthe underlying pg_xlog segments needed to recover from a crash at that \npoint to be deleted. The end of the checkpoint can wipe out many WAL \nsegments, presuming they're not needed anymore because the data blocks \nthey were intended to fix during recovery are now guaranteed to be on disk.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Thu, 19 Nov 2009 09:44:36 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Greg Smith wrote:\n> Scott Carey wrote:\n>> For your database DATA disks, leaving the write cache on is 100%\n>> acceptable,\n>> even with power loss, and without a RAID controller. And even in\n>> high write\n>> environments.\n>>\n>> That is what the XLOG is for, isn't it? That is where this behavior is\n>> critical. But that has completely different performance requirements\n>> and\n>> need not bee on the same volume, array, or drive.\n>> \n> At checkpoint time, writes to the main data files are done that are\n> followed by fsync calls to make sure those blocks have been written to\n> disk. Those writes have exactly the same consistency requirements as\n> the more frequent pg_xlog writes. If the drive ACKs the write, but\n> it's not on physical disk yet, it's possible for the checkpoint to\n> finish and the underlying pg_xlog segments needed to recover from a\n> crash at that point to be deleted. 
The end of the checkpoint can wipe\n> out many WAL segments, presuming they're not needed anymore because\n> the data blocks they were intended to fix during recovery are now\n> guaranteed to be on disk.\nGuys, read that again.\n\nIF THE DISK OR DRIVER ACK'S A FSYNC CALL THE WAL ENTRY IS LIKELY GONE,\nAND YOU ARE SCREWED IF THE DATA IS NOT REALLY ON THE DISK.\n\n-- Karl", "msg_date": "Thu, 19 Nov 2009 08:44:58 -0600", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Scott Carey wrote:\n> Moral of the story: Nothing is 100% safe, so sometimes a small bit of KNOWN\n> risk is perfectly fine. There is always UNKNOWN risk. If one risks losing\n> 256K of cached data on an SSD if you're really unlucky with timing, how\n> dangerous is that versus the chance that the raid card or other hardware\n> barfs and takes out your whole WAL?\n> \nI think the point of the paranoia in this thread is that if you're \nintroducing a component with a known risk in it, you're really asking \nfor trouble because (as you point out) it's hard enough to keep a system \nrunning just through the unexpected ones that shouldn't have happened at \nall. No need to make that even harder by introducing something that is \n*known* to fail under some conditions.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Thu, 19 Nov 2009 09:49:08 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "On Wed, Nov 18, 2009 at 11:39 PM, Scott Carey <[email protected]> wrote:\n> Well, that is sort of true for all benchmarks, but I do find that bonnie++\n> is the worst of the bunch.  I consider it relatively useless compared to\n> fio.  Its just not a great benchmark for server type load and I find it\n> lacking in the ability to simulate real applications.\n\nI agree. My biggest gripe with bonnie actually is that 99% of the\ntime is spent measuring in sequential tests which is not that\nimportant in the database world. Dedicated wal volume uses ostensibly\nsequential io, but it's fairly difficult to outrun a dedicated wal\nvolume even if it's on a vanilla sata drive.\n\npgbench is actually a pretty awesome i/o tester assuming you have big\nenough scaling factor, because:\na) it's much closer to the environment you will actually run in\nb) you get to see what i/o affecting options have on the load\nc) you have broad array of options regarding what gets done (select\nonly, -f, etc)\nd) once you build the test database, you can do multiple runs without\nrebuilding it\n\nmerlin\n\nmerlin\n", "msg_date": "Thu, 19 Nov 2009 12:01:20 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "On Thu, Nov 19, 2009 at 10:01 AM, Merlin Moncure <[email protected]> wrote:\n> On Wed, Nov 18, 2009 at 11:39 PM, Scott Carey <[email protected]> wrote:\n>> Well, that is sort of true for all benchmarks, but I do find that bonnie++\n>> is the worst of the bunch.  I consider it relatively useless compared to\n>> fio.  Its just not a great benchmark for server type load and I find it\n>> lacking in the ability to simulate real applications.\n>\n> I agree.   My biggest gripe with bonnie actually is that 99% of the\n> time is spent measuring in sequential tests which is not that\n> important in the database world.  
Dedicated wal volume uses ostensibly\n> sequential io, but it's fairly difficult to outrun a dedicated wal\n> volume even if it's on a vanilla sata drive.\n>\n> pgbench is actually a pretty awesome i/o tester assuming you have big\n> enough scaling factor, because:\n> a) it's much closer to the environment you will actually run in\n> b) you get to see what i/o affecting options have on the load\n> c) you have broad array of options regarding what gets done (select\n> only, -f, etc)\n> d) once you build the test database, you can do multiple runs without\n> rebuilding it\n\nSeeing as how pgbench only goes to scaling factor of 4000, are the any\nplans on enlarging that number?\n", "msg_date": "Thu, 19 Nov 2009 10:19:32 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Am Donnerstag, 19. November 2009 13:29:56 schrieb Craig Ringer:\n> On 19/11/2009 12:22 PM, Scott Carey wrote:\n> > 3: Have PG wait a half second (configurable) after the checkpoint\n> > fsync() completes before deleting/ overwriting any WAL segments. This\n> > would be a trivial \"feature\" to add to a postgres release, I think.\n>\n> How does that help? It doesn't provide any guarantee that the data has\n> hit main storage - it could lurk in SDD cache for hours.\n>\n> > 4: Yet another solution: The drives DO adhere to write barriers\n> > properly. A filesystem that used these in the process of fsync() would be\n> > fine too. So XFS without LVM or MD (or the newer versions of those that\n> > don't ignore barriers) would work too.\n>\n> *if* the WAL is also on the SSD.\n>\n> If the WAL is on a separate drive, the write barriers do you no good,\n> because they won't ensure that the data hits the main drive storage\n> before the WAL recycling hits the WAL disk storage. The two drives\n> operate independently and the write barriers don't interact.\n>\n> You'd need some kind of inter-drive write barrier.\n>\n> --\n> Craig Ringer\n\n\nHello !\n\nas i understand this:\nssd performace is great, but caching is the problem.\n\nquestions:\n\n1. what about conventional disks with 32/64 mb cache ? how do they handle the \nplug test if their caches are on ?\n\n2. what about using seperated power supply for the disks ? it it possible to \nwrite back the cache after switching the sata to another machine controller ?\n\n3. what about making a statement about a lacking enterprise feature (aka \nemergency battery equipped ssd) and submitting this to the producers ?\n\nI found that one of them (OCZ) seems to handle suggestions of customers (see \nwrite speed discussins on vertex fro example)\n\nand another (intel) seems to handle serious problems with his disks in \nrewriting and sometimes redesigning his products - if you tell them and \nmarket dictades to react (see degeneration of performace before 1.11 \nfirmware).\n\nperhaps its time to act and not only to complain about the fact.\n\n(btw: found funny bonnie++ for my intel 160 gb postville and my samsung pb22 \nafter using the sam for now approx. 3 months+ ... 
my conclusion: NOT all SSD \nare equal ...)\n\nbest regards \n\nanton\n\n-- \n\nATRSoft GmbH\nBivetsweg 12\nD 41542 Dormagen\nDeutschland\nTel .: +49(0)2182 8339951\nMobil: +49(0)172 3490817\n\nGeschäftsführer Anton Rommerskirchen\n\nKöln HRB 44927\nSTNR 122/5701 - 2030\nUSTID DE213791450\n", "msg_date": "Thu, 19 Nov 2009 19:01:14 +0100", "msg_from": "Anton Rommerskirchen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "On Thu, 2009-11-19 at 19:01 +0100, Anton Rommerskirchen wrote:\n> Am Donnerstag, 19. November 2009 13:29:56 schrieb Craig Ringer:\n> > On 19/11/2009 12:22 PM, Scott Carey wrote:\n> > > 3: Have PG wait a half second (configurable) after the checkpoint\n> > > fsync() completes before deleting/ overwriting any WAL segments. This\n> > > would be a trivial \"feature\" to add to a postgres release, I think.\n> >\n> > How does that help? It doesn't provide any guarantee that the data has\n> > hit main storage - it could lurk in SDD cache for hours.\n> >\n> > > 4: Yet another solution: The drives DO adhere to write barriers\n> > > properly. A filesystem that used these in the process of fsync() would be\n> > > fine too. So XFS without LVM or MD (or the newer versions of those that\n> > > don't ignore barriers) would work too.\n> >\n> > *if* the WAL is also on the SSD.\n> >\n> > If the WAL is on a separate drive, the write barriers do you no good,\n> > because they won't ensure that the data hits the main drive storage\n> > before the WAL recycling hits the WAL disk storage. The two drives\n> > operate independently and the write barriers don't interact.\n> >\n> > You'd need some kind of inter-drive write barrier.\n> >\n> > --\n> > Craig Ringer\n> \n> \n> Hello !\n> \n> as i understand this:\n> ssd performace is great, but caching is the problem.\n> \n> questions:\n> \n> 1. what about conventional disks with 32/64 mb cache ? how do they handle the \n> plug test if their caches are on ?\n\nIf the aren't battery backed, they can lose data. This is not specific\nto SSD.\n\n> 2. what about using seperated power supply for the disks ? it it possible to \n> write back the cache after switching the sata to another machine controller ?\n\nNot sure. I only use devices with battery backed caches or no cache. I\nwould be concerned however about the drive not flushing itself and still\nrunning out of power.\n\n> 3. what about making a statement about a lacking enterprise feature (aka \n> emergency battery equipped ssd) and submitting this to the producers ?\n\nThe producers aren't making Enterprise products, they are using caches\nto accelerate the speeds of consumer products to make their drives more\nappealing to consumers. 
They aren't going to slow them down to make\nthem more reliable, especially when the core consumer doesn't know about\nthis issue, and is even less likely to understand it if explained.\n\nThey may stamp the word Enterprise on them, but it's nothing more than\nmarketing.\n\n> I found that one of them (OCZ) seems to handle suggestions of customers (see \n> write speed discussins on vertex fro example)\n> \n> and another (intel) seems to handle serious problems with his disks in \n> rewriting and sometimes redesigning his products - if you tell them and \n> market dictades to react (see degeneration of performace before 1.11 \n> firmware).\n> \n> perhaps its time to act and not only to complain about the fact.\n\nOr, you could just buy higher quality equipment that was designed with\nthis in mind.\n\nThere is nothing unique to SSD here IMHO. I wouldn't run my production\ngrade databases on consumer grade HDD, I wouldn't run them on consumer\ngrade SSD either.\n\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n\n", "msg_date": "Thu, 19 Nov 2009 13:57:51 -0500", "msg_from": "Brad Nicholson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Scott Carey wrote:\n> Have PG wait a half second (configurable) after the checkpoint fsync()\n> completes before deleting/ overwriting any WAL segments. This would be a\n> trivial \"feature\" to add to a postgres release, I think. Actually, it\n> already exists! Turn on log archiving, and have the script that it runs after a checkpoint sleep().\n> \nThat won't help. Once the checkpoint is done, the problem isn't just \nthat the WAL segments are recycled. The server isn't going to use them \neven if they were there. The reason why you can erase/recycle them is \nthat you're doing so *after* writing out a checkpoint record that says \nyou don't have to ever look at them again. What you'd actually have to \ndo is hack the server code to insert that delay after every fsync--there \nare none that you can cheat on and not introduce a corruption \npossibility. The whole WAL/recovery mechanism in PostgreSQL doesn't \nmake a lot of assumptions about what the underlying disk has to actually \ndo beyond the fsync requirement; the flip side to that robustness is \nthat it's the one you can't ever violate safely.\n> BTW, the information I have seen indicates that the write cache is 256K on\n> the Intel drives, the 32MB/64MB of other RAM is working memory for the drive\n> block mapping / wear leveling algorithms (tracking 160GB of 4k blocks takes\n> space).\n> \nRight. It's not used like the write-cache on a regular hard drive, \nwhere they're buffering 8MB-32MB worth of writes just to keep seek \noverhead down. It's there primarily to allow combining writes into \nlarge chunks, to better match the block size of the underlying SSD flash \ncells (128K). Having enough space for two full cells allows spooling \nout the flash write to a whole block while continuing to buffer the next \none.\n\nThis is why turning the cache off can tank performance so badly--you're \ngoing to be writing a whole 128K block no matter what if it's force to \ndisk without caching, even if it's just to write a 8K page to it. \nThat's only going to reach 1/16 of the usual write speed on single page \nwrites. 
And that's why you should also be concerned at whether \ndisabling the write cache impacts the drive longevity, lots of small \nwrites going out in small chunks is going to wear flash out much faster \nthan if the drive is allowed to wait until it's got a full sized block \nto write every time.\n\nThe fact that the cache is so small is also why it's harder to catch the \ndrive doing the wrong thing here. The plug test is pretty sensitive to \na problem when you've got megabytes worth of cached writes that are \nspooling to disk at spinning hard drive speeds. The window for loss on \na SSD with no seek overhead and only a moderate number of KB worth of \ncached data is much, much smaller. Doesn't mean it's gone though. It's \na shame that the design wasn't improved just a little bit; a cheap \ncapacitor and blocking new writes once the incoming power dropped is all \nit would take to make these much more reliable for database use. But \nthat would raise the price, and not really help anybody but the small \nsubset of the market that cares about durable writes.\n> 4: Yet another solution: The drives DO adhere to write barriers properly.\n> A filesystem that used these in the process of fsync() would be fine too.\n> So XFS without LVM or MD (or the newer versions of those that don't ignore\n> barriers) would work too.\n> \nIf I really trusted anything beyond the very basics of the filesystem to \nreally work well on Linux, this whole issue would be moot for most of \nthe production deployments I do. Ideally, fsync would just push out the \nminimum of what's needed, it would call the appropriate write cache \nflush mechanism the way the barrier implementation does when that all \nworks, life would be good. Alternately, you might even switch to using \nO_SYNC writes instead, which on a good filesystem implementation are \nboth accelerated and safe compared to write/fsync (I've seen that work \nas expected on Vertias VxFS for example). \n\nMeanwhile, in the actual world we live, patches that make writes more \ndurable by default are dropped by the Linux community because they tank \nperformance for too many types of loads, I'm frightened to turn on \nO_SYNC at all on ext3 because of reports of corruption on the lists \nhere, fsync does way more work than it needs to, and the way the \nfilesystem and block drivers have been separated makes it difficult to \ndo any sort of device write cache control from userland. This is why I \ntry to use the simplest, best tested approach out there whenever possible.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Thu, 19 Nov 2009 16:04:29 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Scott Marlowe wrote:\n> On Thu, Nov 19, 2009 at 10:01 AM, Merlin Moncure <[email protected]> wrote:\n> \n>> pgbench is actually a pretty awesome i/o tester assuming you have big\n>> enough scaling factor\n> Seeing as how pgbench only goes to scaling factor of 4000, are the any\n> plans on enlarging that number?\n> \nI'm doing pgbench tests now on a system large enough for this limit to \nmatter, so I'm probably going to have to fix that for 8.5 just to \ncomplete my own work.\n\nYou can use pgbench to either get interesting peak read results, or peak \nwrite ones, but it's not real useful for things in between. 
The \nstandard test basically turns into a huge stack of writes to a single \ntable, and the select-only one is interesting to gauge either cached or \nuncached read speed (depending on the scale). It's not very useful for \ngetting a feel for how something with a mixed read/write workload does \nthough, which is unfortunate because I think that scenario is much more \ncommon than what it does test.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Thu, 19 Nov 2009 16:10:47 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "On Thu, Nov 19, 2009 at 4:10 PM, Greg Smith <[email protected]> wrote:\n> You can use pgbench to either get interesting peak read results, or peak\n> write ones, but it's not real useful for things in between.  The standard\n> test basically turns into a huge stack of writes to a single table, and the\n> select-only one is interesting to gauge either cached or uncached read speed\n> (depending on the scale).  It's not very useful for getting a feel for how\n> something with a mixed read/write workload does though, which is unfortunate\n> because I think that scenario is much more common than what it does test.\n\nall true, but it's pretty easy to rig custom (-f) commands for\nvirtually any test you want,.\n\nmerlin\n", "msg_date": "Thu, 19 Nov 2009 16:39:18 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "On Thu, Nov 19, 2009 at 2:39 PM, Merlin Moncure <[email protected]> wrote:\n> On Thu, Nov 19, 2009 at 4:10 PM, Greg Smith <[email protected]> wrote:\n>> You can use pgbench to either get interesting peak read results, or peak\n>> write ones, but it's not real useful for things in between.  The standard\n>> test basically turns into a huge stack of writes to a single table, and the\n>> select-only one is interesting to gauge either cached or uncached read speed\n>> (depending on the scale).  It's not very useful for getting a feel for how\n>> something with a mixed read/write workload does though, which is unfortunate\n>> because I think that scenario is much more common than what it does test.\n>\n> all true, but it's pretty easy to rig custom (-f) commands for\n> virtually any test you want,.\n\nMy primary use of pgbench is to exercise a machine as a part of\nacceptance testing. After using it to do power plug pulls, I run it\nfor a week or two to exercise the drive array and controller mainly.\nAny machine that runs smooth for a week with a load factor of 20 or 30\nand the amount of updates that pgbench generates don't overwhelm it\nI'm pretty happy.\n", "msg_date": "Thu, 19 Nov 2009 14:52:08 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "\nAm 13.11.2009 um 14:57 schrieb Laszlo Nagy:\n\n> I was thinking about ARECA 1320 with 2GB memory + BBU. \n> Unfortunately, I cannot find information about using ARECA cards \n> with SSD drives.\nThey told me: currently not supported, but they have positive customer \nreports. No date yet for implementation of the TRIM command in firmware.\n...\n> My other option is to buy two SLC SSD drives and use RAID1. It would \n> cost about the same, but has less redundancy and less capacity. \n> Which is the faster? 
8-10 MLC disks in RAID 6 with a good caching \n> controller, or two SLC disks in RAID1?\nI just went the MLC path with X25-Ms mainly to save energy.\nThe fresh assembled box has one SSD for WAL and one RAID 0 with for \nSSDs as table space.\nEverything runs smoothly on a areca 1222 with BBU, which turned all \nwrite caches off.\nOS is FreeBSD 8.0. I aligned all partitions on 1 MB boundaries.\nNext week I will install 8.4.1 and run pgbench for pull-the-plug- \ntesting.\n\nI would like to get some advice from the list for testing the SSDs!\n\nAxel\n---\[email protected] PGP-Key:29E99DD6 +49 151 2300 9283 computing @ \nchaos claudius\n\n\n\n\n\n\n\n\n\n", "msg_date": "Fri, 20 Nov 2009 12:06:05 +0100", "msg_from": "Axel Rau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "On Thu, 19 Nov 2009, Greg Smith wrote:\n> This is why turning the cache off can tank performance so badly--you're going \n> to be writing a whole 128K block no matter what if it's force to disk without \n> caching, even if it's just to write a 8K page to it.\n\nTheoretically, this does not need to be the case. Now, I don't know what \nthe Intel drives actually do, but remember that for flash, it is the \n*erase* cycle that has to be done in large blocks. Writing itself can be \ndone in small blocks, to previously erased sites.\n\nThe technology for combining small writes into sequential writes has been \naround for 17 years or so in \nhttp://portal.acm.org/citation.cfm?id=146943&dl= so there really isn't any \nexcuse for modern flash drives not giving really fast small writes.\n\nMatthew\n\n-- \n for a in past present future; do\n for b in clients employers associates relatives neighbours pets; do\n echo \"The opinions here in no way reflect the opinions of my $a $b.\"\n done; done\n", "msg_date": "Fri, 20 Nov 2009 11:54:58 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "On Wed, Nov 18, 2009 at 8:24 PM, Tom Lane <[email protected]> wrote:\n> Scott Carey <[email protected]> writes:\n>> For your database DATA disks, leaving the write cache on is 100% acceptable,\n>> even with power loss, and without a RAID controller. And even in high write\n>> environments.\n>\n> Really? How hard have you tested that configuration?\n>\n>> That is what the XLOG is for, isn't it?\n>\n> Once we have fsync'd a data change, we discard the relevant XLOG\n> entries. If the disk hasn't actually put the data on stable storage\n> before it claims the fsync is done, you're screwed.\n>\n> XLOG only exists to centralize the writes that have to happen before\n> a transaction can be reported committed (in particular, to avoid a\n> lot of random-access writes at commit). It doesn't make any\n> fundamental change in the rules of the game: a disk that lies about\n> write complete will still burn you.\n>\n> In a zero-seek-cost environment I suspect that XLOG wouldn't actually\n> be all that useful.\n\nYou would still need it to guard against partial page writes, unless\nwe have some guarantee that those can't happen.\n\nAnd once your transaction has scattered its transaction id into\nvarious xmin and xmax over many tables, you need an atomic, durable\nrepository to decide if that id has or has not committed. 
Maybe clog\nfsynced on commit would serve this purpose?\n\nJeff\n", "msg_date": "Fri, 20 Nov 2009 08:47:01 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Axel Rau wrote:\n> \n> Am 13.11.2009 um 14:57 schrieb Laszlo Nagy:\n> \n>> I was thinking about ARECA 1320 with 2GB memory + BBU. Unfortunately, \n>> I cannot find information about using ARECA cards with SSD drives.\n> They told me: currently not supported, but they have positive customer \n> reports. No date yet for implementation of the TRIM command in firmware.\n> ...\n>> My other option is to buy two SLC SSD drives and use RAID1. It would \n>> cost about the same, but has less redundancy and less capacity. Which \n>> is the faster? 8-10 MLC disks in RAID 6 with a good caching \n>> controller, or two SLC disks in RAID1?\n\nDespite my other problems, I've found that the Intel X25-Es work\nremarkably well. The key issue for short,fast transactions seems to be\nhow fast an fdatasync() call can run, forcing the commit to disk, and\nallowing the transaction to return to userspace.\nWith all the caches off, the intel X25-E beat a standard disk by a\nfactor of about 10.\nAttached is a short C program which may be of use.\n\n\nFor what it's worth, we have actually got a pretty decent (and\nredundant) setup using a RAIS array of RAID1.\n\n\n[primary server]\n\nSSD }\n } RAID1 -------------------} DRBD --- /var/lib/postgresql\nSSD } }\n }\n }\n }\n }\n[secondary server] }\n }\nSSD } }\n } RAID1 --------gigE--------}\nSSD }\n\n\n\nThe servers connect back-to-back with a dedicated Gigabit ethernet\ncable, and DRBD is running in protocol B.\n\nWe can pull the power out of 1 server, and be using the next within 30\nseconds, and with no dataloss.\n\n\nRichard", "msg_date": "Fri, 20 Nov 2009 18:59:57 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Richard Neill wrote:\n> The key issue for short,fast transactions seems to be\n> how fast an fdatasync() call can run, forcing the commit to disk, and\n> allowing the transaction to return to userspace.\n> Attached is a short C program which may be of use.\nRight. I call this the \"commit rate\" of the storage, and on traditional \nspinning disks it's slightly below the rotation speed of the media (i.e. \n7200RPM = 120 commits/second). If you've got a battery-backed cache \nin front of standard disks, you can easily clear 10K commits/second.\n\nI normally test that out with sysbench, because I use that for some \nother tests anyway:\n\nsysbench --test=fileio --file-fsync-freq=1 --file-num=1 \n--file-total-size=16384 --file-test-mode=rndwr run | grep \"Requests/sec\"\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Fri, 20 Nov 2009 19:27:36 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "On Fri, Nov 20, 2009 at 7:27 PM, Greg Smith <[email protected]> wrote:\n> Richard Neill wrote:\n>>\n>> The key issue for short,fast transactions seems to be\n>> how fast an fdatasync() call can run, forcing the commit to disk, and\n>> allowing the transaction to return to userspace.\n>> Attached is a short C program which may be of use.\n>\n> Right.  
I call this the \"commit rate\" of the storage, and on traditional\n> spinning disks it's slightly below the rotation speed of the media (i.e.\n> 7200RPM = 120 commits/second).    If you've got a battery-backed cache in\n> front of standard disks, you can easily clear 10K commits/second.\n\n\n...until you overflow the cache. battery backed cache does not break\nthe laws of physics...it just provides a higher burst rate (plus what\never advantages can be gained by peeking into the write queue and\nre-arranging/grouping. I learned the hard way that how your raid\ncontroller behaves in overflow situations can cause catastrophic\nperformance degradations...\n\nmerlin\n", "msg_date": "Sat, 21 Nov 2009 09:25:03 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Greg Smith wrote:\n> Merlin Moncure wrote:\n> > I am right now talking to someone on postgresql irc who is measuring\n> > 15k iops from x25-e and no data loss following power plug test.\n> The funny thing about Murphy is that he doesn't visit when things are \n> quiet. It's quite possible the window for data loss on the drive is \n> very small. Maybe you only see it one out of 10 pulls with a very \n> aggressive database-oriented write test. Whatever the odd conditions \n> are, you can be sure you'll see them when there's a bad outage in actual \n> production though.\n> \n> A good test program that is a bit better at introducing and detecting \n> the write cache issue is described at \n> http://brad.livejournal.com/2116715.html\n\nWow, I had not seen that tool before. I have added a link to it from\nour documentation, and also added a mention of our src/tools/fsync test\ntool to our docs.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +", "msg_date": "Sat, 28 Nov 2009 11:20:56 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Bruce Momjian wrote:\n> Greg Smith wrote:\n>> A good test program that is a bit better at introducing and detecting \n>> the write cache issue is described at \n>> http://brad.livejournal.com/2116715.html\n> \n> Wow, I had not seen that tool before. I have added a link to it from\n> our documentation, and also added a mention of our src/tools/fsync test\n> tool to our docs.\n\nOne challenge with many of these test programs is that some\nfilesystem (ext3 is one) will flush drive caches on fsync()\n*sometimes, but not always. 
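A rough sketch of that sort of timing check (hypothetical code, and not any of
the tools linked above) would simply time a loop of write()+fsync() pairs; if
it completes much faster than the disk can rotate, something in the stack is
acknowledging writes without flushing them:

/* fsync_rate.c - crude commit-rate check; build with: gcc -o fsync_rate fsync_rate.c */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    char           buf[8192];          /* one PostgreSQL-sized page */
    struct timeval t0, t1;
    int            fd, i, n = 1000;
    double         secs;

    if (argc < 2)
    {
        fprintf(stderr, "usage: fsync_rate <filename>\n");
        exit(1);
    }
    fd = open(argv[1], O_WRONLY | O_CREAT | O_TRUNC, 0600);
    if (fd < 0)
    {
        perror("open");
        exit(1);
    }
    memset(buf, 0, sizeof(buf));
    gettimeofday(&t0, NULL);
    for (i = 0; i < n; i++)
    {
        /* rewrite the same page, then ask for it to be made durable */
        if (pwrite(fd, buf, sizeof(buf), 0) != sizeof(buf))
        {
            perror("pwrite");
            exit(1);
        }
        fsync(fd);
    }
    gettimeofday(&t1, NULL);
    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1000000.0;
    printf("%d fsyncs in %.2f seconds = %.0f per second\n", n, secs, n / secs);
    return 0;
}

A bare 7200RPM drive that really flushes its cache can't do much more than
roughly 120 of those per second, so numbers in the thousands from a plain disk
are a red flag.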
If your test program happens to do\na sequence of commands that makes an fsync() actually flush a\ndisk's caches, it might mislead you if your actual application\nhas a different series of system calls.\n\nFor example, ext3 fsync() will issue write barrier commands\nif the inode was modified; but not if the inode wasn't.\n\nSee test program here:\nhttp://www.mail-archive.com/[email protected]/msg272253.html\nand read two paragraphs further to see how touching\nthe inode makes ext3 fsync behave differently.\n\n\n\n", "msg_date": "Sun, 29 Nov 2009 16:46:33 -0800", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Ron Mayer wrote:\n> Bruce Momjian wrote:\n> > Greg Smith wrote:\n> >> A good test program that is a bit better at introducing and detecting \n> >> the write cache issue is described at \n> >> http://brad.livejournal.com/2116715.html\n> > \n> > Wow, I had not seen that tool before. I have added a link to it from\n> > our documentation, and also added a mention of our src/tools/fsync test\n> > tool to our docs.\n> \n> One challenge with many of these test programs is that some\n> filesystem (ext3 is one) will flush drive caches on fsync()\n> *sometimes, but not always. If your test program happens to do\n> a sequence of commands that makes an fsync() actually flush a\n> disk's caches, it might mislead you if your actual application\n> has a different series of system calls.\n> \n> For example, ext3 fsync() will issue write barrier commands\n> if the inode was modified; but not if the inode wasn't.\n> \n> See test program here:\n> http://www.mail-archive.com/[email protected]/msg272253.html\n> and read two paragraphs further to see how touching\n> the inode makes ext3 fsync behave differently.\n\nI thought our only problem was testing the I/O subsystem --- I never\nsuspected the file system might lie too. That email indicates that a\nlarge percentage of our install base is running on unreliable file\nsystems --- why have I not heard about this before? Do the write\nbarriers allow data loss but prevent data inconsistency? It sound like\nthey are effectively running with synchronous_commit = off.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Sun, 29 Nov 2009 22:09:45 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Bruce Momjian wrote:\n> I thought our only problem was testing the I/O subsystem --- I never\n> suspected the file system might lie too. That email indicates that a\n> large percentage of our install base is running on unreliable file\n> systems --- why have I not heard about this before? Do the write\n> barriers allow data loss but prevent data inconsistency? It sound like\n> they are effectively running with synchronous_commit = off.\n> \nYou might occasionally catch me ranting here that Linux write barriers \nare not a useful solution at all for PostgreSQL, and that you must turn \nthe disk write cache off rather than expect the barrier implementation \nto do the right thing. This sort of buginess is why. The reason why it \ndoesn't bite more people is that most Linux systems don't turn on write \nbarrier support by default, and there's a number of situations that can \ndisable barriers even if you did try to enable them. 
It's still pretty \nunusual to have a working system with barriers turned on nowadays; I \nreally doubt it's \"a large percentage of our install base\".\n\nI've started keeping most of my notes about where ext3 is vulnerable to \nissues in Wikipedia, specifically \nhttp://en.wikipedia.org/wiki/Ext3#No_checksumming_in_journal ; I just \nupdated that section to point out the specific issue Ron pointed out. \nMaybe we should point people toward that in the docs, I try to keep that \narticle correct.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Mon, 30 Nov 2009 00:12:41 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Greg Smith wrote:\n> Bruce Momjian wrote:\n> > I thought our only problem was testing the I/O subsystem --- I never\n> > suspected the file system might lie too. That email indicates that a\n> > large percentage of our install base is running on unreliable file\n> > systems --- why have I not heard about this before? Do the write\n> > barriers allow data loss but prevent data inconsistency? It sound like\n> > they are effectively running with synchronous_commit = off.\n> > \n> You might occasionally catch me ranting here that Linux write barriers \n> are not a useful solution at all for PostgreSQL, and that you must turn \n> the disk write cache off rather than expect the barrier implementation \n> to do the right thing. This sort of buginess is why. The reason why it \n> doesn't bite more people is that most Linux systems don't turn on write \n> barrier support by default, and there's a number of situations that can \n> disable barriers even if you did try to enable them. It's still pretty \n> unusual to have a working system with barriers turned on nowadays; I \n> really doubt it's \"a large percentage of our install base\".\n\nAh, so it is only when write barriers are enabled, and they are not\nenabled by default --- OK, that makes sense.\n\n> I've started keeping most of my notes about where ext3 is vulnerable to \n> issues in Wikipedia, specifically \n> http://en.wikipedia.org/wiki/Ext3#No_checksumming_in_journal ; I just \n> updated that section to point out the specific issue Ron pointed out. \n> Maybe we should point people toward that in the docs, I try to keep that \n> article correct.\n\nYes, good idea.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Mon, 30 Nov 2009 07:08:22 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Bruce Momjian wrote:\n>> For example, ext3 fsync() will issue write barrier commands\n>> if the inode was modified; but not if the inode wasn't.\n>>\n>> See test program here:\n>> http://www.mail-archive.com/[email protected]/msg272253.html\n>> and read two paragraphs further to see how touching\n>> the inode makes ext3 fsync behave differently.\n> \n> I thought our only problem was testing the I/O subsystem --- I never\n> suspected the file system might lie too. That email indicates that a\n> large percentage of our install base is running on unreliable file\n> systems --- why have I not heard about this before? \n\nIt came up a on these lists a few times in the past. 
Here's one example.\nhttp://archives.postgresql.org/pgsql-performance/2008-08/msg00159.php\n\nAs far as I can tell, most of the threads ended with people still\nsuspecting lying hard drives. But to the best of my ability I can't\nfind any drives that actually lie when sent the commands to flush\ntheir caches. But various combinations of ext3 & linux MD that\ndecide not to send IDE FLUSH_CACHE_EXT (nor the similiar\nSCSI SYNCHRONIZE CACHE command) under various situations.\n\nI wonder if there are enough ext3 users out there that postgres should\ntouch the inodes before doing a fsync.\n\n> Do the write barriers allow data loss but prevent data inconsistency? \n\nIf I understand right, data inconsistency could occur too. One\naspect of the write barriers is flushing a hard drive's caches.\n\n> It sound like they are effectively running with synchronous_commit = off.\n\nAnd with the (mythical?) hard drive with lying caches.\n\n\n", "msg_date": "Mon, 30 Nov 2009 07:48:32 -0800", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Bruce Momjian wrote:\n> Greg Smith wrote:\n>> Bruce Momjian wrote:\n>>> I thought our only problem was testing the I/O subsystem --- I never\n>>> suspected the file system might lie too. That email indicates that a\n>>> large percentage of our install base is running on unreliable file\n>>> systems --- why have I not heard about this before?\n>>> \n>> he reason why it \n>> doesn't bite more people is that most Linux systems don't turn on write \n>> barrier support by default, and there's a number of situations that can \n>> disable barriers even if you did try to enable them. It's still pretty \n>> unusual to have a working system with barriers turned on nowadays; I \n>> really doubt it's \"a large percentage of our install base\".\n> \n> Ah, so it is only when write barriers are enabled, and they are not\n> enabled by default --- OK, that makes sense.\n\nThe test program I linked up-thread shows that fsync does nothing\nunless the inode's touched on an out-of-the-box Ubuntu 9.10 using\next3 on a straight from Dell system.\n\nSurely that's a common config, no?\n\nIf I uncomment the fchmod lines below I can see that even with ext3\nand write caches enabled on my drives it does indeed wait.\nNote that EXT4 doesn't show the problem on the same system.\n\nHere's a slightly modified test program that's a bit easier to run.\nIf you run the program and it exits right away, your system isn't\nwaiting for platters to spin.\n\n////////////////////////////////////////////////////////////////////\n/*\n** based on http://article.gmane.org/gmane.linux.file-systems/21373\n** http://thread.gmane.org/gmane.linux.kernel/646040\n** If this program returns instantly, the fsync() lied.\n** If it takes a second or so, fsync() probably works.\n** On ext3 and drives that cache writes, you probably need\n** to uncomment the fchmod's to make fsync work right.\n*/\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <fcntl.h>\n#include <unistd.h>\n#include <stdio.h>\n#include <stdlib.h>\n\nint main(int argc,char *argv[]) {\n if (argc<2) {\n printf(\"usage: fs <filename>\\n\");\n exit(1);\n }\n int fd = open (argv[1], O_RDWR | O_CREAT | O_TRUNC, 0666);\n int i;\n for (i=0;i<100;i++) {\n char byte;\n pwrite (fd, &byte, 1, 0);\n // fchmod (fd, 0644); fchmod (fd, 0664);\n fsync (fd);\n }\n}\n////////////////////////////////////////////////////////////////////\nron@ron-desktop:/tmp$ /usr/bin/time ./a.out foo\n0.00user 
0.00system 0:00.01elapsed 21%CPU (0avgtext+0avgdata 0maxresident)k\n\n\n", "msg_date": "Mon, 30 Nov 2009 08:32:34 -0800", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Ron Mayer wrote:\n> Bruce Momjian wrote:\n> > Greg Smith wrote:\n> >> Bruce Momjian wrote:\n> >>> I thought our only problem was testing the I/O subsystem --- I never\n> >>> suspected the file system might lie too. That email indicates that a\n> >>> large percentage of our install base is running on unreliable file\n> >>> systems --- why have I not heard about this before?\n> >>> \n> >> he reason why it \n> >> doesn't bite more people is that most Linux systems don't turn on write \n> >> barrier support by default, and there's a number of situations that can \n> >> disable barriers even if you did try to enable them. It's still pretty \n> >> unusual to have a working system with barriers turned on nowadays; I \n> >> really doubt it's \"a large percentage of our install base\".\n> > \n> > Ah, so it is only when write barriers are enabled, and they are not\n> > enabled by default --- OK, that makes sense.\n> \n> The test program I linked up-thread shows that fsync does nothing\n> unless the inode's touched on an out-of-the-box Ubuntu 9.10 using\n> ext3 on a straight from Dell system.\n> \n> Surely that's a common config, no?\n\nYea, this certainly suggests that the problem is wide-spread.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Mon, 30 Nov 2009 11:40:11 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "\nOn 11/19/09 1:04 PM, \"Greg Smith\" <[email protected]> wrote:\n\n> That won't help. Once the checkpoint is done, the problem isn't just\n> that the WAL segments are recycled. The server isn't going to use them\n> even if they were there. The reason why you can erase/recycle them is\n> that you're doing so *after* writing out a checkpoint record that says\n> you don't have to ever look at them again. What you'd actually have to\n> do is hack the server code to insert that delay after every fsync--there\n> are none that you can cheat on and not introduce a corruption\n> possibility. The whole WAL/recovery mechanism in PostgreSQL doesn't\n> make a lot of assumptions about what the underlying disk has to actually\n> do beyond the fsync requirement; the flip side to that robustness is\n> that it's the one you can't ever violate safely.\n\nYeah, I guess its not so easy. Having the system \"hold\" one extra\ncheckpoint worth of segments and then during recovery, always replay that\nprevioius one plus the current might work, but I don't know if that could\ncause corruption. I assume replaying a log twice won't, so replaying N-1\ncheckpoint, then the current one, might work. If so that would be a cool\nfeature -- so long as the N-2 checkpoint is no longer in the OS or I/O\nhardware caches when checkpoint N completes, you're safe! Its probably more\ncomplicated though, especially with respect to things like MVCC on DDL\nchanges.\n\n> Right. It's not used like the write-cache on a regular hard drive,\n> where they're buffering 8MB-32MB worth of writes just to keep seek\n> overhead down. It's there primarily to allow combining writes into\n> large chunks, to better match the block size of the underlying SSD flash\n> cells (128K). 
Having enough space for two full cells allows spooling\n> out the flash write to a whole block while continuing to buffer the next\n> one.\n> \n> This is why turning the cache off can tank performance so badly--you're\n> going to be writing a whole 128K block no matter what if it's force to\n> disk without caching, even if it's just to write a 8K page to it.\n\nAs others mentioned, flash must erase a whole block at once, but it can\nwrite sequentially to a block in much smaller chunks. I believe that MLC\nand SLC differ a bit here, SLC can write smaller subsections of the erase\nblock.\n\nA little old but still very useful:\nhttp://research.microsoft.com/apps/pubs/?id=63596\n\n> That's only going to reach 1/16 of the usual write speed on single page\n> writes. And that's why you should also be concerned at whether\n> disabling the write cache impacts the drive longevity, lots of small\n> writes going out in small chunks is going to wear flash out much faster\n> than if the drive is allowed to wait until it's got a full sized block\n> to write every time.\n\nThis is still a concern, since even if the SLC cells are technically capable\nof writing sequentially in smaller chunks, with the write cache off they may\nnot do so. \n\n> \n> The fact that the cache is so small is also why it's harder to catch the\n> drive doing the wrong thing here. The plug test is pretty sensitive to\n> a problem when you've got megabytes worth of cached writes that are\n> spooling to disk at spinning hard drive speeds. The window for loss on\n> a SSD with no seek overhead and only a moderate number of KB worth of\n> cached data is much, much smaller. Doesn't mean it's gone though. It's\n> a shame that the design wasn't improved just a little bit; a cheap\n> capacitor and blocking new writes once the incoming power dropped is all\n> it would take to make these much more reliable for database use. But\n> that would raise the price, and not really help anybody but the small\n> subset of the market that cares about durable writes.\n\nYup. There are manufacturers who claim no data loss on power failure,\nhopefully these become more common.\nhttp://www.wdc.com/en/products/ssd/technology.asp?id=1\n\nI still contend its a lot more safe than a hard drive. I have not seen one\nfail yet (out of about 150 heavy use drive-years on X25-Ms). Any system\nthat does not have a battery backed write cache will be faster and safer if\nan SSD, with write cache on, than hard drives with write cache on.\n\nBBU caching is not fail-safe either, batteries wear out, cards die or\nmalfunction.\nIf you need the maximum data integrity, you will probably go with a\nbattery-backed cache raid setup with or without SSDs. If you don't go that\nroute SSD's seem like the best option. The 'middle ground' of software raid\nwith hard drives with their write caches off doesn't seem useful to me at\nall. I can't think of one use case that isn't better served by a slightly\ncheaper array of disks with a hardware bbu card (if the data is important or\ndata size is large) OR a set of SSD's (if performance is more important than\ndata safety). 
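(As an aside, one way to sanity-check any "no data loss on power failure"
claim is a plug test along the lines of the diskchecker tool mentioned earlier
in the thread.  The sketch below is hypothetical and far simpler than the real
thing -- which, as I understand it, reports each acknowledged write to a second
machine so the record survives the crash -- but it shows the basic idea: only
count a write as durable after fsync() has returned, then pull the plug and see
whether the drive kept its promise.

/* plugtest_writer.c - hypothetical sketch, not the real diskchecker tool.
 * Writes an increasing sequence number to the same spot in a file, fsync()s
 * it, and only then reports it as durable.  Pipe the output to another
 * machine, pull the plug, reboot, and compare the number stored in the file
 * with the last number that was reported. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    long long seq = 0;
    int       fd;

    if (argc < 2)
    {
        fprintf(stderr, "usage: plugtest_writer <filename>\n");
        exit(1);
    }
    fd = open(argv[1], O_WRONLY | O_CREAT | O_TRUNC, 0600);
    if (fd < 0)
    {
        perror("open");
        exit(1);
    }
    for (;;)
    {
        if (pwrite(fd, &seq, sizeof(seq), 0) != sizeof(seq))
        {
            perror("pwrite");
            exit(1);
        }
        if (fsync(fd) != 0)
        {
            perror("fsync");
            exit(1);
        }
        /* the write has been acknowledged as durable; record that fact
         * somewhere that won't disappear along with the power */
        printf("%lld\n", seq);
        fflush(stdout);
        seq++;
    }
}

If, after the power pull, the file holds a smaller number than the last one
reported, the write cache lied about the flush.)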
\n\n>> 4: Yet another solution: The drives DO adhere to write barriers properly.\n>> A filesystem that used these in the process of fsync() would be fine too.\n>> So XFS without LVM or MD (or the newer versions of those that don't ignore\n>> barriers) would work too.\n>> \n> If I really trusted anything beyond the very basics of the filesystem to\n> really work well on Linux, this whole issue would be moot for most of\n> the production deployments I do. Ideally, fsync would just push out the\n> minimum of what's needed, it would call the appropriate write cache\n> flush mechanism the way the barrier implementation does when that all\n> works, life would be good. Alternately, you might even switch to using\n> O_SYNC writes instead, which on a good filesystem implementation are\n> both accelerated and safe compared to write/fsync (I've seen that work\n> as expected on Vertias VxFS for example).\n> \n\nWe could all move to OpenSolaris where that stuff does work right... ;)\nI think a lot of the things that make ZFS slower for some tasks is that it\ncorrectly implements and uses write barriers...\n\n> Meanwhile, in the actual world we live, patches that make writes more\n> durable by default are dropped by the Linux community because they tank\n> performance for too many types of loads, I'm frightened to turn on\n> O_SYNC at all on ext3 because of reports of corruption on the lists\n> here, fsync does way more work than it needs to, and the way the\n> filesystem and block drivers have been separated makes it difficult to\n> do any sort of device write cache control from userland. This is why I\n> try to use the simplest, best tested approach out there whenever possible.\n> \n\nOh I hear you :) At least ext4 looks like an improvement for the\nRHEL6/CentOS6 timeframe. Checksums are handy.\n\nMany of my systems though don't need the highest data reliability. And a\nraid 0 of X-25 M's will be much, much more safe than the same thing of\nregular hard drives, and faster. Putting in a few of those on one system\nsoon (yes M, won't put WAL on it). 2 such drives kick the crap out of\nanything else for the price when performance is most important and the data\nis just a copy of something stored in a much safer place than any single\nserver. Previously on such systems, a caching raid card would be needed for\nperformance, but without a bbu data loss risk is very high (much higher than\na ssd with caching on -- 256K versus 512M cache!). And a SSD costs less\nthan the raid card. So long as the total data size isn't too big they work\nwell. And even then, some tablespaces can be put on a large HD leaving the\nmore critical ones on the SSD.\nI estimate the likelihood of complete data loss from a 2 SSD raid-0 as the\nsame as a 4-disk RAID 5 of hard drives. There is a big difference between a\ncouple corrupted files and a lost drive... I have recovered postgres\nsystems with corruption by reindexing and restoring single tables from\nbackups. When one drive in a stripe is lost or a pair in a raid 10 go down,\nall is lost.\n\nI wonder -- has anyone seen an Intel SSD randomly die like a hard drive?\nI'm still trying to get a \"M\" to wear out by writing about 120GB a day to it\nfor a year. But rough calculations show that I'm likely years from\ntrouble... 
By then I'll have upgraded to the gen 3 or 4 drives.\n\n> --\n> Greg Smith 2ndQuadrant Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.com\n> \n> \n\n", "msg_date": "Thu, 3 Dec 2009 16:04:32 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "On Fri, 13 Nov 2009, Greg Smith wrote:\n> In order for a drive to work reliably for database use such as for \n> PostgreSQL, it cannot have a volatile write cache. You either need a write \n> cache with a battery backup (and a UPS doesn't count), or to turn the cache \n> off. The SSD performance figures you've been looking at are with the drive's \n> write cache turned on, which means they're completely fictitious and \n> exaggerated upwards for your purposes. In the real world, that will result \n> in database corruption after a crash one day.\n\nSeagate are claiming to be on the ball with this one.\n\nhttp://www.theregister.co.uk/2009/12/08/seagate_pulsar_ssd/\n\nMatthew\n\n-- \n The third years are wandering about all worried at the moment because they\n have to hand in their final projects. Please be sympathetic to them, say\n things like \"ha-ha-ha\", but in a sympathetic tone of voice \n -- Computer Science Lecturer\n", "msg_date": "Tue, 8 Dec 2009 14:22:17 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Matthew Wakeling wrote:\n> On Fri, 13 Nov 2009, Greg Smith wrote:\n> > In order for a drive to work reliably for database use such as for \n> > PostgreSQL, it cannot have a volatile write cache. You either need a write \n> > cache with a battery backup (and a UPS doesn't count), or to turn the cache \n> > off. The SSD performance figures you've been looking at are with the drive's \n> > write cache turned on, which means they're completely fictitious and \n> > exaggerated upwards for your purposes. In the real world, that will result \n> > in database corruption after a crash one day.\n> \n> Seagate are claiming to be on the ball with this one.\n> \n> http://www.theregister.co.uk/2009/12/08/seagate_pulsar_ssd/\n\nI have updated our documentation to mention that even SSD drives often\nhave volatile write-back caches. Patch attached and applied.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n PG East: http://www.enterprisedb.com/community/nav-pg-east-2010.do\n + If your life is a hard drive, Christ can be your backup. +", "msg_date": "Sat, 20 Feb 2010 13:28:55 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nBruce Momjian wrote:\n> Matthew Wakeling wrote:\n>> On Fri, 13 Nov 2009, Greg Smith wrote:\n>>> In order for a drive to work reliably for database use such as for \n>>> PostgreSQL, it cannot have a volatile write cache. You either need a write \n>>> cache with a battery backup (and a UPS doesn't count), or to turn the cache \n>>> off. The SSD performance figures you've been looking at are with the drive's \n>>> write cache turned on, which means they're completely fictitious and \n>>> exaggerated upwards for your purposes. 
In the real world, that will result \n>>> in database corruption after a crash one day.\n>> Seagate are claiming to be on the ball with this one.\n>>\n>> http://www.theregister.co.uk/2009/12/08/seagate_pulsar_ssd/\n> \n> I have updated our documentation to mention that even SSD drives often\n> have volatile write-back caches. Patch attached and applied.\n\nHmmm. That got me thinking: consider ZFS and HDD with volatile cache.\nDo the characteristics of ZFS avoid this issue entirely?\n\n- --\nDan Langille\n\nBSDCan - The Technical BSD Conference : http://www.bsdcan.org/\nPGCon - The PostgreSQL Conference: http://www.pgcon.org/\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v2.0.13 (FreeBSD)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niEYEARECAAYFAkuAayQACgkQCgsXFM/7nTyMggCgnZUbVzldxjp/nPo8EL1Nq6uG\n6+IAoNGIB9x8/mwUQidjM9nnAADRbr9j\n=3RJi\n-----END PGP SIGNATURE-----\n", "msg_date": "Sat, 20 Feb 2010 18:07:16 -0500", "msg_from": "Dan Langille <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Dan Langille wrote:\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n> \n> Bruce Momjian wrote:\n> > Matthew Wakeling wrote:\n> >> On Fri, 13 Nov 2009, Greg Smith wrote:\n> >>> In order for a drive to work reliably for database use such as for \n> >>> PostgreSQL, it cannot have a volatile write cache. You either need a write \n> >>> cache with a battery backup (and a UPS doesn't count), or to turn the cache \n> >>> off. The SSD performance figures you've been looking at are with the drive's \n> >>> write cache turned on, which means they're completely fictitious and \n> >>> exaggerated upwards for your purposes. In the real world, that will result \n> >>> in database corruption after a crash one day.\n> >> Seagate are claiming to be on the ball with this one.\n> >>\n> >> http://www.theregister.co.uk/2009/12/08/seagate_pulsar_ssd/\n> > \n> > I have updated our documentation to mention that even SSD drives often\n> > have volatile write-back caches. Patch attached and applied.\n> \n> Hmmm. That got me thinking: consider ZFS and HDD with volatile cache.\n> Do the characteristics of ZFS avoid this issue entirely?\n\nNo, I don't think so. ZFS only avoids partial page writes. ZFS still\nassumes something sent to the drive is permanent or it would have no way\nto operate.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n PG East: http://www.enterprisedb.com/community/nav-pg-east-2010.do\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Sat, 20 Feb 2010 18:19:03 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "On Feb 20, 2010, at 3:19 PM, Bruce Momjian wrote:\n\n> Dan Langille wrote:\n>> -----BEGIN PGP SIGNED MESSAGE-----\n>> Hash: SHA1\n>> \n>> Bruce Momjian wrote:\n>>> Matthew Wakeling wrote:\n>>>> On Fri, 13 Nov 2009, Greg Smith wrote:\n>>>>> In order for a drive to work reliably for database use such as for \n>>>>> PostgreSQL, it cannot have a volatile write cache. You either need a write \n>>>>> cache with a battery backup (and a UPS doesn't count), or to turn the cache \n>>>>> off. The SSD performance figures you've been looking at are with the drive's \n>>>>> write cache turned on, which means they're completely fictitious and \n>>>>> exaggerated upwards for your purposes. 
In the real world, that will result \n>>>>> in database corruption after a crash one day.\n>>>> Seagate are claiming to be on the ball with this one.\n>>>> \n>>>> http://www.theregister.co.uk/2009/12/08/seagate_pulsar_ssd/\n>>> \n>>> I have updated our documentation to mention that even SSD drives often\n>>> have volatile write-back caches. Patch attached and applied.\n>> \n>> Hmmm. That got me thinking: consider ZFS and HDD with volatile cache.\n>> Do the characteristics of ZFS avoid this issue entirely?\n> \n> No, I don't think so. ZFS only avoids partial page writes. ZFS still\n> assumes something sent to the drive is permanent or it would have no way\n> to operate.\n> \n\nZFS is write-back cache aware, and safe provided the drive's cache flushing and write barrier related commands work. It will flush data in 'transaction groups' and flush the drive write caches at the end of those transactions. Since its copy on write, it can ensure that all the changes in the transaction group appear on disk, or all are lost. This all works so long as the cache flush commands do.\n\n\n> -- \n> Bruce Momjian <[email protected]> http://momjian.us\n> EnterpriseDB http://enterprisedb.com\n> PG East: http://www.enterprisedb.com/community/nav-pg-east-2010.do\n> + If your life is a hard drive, Christ can be your backup. +\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Sun, 21 Feb 2010 01:34:42 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Scott Carey wrote:\n> On Feb 20, 2010, at 3:19 PM, Bruce Momjian wrote:\n> \n> > Dan Langille wrote:\n> >> -----BEGIN PGP SIGNED MESSAGE-----\n> >> Hash: SHA1\n> >>\n> >> Bruce Momjian wrote:\n> >>> Matthew Wakeling wrote:\n> >>>> On Fri, 13 Nov 2009, Greg Smith wrote:\n> >>>>> In order for a drive to work reliably for database use such as for\n> >>>>> PostgreSQL, it cannot have a volatile write cache. You either need a write\n> >>>>> cache with a battery backup (and a UPS doesn't count), or to turn the cache\n> >>>>> off. The SSD performance figures you've been looking at are with the drive's\n> >>>>> write cache turned on, which means they're completely fictitious and\n> >>>>> exaggerated upwards for your purposes. In the real world, that will result\n> >>>>> in database corruption after a crash one day.\n> >>>> Seagate are claiming to be on the ball with this one.\n> >>>>\n> >>>> http://www.theregister.co.uk/2009/12/08/seagate_pulsar_ssd/\n> >>>\n> >>> I have updated our documentation to mention that even SSD drives often\n> >>> have volatile write-back caches. Patch attached and applied.\n> >>\n> >> Hmmm. That got me thinking: consider ZFS and HDD with volatile cache.\n> >> Do the characteristics of ZFS avoid this issue entirely?\n> >\n> > No, I don't think so. ZFS only avoids partial page writes. ZFS still\n> > assumes something sent to the drive is permanent or it would have no way\n> > to operate.\n> >\n> \n> ZFS is write-back cache aware, and safe provided the drive's\n> cache flushing and write barrier related commands work. It will\n> flush data in 'transaction groups' and flush the drive write\n> caches at the end of those transactions. Since its copy on\n> write, it can ensure that all the changes in the transaction\n> group appear on disk, or all are lost. 
This all works so long\n> as the cache flush commands do.\n\nAgreed, thought I thought the problem was that SSDs lie about their\ncache flush like SATA drives do, or is there something I am missing?\n\n--\n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n PG East: http://www.enterprisedb.com/community/nav-pg-east-2010.do\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Sun, 21 Feb 2010 09:10:43 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Bruce Momjian wrote:\n> Agreed, thought I thought the problem was that SSDs lie about their\n> cache flush like SATA drives do, or is there something I am missing?\n\nThere's exactly one case I can find[1] where this century's IDE\ndrives lied more than any other drive with a cache:\n\n Under 120GB Maxtor drives from late 2003 to early 2004.\n\nand it's apparently been worked around for years.\n\nThose drives claimed to support the \"FLUSH_CACHE_EXT\" feature (IDE\ncommand 0xEA), but did not support sending 48-bit commands which\nwas needed to send the cache flushing command.\n\nAnd for that case a workaround for Linux was quickly identified by\nchecking for *both* the support for 48-bit commands and support for the\nflush cache extension[2].\n\n\nBeyond those 2004 drive + 2003 kernel systems, I think most the rest\nof such reports have been various misfeatures in some of Linux's\nfilesystems (like EXT3 that only wants to send drives cache-flushing\ncommands when inode change[3]) and linux software raid misfeatures....\n\n...and ISTM those would affect SSDs the same way they'd affect SATA drives.\n\n\n[1] http://lkml.org/lkml/2004/5/12/132\n[2] http://lkml.org/lkml/2004/5/12/200\n[3] http://www.mail-archive.com/[email protected]/msg272253.html\n\n\n", "msg_date": "Sun, 21 Feb 2010 06:54:24 -0800", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Ron Mayer wrote:\n> Bruce Momjian wrote:\n> \n>> Agreed, thought I thought the problem was that SSDs lie about their\n>> cache flush like SATA drives do, or is there something I am missing?\n>> \n>\n> There's exactly one case I can find[1] where this century's IDE\n> drives lied more than any other drive with a cache:\n\nRon is correct that the problem of mainstream SATA drives accepting the \ncache flush command but not actually doing anything with it is long gone \nat this point. If you have a regular SATA drive, it almost certainly \nsupports proper cache flushing. And if your whole software/storage \nstacks understands all that, you should not end up with corrupted data \njust because there's a volative write cache in there.\n\nBut the point of this whole testing exercise coming back into vogue \nagain is that SSDs have returned this negligent behavior to the \nmainstream again. See \nhttp://opensolaris.org/jive/thread.jspa?threadID=121424 for a discussion \nof this in a ZFS context just last month. There are many documented \ncases of Intel SSDs that will fake a cache flush, such that the only way \nto get good reliable writes is to totally disable their writes \ncaches--at which point performance is so bad you might as well have \ngotten a RAID10 setup instead (and longevity is toast too).\n\nThis whole area remains a disaster area and extreme distrust of all the \nSSD storage vendors is advisable at this point. 
Basically, if I don't \nsee the capacitor responsible for flushing outstanding writes, and get a \nclear description from the manufacturer how the cached writes are going \nto be handled in the event of a power failure, at this point I have to \nassume the answer is \"badly and your data will be eaten\". And the \nprices for SSDs that meet that requirement are still quite steep. I \nkeep hoping somebody will address this market at something lower than \nthe standard \"enterprise\" prices. The upcoming SandForce designs seem \nto have thought this through correctly: \nhttp://www.anandtech.com/storage/showdoc.aspx?i=3702&p=6 But the \nproduct's not out to the general public yet (just like the Seagate units \nthat claim to have capacitor backups--I heard a rumor those are also \nSandforce designs actually, so they may be the only ones doing this \nright and aiming at a lower price).\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n", "msg_date": "Mon, 22 Feb 2010 00:39:06 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "On 22-2-2010 6:39 Greg Smith wrote:\n> But the point of this whole testing exercise coming back into vogue\n> again is that SSDs have returned this negligent behavior to the\n> mainstream again. See\n> http://opensolaris.org/jive/thread.jspa?threadID=121424 for a discussion\n> of this in a ZFS context just last month. There are many documented\n> cases of Intel SSDs that will fake a cache flush, such that the only way\n> to get good reliable writes is to totally disable their writes\n> caches--at which point performance is so bad you might as well have\n> gotten a RAID10 setup instead (and longevity is toast too).\n\nThat's weird. Intel's SSD's didn't have a write cache afaik:\n\"I asked Intel about this and it turns out that the DRAM on the Intel \ndrive isn't used for user data because of the risk of data loss, instead \nit is used as memory by the Intel SATA/flash controller for deciding \nexactly where to write data (I'm assuming for the wear \nleveling/reliability algorithms).\"\nhttp://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3403&p=10\n\nBut that is the old version, perhaps the second generation does have a \nbit of write caching.\n\nI can understand a SSD might do unexpected things when it loses power \nall of a sudden. It will probably try to group writes to fill a single \nblock (and those blocks vary in size but are normally way larger than \nthose of a normal spinning disk, they are values like 256 or 512KB) and \nit might loose that \"waiting until a full block can be written\"-data or \nperhaps it just couldn't complete a full block-write due to the power \nfailure.\nAlthough that behavior isn't really what you want, it would be incorrect \nto blame write caching for the behavior if the device doesn't even have \na write cache ;)\n\nBest regards,\n\nArjen\n\n", "msg_date": "Mon, 22 Feb 2010 08:10:55 +0100", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Greg Smith wrote:\n> Ron Mayer wrote:\n> > Bruce Momjian wrote:\n> > \n> >> Agreed, thought I thought the problem was that SSDs lie about their\n> >> cache flush like SATA drives do, or is there something I am missing?\n> >> \n> >\n> > There's exactly one case I can find[1] where this century's IDE\n> > drives lied more than any other drive with a cache:\n> \n> Ron is correct that the problem of mainstream SATA drives accepting the \n> cache flush command but not actually doing anything with it is long gone \n> at this point. If you have a regular SATA drive, it almost certainly \n> supports proper cache flushing. And if your whole software/storage \n> stacks understands all that, you should not end up with corrupted data \n> just because there's a volative write cache in there.\n\nOK, but I have a few questions. 
Is a write to the drive and a cache\nflush command the same? Which file systems implement both? I thought a\nwrite to the drive was always assumed to flush it to the platters,\nassuming the drive's cache is set to write-through.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n PG East: http://www.enterprisedb.com/community/nav-pg-east-2010.do\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Mon, 22 Feb 2010 09:37:13 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Ron Mayer wrote:\n> Bruce Momjian wrote:\n> > Agreed, thought I thought the problem was that SSDs lie about their\n> > cache flush like SATA drives do, or is there something I am missing?\n> \n> There's exactly one case I can find[1] where this century's IDE\n> drives lied more than any other drive with a cache:\n> \n> Under 120GB Maxtor drives from late 2003 to early 2004.\n> \n> and it's apparently been worked around for years.\n> \n> Those drives claimed to support the \"FLUSH_CACHE_EXT\" feature (IDE\n> command 0xEA), but did not support sending 48-bit commands which\n> was needed to send the cache flushing command.\n> \n> And for that case a workaround for Linux was quickly identified by\n> checking for *both* the support for 48-bit commands and support for the\n> flush cache extension[2].\n> \n> \n> Beyond those 2004 drive + 2003 kernel systems, I think most the rest\n> of such reports have been various misfeatures in some of Linux's\n> filesystems (like EXT3 that only wants to send drives cache-flushing\n> commands when inode change[3]) and linux software raid misfeatures....\n> \n> ...and ISTM those would affect SSDs the same way they'd affect SATA drives.\n\nI think the point is not that drives lie about their write-back and\nwrite-through behavior, but rather that many SATA/IDE drives default to\nwrite-back, and not write-through, and many administrators an file\nsystems are not aware of this behavior.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n PG East: http://www.enterprisedb.com/community/nav-pg-east-2010.do\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Mon, 22 Feb 2010 09:39:55 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Bruce Momjian wrote:\n> Greg Smith wrote:\n>> .... If you have a regular SATA drive, it almost certainly \n>> supports proper cache flushing....\n> \n> OK, but I have a few questions. Is a write to the drive and a cache\n> flush command the same?\n\nI believe they're different as of ATAPI-6 from 2001.\n\n> Which file systems implement both?\n\nSeems ZFS and recent ext4 have thought these interactions out\nthoroughly. Find a slow ext4 that people complain about, and\nthat's the one doing it right :-).\n\nExt3 has some particularly odd annoyances where it flushes and waits\nfor certain writes (ones involving inode changes) but doesn't bother\nto flush others (just data changes). As far as I can tell, with\next3 you need userspace utilities to make sure flushes occur when\nyou need them. At one point I was tempted to try to put such\nuserspace hacks into postgres.\n\nI know less about other file systems. 
Apparently the NTFS guys\nare aware of such stuff - but don't know what kinds of fsync equivalent\nyou'd need to make it happen.\n\nAlso worth noting - Linux's software raid stuff (MD and LVM)\nneed to handle this right as well - and last I checked (sometime\nlast year) the default setups didn't.\n\n> I thought a\n> write to the drive was always assumed to flush it to the platters,\n> assuming the drive's cache is set to write-through.\n\nApparently somewhere around here:\nhttp://www.t10.org/t13/project/d1410r3a-ATA-ATAPI-6.pdf\nthey were separated in the IDE world.\n", "msg_date": "Mon, 22 Feb 2010 10:00:34 -0800", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Arjen van der Meijden wrote:\n> That's weird. Intel's SSD's didn't have a write cache afaik:\n> \"I asked Intel about this and it turns out that the DRAM on the Intel \n> drive isn't used for user data because of the risk of data loss, \n> instead it is used as memory by the Intel SATA/flash controller for \n> deciding exactly where to write data (I'm assuming for the wear \n> leveling/reliability algorithms).\"\n> http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3403&p=10\n\nRead further down:\n\n\"Despite the presence of the external DRAM, both the Intel controller \nand the JMicron rely on internal buffers to cache accesses to the \nSSD...Intel's controller has a 256KB SRAM on-die.\"\n\nThat's the problematic part: the Intel controllers have a volatile \n256KB write cache stored deep inside the SSD controller, and issuing a \nstandard SATA write cache flush command doesn't seem to clear it. Makes \nthe drives troublesome for database use.\n\n> I can understand a SSD might do unexpected things when it loses power \n> all of a sudden. It will probably try to group writes to fill a single \n> block (and those blocks vary in size but are normally way larger than \n> those of a normal spinning disk, they are values like 256 or 512KB) \n> and it might loose that \"waiting until a full block can be \n> written\"-data or perhaps it just couldn't complete a full block-write \n> due to the power failure.\n> Although that behavior isn't really what you want, it would be \n> incorrect to blame write caching for the behavior if the device \n> doesn't even have a write cache ;)\n\nIf you write data and that write call returns before the data hits disk, \nit's a write cache, period. And if that write cache loses its contents \nif power is lost, it's a volatile write cache that can cause database \ncorruption. The fact that the one on the Intel devices is very small, \nbasically just dealing with the block chunking behavior you describe, \ndoesn't change either of those facts.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Mon, 22 Feb 2010 20:04:35 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "On 02/22/2010 08:04 PM, Greg Smith wrote:\n> Arjen van der Meijden wrote:\n>> That's weird. 
Intel's SSD's didn't have a write cache afaik:\n>> \"I asked Intel about this and it turns out that the DRAM on the Intel \n>> drive isn't used for user data because of the risk of data loss, \n>> instead it is used as memory by the Intel SATA/flash controller for \n>> deciding exactly where to write data (I'm assuming for the wear \n>> leveling/reliability algorithms).\"\n>> http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3403&p=10\n>\n> Read further down:\n>\n> \"Despite the presence of the external DRAM, both the Intel controller \n> and the JMicron rely on internal buffers to cache accesses to the \n> SSD...Intel's controller has a 256KB SRAM on-die.\"\n>\n> That's the problematic part: the Intel controllers have a volatile \n> 256KB write cache stored deep inside the SSD controller, and issuing a \n> standard SATA write cache flush command doesn't seem to clear it. \n> Makes the drives troublesome for database use.\n\nI had read the above when posted, and then looked up SRAM. SRAM seems to \nsuggest it will hold the data even after power loss, but only for a \nperiod of time. As long as power can restore within a few minutes, it \nseemed like this would be ok?\n\n>> I can understand a SSD might do unexpected things when it loses power \n>> all of a sudden. It will probably try to group writes to fill a \n>> single block (and those blocks vary in size but are normally way \n>> larger than those of a normal spinning disk, they are values like 256 \n>> or 512KB) and it might loose that \"waiting until a full block can be \n>> written\"-data or perhaps it just couldn't complete a full block-write \n>> due to the power failure.\n>> Although that behavior isn't really what you want, it would be \n>> incorrect to blame write caching for the behavior if the device \n>> doesn't even have a write cache ;)\n>\n> If you write data and that write call returns before the data hits \n> disk, it's a write cache, period. And if that write cache loses its \n> contents if power is lost, it's a volatile write cache that can cause \n> database corruption. The fact that the one on the Intel devices is \n> very small, basically just dealing with the block chunking behavior \n> you describe, doesn't change either of those facts.\n>\n\nThe SRAM seems to suggest that it does not necessarily lose its contents \nif power is lost - it just doesn't say how long you have to plug it back \nin. Isn't this similar to a battery-backed cache or capacitor-backed cache?\n\nI'd love to have a better guarantee - but is SRAM really such a bad model?\n\nCheers,\nmark\n\n", "msg_date": "Mon, 22 Feb 2010 20:11:26 -0500", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Ron Mayer wrote:\n> I know less about other file systems. Apparently the NTFS guys\n> are aware of such stuff - but don't know what kinds of fsync equivalent\n> you'd need to make it happen.\n> \n\nIt's actually pretty straightforward--better than ext3. Windows with \nNTFS has been perfectly aware how to do write-through on drives that \nsupport it when you execute _commit for some time: \nhttp://msdn.microsoft.com/en-us/library/17618685(VS.80).aspx\n\nIf you switch the postgresql.conf setting to fsync_writethrough on \nWindows, it will execute _commit where it would execute fsync on other \nplatforms, and that pushes through the drive's caches as it should \n(unlike fsync in many cases). 
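\nConcretely that is just one line in postgresql.conf plus a quick check from \npsql afterwards (a minimal sketch; the fsync_writethrough value can only be \nselected on platforms that implement it):\n\n    # postgresql.conf\n    wal_sync_method = fsync_writethrough\n\n    -- then, from psql, confirm what is actually in effect\n    SHOW wal_sync_method;\n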
More about this at \nhttp://archives.postgresql.org/pgsql-hackers/2005-08/msg00227.php and \nhttp://www.westnet.com/~gsmith/content/postgresql/TuningPGWAL.htm (which \nalso covers OS X).\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Mon, 22 Feb 2010 20:14:47 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Mark Mielke wrote:\n> I had read the above when posted, and then looked up SRAM. SRAM seems \n> to suggest it will hold the data even after power loss, but only for a \n> period of time. As long as power can restore within a few minutes, it \n> seemed like this would be ok?\n\nThe normal type of RAM everyone uses is DRAM, which requires constrant \n\"refresh\" cycles to keep it working and is pretty power hungry as a \nresult. Power gone, data gone an instant later.\n\nThere is also Non-volatile SRAM that includes an integrated battery ( \nhttp://www.maxim-ic.com/quick_view2.cfm/qv_pk/2648 is a typical \nexample), and that sort of design can be used to build the sort of \nbattery-backed caches that RAID controllers provide. If Intel's drives \nwere built using a NV-SRAM implementation, I'd be a happy owner of one \ninstead of a constant critic of their drives.\n\nBut regular old SRAM is still completely volatile and loses its contents \nvery quickly after power fails. I doubt you'd even get minutes of time \nbefore it's gone. The ease at which data loss failures with these Intel \ndrives continue to be duplicated in the field says their design isn't \nanywhere near good enough to be considered non-volatile.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Mon, 22 Feb 2010 20:39:33 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "On Mon, Feb 22, 2010 at 6:39 PM, Greg Smith <[email protected]> wrote:\n> Mark Mielke wrote:\n>>\n>> I had read the above when posted, and then looked up SRAM. SRAM seems to\n>> suggest it will hold the data even after power loss, but only for a period\n>> of time. As long as power can restore within a few minutes, it seemed like\n>> this would be ok?\n>\n> The normal type of RAM everyone uses is DRAM, which requires constrant\n> \"refresh\" cycles to keep it working and is pretty power hungry as a result.\n>  Power gone, data gone an instant later.\n\nActually, oddly enough, per bit stored dram is much lower power usage\nthan sram, because it only has something like 2 transistors per bit,\nwhile sram needs something like 4 or 5 (it's been a couple decades\nsince I took the classes on each). Even with the constant refresh,\ndram has a lower power draw than sram.\n", "msg_date": "Mon, 22 Feb 2010 19:21:40 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "On Mon, Feb 22, 2010 at 7:21 PM, Scott Marlowe <[email protected]> wrote:\n> On Mon, Feb 22, 2010 at 6:39 PM, Greg Smith <[email protected]> wrote:\n>> Mark Mielke wrote:\n>>>\n>>> I had read the above when posted, and then looked up SRAM. SRAM seems to\n>>> suggest it will hold the data even after power loss, but only for a period\n>>> of time. 
As long as power can restore within a few minutes, it seemed like\n>>> this would be ok?\n>>\n>> The normal type of RAM everyone uses is DRAM, which requires constrant\n>> \"refresh\" cycles to keep it working and is pretty power hungry as a result.\n>>  Power gone, data gone an instant later.\n>\n> Actually, oddly enough, per bit stored dram is much lower power usage\n> than sram, because it only has something like 2 transistors per bit,\n> while sram needs something like 4 or 5 (it's been a couple decades\n> since I took the classes on each).  Even with the constant refresh,\n> dram has a lower power draw than sram.\n\nNote that's power draw per bit. dram is usually much more densely\npacked (it can be with fewer transistors per cell) so the individual\nchips for each may have similar power draws while the dram will be 10\ntimes as densely packed as the sram.\n", "msg_date": "Mon, 22 Feb 2010 19:22:56 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "On Mon, 22 Feb 2010, Ron Mayer wrote:\n\n>\n> Also worth noting - Linux's software raid stuff (MD and LVM)\n> need to handle this right as well - and last I checked (sometime\n> last year) the default setups didn't.\n>\n\nI think I saw some stuff in the last few months on this issue on the \nkernel mailing list. you may want to doublecheck this when 2.6.33 gets \nreleased (probably this week)\n\nDavid Lang\n", "msg_date": "Tue, 23 Feb 2010 00:23:25 -0800 (PST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "> Note that's power draw per bit. dram is usually much more densely\n> packed (it can be with fewer transistors per cell) so the individual\n> chips for each may have similar power draws while the dram will be 10\n> times as densely packed as the sram.\n\nDifferences between SRAM and DRAM :\n\n- price per byte (DRAM much cheaper)\n\n- silicon area per byte (DRAM much smaller)\n\n- random access latency\n SRAM = fast, uniform, and predictable, usually 0/1 cycles\n DRAM = \"a few\" up to \"a lot\" of cycles depending on chip type,\n which page/row/column you want to access, wether it's R or W,\n wether the page is already open, etc\n\nIn fact, DRAM is the new harddisk. SRAM is used mostly when low-latency is \nneeded (caches, etc).\n\n- ease of use :\n SRAM very easy to use : address, data, read, write, clock.\n SDRAM needs a smart controller.\n SRAM easier to instantiate on a silicon chip\n\n- power draw\n When used at high speeds, SRAM ist't power-saving at all, it's used for \nspeed.\n However when not used, the power draw is really negligible.\n\nWhile it is true that you can recover *some* data out of a SRAM/DRAM chip \nthat hasn't been powered for a few seconds, you can't really trust that \ndata. It's only a forensics tool.\n\nMost DRAM now (especially laptop DRAM) includes special power-saving modes \nwhich only keep the data retention logic (refresh, etc) powered, but not \nthe rest of the chip (internal caches, IO buffers, etc). Laptops, PDAs, \netc all use this feature in suspend-to-RAM mode. In this mode, the power \ndraw is higher than SRAM, but still pretty minimal, so a laptop can stay \nin suspend-to-RAM mode for days.\n\nAnyway, the SRAM vs DRAM isn't really relevant for the debate of SSD data \nintegrity. 
You can back up both with a small battery or ultra-cap.\n\nWhat is important too is that the entire SSD chipset must have been \ndesigned with this in mind : it must detect power loss, and correctly \nreact to it, and especially not reset itself or do funny stuff to the \nmemory when the power comes back. Which means at least some parts of the \nchipset must stay powered to keep their state.\n\nNow I wonder about something. SSDs use wear-leveling which means the \ninformation about which block was written where must be kept somewhere. \nWhich means this information must be updated. I wonder how crash-safe and \nhow atomic these updates are, in the face of a power loss. This is just \nlike a filesystem. You've been talking only about data, but the block \nlayout information (metadata) is subject to the same concerns. If the \ndrive says it's written, not only the data must have been written, but \nalso the information needed to locate that data...\n\nTherefore I think the yank-the-power-cord test should be done with random \nwrites happening on an aged and mostly-full SSD... and afterwards, I'd be \ninterested to know if not only the last txn really committed, but if some \nrandom parts of other stuff weren't \"wear-leveled\" into oblivion at the \npower loss...\n\n\n\n\n\n", "msg_date": "Tue, 23 Feb 2010 12:49:52 +0100", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "On Tue, Feb 23, 2010 at 6:49 AM, Pierre C <[email protected]> wrote:\n\n> Note that's power draw per bit. dram is usually much more densely\n>> packed (it can be with fewer transistors per cell) so the individual\n>> chips for each may have similar power draws while the dram will be 10\n>> times as densely packed as the sram.\n>>\n>\n> Differences between SRAM and DRAM :\n>\n> [lots of informative stuff]\n>\n\nI've been slowly reading the paper at\nhttp://people.redhat.com/drepper/cpumemory.pdf  which has a big section on\nSRAM vs DRAM with nice pretty pictures. While not strictly relevant it's been\nilluminating and I wanted to share.\n", "msg_date": "Tue, 23 Feb 2010 07:07:24 -0500", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "\nOn Feb 23, 2010, at 3:49 AM, Pierre C wrote:\n> Now I wonder about something. SSDs use wear-leveling which means the \n> information about which block was written where must be kept somewhere. \n> Which means this information must be updated. I wonder how crash-safe and \n> how atomic these updates are, in the face of a power loss. This is just \n> like a filesystem. You've been talking only about data, but the block \n> layout information (metadata) is subject to the same concerns. 
If the \n> drive says it's written, not only the data must have been written, but \n> also the information needed to locate that data...\n> \n> Therefore I think the yank-the-power-cord test should be done with random \n> writes happening on an aged and mostly-full SSD... and afterwards, I'd be \n> interested to know if not only the last txn really committed, but if some \n> random parts of other stuff weren't \"wear-leveled\" into oblivion at the \n> power loss...\n> \n\nA couple years ago I postulated that SSD's could do random writes fast if they remapped blocks. Microsoft's SSD whitepaper at the time hinted at this too.\nPersisting the remap data is not hard. It goes in the same location as the data, or a separate area that can be written to linearly.\n\nEach block may contain its LBA and a transaction ID or other atomic count. Or another block can have that info. When the SSD\npowers up, it can build its table of LBA > block by looking at that data and inverting it and keeping the highest transaction ID for duplicate LBA claims.\n\nAlthough SSD's have to ERASE data in a large block at a time (256K to 2M typically), they can write linearly to an erased block in much smaller chunks.\nThus, to commit a write, either:\nData, LBA tag, and txID in same block (may require oddly sized blocks).\nor\nData written to one block (not committed yet), then LBA tag and txID written elsewhere (which commits the write). Since it's all copy-on-write, partial writes can't happen.\nIf a block is being moved or compressed when power fails, data should never be lost since the old data still exists, the new version just didn't commit. But new data that is being written may not be committed yet in the case of a power failure unless other measures are taken.\n\n> \n> \n> \n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Tue, 23 Feb 2010 10:36:56 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "On Tue, 23 Feb 2010, [email protected] wrote:\n\n> On Mon, 22 Feb 2010, Ron Mayer wrote:\n>\n>> \n>> Also worth noting - Linux's software raid stuff (MD and LVM)\n>> need to handle this right as well - and last I checked (sometime\n>> last year) the default setups didn't.\n>> \n>\n> I think I saw some stuff in the last few months on this issue on the kernel \n> mailing list. you may want to doublecheck this when 2.6.33 gets released \n> (probably this week)\n\nto clarify further (after getting more sleep ;-)\n\nI believe that the linux software raid always did the right thing if you \ndid an fsync/fdatasync. however barriers that filesystems attempted to use \nto avoid the need for a hard fsync used to be silently ignored. I believe \nthese are now honored (in at least some configurations)\n\nHowever, one thing that you do not get protection against with software \nraid is the potential for the writes to hit some drives but not others. 
If \nthis happens the software raid cannot know what the correct contents of \nthe raid stripe are, and so you could loose everything in that stripe \n(including contents of other files that are not being modified that \nhappened to be in the wrong place on the array)\n\nIf you have critical data, you _really_ want to use a raid controller with \nbattery backup so that if you loose power you have a chance of eventually \ncompleting the write.\n\nDavid Lang\n", "msg_date": "Tue, 23 Feb 2010 11:35:46 -0800 (PST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "* [email protected] <[email protected]> [100223 15:05]:\n\n> However, one thing that you do not get protection against with software \n> raid is the potential for the writes to hit some drives but not others. \n> If this happens the software raid cannot know what the correct contents \n> of the raid stripe are, and so you could loose everything in that stripe \n> (including contents of other files that are not being modified that \n> happened to be in the wrong place on the array)\n\nThat's for stripe-based raid. Mirror sets like raid-1 should give you\neither the old data, or the new data, both acceptable responses since\nthe fsync/barreir hasn't \"completed\".\n\nOr have I missed another subtle interaction?\n\na.\n\n-- \nAidan Van Dyk Create like a god,\[email protected] command like a king,\nhttp://www.highrise.ca/ work like a slave.", "msg_date": "Tue, 23 Feb 2010 15:34:35 -0500", "msg_from": "Aidan Van Dyk <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "On Tue, 23 Feb 2010, Aidan Van Dyk wrote:\n\n> * [email protected] <[email protected]> [100223 15:05]:\n>\n>> However, one thing that you do not get protection against with software\n>> raid is the potential for the writes to hit some drives but not others.\n>> If this happens the software raid cannot know what the correct contents\n>> of the raid stripe are, and so you could loose everything in that stripe\n>> (including contents of other files that are not being modified that\n>> happened to be in the wrong place on the array)\n>\n> That's for stripe-based raid. Mirror sets like raid-1 should give you\n> either the old data, or the new data, both acceptable responses since\n> the fsync/barreir hasn't \"completed\".\n>\n> Or have I missed another subtle interaction?\n\none problem is that when the system comes back up and attempts to check \nthe raid array, it is not going to know which drive has valid data. I \ndon't know exactly what it does in that situation, but this type of error \nin other conditions causes the system to take the array offline.\n\nDavid Lang\n", "msg_date": "Tue, 23 Feb 2010 13:22:16 -0800 (PST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "On 02/23/2010 04:22 PM, [email protected] wrote:\n> On Tue, 23 Feb 2010, Aidan Van Dyk wrote:\n>\n>> * [email protected] <[email protected]> [100223 15:05]:\n>>\n>>> However, one thing that you do not get protection against with software\n>>> raid is the potential for the writes to hit some drives but not others.\n>>> If this happens the software raid cannot know what the correct contents\n>>> of the raid stripe are, and so you could loose everything in that \n>>> stripe\n>>> (including contents of other files that are not being modified that\n>>> happened to be in the wrong place on the array)\n>>\n>> That's for stripe-based raid. 
Mirror sets like raid-1 should give you\n>> either the old data, or the new data, both acceptable responses since\n>> the fsync/barreir hasn't \"completed\".\n>>\n>> Or have I missed another subtle interaction?\n>\n> one problem is that when the system comes back up and attempts to \n> check the raid array, it is not going to know which drive has valid \n> data. I don't know exactly what it does in that situation, but this \n> type of error in other conditions causes the system to take the array \n> offline.\n\nI think the real concern here is that depending on how the data is read \nlater - and depending on which disks it reads from - it could read \n*either* old or new, at any time in the future. I.e. it reads \"new\" from \ndisk 1 the first time, and then an hour later it reads \"old\" from disk 2.\n\nI think this concern might be invalid for a properly running system, \nthough. When a RAID array is not cleanly shut down, the RAID array \nshould run in \"degraded\" mode until it can be sure that the data is \nconsistent. In this case, it should pick one drive, and call it the \n\"live\" one, and then rebuild the other from the \"live\" one. Until it is \nre-built, it should only satisfy reads from the \"live\" one, or parts of \nthe \"rebuilding\" one that are known to be clean.\n\nI use mdadm software RAID, and all of me reading (including some of its \nsource code) and experience (shutting down the box uncleanly) tells me, \nit is working properly. In fact, the \"rebuild\" process can get quite \nANNOYING as the whole system becomes much slower during rebuild, and \nrebuild of large partitions can take hours to complete.\n\nFor mdadm, there is a not-so-well-known \"write-intent bitmap\" \ncapability. Once enabled, mdadm will embed a small bitmap (128 bits?) \ninto the partition, and each bit will indicate a section of the \npartition. Before writing to a section, it will mark that section as \ndirty using this bitmap. It will leave this bit set for some time after \nthe partition is \"clean\" (lazy clear). The effect of this, is that at \nany point in time, only certain sections of the drive are dirty, and on \nrecovery, it is a lot cheaper to only rebuild the dirty sections. It \nworks really well.\n\nSo, I don't think this has to be a problem. There are solutions, and any \nsolution that claims to be complete should offer these sorts of \ncapabilities.\n\nCheers,\nmark\n\n", "msg_date": "Tue, 23 Feb 2010 16:32:13 -0500", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "It's always possible to rebuild into a consistent configuration by assigning\na precedence order; for parity RAID, the data drives take precedence over\nparity drives, and for RAID-1 sets it assigns an arbitrary master.\n\nYou *should* never lose a whole stripe ... for example, RAID-5 updates do\n\"read old data / parity, write new data, write new parity\" ... there is no\nneed to touch any other data disks, so they will be preserved through the\nrebuild. 
Similarly, if only one block is being updated there is no need to\nupdate the entire stripe.\n\nDavid - what caused /dev/md to decide to take an array offline?\n\nCheers\nDave\n\nOn Tue, Feb 23, 2010 at 3:22 PM, <[email protected]> wrote:\n\n> On Tue, 23 Feb 2010, Aidan Van Dyk wrote:\n>\n> * [email protected] <[email protected]> [100223 15:05]:\n>>\n>> However, one thing that you do not get protection against with software\n>>> raid is the potential for the writes to hit some drives but not others.\n>>> If this happens the software raid cannot know what the correct contents\n>>> of the raid stripe are, and so you could loose everything in that stripe\n>>> (including contents of other files that are not being modified that\n>>> happened to be in the wrong place on the array)\n>>>\n>>\n>> That's for stripe-based raid.  Mirror sets like raid-1 should give you\n>> either the old data, or the new data, both acceptable responses since\n>> the fsync/barreir hasn't \"completed\".\n>>\n>> Or have I missed another subtle interaction?\n>>\n>\n> one problem is that when the system comes back up and attempts to check the\n> raid array, it is not going to know which drive has valid data. I don't know\n> exactly what it does in that situation, but this type of error in other\n> conditions causes the system to take the array offline.\n>\n>\n> David Lang\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Wed, 24 Feb 2010 02:32:40 -0600", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "I have added documentation about the ATAPI drive flush command, and the\ntypical SSD behavior.\n\n---------------------------------------------------------------------------\n\nGreg Smith wrote:\n> Ron Mayer wrote:\n> > Bruce Momjian wrote:\n> > \n> >> Agreed, thought I thought the problem was that SSDs lie about their\n> >> cache flush like SATA drives do, or is there something I am missing?\n> >> \n> >\n> > There's exactly one case I can find[1] where this century's IDE\n> > drives lied more than any other drive with a cache:\n> \n> Ron is correct that the problem of mainstream SATA drives accepting the \n> cache flush command but not actually doing anything with it is long gone \n> at this point. If you have a regular SATA drive, it almost certainly \n> supports proper cache flushing. And if your whole software/storage \n> stacks understands all that, you should not end up with corrupted data \n> just because there's a volative write cache in there.\n> \n> But the point of this whole testing exercise coming back into vogue \n> again is that SSDs have returned this negligent behavior to the \n> mainstream again. See \n> http://opensolaris.org/jive/thread.jspa?threadID=121424 for a discussion \n> of this in a ZFS context just last month. There are many documented \n> cases of Intel SSDs that will fake a cache flush, such that the only way \n> to get good reliable writes is to totally disable their writes \n> caches--at which point performance is so bad you might as well have \n> gotten a RAID10 setup instead (and longevity is toast too).\n> \n> This whole area remains a disaster area and extreme distrust of all the \n> SSD storage vendors is advisable at this point. Basically, if I don't \n> see the capacitor responsible for flushing outstanding writes, and get a \n> clear description from the manufacturer how the cached writes are going \n> to be handled in the event of a power failure, at this point I have to \n> assume the answer is \"badly and your data will be eaten\". And the \n> prices for SSDs that meet that requirement are still quite steep. I \n> keep hoping somebody will address this market at something lower than \n> the standard \"enterprise\" prices. The upcoming SandForce designs seem \n> to have thought this through correctly: \n> http://www.anandtech.com/storage/showdoc.aspx?i=3702&p=6 But the \n> product's not out to the general public yet (just like the Seagate units \n> that claim to have capacitor backups--I heard a rumor those are also \n> Sandforce designs actually, so they may be the only ones doing this \n> right and aiming at a lower price).\n> \n> -- \n> Greg Smith 2ndQuadrant US Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.us\n> \n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n PG East: http://www.enterprisedb.com/community/nav-pg-east-2010.do\n + If your life is a hard drive, Christ can be your backup. 
+", "msg_date": "Fri, 26 Feb 2010 20:40:18 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Bruce Momjian wrote:\n> I have added documentation about the ATAPI drive flush command, and the\n> typical SSD behavior.\n> \n\nIf one of us goes back into that section one day to edit again it might \nbe worth mentioning that FLUSH CACHE EXT is the actual ATAPI-6 command \nthat a drive needs to support properly. I wouldn't bother with another \ndoc edit commit just for that specific part though, pretty obscure.\n\nI find it kind of funny how many discussions run in parallel about even \nreally detailed technical implementation details around the world. For \nexample, doesn't \nhttp://www.mail-archive.com/[email protected]/msg30585.html \nlook exactly like the exchange between myself and Arjen the other day, \nreferencing the same AnandTech page?\n\nCould be worse; one of us could be the poor sap at \nhttp://opensolaris.org/jive/thread.jspa;jsessionid=41B679C30D136C059E1BB7C06CA7DCE0?messageID=397730 \nwho installed Windows XP, VirtualBox for Windows, an OpenSolaris VM \ninside of it, and then was shocked that cache flushes didn't make their \nway all the way through that chain and had his 10TB ZFS pool corrupted \nas a result. Hurray for virtualization!\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Sat, 27 Feb 2010 14:38:20 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Greg Smith wrote:\n> Bruce Momjian wrote:\n> > I have added documentation about the ATAPI drive flush command, and the\n> > typical SSD behavior.\n> > \n> \n> If one of us goes back into that section one day to edit again it might \n> be worth mentioning that FLUSH CACHE EXT is the actual ATAPI-6 command \n> that a drive needs to support properly. I wouldn't bother with another \n> doc edit commit just for that specific part though, pretty obscure.\n\nThat setting name was not easy to find so I added it to the\ndocumentation.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n PG East: http://www.enterprisedb.com/community/nav-pg-east-2010.do\n", "msg_date": "Sat, 27 Feb 2010 15:16:33 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Bruce Momjian wrote:\n> Greg Smith wrote:\n>> Bruce Momjian wrote:\n>>> I have added documentation about the ATAPI drive flush command, and the\n>> \n>> If one of us goes back into that section one day to edit again it might \n>> be worth mentioning that FLUSH CACHE EXT is the actual ATAPI-6 command \n>> that a drive needs to support properly. 
I wouldn't bother with another \n>> doc edit commit just for that specific part though, pretty obscure.\n> \n> That setting name was not easy to find so I added it to the\n> documentation.\n\nIf we're spelling out specific IDE commands, it might be worth\nnoting that the corresponding SCSI command is \"SYNCHRONIZE CACHE\"[1].\n\n\nLinux apparently sends FLUSH_CACHE commands to IDE drives in the\nexact sample places it sends SYNCHRONIZE CACHE commands to SCSI\ndrives[2].\n\nIt seems that the same file systems, SW raid layers,\nvirtualization platforms, and kernels that have a problem\nsending FLUSH CACHE commands to SATA drives have he same exact\nsame problems sending SYNCHRONIZE CACHE commands to SCSI drives.\nWith the exact same effect of not getting writes all the way\nthrough disk caches.\n\nNo?\n\n\n[1] http://linux.die.net/man/8/sg_sync\n[2] http://hardware.slashdot.org/comments.pl?sid=149349&cid=12519114\n", "msg_date": "Sat, 27 Feb 2010 20:05:16 -0800", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Ron Mayer wrote:\n> Linux apparently sends FLUSH_CACHE commands to IDE drives in the\n> exact sample places it sends SYNCHRONIZE CACHE commands to SCSI\n> drives[2].\n> [2] http://hardware.slashdot.org/comments.pl?sid=149349&cid=12519114\n> \n\nWell, that's old enough to not even be completely right anymore about \nSATA disks and kernels. It's FLUSH_CACHE_EXT that's been added to ATA-6 \nto do the right thing on modern drives and that gets used nowadays, and \nthat doesn't necessarily do so on most of the SSDs out there; all of \nwhich Bruce's recent doc additions now talk about correctly.\n\nThere's this one specific area we know about that the most popular \nsystems tend to get really wrong all the time; that's got the \nappropriate warning now with the right magic keywords that people can \nlook into it more if motivated. While it would be nice to get super \nthorough and document everything, I think there's already more docs in \nthere than this project would prefer to have to maintain in this area.\n\nAre we going to get into IDE, SATA, SCSI, SAS, FC, and iSCSI? If the \nidea is to be complete that's where this would go. I don't know that \nthe documentation needs to address every possible way every possible \nfilesystem can be flushed. \n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Sun, 28 Feb 2010 00:06:36 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Ron Mayer wrote:\n> Bruce Momjian wrote:\n> > Greg Smith wrote:\n> >> Bruce Momjian wrote:\n> >>> I have added documentation about the ATAPI drive flush command, and the\n> >> \n> >> If one of us goes back into that section one day to edit again it might \n> >> be worth mentioning that FLUSH CACHE EXT is the actual ATAPI-6 command \n> >> that a drive needs to support properly. 
I wouldn't bother with another \n> >> doc edit commit just for that specific part though, pretty obscure.\n> > \n> > That setting name was not easy to find so I added it to the\n> > documentation.\n> \n> If we're spelling out specific IDE commands, it might be worth\n> noting that the corresponding SCSI command is \"SYNCHRONIZE CACHE\"[1].\n> \n> \n> Linux apparently sends FLUSH_CACHE commands to IDE drives in the\n> exact sample places it sends SYNCHRONIZE CACHE commands to SCSI\n> drives[2].\n> \n> It seems that the same file systems, SW raid layers,\n> virtualization platforms, and kernels that have a problem\n> sending FLUSH CACHE commands to SATA drives have he same exact\n> same problems sending SYNCHRONIZE CACHE commands to SCSI drives.\n> With the exact same effect of not getting writes all the way\n> through disk caches.\n\nI always assumed SCSI disks had a write-through cache and therefore\ndidn't need a drive cache flush comment.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n PG East: http://www.enterprisedb.com/community/nav-pg-east-2010.do\n", "msg_date": "Mon, 1 Mar 2010 22:33:53 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Greg Smith wrote:\n> Ron Mayer wrote:\n> > Linux apparently sends FLUSH_CACHE commands to IDE drives in the\n> > exact sample places it sends SYNCHRONIZE CACHE commands to SCSI\n> > drives[2].\n> > [2] http://hardware.slashdot.org/comments.pl?sid=149349&cid=12519114\n> > \n> \n> Well, that's old enough to not even be completely right anymore about \n> SATA disks and kernels. It's FLUSH_CACHE_EXT that's been added to ATA-6 \n> to do the right thing on modern drives and that gets used nowadays, and \n> that doesn't necessarily do so on most of the SSDs out there; all of \n> which Bruce's recent doc additions now talk about correctly.\n> \n> There's this one specific area we know about that the most popular \n> systems tend to get really wrong all the time; that's got the \n> appropriate warning now with the right magic keywords that people can \n> look into it more if motivated. While it would be nice to get super \n> thorough and document everything, I think there's already more docs in \n> there than this project would prefer to have to maintain in this area.\n> \n> Are we going to get into IDE, SATA, SCSI, SAS, FC, and iSCSI? If the \n> idea is to be complete that's where this would go. I don't know that \n> the documentation needs to address every possible way every possible \n> filesystem can be flushed. 
\n\nThe bottom line is that the reason we have so much detailed\ndocumentation about this is that mostly only database folks care about\nsuch issues, so we end up having to research and document this\nourselves --- I don't see any alternatives.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n PG East: http://www.enterprisedb.com/community/nav-pg-east-2010.do\n", "msg_date": "Mon, 1 Mar 2010 22:34:57 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Bruce Momjian wrote:\n> I always assumed SCSI disks had a write-through cache and therefore\n> didn't need a drive cache flush comment.\n> \n\nThere's more detail on all this mess at \nhttp://wiki.postgresql.org/wiki/SCSI_vs._IDE/SATA_Disks and it includes \nthis perception, which I've recently come to believe isn't actually \ncorrect anymore. Like the IDE crowd, it looks like one day somebody \nsaid \"hey, we lose every write heavy benchmark badly because we only \nhave a write-through cache\", and that principle got lost along the \nwayside. What has been true, and I'm staring to think this is what \nwe've all been observing rather than a write-through cache, is that the \nproper cache flushing commands have been there in working form for so \nmuch longer that it's more likely your SCSI driver and drive do the \nright thing if the filesystem asks them to. SCSI SYNCHRONIZE CACHE has \na much longer and prouder history than IDE's FLUSH_CACHE and SATA's \nFLUSH_CACHE_EXT.\n\nIt's also worth noting that many current SAS drives, the current SCSI \nincarnation, are basically SATA drives with a bridge chipset stuck onto \nthem, or with just the interface board swapped out. This one reason why \ntop-end SAS capacities lag behind consumer SATA drives. They use the \nconsumers as beta testers to get the really fundamental firmware issues \nsorted out, and once things are stable they start stamping out the \nversion with the SAS interface instead. (Note that there's a parallel \nmanufacturing approach that makes much smaller SAS drives, the 2.5\" \nserver models or those at higher RPMs, that doesn't go through this \npath. Those are also the really expensive models, due to economy of \nscale issues). The idea that these would have fundamentally different \nwrite cache behavior doesn't really follow from that development model.\n\nAt this point, there are only two common differences between \"consumer\" \nand \"enterprise\" hard drives of the same size and RPM when there are \ndirectly matching ones:\n\n1) You might get SAS instead of SATA as the interface, which provides \nthe more mature command set I was talking about above--and therefore may \ngive you a sane write-back cache with proper flushing, which is all the \ndatabase really expects.\n\n2) The timeouts when there's a read/write problem are tuned down in the \nenterprise version, to be more compatible with RAID setups where you \nwant to push the drive off-line when this happens rather than presuming \nyou can fix it. Consumers would prefer that the drive spent a lot of \ntime doing heroics to try and save their sole copy of the apparently \nmissing data.\n\nYou might get a slightly higher grade of parts if you're lucky too; I \nwouldn't count on it though. 
That seems to be saved for the high RPM or \nsmaller size drives only.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Tue, 02 Mar 2010 01:13:29 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "\n> I always assumed SCSI disks had a write-through cache and therefore\n> didn't need a drive cache flush comment.\n\nMaximum performance can only be reached with a writeback cache so the \ndrive can reorder and cluster writes, according to the realtime position \nof the heads and platter rotation.\n\nThe problem is not the write cache itself, it is that, for your data to be \nsafe, the \"flush cache\" or \"barrier\" command must get all the way through \nthe application / filesystem to the hardware, going through a nondescript \nnumber of software/firmware/hardware layers, all of which may :\n\n- not specify if they honor or ignore flush/barrier commands, and which \nones\n- not specify if they will reordre writes ignoring barriers/flushes or not\n- have been written by people who are not aware of such issues\n- have been written by companies who are perfectly aware of such issues \nbut chose to ignore them to look good in benchmarks\n- have some incompatibilities that result in broken behaviour\n- have bugs\n\nAs far as I'm concerned, a configuration that doesn't properly respect the \ncommands needed for data integrity is broken.\n\nThe sad truth is that given a software/hardware IO stack, there's no way \nto be sure, and testing isn't easy, if at all possible to do. Some cache \nflushes might be ignored under some circumstances.\n\nFor this to change, you don't need a hardware change, but a mentality \nchange.\n\nFlash filesystem developers use flash simulators which measure wear \nleveling, etc.\n\nWe'd need a virtual box with a simulated virtual harddrive which is able \nto check this.\n\nWhat a mess.\n\n", "msg_date": "Tue, 02 Mar 2010 09:36:48 +0100", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" }, { "msg_contents": "Greg Smith wrote:\n> Bruce Momjian wrote:\n>> I always assumed SCSI disks had a write-through cache and therefore\n>> didn't need a drive cache flush comment.\n\nSome do. SCSI disks have write-back caches.\n\nSome have both(!) - a write-back cache but the user can explicitly\nsend write-through requests.\n\nMicrosoft explains it well (IMHO) here:\nhttp://msdn.microsoft.com/en-us/library/aa508863.aspx\n \"For example, suppose that the target is a SCSI device with\n a write-back cache. If the device supports write-through\n requests, the initiator can bypass the write cache by\n setting the force unit access (FUA) bit in the command\n descriptor block (CDB) of the write command.\"\n\n> this perception, which I've recently come to believe isn't actually\n> correct anymore. ... 
I'm staring to think this is what\n> we've all been observing rather than a write-through cache\n\nI think what we've been observing is that guys with SCSI drives\nare more likely to either\n (a) have battery-backed RAID controllers that insure writes succeed,\nor\n (b) have other decent RAID controllers that understand details\n like that FUA bit to send write-through requests even if\n a SCSI devices has a write-back cache.\n\nIn contrast, most guys with PATA drives are probably running\nsoftware RAID (if any) with a RAID stack (older LVM and MD)\nknown to lose the cache flushing commands.\n\n", "msg_date": "Wed, 03 Mar 2010 07:16:40 -0800", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD + RAID" } ]
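A minimal SQL-only sketch of the pull-the-plug test discussed in this thread
(the table name is made up; run the INSERT in a loop from a client, note every
id acknowledged as committed, cut power to the server mid-run, then compare
after the restart):

    CREATE TABLE flush_test (
        id    bigserial PRIMARY KEY,
        noted timestamptz NOT NULL DEFAULT now()
    );

    -- client loop: one INSERT per transaction, remember each returned id
    INSERT INTO flush_test DEFAULT VALUES RETURNING id;

    -- after the crash and restart: every acknowledged id must still exist
    SELECT max(id), count(*) FROM flush_test;

If ids that were acknowledged before the power cut are missing afterwards,
some layer between the database and the platters is dropping cache flushes,
which is exactly the failure mode described in the messages above.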
[ { "msg_contents": "\nHello\n\n I just finished implementing a \"search engine\" for my site and found \nts_headline extremely slow when used with a Polish tsearch \nconfiguration, while fast with English. All of it boils down to a simple \ntestcase, but first some background.\n\n I tested on 8.3.1 on G5/OSX 10.5.8 and Xeon/Gentoo AMD64-2008.0 \n(2.6.21), then switched both installations to 8.3.8 (both packages \ncompiled, but provided by the distro - port/emerge). The Polish \ndictionaries and config were created according to this article (it's in \nPolish, but the code is self-explanatory):\n\nhttp://www.depesz.com/index.php/2008/04/22/polish-tsearch-in-83-polski-tsearch-w-postgresie-83/\n\n Now for the testcase:\n\ntext = 'Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do \neiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad \nminim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip \nex ea commodo consequat. Duis aute irure dolor in reprehenderit in \nvoluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur \nsint occaecat cupidatat non proident, sunt in culpa qui officia deserunt \nmollit anim id est laborum.'\n\n# explain analyze select ts_headline('polish', text, \nplainto_tsquery('polish', 'foobar'));\n QUERY PLAN \n\n------------------------------------------------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=0) (actual time=6.407..6.470 \nrows=1 loops=1)\n Total runtime: 6.524 ms\n(2 rows)\n\n# explain analyze select ts_headline('english', text, \nplainto_tsquery('english', 'foobar'));\n QUERY PLAN \n\n------------------------------------------------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.861..0.895 \nrows=1 loops=1)\n Total runtime: 0.935 ms\n(2 rows)\n\n# explain analyze select ts_headline('simple', text, \nplainto_tsquery('simple', 'foobar'));\n QUERY PLAN \n\n------------------------------------------------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.627..0.660 \nrows=1 loops=1)\n Total runtime: 0.697 ms\n(2 rows)\n\n#\n\n As you can see, the results differ by an order of magnitude between \nPolish and English. While in this simple testcase it's a non-issue, in \nthe real world this translates into enormous overhead.\n\n One of the queries I ran testing my site's search function took \n1870ms. When I took that query and changed all ts_headline(foo) calls to \njust foo, the time dropped below 100ms. That's the difference between \nsomething completely unacceptable and something quite useful.\n\n I can post various details about the hardware, software and specific \nqueries, but the testcases speak for themselves. I'm sure you can easily \nreproduce my results.\n\n Hints would be very much appreciated, since I've already spent way \nmore time on this, than I could afford.\n\n\ncheers,\nWojciech Knapik\n\n\nPS. A few other details can be found here \nhttp://pastie.textmate.org/private/hqnqfnsfsknjyjlffzmog along with \nsnippets of my conversations in #postgresql that lead to this testcase. 
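\nThe real queries look roughly like this (simplified, and with made-up table \nand column names; the tsvector column is indexed, and the headline is built \nfrom the original text):\n\n    SELECT id,\n           ts_headline('polish', body, q) AS snippet\n    FROM posts, plainto_tsquery('polish', 'foobar') q\n    WHERE body_tsv @@ q\n    ORDER BY ts_rank(body_tsv, q) DESC\n    LIMIT 20;\n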
\nBig thanks to RhodiumToad for helping me with fts for the last couple \ndays ;]\n\n\n\n", "msg_date": "Sat, 14 Nov 2009 12:25:05 +0100", "msg_from": "Wojciech Knapik <[email protected]>", "msg_from_op": true, "msg_subject": "FTS performance with the Polish config" }, { "msg_contents": "On Sat, Nov 14, 2009 at 12:25:05PM +0100, Wojciech Knapik wrote:\n>\n> Hello\n>\n> I just finished implementing a \"search engine\" for my site and found \n> ts_headline extremely slow when used with a Polish tsearch configuration, \n> while fast with English. All of it boils down to a simple testcase, but \n> first some background.\n>\n> I tested on 8.3.1 on G5/OSX 10.5.8 and Xeon/Gentoo AMD64-2008.0 (2.6.21), \n> then switched both installations to 8.3.8 (both packages compiled, but \n> provided by the distro - port/emerge). The Polish dictionaries and config \n> were created according to this article (it's in Polish, but the code is \n> self-explanatory):\n>\n> http://www.depesz.com/index.php/2008/04/22/polish-tsearch-in-83-polski-tsearch-w-postgresie-83/\n>\n> Now for the testcase:\n>\n> text = 'Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do \n> eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad \n> minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex \n> ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate \n> velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat \n> cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id \n> est laborum.'\n>\n> # explain analyze select ts_headline('polish', text, \n> plainto_tsquery('polish', 'foobar'));\n> QUERY PLAN \n> ------------------------------------------------------------------------------------\n> Result (cost=0.00..0.01 rows=1 width=0) (actual time=6.407..6.470 rows=1 \n> loops=1)\n> Total runtime: 6.524 ms\n> (2 rows)\n>\n> # explain analyze select ts_headline('english', text, \n> plainto_tsquery('english', 'foobar'));\n> QUERY PLAN \n> ------------------------------------------------------------------------------------\n> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.861..0.895 rows=1 \n> loops=1)\n> Total runtime: 0.935 ms\n> (2 rows)\n>\n> # explain analyze select ts_headline('simple', text, \n> plainto_tsquery('simple', 'foobar'));\n> QUERY PLAN \n> ------------------------------------------------------------------------------------\n> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.627..0.660 rows=1 \n> loops=1)\n> Total runtime: 0.697 ms\n> (2 rows)\n>\n> #\n>\n> As you can see, the results differ by an order of magnitude between Polish \n> and English. While in this simple testcase it's a non-issue, in the real \n> world this translates into enormous overhead.\n>\n> One of the queries I ran testing my site's search function took 1870ms. \n> When I took that query and changed all ts_headline(foo) calls to just foo, \n> the time dropped below 100ms. That's the difference between something \n> completely unacceptable and something quite useful.\n>\n> I can post various details about the hardware, software and specific \n> queries, but the testcases speak for themselves. I'm sure you can easily \n> reproduce my results.\n>\n> Hints would be very much appreciated, since I've already spent way more \n> time on this, than I could afford.\n>\n>\n> cheers,\n> Wojciech Knapik\n>\n>\n> PS. 
A few other details can be found here \n> http://pastie.textmate.org/private/hqnqfnsfsknjyjlffzmog along with \n> snippets of my conversations in #postgresql that lead to this testcase. Big \n> thanks to RhodiumToad for helping me with fts for the last couple days ;]\n>\n\nHi,\n\nThe documentation for ts_headline() states:\n\nts_headline uses the original document, not a tsvector summary, so it can be slow\nand should be used with care. A typical mistake is to call ts_headline for every\nmatching document when only ten documents are to be shown. SQL subqueries can help;\nhere is an example:\n\nSELECT id, ts_headline(body, q), rank\nFROM (SELECT id, body, q, ts_rank_cd(ti, q) AS rank\n FROM apod, to_tsquery('stars') q\n WHERE ti @@ q\n ORDER BY rank DESC\n LIMIT 10) AS foo;\n\nIt looks like you have proven that behavior. I have not looked at the ts_headline\ncode, but it may also be slowed by the locale, so showing that it is faster for\nEnglish is not really saying much. Maybe there is a better algorithm that could\nbe used, but that would require code changes. It may be that you can change some\nof the parameters to speed it up. Good luck.\n\nRegards,\nKen\n", "msg_date": "Sat, 14 Nov 2009 09:58:02 -0600", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FTS performance with the Polish config" }, { "msg_contents": "Kenneth Marshall <[email protected]> writes:\n> On Sat, Nov 14, 2009 at 12:25:05PM +0100, Wojciech Knapik wrote:\n>> I just finished implementing a \"search engine\" for my site and found \n>> ts_headline extremely slow when used with a Polish tsearch configuration, \n>> while fast with English.\n\n> The documentation for ts_headline() states:\n> ts_headline uses the original document, not a tsvector summary, so it\n> can be slow and should be used with care.\n\nThat's true but the argument in the docs would apply just as well to\nenglish or any other config. So while Wojciech would be well advised\nto try to avoid making a lot of calls to ts_headline, it's still curious\nthat it's so much slower in polish than english. Could we see a\nself-contained test case?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 14 Nov 2009 12:07:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FTS performance with the Polish config " }, { "msg_contents": "2009/11/14 Tom Lane <[email protected]>:\n> Kenneth Marshall <[email protected]> writes:\n>> On Sat, Nov 14, 2009 at 12:25:05PM +0100, Wojciech Knapik wrote:\n>>> I just finished implementing a \"search engine\" for my site and found\n>>> ts_headline extremely slow when used with a Polish tsearch configuration,\n>>> while fast with English.\n>\n>> The documentation for ts_headline() states:\n>> ts_headline uses the original document, not a tsvector summary, so it\n>> can be slow and should be used with care.\n>\n> That's true but the argument in the docs would apply just as well to\n> english or any other config.  So while Wojciech would be well advised\n> to try to avoid making a lot of calls to ts_headline, it's still curious\n> that it's so much slower in polish than english.  Could we see a\n> self-contained test case?\n\nis it dictionary based or stem based?\n\nDictionary based FTS is very slow (first load). 
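[Editor's note: a quick way to see whether a configuration is ispell-dictionary based, and to pay the load cost once per connection up front, might be the following; the dictionary name polish_ispell follows the recipe linked earlier in the thread and is an assumption:

\dF+ polish                                 -- psql: shows which dictionaries each token type is mapped to
SELECT ts_lexize('polish_ispell', 'test');  -- touching the dictionary once forces it to load and be cached for the session
]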
Minimally czech FTS is slow.\n\nregards\nPavel Stehule\n\n>\n>                        regards, tom lane\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Sat, 14 Nov 2009 18:24:05 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FTS performance with the Polish config" }, { "msg_contents": "Yes, as stated original author use polish ispell dictionary.\nIspell dictionary is slow to load first time. In real life it should \nbe no problem.\n\nOleg\nOn Sat, 14 Nov 2009, Pavel Stehule wrote:\n\n> 2009/11/14 Tom Lane <[email protected]>:\n> > Kenneth Marshall <[email protected]> writes:\n> >> On Sat, Nov 14, 2009 at 12:25:05PM +0100, Wojciech Knapik wrote:\n> >>> I just finished implementing a \"search engine\" for my site and found\n> >>> ts_headline extremely slow when used with a Polish tsearch configuratio=\n> n,\n> >>> while fast with English.\n> >\n> >> The documentation for ts_headline() states:\n> >> ts_headline uses the original document, not a tsvector summary, so it\n> >> can be slow and should be used with care.\n> >\n> > That's true but the argument in the docs would apply just as well to\n> > english or any other config. =C2=A0So while Wojciech would be well advised\n> > to try to avoid making a lot of calls to ts_headline, it's still curious\n> > that it's so much slower in polish than english. =C2=A0Could we see a\n> > self-contained test case?\n> \n> is it dictionary based or stem based?\n> \n> Dictionary based FTS is very slow (first load). Minimally czech FTS is slow.\n> \n> regards\n> Pavel Stehule\n> \n> >\n> > =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=\n> =A0 =C2=A0regards, tom lane\n> >\n> > --\n> > Sent via pgsql-performance mailing list ([email protected])\n> > To make changes to your subscription:\n> > http://www.postgresql.org/mailpref/pgsql-performance\n> >\n> \n> --=20\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Sun, 15 Nov 2009 10:36:30 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FTS performance with the Polish config" }, { "msg_contents": "2009/11/15 Oleg Bartunov <[email protected]>:\n> Yes, as stated original author use polish ispell dictionary.\n> Ispell dictionary is slow to load first time. In real life it should be no\n> problem.\n>\n\nit is a problem. People who needs fast access uses english without\nczech. 
It drop some features, but it is significaly faster.\n\nPavel\n\n> Oleg\n> On Sat, 14 Nov 2009, Pavel Stehule wrote:\n>\n>> 2009/11/14 Tom Lane <[email protected]>:\n>> > Kenneth Marshall <[email protected]> writes:\n>> >> On Sat, Nov 14, 2009 at 12:25:05PM +0100, Wojciech Knapik wrote:\n>> >>> I just finished implementing a \"search engine\" for my site and found\n>> >>> ts_headline extremely slow when used with a Polish tsearch\n>> >>> configuratio=\n>> n,\n>> >>> while fast with English.\n>> >\n>> >> The documentation for ts_headline() states:\n>> >> ts_headline uses the original document, not a tsvector summary, so it\n>> >> can be slow and should be used with care.\n>> >\n>> > That's true but the argument in the docs would apply just as well to\n>> > english or any other config. =C2=A0So while Wojciech would be well\n>> > advised\n>> > to try to avoid making a lot of calls to ts_headline, it's still curious\n>> > that it's so much slower in polish than english. =C2=A0Could we see a\n>> > self-contained test case?\n>>\n>> is it dictionary based or stem based?\n>>\n>> Dictionary based FTS is very slow (first load). Minimally czech FTS is\n>> slow.\n>>\n>> regards\n>> Pavel Stehule\n>>\n>> >\n>> > =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0\n>> > =C2=\n>> =A0 =C2=A0regards, tom lane\n>> >\n>> > --\n>> > Sent via pgsql-performance mailing list\n>> > ([email protected])\n>> > To make changes to your subscription:\n>> > http://www.postgresql.org/mailpref/pgsql-performance\n>> >\n>>\n>> --=20\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n>        Regards,\n>                Oleg\n> _____________________________________________________________\n> Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\n> Sternberg Astronomical Institute, Moscow University, Russia\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(495)939-16-83, +007(495)939-23-83\n>\n", "msg_date": "Sun, 15 Nov 2009 08:42:36 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FTS performance with the Polish config" }, { "msg_contents": "On Sun, 15 Nov 2009, Pavel Stehule wrote:\n\n> 2009/11/15 Oleg Bartunov <[email protected]>:\n>> Yes, as stated original author use polish ispell dictionary.\n>> Ispell dictionary is slow to load first time. In real life it should be no\n>> problem.\n>>\n>\n> it is a problem. People who needs fast access uses english without\n> czech. 
It drop some features, but it is significaly faster.\n\njust don't use ispell dictionary, czech snowball stemmer is as fast as \nenglish.\n\nIspell dictionary (doesn't matter english, or other language) is slow for the \nfirst load and then it caches, so there is no problem if use persistent \ndatabase connection, which is de facto standard for any serious projects.\n\n>\n> Pavel\n>\n>> Oleg\n>> On Sat, 14 Nov 2009, Pavel Stehule wrote:\n>>\n>>> 2009/11/14 Tom Lane <[email protected]>:\n>>>> Kenneth Marshall <[email protected]> writes:\n>>>>> On Sat, Nov 14, 2009 at 12:25:05PM +0100, Wojciech Knapik wrote:\n>>>>>> I just finished implementing a \"search engine\" for my site and found\n>>>>>> ts_headline extremely slow when used with a Polish tsearch\n>>>>>> configuratio=\n>>> n,\n>>>>>> while fast with English.\n>>>>\n>>>>> The documentation for ts_headline() states:\n>>>>> ts_headline uses the original document, not a tsvector summary, so it\n>>>>> can be slow and should be used with care.\n>>>>\n>>>> That's true but the argument in the docs would apply just as well to\n>>>> english or any other config. =C2=A0So while Wojciech would be well\n>>>> advised\n>>>> to try to avoid making a lot of calls to ts_headline, it's still curious\n>>>> that it's so much slower in polish than english. =C2=A0Could we see a\n>>>> self-contained test case?\n>>>\n>>> is it dictionary based or stem based?\n>>>\n>>> Dictionary based FTS is very slow (first load). Minimally czech FTS is\n>>> slow.\n>>>\n>>> regards\n>>> Pavel Stehule\n>>>\n>>>>\n>>>> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0\n>>>> =C2=\n>>> =A0 =C2=A0regards, tom lane\n>>>>\n>>>> --\n>>>> Sent via pgsql-performance mailing list\n>>>> ([email protected])\n>>>> To make changes to your subscription:\n>>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>>>\n>>>\n>>> --=20\n>>> Sent via pgsql-performance mailing list ([email protected])\n>>> To make changes to your subscription:\n>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>>\n>>\n>>        Regards,\n>>                Oleg\n>> _____________________________________________________________\n>> Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\n>> Sternberg Astronomical Institute, Moscow University, Russia\n>> Internet: [email protected], http://www.sai.msu.su/~megera/\n>> phone: +007(495)939-16-83, +007(495)939-23-83\n>>\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83", "msg_date": "Sun, 15 Nov 2009 12:05:07 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FTS performance with the Polish config" }, { "msg_contents": "2009/11/15 Oleg Bartunov <[email protected]>:\n> On Sun, 15 Nov 2009, Pavel Stehule wrote:\n>\n>> 2009/11/15 Oleg Bartunov <[email protected]>:\n>>>\n>>> Yes, as stated original author use polish ispell dictionary.\n>>> Ispell dictionary is slow to load first time. In real life it should be\n>>> no\n>>> problem.\n>>>\n>>\n>> it is a problem. People who needs fast access uses english without\n>> czech. 
It drop some features, but it is significaly faster.\n>\n> just don't use ispell dictionary, czech snowball stemmer is as fast as\n> english.\n\nczech stemmer doesn't exist :(\n\n>\n> Ispell dictionary (doesn't matter english, or other language) is slow for\n> the first load and then it caches, so there is no problem if use persistent\n> database connection, which is de facto standard for any serious projects.\n>\n\nI agree so connection pooling should be a solution. But it is good?\nCannot we share dictionary better?\n\n>>\n>> Pavel\n>>\n>>> Oleg\n>>> On Sat, 14 Nov 2009, Pavel Stehule wrote:\n>>>\n>>>> 2009/11/14 Tom Lane <[email protected]>:\n>>>>>\n>>>>> Kenneth Marshall <[email protected]> writes:\n>>>>>>\n>>>>>> On Sat, Nov 14, 2009 at 12:25:05PM +0100, Wojciech Knapik wrote:\n>>>>>>>\n>>>>>>> I just finished implementing a \"search engine\" for my site and found\n>>>>>>> ts_headline extremely slow when used with a Polish tsearch\n>>>>>>> configuratio=\n>>>>\n>>>> n,\n>>>>>>>\n>>>>>>> while fast with English.\n>>>>>\n>>>>>> The documentation for ts_headline() states:\n>>>>>> ts_headline uses the original document, not a tsvector summary, so it\n>>>>>> can be slow and should be used with care.\n>>>>>\n>>>>> That's true but the argument in the docs would apply just as well to\n>>>>> english or any other config. =C2=A0So while Wojciech would be well\n>>>>> advised\n>>>>> to try to avoid making a lot of calls to ts_headline, it's still\n>>>>> curious\n>>>>> that it's so much slower in polish than english. =C2=A0Could we see a\n>>>>> self-contained test case?\n>>>>\n>>>> is it dictionary based or stem based?\n>>>>\n>>>> Dictionary based FTS is very slow (first load). Minimally czech FTS is\n>>>> slow.\n>>>>\n>>>> regards\n>>>> Pavel Stehule\n>>>>\n>>>>>\n>>>>> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0\n>>>>> =C2=\n>>>>\n>>>> =A0 =C2=A0regards, tom lane\n>>>>>\n>>>>> --\n>>>>> Sent via pgsql-performance mailing list\n>>>>> ([email protected])\n>>>>> To make changes to your subscription:\n>>>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>>>>\n>>>>\n>>>> --=20\n>>>> Sent via pgsql-performance mailing list\n>>>> ([email protected])\n>>>> To make changes to your subscription:\n>>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>>>\n>>>\n>>>        Regards,\n>>>                Oleg\n>>> _____________________________________________________________\n>>> Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\n>>> Sternberg Astronomical Institute, Moscow University, Russia\n>>> Internet: [email protected], http://www.sai.msu.su/~megera/\n>>> phone: +007(495)939-16-83, +007(495)939-23-83\n>>>\n>>\n>\n>        Regards,\n>                Oleg\n> _____________________________________________________________\n> Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\n> Sternberg Astronomical Institute, Moscow University, Russia\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Sun, 15 Nov 2009 10:15:05 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FTS performance with the Polish config" }, { "msg_contents": "On Sun, 15 Nov 2009, Pavel Stehule wrote:\n\n>\n> czech stemmer doesn't exist :(\n>\n\nI'd try morfessor http://www.cis.hut.fi/projects/morpho/, which is \nunsupervised morphological dictionary. 
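[Editor's note: for background on the template/dictionary split discussed in this sub-thread: a dictionary is registered against an existing template and then mapped into a configuration, roughly as below. Names are assumptions, modeled on the ispell-based Polish setup linked earlier, and the .dict/.affix/stopword files must already be installed under the server's tsearch_data directory:

CREATE TEXT SEARCH DICTIONARY polish_ispell (
    TEMPLATE  = ispell,
    DictFile  = polish,
    AffFile   = polish,
    StopWords = polish
);
ALTER TEXT SEARCH CONFIGURATION polish
    ALTER MAPPING FOR asciiword, word WITH polish_ispell, simple;

A genuinely new template, such as one wrapping morfessor, would additionally need C-level init/lexize functions before it could be named in TEMPLATE.]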
I think it'd be not very hard to add\nmorfessor dictionary template to tsearch2, so people could create their\nown stemmers.\n\n>>\n>> Ispell dictionary (doesn't matter english, or other language) is slow for\n>> the first load and then it caches, so there is no problem if use persistent\n>> database connection, which is de facto standard for any serious projects.\n>>\n>\n> I agree so connection pooling should be a solution. But it is good?\n> Cannot we share dictionary better?\n\nWe thought about this issue and got some idea. Teodor can be more clear here,\nsince I don't remember all details.\n\n\n>\n>>>\n>>> Pavel\n>>>\n>>>> Oleg\n>>>> On Sat, 14 Nov 2009, Pavel Stehule wrote:\n>>>>\n>>>>> 2009/11/14 Tom Lane <[email protected]>:\n>>>>>>\n>>>>>> Kenneth Marshall <[email protected]> writes:\n>>>>>>>\n>>>>>>> On Sat, Nov 14, 2009 at 12:25:05PM +0100, Wojciech Knapik wrote:\n>>>>>>>>\n>>>>>>>> I just finished implementing a \"search engine\" for my site and found\n>>>>>>>> ts_headline extremely slow when used with a Polish tsearch\n>>>>>>>> configuratio=\n>>>>>\n>>>>> n,\n>>>>>>>>\n>>>>>>>> while fast with English.\n>>>>>>\n>>>>>>> The documentation for ts_headline() states:\n>>>>>>> ts_headline uses the original document, not a tsvector summary, so it\n>>>>>>> can be slow and should be used with care.\n>>>>>>\n>>>>>> That's true but the argument in the docs would apply just as well to\n>>>>>> english or any other config. =C2=A0So while Wojciech would be well\n>>>>>> advised\n>>>>>> to try to avoid making a lot of calls to ts_headline, it's still\n>>>>>> curious\n>>>>>> that it's so much slower in polish than english. =C2=A0Could we see a\n>>>>>> self-contained test case?\n>>>>>\n>>>>> is it dictionary based or stem based?\n>>>>>\n>>>>> Dictionary based FTS is very slow (first load). 
Minimally czech FTS is\n>>>>> slow.\n>>>>>\n>>>>> regards\n>>>>> Pavel Stehule\n>>>>>\n>>>>>>\n>>>>>> =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0\n>>>>>> =C2=\n>>>>>\n>>>>> =A0 =C2=A0regards, tom lane\n>>>>>>\n>>>>>> --\n>>>>>> Sent via pgsql-performance mailing list\n>>>>>> ([email protected])\n>>>>>> To make changes to your subscription:\n>>>>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>>>>>\n>>>>>\n>>>>> --=20\n>>>>> Sent via pgsql-performance mailing list\n>>>>> ([email protected])\n>>>>> To make changes to your subscription:\n>>>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>>>>\n>>>>\n>>>>        Regards,\n>>>>                Oleg\n>>>> _____________________________________________________________\n>>>> Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\n>>>> Sternberg Astronomical Institute, Moscow University, Russia\n>>>> Internet: [email protected], http://www.sai.msu.su/~megera/\n>>>> phone: +007(495)939-16-83, +007(495)939-23-83\n>>>>\n>>>\n>>\n>>        Regards,\n>>                Oleg\n>> _____________________________________________________________\n>> Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\n>> Sternberg Astronomical Institute, Moscow University, Russia\n>> Internet: [email protected], http://www.sai.msu.su/~megera/\n>> phone: +007(495)939-16-83, +007(495)939-23-83\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83", "msg_date": "Sun, 15 Nov 2009 17:06:42 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FTS performance with the Polish config" } ]
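[Editor's note, closing this thread: whatever the per-call cost of a given configuration, the mitigation quoted from the documentation above still applies — compute ts_headline only for the handful of rows actually displayed, never for every match. Adapted to a setup like the reporter's, with assumed table and column names (posts, body, and a precomputed tsvector column body_tsv), a sketch would be:

SELECT id, ts_headline('polish', body, q) AS snippet
FROM (SELECT id, body, q
      FROM posts, plainto_tsquery('polish', 'foobar') q
      WHERE body_tsv @@ q
      ORDER BY ts_rank_cd(body_tsv, q) DESC
      LIMIT 10) AS sub;

This keeps the number of ts_headline calls bounded by the page size regardless of how many rows match.]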
[ { "msg_contents": "Hi, everyone.\n\nBetween postres docs, forum posts, previous similar questions answered and\nrandom blogs, I've read as much as I could about why others have had similar\nproblems in the past before turning to you guys for help, so I really hope\nthis is not some completely obvious oversight on my part (I am fairly new to\nDB programming after all).\n\nSo, my postgres version is: PostgreSQL 8.4.1, compiled by Visual C++ build\n1400, 32-bit\n\nThe table used in this query is called \"users\", and it has columns \"userid\"\n(primary key) and \"location\".\nThe \"location\" column is indexed.\nThe users table has 1 million rows, and all rows have integer typed value\n'-1' for \"location\" column, except for 2 rows that have the integer value\n'76543'.\n\nI've attached a file with SQL commands that will setup this condition.\n\nThen I run statement A\nSELECT userid FROM users, (VALUES (76543)) val (id) WHERE location = val.id;\n\nand the correct 2 results are returned, but after much more time than I\nwould expect, since the location column is indexed.\nI know that if all I wanted was the results from this specific query I could\nsimply do statement B\n\nSELECT userid FROM users WHERE location = 76543;\n\nand that works 100% of the time, at the speed that I would expect it to.\nHowever, the example I'm giving here is a simplification of significantly\nmore complex statements that involve more joins and such, where I'm trying\nto minimize round trips to database, and I've narrowed things down to the\npoint where I think that if I can figure out how to make something like\nstatement A perform well, then the overall performance problem will be\npretty easy to fix.\n\nSo, when I run explain analyze for statement A I get the following:\n\n Nested Loop (cost=0.00..27906.01 rows=1000000 width=8) (actual\ntime=15.670..5411.503 rows=2 loops=1)\n Join Filter: (users.location = \"*VALUES*\".column1)\n -> Values Scan on \"*VALUES*\" (cost=0.00..0.01 rows=1 width=4) (actual\ntime=0.005..0.007 rows=1 loops=1)\n -> Seq Scan on users (cost=0.00..15406.00 rows=1000000 width=12)\n(actual time=0.028..2903.398 rows=1000000 loops=1)\n Total runtime: 5411.620 ms\n(5 rows)\n\nNote that I did run VACUUM ANALYZE before running EXPLAIN ANALYZE.\n\nI was curious about why the index wasn't being used so I forced it to be\nused through \"SET enable_seqscan TO off\", and then saw the following EXPLAIN\nANALYZE output:\n\n Nested Loop (cost=0.00..43889.37 rows=1000000 width=8) (actual\ntime=5813.875..5813.897 rows=2 loops=1)\n Join Filter: (users.location = \"*VALUES*\".column1)\n -> Values Scan on \"*VALUES*\" (cost=0.00..0.01 rows=1 width=4) (actual\ntime=0.004..0.006 rows=1 loops=1)\n -> Index Scan using idx_users_location on users (cost=0.00..31389.36\nrows=1000000 width=12) (actual time=0.375..2967.249 rows=1000000 loops=1)\n Total runtime: 5814.029 ms\n\nSo, even when we use the index, the planner seems to force the query to scan\nthrough all rows, rather than stopping the scan once it can knows that there\nwill be no more rows returned (given the presence of the index).\n\nIf I use a ORDER BY clause to force the table scan to happen in descending\norder by location, then the SQL statement C performs wonderfully:\n\npostgres=# explain analyze SELECT userid FROM users2, (VALUES (76543)) val\n(id) WHERE location = val.id ORDER BY location DESC;\n\nBut that's only due to the specific values used in this example and wouldn't\nwork in general. 
If we ORDER_BY ascendingly, then the performance is still\nreally slow. So, basically the planner seems to always want to do a\nsequential scan of the entire index, without placing any restriction on the\nindex, and it may abort the full index scan early under ordered conditions,\nif the scan gets lucky.\n\nDo you guys have any idea why this is not working as I expect? What I hope\nto accomplish is to have a construct such as the table I labeled \"val\"\nobtained from a sub-select. Given the size of the pool from which I'm\nselecting these values, I very rarely expect the number of values in the\nsub-select results to exceed 10, so I was hoping that the database would try\nto do something like a bitmapped scan after restricting the user table to\nthe values in the small value table. Maybe it's not doing so given the\nlopsided distribution of location values in the users table, but I'm just\nnot sure.\n\nAny help is appreciated.\n\nThanks!\nEddy", "msg_date": "Sun, 15 Nov 2009 14:51:26 -0800", "msg_from": "Eddy Escardo-Raffo <[email protected]>", "msg_from_op": true, "msg_subject": "Unexpected sequential scan on an indexed column" }, { "msg_contents": "Eddy Escardo-Raffo <[email protected]> writes:\n> Do you guys have any idea why this is not working as I expect?\n\nDatatype issue maybe? When I try what seems to be the same case here\nI get the expected indexscan, so I'm thinking the problem is that\nthe comparison isn't indexable, which is a possibility if the location\ncolumn isn't actually integer.\n\nThe fact that it's estimating 1000000 rows out is also extremely\nsuspicious --- it might or might not get the exact \"2\" estimate,\nbut I'd sure expect it to know that the majority of rows don't match.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 15 Nov 2009 18:05:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexpected sequential scan on an indexed column " }, { "msg_contents": "Eddy Escardo-Raffo <[email protected]> writes:\n> The table used in this query is called \"users\", and it has columns \"userid\"\n> (primary key) and \"location\".\n> The \"location\" column is indexed.\n> The users table has 1 million rows, and all rows have integer typed value\n> '-1' for \"location\" column, except for 2 rows that have the integer value\n> '76543'.\n\nOh, after poking at it a bit more, I realize the problem: the planner\ndoesn't want to use an indexscan because it assumes there's a\nsignificant probability that the search will be for -1 (in which case\nthe indexscan would be slower than a seqscan, as indeed your results\nprove). Even though it could know in this particular case that the\ncomparison value isn't -1, I doubt that teaching it that would help your\nreal queries where it will probably be impossible to determine the\ncomparison values in advance.\n\nI would suggest considering using NULL rather than inventing a dummy\nvalue for unknown locations. The estimation heuristics will play a\nlot nicer with that choice.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 15 Nov 2009 18:33:56 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexpected sequential scan on an indexed column " }, { "msg_contents": "Thanks, Tom. 
I had discarded the possibility of data type mismatch already,\nwhich was your first guess, but was wondering if the lopsided distribution\nof location values would lead the planner to make a decision that is good on\naverage but bad for this particular query, as you point out in your second\nguess.\n\nI'll try populating the test users with a more evenly distributed location\nfield, which will be more realistic anyway, and see if that works out\nbetter.\n\nBTW, the -1 is not really a dummy value, but it's just a value that we have\nbeen using in tests for \"fake test location ID\". I just started performance\nmeasurement for my application and so far had measured performance with\nevery user being in the same default location and things seemed to be going\nwell, so I tried to switch a couple users to a different location and see\nwhat happened, and that made performance drop significantly.\n(even more detail: my queries also limit results to 10 approx, so DB quickly\nfound 10 rows that match location -1, but it took a while to discover there\nweren't more than 2 rows with the other value).\n\nThanks!\nEddy\n\nOn Sun, Nov 15, 2009 at 3:33 PM, Tom Lane <[email protected]> wrote:\n\n> Eddy Escardo-Raffo <[email protected]> writes:\n> > The table used in this query is called \"users\", and it has columns\n> \"userid\"\n> > (primary key) and \"location\".\n> > The \"location\" column is indexed.\n> > The users table has 1 million rows, and all rows have integer typed value\n> > '-1' for \"location\" column, except for 2 rows that have the integer\n> value\n> > '76543'.\n>\n> Oh, after poking at it a bit more, I realize the problem: the planner\n> doesn't want to use an indexscan because it assumes there's a\n> significant probability that the search will be for -1 (in which case\n> the indexscan would be slower than a seqscan, as indeed your results\n> prove). Even though it could know in this particular case that the\n> comparison value isn't -1, I doubt that teaching it that would help your\n> real queries where it will probably be impossible to determine the\n> comparison values in advance.\n>\n> I would suggest considering using NULL rather than inventing a dummy\n> value for unknown locations. The estimation heuristics will play a\n> lot nicer with that choice.\n>\n> regards, tom lane\n>\n\nThanks, Tom. I had discarded the possibility of data type mismatch already, which was your first guess, but was wondering if the lopsided distribution of location values would lead the planner to make a decision that is good on average but bad for this particular query, as you point out in your second guess.\n \nI'll try populating the test users with a more evenly distributed location field, which will be more realistic anyway, and see if that works out better.\n \nBTW, the -1 is not really a dummy value, but it's just a value that we have been using in tests for \"fake test location ID\". 
I just started performance measurement for my application and so far had measured performance with every user being in the same default location and things seemed to be going well, so I tried to switch a couple users to a different location and see what happened, and that made performance drop significantly.\n(even more detail: my queries also limit results to 10 approx, so DB quickly found 10 rows that match location -1, but it took a while to discover there weren't more than 2 rows with the other value).\n \nThanks!\nEddy\nOn Sun, Nov 15, 2009 at 3:33 PM, Tom Lane <[email protected]> wrote:\n\nEddy Escardo-Raffo <[email protected]> writes:\n> The table used in this query is called \"users\", and it has columns \"userid\"> (primary key) and \"location\".> The \"location\" column is indexed.\n> The users table has 1 million rows, and all rows have integer typed value> '-1' for  \"location\" column, except for 2 rows that have the integer value> '76543'.Oh, after poking at it a bit more, I realize the problem: the planner\ndoesn't want to use an indexscan because it assumes there's asignificant probability that the search will be for -1 (in which casethe indexscan would be slower than a seqscan, as indeed your resultsprove).  Even though it could know in this particular case that the\ncomparison value isn't -1, I doubt that teaching it that would help yourreal queries where it will probably be impossible to determine thecomparison values in advance.I would suggest considering using NULL rather than inventing a dummy\nvalue for unknown locations.  The estimation heuristics will play alot nicer with that choice.                       regards, tom lane", "msg_date": "Sun, 15 Nov 2009 15:59:31 -0800", "msg_from": "Eddy Escardo-Raffo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unexpected sequential scan on an indexed column" }, { "msg_contents": "Yeah, that was it. Thanks! I do have one more question at the bottom,\nthough, if anyone has enough time to read through my analysis\n\nIf I create the table as:\n\nCREATE TABLE users\n(\nuserid integer NOT NULL,\nlocation integer NOT NULL,\nCONSTRAINT pk_users PRIMARY KEY (userid)\n)\nWITH (\nOIDS=FALSE\n);\n\nCREATE INDEX idx_users_location\n ON users\n USING btree\n (location);\n\nINSERT INTO users (userid,location) SELECT GENERATE_SERIES(1,1000000) ,\nGENERATE_SERIES(1,1000000)/100000;\nUPDATE users SET location=76543 WHERE userid=100092;\nUPDATE users SET location=76543 WHERE userid=997000;\n\nSo, now we have 10 distinct location values, evenly distributed, one value\n(10) with only one row and one value (76543) with 2 rows. 
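[Editor's note: before comparing the plans that follow, it may be worth refreshing planner statistics and confirming the distribution just described; a small sanity check against the same table, as a sketch:

ANALYZE users;
SELECT location, count(*) AS n
FROM users
GROUP BY location
ORDER BY location;
]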
If, after this\nsetup I do *statement C:*\n\nexplain analyze SELECT userid FROM users, (VALUES (76543), (892), (10)) val\n(id) WHERE location = val.id;\n\n Nested Loop (cost=0.00..17277.21 rows=300000 width=4) (actual\ntime=0.023..0.06 rows=3 loops=1)\n -> Values Scan on \"*VALUES*\" (cost=0.00..0.04 rows=3 width=4) (actual\ntime0.002..0.004 rows=3 loops=1)\n -> Index Scan using idx_users_location on users (cost=0.00..4509.06\nrows=10000 width=8) (actual time=0.008..0.009 rows=1 loops=3)\n Index Cond: (users.location = \"*VALUES*\".column1)\n Total runtime: 0.078 ms\n(5 rows)\n\n*and if I do statement D:*\n\nexplain analyze SELECT userid FROM users WHERE location IN (VALUES (76543),\n(892), (10));\n Nested Loop (cost=0.05..17277.24 rows=300000 width=4) (actual\ntime=0.033..0.056 rows=3 loops=1)\n -> HashAggregate (cost=0.05..0.08 rows=3 width=4) (actual\ntime=0.012..0.015 rows=3 loops=1)\n -> Values Scan on \"*VALUES*\" (cost=0.00..0.04 rows=3 width=4)\n(actual time=0.002..0.004 rows=3 loops=1)\n -> Index Scan using idx_users_location on users (cost=0.00..4509.06\nrows=100000 width=8) (actual time=0.007..0.009 rows=1 loops=3)\n Index Cond: (users.location = \"*VALUES*\".column1)\n Total runtime: 0.094 ms\n(6 rows)\n\nWhere C has a slight edge over D (I ran them both about 5 times and it seems\nlike C is approx. 20% faster for this specific data set). So, I think this\nwill work pretty good. However, I'm still curious (for my own education) as\nto why something like the following has even more of an edge over the\nprevious two alternatives. *Statement E*:\n\nexplain analyze SELECT userid FROM users WHERE location IN (76543, 892, 10);\n\n Bitmap Heap Scan on users (cost=12.91..16.93 rows=1 width=4) (actual\ntime=0.035..0.038 rows=3 loops=1)\n Recheck Cond: (location = ANY ('{76543,892,10}'::integer[]))\n -> Bitmap Index Scan on idx_users_location (cost=0.00..12.91 rows=1\nwidth=0) (actual time=0.027..0.027 rows=3 loops=1)\n Index Cond: (location = ANY ('{76543,892,10}'::integer[]))\n Total runtime: 0.072 ms\n(5 rows)\n\nFor C, the planner estimated 10 thousand rows. For D, the planner estimated\n100 thousand rows, yet for E the planner estimated only 1 row, which is the\nclosest to reality. So, is there any way to specify a query that has a\nSUB-SELECT that returns a small set of values so that the planner treats it\nsimilar to how it treats statement E, or does statement E get its additional\nedge precisely from the fact that the restriction is defined by integer\nliterals? If so, I think it's ok, because it seems like statements C or D\nwill work well enough when the location distribution is realistic, but I'd\nlike to be educated for the future :)\n\nThanks again!\nEddy\n\nOn Sun, Nov 15, 2009 at 3:59 PM, Eddy Escardo-Raffo <[email protected]>wrote:\n\n> Thanks, Tom. I had discarded the possibility of data type mismatch already,\n> which was your first guess, but was wondering if the lopsided distribution\n> of location values would lead the planner to make a decision that is good on\n> average but bad for this particular query, as you point out in your second\n> guess.\n>\n> I'll try populating the test users with a more evenly distributed location\n> field, which will be more realistic anyway, and see if that works out\n> better.\n>\n> BTW, the -1 is not really a dummy value, but it's just a value that we have\n> been using in tests for \"fake test location ID\". 
I just started performance\n> measurement for my application and so far had measured performance with\n> every user being in the same default location and things seemed to be going\n> well, so I tried to switch a couple users to a different location and see\n> what happened, and that made performance drop significantly.\n> (even more detail: my queries also limit results to 10 approx, so DB\n> quickly found 10 rows that match location -1, but it took a while to\n> discover there weren't more than 2 rows with the other value).\n>\n> Thanks!\n> Eddy\n>\n> On Sun, Nov 15, 2009 at 3:33 PM, Tom Lane <[email protected]> wrote:\n>\n>> Eddy Escardo-Raffo <[email protected]> writes:\n>> > The table used in this query is called \"users\", and it has columns\n>> \"userid\"\n>> > (primary key) and \"location\".\n>> > The \"location\" column is indexed.\n>> > The users table has 1 million rows, and all rows have integer typed\n>> value\n>> > '-1' for \"location\" column, except for 2 rows that have the integer\n>> value\n>> > '76543'.\n>>\n>> Oh, after poking at it a bit more, I realize the problem: the planner\n>> doesn't want to use an indexscan because it assumes there's a\n>> significant probability that the search will be for -1 (in which case\n>> the indexscan would be slower than a seqscan, as indeed your results\n>> prove). Even though it could know in this particular case that the\n>> comparison value isn't -1, I doubt that teaching it that would help your\n>> real queries where it will probably be impossible to determine the\n>> comparison values in advance.\n>>\n>> I would suggest considering using NULL rather than inventing a dummy\n>> value for unknown locations. The estimation heuristics will play a\n>> lot nicer with that choice.\n>>\n>> regards, tom lane\n>>\n>\n>\n\nYeah, that was it. Thanks! I do have one more question at the bottom, though, if anyone has enough time to read through my analysis\n \nIf I create the table as:\n \nCREATE TABLE users(userid integer NOT NULL,location integer NOT NULL,CONSTRAINT pk_users PRIMARY KEY (userid))WITH (OIDS=FALSE);\n \nCREATE INDEX idx_users_location  ON users  USING btree  (location);\n \nINSERT INTO users (userid,location) SELECT GENERATE_SERIES(1,1000000) , GENERATE_SERIES(1,1000000)/100000;\nUPDATE users SET location=76543 WHERE userid=100092;UPDATE users SET location=76543 WHERE userid=997000;\n \nSo, now we have 10 distinct location values, evenly distributed, one value (10) with only one row and one value (76543) with 2 rows. 
If, after this setup I do statement C:\n \nexplain analyze SELECT userid FROM users, (VALUES (76543), (892), (10)) val (id) WHERE location = val.id;\n \n Nested Loop  (cost=0.00..17277.21 rows=300000 width=4) (actual time=0.023..0.06 rows=3 loops=1)   ->  Values Scan on \"*VALUES*\"  (cost=0.00..0.04 rows=3 width=4) (actual time0.002..0.004 rows=3 loops=1)\n   ->  Index Scan using idx_users_location on users  (cost=0.00..4509.06 rows=10000 width=8) (actual time=0.008..0.009 rows=1 loops=3)         Index Cond: (users.location = \"*VALUES*\".column1) Total runtime: 0.078 ms\n(5 rows)\n \nand if I do statement D:\n \nexplain analyze SELECT userid FROM users WHERE location IN (VALUES (76543), (892), (10));\n Nested Loop  (cost=0.05..17277.24 rows=300000 width=4) (actual time=0.033..0.056 rows=3 loops=1)   ->  HashAggregate  (cost=0.05..0.08 rows=3 width=4) (actual time=0.012..0.015 rows=3 loops=1)         ->  Values Scan on \"*VALUES*\"  (cost=0.00..0.04 rows=3 width=4) (actual time=0.002..0.004 rows=3 loops=1)\n   ->  Index Scan using idx_users_location on users  (cost=0.00..4509.06 rows=100000 width=8) (actual time=0.007..0.009 rows=1 loops=3)         Index Cond: (users.location = \"*VALUES*\".column1) Total runtime: 0.094 ms\n(6 rows)\n \nWhere C has a slight edge over D (I ran them both about 5 times and it seems like C is approx. 20% faster for this specific data set). So, I think this will work pretty good. However, I'm still curious (for my own education) as to why something like the following has even more of an edge over the previous two alternatives. Statement E:\n \nexplain analyze SELECT userid FROM users WHERE location IN (76543, 892, 10);\n \n Bitmap Heap Scan on users  (cost=12.91..16.93 rows=1 width=4) (actual time=0.035..0.038 rows=3 loops=1)   Recheck Cond: (location = ANY ('{76543,892,10}'::integer[]))   ->  Bitmap Index Scan on idx_users_location  (cost=0.00..12.91 rows=1 width=0) (actual time=0.027..0.027 rows=3 loops=1)\n         Index Cond: (location = ANY ('{76543,892,10}'::integer[])) Total runtime: 0.072 ms(5 rows)\n \nFor C, the planner estimated 10 thousand rows. For D, the planner estimated 100 thousand rows, yet for E the planner estimated only 1 row, which is the closest to reality. So, is there any way to specify a query that has a SUB-SELECT that returns a small set of values so that the planner treats it similar to how it treats statement E, or does statement E get its additional edge precisely from the fact that the restriction is defined by integer literals? If so, I think it's ok, because it seems like statements C or D will work well enough when the location distribution is realistic, but I'd like to be educated for the future :)\n \nThanks again!\nEddy\n \nOn Sun, Nov 15, 2009 at 3:59 PM, Eddy Escardo-Raffo <[email protected]> wrote:\n\nThanks, Tom. I had discarded the possibility of data type mismatch already, which was your first guess, but was wondering if the lopsided distribution of location values would lead the planner to make a decision that is good on average but bad for this particular query, as you point out in your second guess.\n \nI'll try populating the test users with a more evenly distributed location field, which will be more realistic anyway, and see if that works out better.\n \nBTW, the -1 is not really a dummy value, but it's just a value that we have been using in tests for \"fake test location ID\". 
I just started performance measurement for my application and so far had measured performance with every user being in the same default location and things seemed to be going well, so I tried to switch a couple users to a different location and see what happened, and that made performance drop significantly.\n(even more detail: my queries also limit results to 10 approx, so DB quickly found 10 rows that match location -1, but it took a while to discover there weren't more than 2 rows with the other value).\n \nThanks!\nEddy\n\n\n\nOn Sun, Nov 15, 2009 at 3:33 PM, Tom Lane <[email protected]> wrote:\n\nEddy Escardo-Raffo <[email protected]> writes:\n> The table used in this query is called \"users\", and it has columns \"userid\"> (primary key) and \"location\".> The \"location\" column is indexed.> The users table has 1 million rows, and all rows have integer typed value\n> '-1' for  \"location\" column, except for 2 rows that have the integer value> '76543'.Oh, after poking at it a bit more, I realize the problem: the plannerdoesn't want to use an indexscan because it assumes there's a\nsignificant probability that the search will be for -1 (in which casethe indexscan would be slower than a seqscan, as indeed your resultsprove).  Even though it could know in this particular case that thecomparison value isn't -1, I doubt that teaching it that would help your\nreal queries where it will probably be impossible to determine thecomparison values in advance.I would suggest considering using NULL rather than inventing a dummyvalue for unknown locations.  The estimation heuristics will play a\nlot nicer with that choice.                       regards, tom lane", "msg_date": "Sun, 15 Nov 2009 17:06:26 -0800", "msg_from": "Eddy Escardo-Raffo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unexpected sequential scan on an indexed column" }, { "msg_contents": "Eddy Escardo-Raffo <[email protected]> writes:\n> For C, the planner estimated 10 thousand rows. For D, the planner estimated\n> 100 thousand rows, yet for E the planner estimated only 1 row, which is the\n> closest to reality. So, is there any way to specify a query that has a\n> SUB-SELECT that returns a small set of values so that the planner treats it\n> similar to how it treats statement E, or does statement E get its additional\n> edge precisely from the fact that the restriction is defined by integer\n> literals?\n\nCurrently there is no attempt to look at the exact contents of a VALUES\nconstruct for planning purposes. For the examples you're showing it\nseems like the IN (list) notation is more compact and more widely used,\nso improving the VALUES alternative doesn't seem that exciting.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 15 Nov 2009 20:23:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexpected sequential scan on an indexed column " }, { "msg_contents": "OK, I think that after reading this\ndoc<http://www.postgresql.org/files/developer/optimizer.pdf> (which\nI hadn't encountered before) about the optimizer, something clicked in my\nbrain and I think I can answer my own question. 
I was basically thinking\nfrom my own perspective rather than from the query planner's perspective:\n- From my perspective I know that the subselect will return very few values,\nso naively I expected that the planner would be able to do a bitmap index\nscan with the small set of values returned, without needing to do a join\n(such as the nested loop join it ended up choosing).\n- However (and this is probably obvious to all of you), the query\nplanner doesn't really know for a fact that a sub-select will result in a\nsmall number of rows, so it guesses based on its statistics what the best\nkind of join would be. A 'bitmap index scan' is not one of the choices for a\njoin, I'm guessing because a 'nested loop join with inner index scan' is a\nmore generally applicable strategy that can get the same order of magnitude\nof performance in restriction cases that end up being as simple as an IN\n(list) restriction. However, there are more competing possibilities for\npicking an appropriate join strategy than for picking a strategy to apply an\nIN (list) restriction, so the planner may not pick the 'nested loop join\nwith inner index scan' if the ANALYZE statistics don't guide it that way,\neven if that would be the best strategy in the end.\nI guess the only way I can think of to make a generic planner that would\nhave performend well even in the lopsided statistics case is to create some\nplan nodes with contingency conditions. E.g.:\n\nPlan: Nested loop join with sequential scan\nAssumption: all table values are the same\nContingency plan: nested loop join with index scan\n\nThen, if the assumption for the plan is violated early enough while\nexecuting the plan, the query executor would abort that plan node execution\nand start over with the contingency plan.\n\nI guess implementing this kind of system in a generic way could get pretty\nhairy, and given my limited experience I don't know if the proportion of\nquery plans that would be improved by having these kinds of contingency\nplans is significant enough to warrant the cost of developing this system,\nbut I'm gathering that most query planners (including the postgres planner)\ndon't do this kind of contingency planning :)\n\nThanks!\nEddy\nOn Sun, Nov 15, 2009 at 5:46 PM, Eddy Escardo-Raffo <[email protected]>wrote:\n\n> I was using VALUES in my examples to more closely mirror the results of a\n> sub-select (I abstracted everything else away and noticed that even just\n> using VALUES syntax instead of a sub-select, the performance was bad). The\n> full statement I had that led me into this more narrow investigation in the\n> first place looks more like:\n>\n> explain analyze SELECT u.userid FROM users u, (SELECT locid FROM locations\n> WHERE ...) l WHERE u.location = l.locid LIMIT 10;\n>\n> Based on the investigation so far, it seems like this kind of statement\n> will perform well when the users.location distribution is not overwhelmingly\n> lopsided, but not otherwise. However, using the IN (list) notation with a\n> list of integer literals seems to perform well no matter what is the\n> specific distribution of values in the users.location column.\n>\n> I would like to understand why this is so, to help me write better queries\n> in the future.\n>\n> Thanks,\n> Eddy\n> On Sun, Nov 15, 2009 at 5:23 PM, Tom Lane <[email protected]> wrote:\n>\n>> Eddy Escardo-Raffo <[email protected]> writes:\n>> > For C, the planner estimated 10 thousand rows. 
For D, the planner\n>> estimated\n>> > 100 thousand rows, yet for E the planner estimated only 1 row, which is\n>> the\n>> > closest to reality. So, is there any way to specify a query that has a\n>> > SUB-SELECT that returns a small set of values so that the planner treats\n>> it\n>> > similar to how it treats statement E, or does statement E get its\n>> additional\n>> > edge precisely from the fact that the restriction is defined by integer\n>> > literals?\n>>\n>> Currently there is no attempt to look at the exact contents of a VALUES\n>> construct for planning purposes. For the examples you're showing it\n>> seems like the IN (list) notation is more compact and more widely used,\n>> so improving the VALUES alternative doesn't seem that exciting.\n>>\n>> regards, tom lane\n>>\n>\n>\n\nOK, I think that after reading this doc (which I hadn't encountered before) about the optimizer, something clicked in my brain and I think I can answer my own question. I was basically thinking from my own perspective rather than from the query planner's perspective:\n- From my perspective I know that the subselect will return very few values, so naively I expected that the planner would be able to do a bitmap index scan with the small set of values returned, without needing to do a join (such as the nested loop join it ended up choosing).\n- However (and this is probably obvious to all of you), the query planner doesn't really know for a fact that a sub-select will result in a small number of rows, so it guesses based on its statistics what the best kind of join would be. A 'bitmap index scan' is not one of the choices for a join, I'm guessing because a 'nested loop join with inner index scan' is a more generally applicable strategy that can get the same order of magnitude of performance in restriction cases that end up being as simple as an IN (list) restriction. However, there are more competing possibilities for picking an appropriate join strategy than for picking a strategy to apply an IN (list) restriction, so the planner may not pick the 'nested loop join with inner index scan' if the ANALYZE statistics don't guide it that way, even if that would be the best strategy in the end.\n\nI guess the only way I can think of to make a generic planner that would have performend well even in the lopsided statistics case is to create some plan nodes with contingency conditions. E.g.:\n \nPlan: Nested loop join with sequential scan\nAssumption: all table values are the same\nContingency plan: nested loop join with index scan\n \nThen, if the assumption for the plan is violated early enough while executing the plan, the query executor would abort that plan node execution and start over with the contingency plan.\n \nI guess implementing this kind of system in a generic way could get pretty hairy, and given my limited experience I don't know if the proportion of query plans that would be improved by having these kinds of contingency plans is significant enough to warrant the cost of developing this system, but I'm gathering that most query planners (including the postgres planner) don't do this kind of contingency planning :)\n \nThanks!\nEddy\nOn Sun, Nov 15, 2009 at 5:46 PM, Eddy Escardo-Raffo <[email protected]> wrote:\n\nI was using VALUES in my examples to more closely mirror the results of a sub-select (I abstracted everything else away and noticed that even just using VALUES syntax instead of a sub-select, the performance was bad). 
The full statement I had that led me into this more narrow investigation in the first place looks more like:\n explain analyze SELECT u.userid FROM users u, (SELECT locid FROM locations WHERE ...) l WHERE u.location = l.locid LIMIT 10; \n \nBased on the investigation so far, it seems like this kind of statement will perform well when the users.location distribution is not overwhelmingly lopsided, but not otherwise. However, using the IN (list) notation with a list of integer literals seems to perform well no matter what is the specific distribution of values in the users.location column.\n \nI would like to understand why this is so, to help me write better queries in the future.\n \nThanks,\nEddy\n\n\n\nOn Sun, Nov 15, 2009 at 5:23 PM, Tom Lane <[email protected]> wrote:\n\nEddy Escardo-Raffo <[email protected]> writes:\n> For C, the planner estimated 10 thousand rows. For D, the planner estimated> 100 thousand rows, yet for E the planner estimated only 1 row, which is the> closest to reality. So, is there any way to specify a query that has a\n> SUB-SELECT that returns a small set of values so that the planner treats it> similar to how it treats statement E, or does statement E get its additional> edge precisely from the fact that the restriction is defined by integer\n> literals?Currently there is no attempt to look at the exact contents of a VALUESconstruct for planning purposes.  For the examples you're showing itseems like the IN (list) notation is more compact and more widely used,\nso improving the VALUES alternative doesn't seem that exciting.                       regards, tom lane", "msg_date": "Mon, 16 Nov 2009 03:18:25 -0800", "msg_from": "Eddy Escardo-Raffo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unexpected sequential scan on an indexed column" }, { "msg_contents": "Hi Eddy\n\nPerhaps a slightly naive suggestion .... have you considered\nconverting the query to a small stored procedure ('function' in\nPostgres speak)? You can pull the location values, and then iterate\nover a query like this:\n\nselect userid from users where location=:x\n\nwhich is more-or-less guaranteed to use the index.\n\n\nI had a somewhat similar situation recently, where I was passing in a\nlist of id's (from outwith Postgres) and it would on occasion avoid\nthe index in favour of a full table scan .... I changed this to\niterate over the id's with separate queries (in Java, but using a\nfunction will achieve the same thing) and went from one 5 minute query\ndoing full table scan to a handful of queries doing sub-millisecond\ndirect index lookups.\n\nCheers\nDave\n", "msg_date": "Mon, 16 Nov 2009 11:44:29 -0600", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexpected sequential scan on an indexed column" }, { "msg_contents": "Yeah this kind of thing would probably work. Doing this in java with\nseparate queries would be easy to code but require multiple round trips.\nDoing it as a stored procedure would be nicer but I'd have to think a little\nmore about how to refactor the java code around the query to make this\nhappen. Thanks for the suggestion.\n\nEddy\n\nOn Mon, Nov 16, 2009 at 9:44 AM, Dave Crooke <[email protected]> wrote:\n\n> Hi Eddy\n>\n> Perhaps a slightly naive suggestion .... have you considered\n> converting the query to a small stored procedure ('function' in\n> Postgres speak)? 
You can pull the location values, and then iterate\n> over a query like this:\n>\n> select userid from users where location=:x\n>\n> which is more-or-less guaranteed to use the index.\n>\n>\n> I had a somewhat similar situation recently, where I was passing in a\n> list of id's (from outwith Postgres) and it would on occasion avoid\n> the index in favour of a full table scan .... I changed this to\n> iterate over the id's with separate queries (in Java, but using a\n> function will achieve the same thing) and went from one 5 minute query\n> doing full table scan to a handful of queries doing sub-millisecond\n> direct index lookups.\n>\n> Cheers\n> Dave\n>\n\nYeah this kind of thing would probably work. Doing this in java with separate queries would be easy to code but require multiple round trips. Doing it as a stored procedure would be nicer but I'd have to think a little more about how to refactor the java code around the query to make this happen. Thanks for the suggestion.\n \nEddy\nOn Mon, Nov 16, 2009 at 9:44 AM, Dave Crooke <[email protected]> wrote:\nHi EddyPerhaps a slightly naive suggestion .... have you consideredconverting the query to a small stored procedure ('function' in\nPostgres speak)? You can pull the location values, and then iterateover a query like this:select userid from users where location=:xwhich is more-or-less guaranteed to use the index.I had a somewhat similar situation recently, where I was passing in a\nlist of id's (from outwith Postgres) and it would on occasion avoidthe index in favour of a full table scan .... I changed this toiterate over the id's with separate queries (in Java, but using afunction will achieve the same thing) and went from one 5 minute query\ndoing full table scan to a handful of queries doing sub-milliseconddirect index lookups.CheersDave", "msg_date": "Mon, 16 Nov 2009 12:45:46 -0800", "msg_from": "Eddy Escardo-Raffo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unexpected sequential scan on an indexed column" }, { "msg_contents": "On Mon, Nov 16, 2009 at 12:45:46PM -0800, Eddy Escardo-Raffo wrote:\n> Yeah this kind of thing would probably work. Doing this in java with\n> separate queries would be easy to code but require multiple round trips.\n> Doing it as a stored procedure would be nicer but I'd have to think a little\n> more about how to refactor the java code around the query to make this\n> happen. Thanks for the suggestion.\n> \n> Eddy\n> \n\nHi Eddy,\n\nHere is a lookup wrapper that is used in DSPAM to work around\na similar problem. Maybe you can use it as a template for your\nfunction:\n\ncreate function lookup_tokens(integer,bigint[])\n returns setof dspam_token_data\n language plpgsql stable\n as '\ndeclare\n v_rec record;\nbegin\n for v_rec in select * from dspam_token_data\n where uid=$1\n and token in (select $2[i]\n from generate_series(array_lower($2,1),array_upper($2,1)) s(i))\n loop\n return next v_rec;\n end loop;\n return;\nend;';\n\nRegards,\nKen\n\n> On Mon, Nov 16, 2009 at 9:44 AM, Dave Crooke <[email protected]> wrote:\n> \n> > Hi Eddy\n> >\n> > Perhaps a slightly naive suggestion .... have you considered\n> > converting the query to a small stored procedure ('function' in\n> > Postgres speak)? 
You can pull the location values, and then iterate\n> > over a query like this:\n> >\n> > select userid from users where location=:x\n> >\n> > which is more-or-less guaranteed to use the index.\n> >\n> >\n> > I had a somewhat similar situation recently, where I was passing in a\n> > list of id's (from outwith Postgres) and it would on occasion avoid\n> > the index in favour of a full table scan .... I changed this to\n> > iterate over the id's with separate queries (in Java, but using a\n> > function will achieve the same thing) and went from one 5 minute query\n> > doing full table scan to a handful of queries doing sub-millisecond\n> > direct index lookups.\n> >\n> > Cheers\n> > Dave\n> >\n", "msg_date": "Mon, 16 Nov 2009 14:55:14 -0600", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexpected sequential scan on an indexed column" }, { "msg_contents": "This is incredibly helpful, Kenneth. I didn't know about the SETOF syntax at\nall. This could help minimize the amount of refactoring I need to do.\n\nThanks!\nEddy\n\nOn Mon, Nov 16, 2009 at 12:55 PM, Kenneth Marshall <[email protected]> wrote:\n\n> On Mon, Nov 16, 2009 at 12:45:46PM -0800, Eddy Escardo-Raffo wrote:\n> > Yeah this kind of thing would probably work. Doing this in java with\n> > separate queries would be easy to code but require multiple round trips.\n> > Doing it as a stored procedure would be nicer but I'd have to think a\n> little\n> > more about how to refactor the java code around the query to make this\n> > happen. Thanks for the suggestion.\n> >\n> > Eddy\n> >\n>\n> Hi Eddy,\n>\n> Here is a lookup wrapper that is used in DSPAM to work around\n> a similar problem. Maybe you can use it as a template for your\n> function:\n>\n> create function lookup_tokens(integer,bigint[])\n> returns setof dspam_token_data\n> language plpgsql stable\n> as '\n> declare\n> v_rec record;\n> begin\n> for v_rec in select * from dspam_token_data\n> where uid=$1\n> and token in (select $2[i]\n> from generate_series(array_lower($2,1),array_upper($2,1)) s(i))\n> loop\n> return next v_rec;\n> end loop;\n> return;\n> end;';\n>\n> Regards,\n> Ken\n>\n> > On Mon, Nov 16, 2009 at 9:44 AM, Dave Crooke <[email protected]> wrote:\n> >\n> > > Hi Eddy\n> > >\n> > > Perhaps a slightly naive suggestion .... have you considered\n> > > converting the query to a small stored procedure ('function' in\n> > > Postgres speak)? You can pull the location values, and then iterate\n> > > over a query like this:\n> > >\n> > > select userid from users where location=:x\n> > >\n> > > which is more-or-less guaranteed to use the index.\n> > >\n> > >\n> > > I had a somewhat similar situation recently, where I was passing in a\n> > > list of id's (from outwith Postgres) and it would on occasion avoid\n> > > the index in favour of a full table scan .... I changed this to\n> > > iterate over the id's with separate queries (in Java, but using a\n> > > function will achieve the same thing) and went from one 5 minute query\n> > > doing full table scan to a handful of queries doing sub-millisecond\n> > > direct index lookups.\n> > >\n> > > Cheers\n> > > Dave\n> > >\n>\n\nThis is incredibly helpful, Kenneth. I didn't know about the SETOF syntax at all. This could help minimize the amount of refactoring I need to do.\n \nThanks!\nEddy\nOn Mon, Nov 16, 2009 at 12:55 PM, Kenneth Marshall <[email protected]> wrote:\n\nOn Mon, Nov 16, 2009 at 12:45:46PM -0800, Eddy Escardo-Raffo wrote:> Yeah this kind of thing would probably work. 
Doing this in java with> separate queries would be easy to code but require multiple round trips.\n> Doing it as a stored procedure would be nicer but I'd have to think a little> more about how to refactor the java code around the query to make this> happen. Thanks for the suggestion.>> Eddy\n>Hi Eddy,Here is a lookup wrapper that is used in DSPAM to work arounda similar problem. Maybe you can use it as a template for yourfunction:create function lookup_tokens(integer,bigint[])\n returns setof dspam_token_data language plpgsql stable as 'declare v_rec record;begin for v_rec in select * from dspam_token_data   where uid=$1     and token in (select $2[i]       from generate_series(array_lower($2,1),array_upper($2,1)) s(i))\n loop   return next v_rec; end loop; return;end;';Regards,Ken\n\n\n> On Mon, Nov 16, 2009 at 9:44 AM, Dave Crooke <[email protected]> wrote:>> > Hi Eddy> >> > Perhaps a slightly naive suggestion .... have you considered\n> > converting the query to a small stored procedure ('function' in> > Postgres speak)? You can pull the location values, and then iterate> > over a query like this:> >> > select userid from users where location=:x\n> >> > which is more-or-less guaranteed to use the index.> >> >> > I had a somewhat similar situation recently, where I was passing in a> > list of id's (from outwith Postgres) and it would on occasion avoid\n> > the index in favour of a full table scan .... I changed this to> > iterate over the id's with separate queries (in Java, but using a> > function will achieve the same thing) and went from one 5 minute query\n> > doing full table scan to a handful of queries doing sub-millisecond> > direct index lookups.> >> > Cheers> > Dave> >", "msg_date": "Mon, 16 Nov 2009 13:51:06 -0800", "msg_from": "Eddy Escardo-Raffo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unexpected sequential scan on an indexed column" }, { "msg_contents": "With Postgres, you can transparently replace a regular select with a\nfunction that takes the same types and returns a record iterator with the\nsame columns. The only change needed is the SQL used to invoke it, you won't\nneed any logic changes in your app code (Java or whatever), e.g.\n\n*select ............ where x=:x ......(select ...... where ..... y=:y)\n*\nBecomes\n\n*select myfunction(:x, :y)\n*\nOn Mon, Nov 16, 2009 at 2:45 PM, Eddy Escardo-Raffo <[email protected]>wrote:\n\n> Yeah this kind of thing would probably work. Doing this in java with\n> separate queries would be easy to code but require multiple round trips.\n> Doing it as a stored procedure would be nicer but I'd have to think a little\n> more about how to refactor the java code around the query to make this\n> happen. Thanks for the suggestion.\n>\n> Eddy\n>\n> On Mon, Nov 16, 2009 at 9:44 AM, Dave Crooke <[email protected]> wrote:\n>\n>> Hi Eddy\n>>\n>> Perhaps a slightly naive suggestion .... have you considered\n>> converting the query to a small stored procedure ('function' in\n>> Postgres speak)? You can pull the location values, and then iterate\n>> over a query like this:\n>>\n>> select userid from users where location=:x\n>>\n>> which is more-or-less guaranteed to use the index.\n>>\n>>\n>> I had a somewhat similar situation recently, where I was passing in a\n>> list of id's (from outwith Postgres) and it would on occasion avoid\n>> the index in favour of a full table scan .... 
I changed this to\n>> iterate over the id's with separate queries (in Java, but using a\n>> function will achieve the same thing) and went from one 5 minute query\n>> doing full table scan to a handful of queries doing sub-millisecond\n>> direct index lookups.\n>>\n>> Cheers\n>> Dave\n>>\n>\n>\n\nWith Postgres, you can transparently replace a regular select with a function that takes the same types and returns a record iterator with the same columns. The only change needed is the SQL used to invoke it, you won't need any logic changes in your app code (Java or whatever), e.g.\nselect ............ where x=:x ......(select ...... where ..... y=:y)Becomesselect myfunction(:x, :y)On Mon, Nov 16, 2009 at 2:45 PM, Eddy Escardo-Raffo <[email protected]> wrote:\nYeah this kind of thing would probably work. Doing this in java with separate queries would be easy to code but require multiple round trips. Doing it as a stored procedure would be nicer but I'd have to think a little more about how to refactor the java code around the query to make this happen. Thanks for the suggestion.\n \nEddy\nOn Mon, Nov 16, 2009 at 9:44 AM, Dave Crooke <[email protected]> wrote:\nHi EddyPerhaps a slightly naive suggestion .... have you consideredconverting the query to a small stored procedure ('function' in\n\nPostgres speak)? You can pull the location values, and then iterateover a query like this:select userid from users where location=:xwhich is more-or-less guaranteed to use the index.I had a somewhat similar situation recently, where I was passing in a\n\nlist of id's (from outwith Postgres) and it would on occasion avoidthe index in favour of a full table scan .... I changed this toiterate over the id's with separate queries (in Java, but using afunction will achieve the same thing) and went from one 5 minute query\n\ndoing full table scan to a handful of queries doing sub-milliseconddirect index lookups.CheersDave", "msg_date": "Mon, 16 Nov 2009 15:52:08 -0600", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexpected sequential scan on an indexed column" }, { "msg_contents": "Thanks, Dave.\nEddy\n\nOn Mon, Nov 16, 2009 at 1:52 PM, Dave Crooke <[email protected]> wrote:\n\n> With Postgres, you can transparently replace a regular select with a\n> function that takes the same types and returns a record iterator with the\n> same columns. The only change needed is the SQL used to invoke it, you won't\n> need any logic changes in your app code (Java or whatever), e.g.\n>\n> *select ............ where x=:x ......(select ...... where ..... y=:y)\n> *\n> Becomes\n>\n> *select myfunction(:x, :y)\n> *\n>\n> On Mon, Nov 16, 2009 at 2:45 PM, Eddy Escardo-Raffo <[email protected]>wrote:\n>\n>> Yeah this kind of thing would probably work. Doing this in java with\n>> separate queries would be easy to code but require multiple round trips.\n>> Doing it as a stored procedure would be nicer but I'd have to think a little\n>> more about how to refactor the java code around the query to make this\n>> happen. Thanks for the suggestion.\n>>\n>> Eddy\n>>\n>> On Mon, Nov 16, 2009 at 9:44 AM, Dave Crooke <[email protected]> wrote:\n>>\n>>> Hi Eddy\n>>>\n>>> Perhaps a slightly naive suggestion .... have you considered\n>>> converting the query to a small stored procedure ('function' in\n>>> Postgres speak)? 
You can pull the location values, and then iterate\n>>> over a query like this:\n>>>\n>>> select userid from users where location=:x\n>>>\n>>> which is more-or-less guaranteed to use the index.\n>>>\n>>>\n>>> I had a somewhat similar situation recently, where I was passing in a\n>>> list of id's (from outwith Postgres) and it would on occasion avoid\n>>> the index in favour of a full table scan .... I changed this to\n>>> iterate over the id's with separate queries (in Java, but using a\n>>> function will achieve the same thing) and went from one 5 minute query\n>>> doing full table scan to a handful of queries doing sub-millisecond\n>>> direct index lookups.\n>>>\n>>> Cheers\n>>> Dave\n>>>\n>>\n>>\n>\n\nThanks, Dave.\nEddy\nOn Mon, Nov 16, 2009 at 1:52 PM, Dave Crooke <[email protected]> wrote:\nWith Postgres, you can transparently replace a regular select with a function that takes the same types and returns a record iterator with the same columns. The only change needed is the SQL used to invoke it, you won't need any logic changes in your app code (Java or whatever), e.g.\nselect ............ where x=:x ......(select ...... where ..... y=:y)Becomesselect myfunction(:x, :y)\n\n\n\nOn Mon, Nov 16, 2009 at 2:45 PM, Eddy Escardo-Raffo <[email protected]> wrote:\n\nYeah this kind of thing would probably work. Doing this in java with separate queries would be easy to code but require multiple round trips. Doing it as a stored procedure would be nicer but I'd have to think a little more about how to refactor the java code around the query to make this happen. Thanks for the suggestion.\n \nEddy\n\n\n\nOn Mon, Nov 16, 2009 at 9:44 AM, Dave Crooke <[email protected]> wrote:\nHi EddyPerhaps a slightly naive suggestion .... have you consideredconverting the query to a small stored procedure ('function' in\nPostgres speak)? You can pull the location values, and then iterateover a query like this:select userid from users where location=:xwhich is more-or-less guaranteed to use the index.I had a somewhat similar situation recently, where I was passing in a\nlist of id's (from outwith Postgres) and it would on occasion avoidthe index in favour of a full table scan .... I changed this toiterate over the id's with separate queries (in Java, but using afunction will achieve the same thing) and went from one 5 minute query\ndoing full table scan to a handful of queries doing sub-milliseconddirect index lookups.CheersDave", "msg_date": "Mon, 16 Nov 2009 14:15:15 -0800", "msg_from": "Eddy Escardo-Raffo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unexpected sequential scan on an indexed column" } ]
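A minimal sketch of the iterate-per-id function suggested in the thread above (Dave Crooke's per-location lookup, written in the style of Kenneth Marshall's lookup_tokens wrapper), adapted to the users/locations case being discussed. The function name lookup_users_by_location, the integer[] parameter, and the example location ids are illustrative assumptions rather than code from the thread; only the users(userid, location) columns come from the queries quoted above.

    create function lookup_users_by_location(integer[])
      returns setof integer
      language plpgsql stable
      as $$
    declare
      v_i integer;
      v_userid integer;
    begin
      -- one "location = constant" probe per array element, so each lookup
      -- can use the index on users.location instead of a sequential scan
      for v_i in array_lower($1,1) .. array_upper($1,1) loop
        for v_userid in select userid from users where location = $1[v_i] loop
          return next v_userid;
        end loop;
      end loop;
      return;
    end;
    $$;

    -- example call (location ids are placeholders):
    -- select * from lookup_users_by_location(array[1001, 1002, 1003]);

From the caller's point of view this behaves like the IN (list) form that planned well, while keeping the per-id round trips on the server side.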
[ { "msg_contents": "I've got a pair of servers running PostgreSQL 8.0.4 on Windows. We \nhave several tables that add and delete massive amounts of data in a \nsingle day and are increasingly having a problem with drive \nfragmentation and it appears to be giving us a decent performance hit. \nThis is external fragmentation we are dealing with. We already vacuum \nthe tables on a regular basis to reduce internal fragmentation as best \nas possible.\n\nCurrently I shut down the PostgreSQL service every few weeks and \nmanually run a defragment of the drive, but this is getting tedious. \nDiskeeper has an Automatic Mode that runs in the background all the \ntime to handle this for me. They advertise they are compatible with MS \nSQL server, but don't appear to have any specific info on PostgreSQL.\n\nI'm curious if anyone else has used Diskeeper's Automatic Mode in \ncombination with PostgreSQL to defrag and keep the drive defragged \nwhile PostgreSQL is actually running.\n\nThanks!\n\n-chris\n<www.mythtech.net>\n\n\n", "msg_date": "Mon, 16 Nov 2009 12:14:44 -0500", "msg_from": "cb <[email protected]>", "msg_from_op": true, "msg_subject": "Is Diskeeper Automatic Mode safe?" }, { "msg_contents": "On Mon, Nov 16, 2009 at 12:14 PM, cb <[email protected]> wrote:\n> I've got a pair of servers running PostgreSQL 8.0.4 on Windows. We have\n> several tables that add and delete massive amounts of data in a single day\n> and are increasingly having a problem with drive fragmentation and it\n> appears to be giving us a decent performance hit. This is external\n> fragmentation we are dealing with. We already vacuum the tables on a regular\n> basis to reduce internal fragmentation as best as possible.\n>\n> Currently I shut down the PostgreSQL service every few weeks and manually\n> run a defragment of the drive, but this is getting tedious. Diskeeper has an\n> Automatic Mode that runs in the background all the time to handle this for\n> me. They advertise they are compatible with MS SQL server, but don't appear\n> to have any specific info on PostgreSQL.\n>\n> I'm curious if anyone else has used Diskeeper's Automatic Mode in\n> combination with PostgreSQL to defrag and keep the drive defragged while\n> PostgreSQL is actually running.\n>\n> Thanks!\n>\n> -chris\n> <www.mythtech.net>\n\nI'm not sure what the answer is to your actual question, but I'd\nhighly recommend upgrading to 8.3 or 8.4. The performance is likely\nto be a lot better, and 8.0/8.1 are no longer supported on Windows.\n\n...Robert\n", "msg_date": "Mon, 16 Nov 2009 13:09:33 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is Diskeeper Automatic Mode safe?" }, { "msg_contents": "cb wrote:\n> I'm curious if anyone else has used Diskeeper's Automatic Mode in \n> combination with PostgreSQL to defrag and keep the drive defragged \n> while PostgreSQL is actually running.\n>\n> Thanks!\n>\n> -chris\n> <www.mythtech.net>\n> \n\nI've been a Diskeeper customer for about 10 years now and consider it \n'must have' software for Windows machines. I do not work for them nor \nget paid by them, I just find the software incredibly valuable. I'm \nrunning XP-64bit with 8.4.0 and Diskeeper does a wonderful job of \ndefragmenting the database tables when they get fragmented. I just \nchecked their website and the 2009 version is still listed. I've been \nrunning the 2010 Enterprise Server version for about a week and I can \ntell you that it's great! 
(I'm actually running it on 3 servers but \nonly mine has PG) The main difference with the 2010 version is \nsomething that they call IntelliWrite. As everyone knows, one of the \nbiggest problems with the Windows OS is that it lets fragmentation occur \nin the first place. This new IntelliWrite actually prevents the \nfragmentation from occurring in the first place (or at least the vast \nmajority of it). The auto defrag takes care of the rest. I can attest \nto this actually working in real life scenarios. The other thing \nDiskeeper has is something they call I-FAAST. What this does is monitor \nfile usage and moves the most heavily accessed files to the fastest part \nof the drive. My db is on an Adaptec 52445 with 16 ST373455SS (15K5) in \nRAID5 and Diskeeper defrags and moves pretty much everything in \n\\data\\base to the outer part of the drive. So the short answer is yes, \nI have it running with PostgreSQL and have not had any problems.\n\n\nBob\n", "msg_date": "Mon, 16 Nov 2009 12:11:24 -0600", "msg_from": "Robert Schnabel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is Diskeeper Automatic Mode safe?" }, { "msg_contents": "On Mon, Nov 16, 2009 at 1:11 PM, Robert Schnabel <[email protected]> wrote:\n> cb wrote:\n>>\n>> I'm curious if anyone else has used Diskeeper's Automatic Mode in\n>>  combination with PostgreSQL to defrag and keep the drive defragged  while\n>> PostgreSQL is actually running.\n>>\n>> Thanks!\n>>\n>> -chris\n>> <www.mythtech.net>\n>>\n>\n> I've been a Diskeeper customer for about 10 years now and consider it 'must\n> have' software for Windows machines.  I do not work for them nor get paid by\n> them, I just find the software incredibly valuable.  I'm running XP-64bit\n> with 8.4.0 and Diskeeper does a wonderful job of defragmenting the database\n> tables when they get fragmented.  I just checked their website and the 2009\n> version is still listed.  I've been running the 2010  Enterprise Server\n> version for about a week and I can tell you that it's great!  (I'm actually\n> running it on 3 servers but only mine has PG)  The main difference with the\n> 2010 version is something that they call IntelliWrite.  As everyone knows,\n> one of the biggest problems with the Windows OS is that it lets\n> fragmentation occur in the first place.  This new IntelliWrite actually\n> prevents the fragmentation from occurring in the first place (or at least\n> the vast majority of it).  The auto defrag takes care of the rest.  I can\n> attest to this actually working in real life scenarios.  The other thing\n> Diskeeper has is something they call I-FAAST.  What this does is monitor\n> file usage and moves the most heavily accessed files to the fastest part of\n> the drive.  My db is on an Adaptec 52445 with 16 ST373455SS (15K5) in RAID5\n> and Diskeeper defrags and moves pretty much everything in \\data\\base to the\n> outer part of the drive.  So the short answer is yes, I have it running with\n> PostgreSQL and have not had any problems.\n\nHave you unplugged the power cord a few times in the middle of heavy\nwrite activity?\n\n...Robert\n", "msg_date": "Mon, 16 Nov 2009 14:42:42 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is Diskeeper Automatic Mode safe?" }, { "msg_contents": "\n\n\n\n\n\n\n\n So the short answer is yes, I have it running with\nPostgreSQL and have not had any problems.\n \n\n\nHave you unplugged the power cord a few times in the middle of heavy\nwrite activity?\n\n...Robert\n\nNope.  
Forgive my ignorance but isn't that what a UPS is for anyway? \nAlong with a BBU controller.\n\nBob\n\n\n\n", "msg_date": "Mon, 16 Nov 2009 14:04:46 -0600", "msg_from": "Robert Schnabel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is Diskeeper Automatic Mode safe?" }, { "msg_contents": "On Mon, Nov 16, 2009 at 1:04 PM, Robert Schnabel <[email protected]> wrote:\n>\n>  So the short answer is yes, I have it running with\n> PostgreSQL and have not had any problems.\n>\n>\n> Have you unplugged the power cord a few times in the middle of heavy\n> write activity?\n>\n> ...Robert\n>\n> Nope.  Forgive my ignorance but isn't that what a UPS is for anyway?  Along\n> with a BBU controller.\n\nBBU controller, yes. UPS, no. I've seen more than one multi-million\ndollar hosting center go down from something as simple as a piece of\nwire flying into a power conditioner, shorting it out, and feeding\nback and blowing every single power conditioner and UPS AND the switch\nthat allowed the diesel to come into the loop. All failed. Every\nmachine lost power. One database server out of a few dozens came back\nup. In fact there were a lot of different dbm systems running in that\ncenter, and only the pg 7.2 version came back up unscathed.\n\nBecause someone insisted on pulling the plug out from the back a dozen\nor so times to make sure it would do come back up. PG saved our\nshorts and the asses they contain. Sad thing is I'm sure the other\nservers COULD have come back up if they had been running proper BBUs\nand hard drives that didn't lie about fsync, and an OS that enforced\nfsync properly, at least for scsi, at the time.\n\nPower supplies / UPSes fail far more often than one might think. And\na db that doesn't come back up afterwards is not to be placed into\nproduction.\n", "msg_date": "Mon, 16 Nov 2009 13:12:00 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is Diskeeper Automatic Mode safe?" }, { "msg_contents": "On Mon, Nov 16, 2009 at 1:12 PM, Scott Marlowe <[email protected]> wrote:\n> Power supplies / UPSes fail far more often than one might think.  And\n> a db that doesn't come back up afterwards is not to be placed into\n> production.\n\nNote that there are uses for databases that can lose everything and\njust initdb and be happy. Session databases are like that. But I'm\ntalking persistent databases.\n", "msg_date": "Mon, 16 Nov 2009 13:15:34 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is Diskeeper Automatic Mode safe?" }, { "msg_contents": "Robert Schnabel wrote:\n> Nope. Forgive my ignorance but isn't that what a UPS is for anyway? \n> Along with a BBU controller.\nIf you have a UPS *and* a BBU controller, then things are much \nbetter--those should have a write cache that insulates you from the \nworst of the problems. But just a UPS alone doesn't help you very much:\n\n1) A UPS is built with a consumable (the battery), and they do wear \nout. Unless you're proactive about monitoring UPS battery quality and \ndoing tests, you won't find this out until the first time the power goes \nout and the UPS doesn't work anymore.\n2) Do you trust that the UPS integration software will *always* shut the \nserver down before the power goes out? You shouldn't.\n3) Ever had someone trip over the cord between the UPS and the server? \nHow about accidentally unplugging the wrong server? 
These things \nhappen; do you want data corruption when they do?\n4) There are all sorts of major electrical problems you can run into \n(around here it's mainly summer lightening) that will blow out a UPS \nwithout giving an opportunity for graceful shutdown.\n\nIf there's anyone who thinks a UPS is all you need to be safe from power \nissues, I know a guy named Murphy you should get introduced to.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n\n\n\n\n\n\nRobert Schnabel wrote:\n\n\nNope.  Forgive my ignorance but isn't that what a UPS is for anyway? \nAlong with a BBU controller.\n\nIf you have a UPS *and* a BBU controller, then things are much\nbetter--those should have a write cache that insulates you from the\nworst of the problems.  But just a UPS alone doesn't help you very much:\n\n1) A UPS is built with a consumable (the battery), and they do wear\nout.  Unless you're proactive about monitoring UPS battery quality and\ndoing tests, you won't find this out until the first time the power\ngoes out and the UPS doesn't work anymore.\n2) Do you trust that the UPS integration software will *always* shut\nthe server down before the power goes out?  You shouldn't.\n3) Ever had someone trip over the cord between the UPS and the server? \nHow about accidentally unplugging the wrong server?  These things\nhappen; do you want data corruption when they do?\n4) There are all sorts of major electrical problems you can run into\n(around here it's mainly summer lightening) that will blow out a UPS\nwithout giving an opportunity for graceful shutdown.\n\nIf there's anyone who thinks a UPS is all you need to be safe from\npower issues, I know a guy named Murphy you should get introduced to.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com", "msg_date": "Mon, 16 Nov 2009 15:20:12 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is Diskeeper Automatic Mode safe?" }, { "msg_contents": "Greg Smith wrote:\n> Robert Schnabel wrote:\n>> Nope. Forgive my ignorance but isn't that what a UPS is for anyway? \n>> Along with a BBU controller.\n> If you have a UPS *and* a BBU controller, then things are much\n> better--those should have a write cache that insulates you from the\n> worst of the problems. But just a UPS alone doesn't help you very much:\nA UPS is just a controlled shutdown device for when things are working\nand mains power goes off.\n\nNote the \"when things are working\" qualifier. :)\n\n-- Karl", "msg_date": "Mon, 16 Nov 2009 14:20:29 -0600", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is Diskeeper Automatic Mode safe?" }, { "msg_contents": "My reply about server failure was shwoing what could go wrong at the server\nlevel assuming a first-class, properly run data center, with fully redundant\npower, including a server with dual power supplies on separate cords fed by\nseparate UPS'es etc. 
....\n\nUnfortunately, *correctly* configured A/B power is all too rare these days.\nSome examples of foo that I've seen at professional data centers:\n\n- Allegedly \"A/B\" power supplied from two phases of the same UPS (which was\nthen taken down due to a tech's error during \"hot\" maintenance)\n- \"A/B\" power fed through a common switch panel\n- A/B power with dual attached servers, with each power feed running a\nsteady 60% load (do the math!)\n\nA classic piece of foo from a manufacturer - Dell supplies their low end\ndual-power rackmount boxes with a Y shaped IEC cable ... clearly, this is\nonly suitable for non-redundant use but I've seen plenty of them deployed in\ndata centers by less-than-clueful admins.\n\n\nOn Mon, Nov 16, 2009 at 2:12 PM, Scott Marlowe <[email protected]>\nwrote:\n> On Mon, Nov 16, 2009 at 1:04 PM, Robert Schnabel <[email protected]>\nwrote:\n>>\n>> So the short answer is yes, I have it running with\n>> PostgreSQL and have not had any problems.\n>>\n>>\n>> Have you unplugged the power cord a few times in the middle of heavy\n>> write activity?\n>>\n>> ...Robert\n>>\n>> Nope. Forgive my ignorance but isn't that what a UPS is for anyway?\nAlong\n>> with a BBU controller.\n>\n> BBU controller, yes. UPS, no. I've seen more than one multi-million\n> dollar hosting center go down from something as simple as a piece of\n> wire flying into a power conditioner, shorting it out, and feeding\n> back and blowing every single power conditioner and UPS AND the switch\n> that allowed the diesel to come into the loop. All failed. Every\n> machine lost power. One database server out of a few dozens came back\n> up. In fact there were a lot of different dbm systems running in that\n> center, and only the pg 7.2 version came back up unscathed.\n>\n> Because someone insisted on pulling the plug out from the back a dozen\n> or so times to make sure it would do come back up. PG saved our\n> shorts and the asses they contain. Sad thing is I'm sure the other\n> servers COULD have come back up if they had been running proper BBUs\n> and hard drives that didn't lie about fsync, and an OS that enforced\n> fsync properly, at least for scsi, at the time.\n>\n> Power supplies / UPSes fail far more often than one might think. And\n> a db that doesn't come back up afterwards is not to be placed into\n> production.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nMy reply about server failure was shwoing what could go wrong at the server level assuming a first-class, properly run data center, with fully redundant power, including a server with dual power supplies on separate cords fed by separate UPS'es etc. .... \nUnfortunately, correctly configured A/B power is all too rare these days. Some examples of foo that I've seen at professional data centers:- Allegedly \"A/B\" power supplied from two phases of the same UPS (which was then taken down due to a tech's error during \"hot\" maintenance)\n- \"A/B\" power fed through a common switch panel- A/B power with dual attached servers, with each power feed running a steady 60% load (do the math!)A classic piece of foo from a manufacturer - Dell supplies their low end dual-power rackmount boxes with a Y shaped IEC cable ... 
clearly, this is only suitable for non-redundant use but I've seen plenty of them deployed in data centers by less-than-clueful admins.\nOn Mon, Nov 16, 2009 at 2:12 PM, Scott Marlowe <[email protected]> wrote:> On Mon, Nov 16, 2009 at 1:04 PM, Robert Schnabel <[email protected]> wrote:\n>>>>  So the short answer is yes, I have it running with>> PostgreSQL and have not had any problems.>>>>>> Have you unplugged the power cord a few times in the middle of heavy\n>> write activity?>>>> ...Robert>>>> Nope.  Forgive my ignorance but isn't that what a UPS is for anyway?  Along>> with a BBU controller.>> BBU controller, yes.  UPS, no.  I've seen more than one multi-million\n> dollar hosting center go down from something as simple as a piece of> wire flying into a power conditioner, shorting it out, and feeding> back and blowing every single power conditioner and UPS AND the switch\n> that allowed the diesel to come into the loop.  All failed.  Every> machine lost power.  One database server out of a few dozens came back> up.  In fact there were a lot of different dbm systems running in that\n> center, and only the pg 7.2 version came back up unscathed.>> Because someone insisted on pulling the plug out from the back a dozen> or so times to make sure it would do come back up.  PG saved our\n> shorts and the asses they contain.  Sad thing is I'm sure the other> servers COULD have come back up if they had been running proper BBUs> and hard drives that didn't lie about fsync, and an OS that enforced\n> fsync properly, at least for scsi, at the time.>> Power supplies / UPSes fail far more often than one might think.  And> a db that doesn't come back up afterwards is not to be placed into\n> production.>> --> Sent via pgsql-performance mailing list ([email protected])> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance>", "msg_date": "Mon, 16 Nov 2009 14:23:24 -0600", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is Diskeeper Automatic Mode safe?" }, { "msg_contents": "Dave Crooke wrote:\n> My reply about server failure was shwoing what could go wrong at the\n> server level assuming a first-class, properly run data center, with\n> fully redundant power, including a server with dual power supplies on\n> separate cords fed by separate UPS'es etc. ....\nNever had a motherboard short out either eh? China makes really GOOD\nelectrolytic caps these days (I can show you several SERVER CLASS boards\nthat were on conditioned power and popped 'em, rendering the board dead\ninstantly.)\n\nMurphy is a bastard.\n\n-- Karl", "msg_date": "Mon, 16 Nov 2009 14:32:32 -0600", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is Diskeeper Automatic Mode safe?" }, { "msg_contents": "\n\n\n\n\n\nScott Marlowe wrote:\n\nOn Mon, Nov 16, 2009 at 1:04 PM, Robert Schnabel <[email protected]> wrote:\n \n\n So the short answer is yes, I have it running with\nPostgreSQL and have not had any problems.\n\n\nHave you unplugged the power cord a few times in the middle of heavy\nwrite activity?\n\n...Robert\n\nNope.  Forgive my ignorance but isn't that what a UPS is for anyway?  Along\nwith a BBU controller.\n \n\n\nBBU controller, yes. UPS, no. 
I've seen more than one multi-million\ndollar hosting center go down from something as simple as a piece of\nwire flying into a power conditioner, shorting it out, and feeding\nback and blowing every single power conditioner and UPS AND the switch\nthat allowed the diesel to come into the loop. All failed. Every\nmachine lost power. One database server out of a few dozens came back\nup. In fact there were a lot of different dbm systems running in that\ncenter, and only the pg 7.2 version came back up unscathed.\n\nBecause someone insisted on pulling the plug out from the back a dozen\nor so times to make sure it would do come back up. PG saved our\nshorts and the asses they contain. Sad thing is I'm sure the other\nservers COULD have come back up if they had been running proper BBUs\nand hard drives that didn't lie about fsync, and an OS that enforced\nfsync properly, at least for scsi, at the time.\n\nPower supplies / UPSes fail far more often than one might think. And\na db that doesn't come back up afterwards is not to be placed into\nproduction.\n \n\nOk, so you have sufficiently sparked my curiosity as to whether\nDiskeeper will in any way cause Postgres to fail the power chord test. \nUnfortunately I have some deadlines to meet so won't be able to test\nthis out until later in the week.  I'm in the fortunate position that\nthe only person that uses my db is me myself and I so I can control\nwhat and when it does work.  I also have backup software running that\ndoes complete drive imaging so I should be able to do this fairly\nsafely.  Here is the plan...\n\n1) Shut down the Diskeeper service, run a query that is write heavy and\nthen pull the chord on the box.  Wait a few minutes then plug it back\nin and see if it recovers.\n2) Leave Diskeeper running and repeat the above...\n\nComments/suggestions?  If I'm going to do this I'd like to make sure I\ndo it correctly so it will be useful for the group.\n\nI'm using XP 64 bit, Adaptec 52445 + BBU, I have two external drive\nenclosures (8 each) plus the 8 in the box, pg 8.4.0\n\nBob\n\n\n\n\n\n", "msg_date": "Mon, 16 Nov 2009 14:32:51 -0600", "msg_from": "Robert Schnabel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is Diskeeper Automatic Mode safe?" }, { "msg_contents": "On Mon, Nov 16, 2009 at 1:32 PM, Robert Schnabel <[email protected]> wrote:\n>\n> Ok, so you have sufficiently sparked my curiosity as to whether Diskeeper\n> will in any way cause Postgres to fail the power chord test.  Unfortunately\n> I have some deadlines to meet so won't be able to test this out until later\n\nBest time is during acceptance testing before deployment. Failing\nthat testing it in production on the backup server so you can burn it\nto the ground and rebuild it on a saturday.\n\nNote that surviving the power plug being pulled doesn't PROVE your\nsystem will always do that. You can try to simulate the real mix of\nload and even replay queries when pulling the plug, only to find the\none corner case you didnt' test in production when power is lost. The\npower cord plug can prove a system bad, but you're still somewhat\n\"hoping\" it's really good, with a high probability of being right.\n\nWhich is why backup is so important.\n", "msg_date": "Mon, 16 Nov 2009 13:50:39 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is Diskeeper Automatic Mode safe?" 
}, { "msg_contents": "On Mon, Nov 16, 2009 at 1:32 PM, Karl Denninger <[email protected]> wrote:\n> Dave Crooke wrote:\n>> My reply about server failure was shwoing what could go wrong at the\n>> server level assuming a first-class, properly run data center, with\n>> fully redundant power, including a server with dual power supplies on\n>> separate cords fed by separate UPS'es etc. ....\n> Never had a motherboard short out either eh?  China makes really GOOD\n> electrolytic caps these days (I can show you several SERVER CLASS boards\n> that were on conditioned power and popped 'em, rendering the board dead\n> instantly.)\n>\n> Murphy is a bastard.\n\nYou know about the whole capacitor caper from a few years back, where\nthis one plant was making corrosive electrolyte and a huge number of\ncapacitor suppliers were buying from them. Mobos from that era are\nterrible. Caps that expand and burst after anywhere from a few months\nto a few years of use. ugh.\n", "msg_date": "Mon, 16 Nov 2009 13:52:29 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is Diskeeper Automatic Mode safe?" }, { "msg_contents": "\n\n\n\n\nScott Marlowe wrote:\n\nOn Mon, Nov 16, 2009 at 1:32 PM, Robert Schnabel <[email protected]> wrote:\n \n\nOk, so you have sufficiently sparked my curiosity as to whether Diskeeper\nwill in any way cause Postgres to fail the power chord test.  Unfortunately\nI have some deadlines to meet so won't be able to test this out until later\n \n\n\nBest time is during acceptance testing before deployment. Failing\nthat testing it in production on the backup server so you can burn it\nto the ground and rebuild it on a saturday.\n\nNote that surviving the power plug being pulled doesn't PROVE your\nsystem will always do that. You can try to simulate the real mix of\nload and even replay queries when pulling the plug, only to find the\none corner case you didnt' test in production when power is lost. The\npower cord plug can prove a system bad, but you're still somewhat\n\"hoping\" it's really good, with a high probability of being right.\n\nWhich is why backup is so important.\n \n\nGranted, but the point of me testing this is to say whether or not the\nDiskeeper service could introduce a problem.  If the system recovers\nwithout Diskeeper running but does not recover while Diskeeper is\nactively running then we have a problem.  If they both recover then\nI've answered the question \"Have you unplugged the power cord a few\ntimes in the middle of heavy write activity?\"  I understand that we\ncan't prove that it works but I should be able to at least answer the\nquestion asked.\n\nI wouldn't consider my database a production one.  I basically use it\nto store a large amount of genetic data for my lab.  The only time the\ndatabase gets use is when I use it.  Short of frying a piece of\nhardware by pulling the plug I'm not worried about losing any data and\nrebuilding is actually quite a simple process that only takes about 2\nhours... been there done that when I pulled the wrong SAS connector.\n\nBob\n\n\n\n", "msg_date": "Mon, 16 Nov 2009 15:04:37 -0600", "msg_from": "Robert Schnabel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is Diskeeper Automatic Mode safe?" }, { "msg_contents": "On Mon, Nov 16, 2009 at 03:20:12PM -0500, Greg Smith wrote:\n> Robert Schnabel wrote:\n> >Nope. Forgive my ignorance but isn't that what a UPS is for anyway? 
\n> >Along with a BBU controller.\n>\n> If you have a UPS *and* a BBU controller, then things are much \n> better--those should have a write cache that insulates you from the \n> worst of the problems. But just a UPS alone doesn't help you very much:\n> \n> 1) A UPS is built with a consumable (the battery), and they do wear \n> out. Unless you're proactive about monitoring UPS battery quality and \n> doing tests, you won't find this out until the first time the power goes \n> out and the UPS doesn't work anymore.\n\nWell the bbu is just another battery (ok some are capacitors but...)\nso the same caveats apply for a bbu raid card. We test ours every 6\nmonths and fail them if they are less than a 5 day capacity (failure\nover a long weekend 3 days + 1-2 day(s) to fix the issue (replace\npower supply, mobo etc.)).\n\n-- \n\t\t\t\t-- rouilj\n\nJohn Rouillard System Administrator\nRenesys Corporation 603-244-9084 (cell) 603-643-9300 x 111\n", "msg_date": "Mon, 16 Nov 2009 21:05:10 +0000", "msg_from": "John Rouillard <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is Diskeeper Automatic Mode safe?" }, { "msg_contents": "> I also have backup software running that does \n> complete drive imaging so I should be able to do this fairly safely. \n> Here is the plan...\n> \n> 1) Shut down the Diskeeper service, run a query that is write heavy and \n> then pull the chord on the box. Wait a few minutes then plug it back in \n> and see if it recovers.\n> 2) Leave Diskeeper running and repeat the above...\n> \n> Comments/suggestions? If I'm going to do this I'd like to make sure I \n> do it correctly so it will be useful for the group.\n\nDo it more than once. This is a highly erratic test that can catch your system at a wide variety of points, some of which cause no problems, and some of which can be catastrophic. If you test and it fails, you know you have a problem. If you test and it doesn't fail, you don't know much. It's only when you've tested a number of times without failure that you've gained any real knowledge.\n\nCraig\n", "msg_date": "Mon, 16 Nov 2009 13:09:22 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is Diskeeper Automatic Mode safe?" }, { "msg_contents": "Craig James escribi�:\n\n> Do it more than once. This is a highly erratic test that can catch\n> your system at a wide variety of points, some of which cause no\n> problems, and some of which can be catastrophic. If you test and it\n> fails, you know you have a problem. If you test and it doesn't fail,\n> you don't know much. It's only when you've tested a number of times\n> without failure that you've gained any real knowledge.\n\nOf course, you're only truly safe when you've tested infinite times,\nwhich may take a bit longer than management expects.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Mon, 16 Nov 2009 18:25:54 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is Diskeeper Automatic Mode safe?" }, { "msg_contents": "On Mon, Nov 16, 2009 at 2:04 PM, Robert Schnabel <[email protected]> wrote:\n\n> Granted, but the point of me testing this is to say whether or not the\n> Diskeeper service could introduce a problem.  If the system recovers without\n> Diskeeper running but does not recover while Diskeeper is actively running\n> then we have a problem.  
If they both recover then I've answered the\n> question \"Have you unplugged the power cord a few times in the middle of\n> heavy write activity?\"  I understand that we can't prove that it works but I\n> should be able to at least answer the question asked.\n>\n> I wouldn't consider my database a production one.  I basically use it to\n> store a large amount of genetic data for my lab.  The only time the database\n> gets use is when I use it.  Short of frying a piece of hardware by pulling\n> the plug I'm not worried about losing any data and rebuilding is actually\n> quite a simple process that only takes about 2 hours... been there done that\n> when I pulled the wrong SAS connector.\n\nBe careful, it's not uncommon for a database / app to suddenly become\npopular and people start expecting it to be up all the time. For\nthings like a corporate intranet, losing a few hours work from\nsomething like a power loss is acceptable.\n\nWe have four types of dbs where I work. session servers can be\nconfigured to have fsync off and don't have to be ultra reliable under\nthings like power loss. Search database which gets recreated every\nfew days as the indexer runs. Stats database where reliability is\nsorta important but not life or death, and the user data database\nwhich has to work and stay up. So each one is tested differently\nbecause each one would have a much different impact if they crash and\ncan't come back up without help.\n", "msg_date": "Mon, 16 Nov 2009 17:01:49 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is Diskeeper Automatic Mode safe?" }, { "msg_contents": "On Nov 16, 2009, at 1:09 PM, Robert Haas wrote:\n\n> I'm not sure what the answer is to your actual question, but I'd\n> highly recommend upgrading to 8.3 or 8.4. The performance is likely\n> to be a lot better, and 8.0/8.1 are no longer supported on Windows.\n\n\nUgh, yeah, I'd love to upgrade but the powers that get to make that \ndecision have no interest in upgrading. So I'm stuck on 8.0.4, and \nsince I really don't do the PG support itself, I don't even get to \nvoice much of an opinion (I deal really just with making sure the \nphysical hardware is doing what it needs to do, which is where the \ndisk defrag comes in to play).\n\n-chris\n<www.mythtech.net>\n\n\n", "msg_date": "Mon, 16 Nov 2009 20:12:58 -0500", "msg_from": "cb <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is Diskeeper Automatic Mode safe?" }, { "msg_contents": "On Nov 16, 2009, at 1:11 PM, Robert Schnabel wrote:\n\n> I've been a Diskeeper customer for about 10 years now and consider \n> it 'must have' software for Windows machines.\n> <snip>\n> So the short answer is yes, I have it running with PostgreSQL and \n> have not had any problems.\n\n\nSo that seems to be a definite vote for it should be just fine.\n\nI've read the other posts and I understand the concerns that were \nraised. I may try to do some testing myself since other than the one \nYes there isn't anyone else jumping in to say they are doing it \nsafely. 
Of course there is also no one saying don't do it, just \nstatements of caution as it appears to be an unknown and has the \npotential to cause problems.\n\nIt looks like to be really safe I should do some failure testing on my \nend first.\n\nThanks to everyone for their input!\n\n-chris\n<www.mythtech.net>\n", "msg_date": "Mon, 16 Nov 2009 20:22:25 -0500", "msg_from": "cb <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is Diskeeper Automatic Mode safe?" }, { "msg_contents": "cb <[email protected]> writes:\n> Ugh, yeah, I'd love to upgrade but the powers that get to make that \n> decision have no interest in upgrading. So I'm stuck on 8.0.4,\n\nMake sure you're not in the line of fire when (not if) that version\neats your data. Particularly on Windows, insisting on not upgrading\nthat version is unbelievably, irresponsibly stupid. There are a\n*large* number of known bugs.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 16 Nov 2009 20:31:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is Diskeeper Automatic Mode safe? " }, { "msg_contents": "Tom Lane wrote:\n> cb <[email protected]> writes:\n> \n>> Ugh, yeah, I'd love to upgrade but the powers that get to make that \n>> decision have no interest in upgrading. So I'm stuck on 8.0.4,\n>> \n>\n> Make sure you're not in the line of fire when (not if) that version\n> eats your data. Particularly on Windows, insisting on not upgrading\n> that version is unbelievably, irresponsibly stupid. There are a\n> *large* number of known bugs.\n> \nYeah, the prudent thing to do in your situation is to issue a CYA memo \nthat says something like \"I think the hardware is OK, but due to large \nnumber of bugs in PostgreSQL 8.0.4 on Windows it's easy for the database \nto become corrupted anyway\", point toward \nhttp://www.postgresql.org/docs/8.4/static/release.html to support that \nclaim and note that 8.0.22 is the absolutely minimum version anyone \nshould be running, then CC everyone up the management chain. You're \nusing a version that considers your data quite tasty and would like to \nmake a snack of it at the first opportunity that arises.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n\n\n\n\n\n\nTom Lane wrote:\n\ncb <[email protected]> writes:\n \n\nUgh, yeah, I'd love to upgrade but the powers that get to make that \ndecision have no interest in upgrading. So I'm stuck on 8.0.4,\n \n\n\nMake sure you're not in the line of fire when (not if) that version\neats your data. Particularly on Windows, insisting on not upgrading\nthat version is unbelievably, irresponsibly stupid. There are a\n*large* number of known bugs.\n \n\nYeah, the prudent thing to do in your situation is to issue a CYA memo\nthat says something like \"I think the hardware is OK, but due to large\nnumber of bugs in PostgreSQL 8.0.4 on Windows it's easy for the\ndatabase to become corrupted anyway\", point toward\nhttp://www.postgresql.org/docs/8.4/static/release.html to support that\nclaim and note that 8.0.22 is the absolutely minimum version anyone\nshould be running, then CC everyone up the management chain.  
You're\nusing a version that considers your data quite tasty and would like to\nmake a snack of it at the first opportunity that arises.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com", "msg_date": "Mon, 16 Nov 2009 21:45:06 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is Diskeeper Automatic Mode safe?" }, { "msg_contents": "On Mon, Nov 16, 2009 at 7:45 PM, Greg Smith <[email protected]> wrote:\n> Tom Lane wrote:\n>\n> cb <[email protected]> writes:\n>\n>\n> Ugh, yeah, I'd love to upgrade but the powers that get to make that\n> decision have no interest in upgrading. So I'm stuck on 8.0.4,\n>\n>\n> Make sure you're not in the line of fire when (not if) that version\n> eats your data. Particularly on Windows, insisting on not upgrading\n> that version is unbelievably, irresponsibly stupid. There are a\n> *large* number of known bugs.\n>\n>\n> Yeah, the prudent thing to do in your situation is to issue a CYA memo that\n> says something like \"I think the hardware is OK, but due to large number of\n> bugs in PostgreSQL 8.0.4 on Windows it's easy for the database to become\n> corrupted anyway\", point toward\n> http://www.postgresql.org/docs/8.4/static/release.html to support that claim\n> and note that 8.0.22 is the absolutely minimum version anyone should be\n> running, then CC everyone up the management chain.  You're using a version\n> that considers your data quite tasty and would like to make a snack of it at\n> the first opportunity that arises.\n\nLast job I worked we had pgsql and a Big Commercial Database and the\nthree other DBAs who worked on mostly that other database were scared\nto death of patches to their dbms. Thank the gods that pgsql updates\nare quite possibly the most reliable and easy to apply of any system.\nRead release notes, and 99% of the time it's just just shut down, rpm\n-Uvh postgres*rpm, start up, and viola you're up to date.\n\nPg updates are focused on security and bug fixes that don't change\naccepted behaviour within a major version. I agree, not applying them\nverges on negligence. Especially if you haven't read the release\nnotes to see what was fixed. Sometimes I read them and don't worry\nabout it if it's a real esoteric bug. But when a data loss bug shows\nup I upgrade right away.\n", "msg_date": "Mon, 16 Nov 2009 21:41:36 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is Diskeeper Automatic Mode safe?" }, { "msg_contents": "On Nov 16, 2009, at 8:31 PM, Tom Lane wrote:\n\n> Make sure you're not in the line of fire when (not if) that version\n> eats your data. Particularly on Windows, insisting on not upgrading\n> that version is unbelievably, irresponsibly stupid. There are a\n> *large* number of known bugs.\n\n\nI hear ya, and have agreed with you for a long while. There is a \nfairly regular and constant fight in house over the issue of \nupgrading. 
We get hit on a regular basis with problems that as far as \nI know are bugs that have been fixed (transaction log rename crashes \nthat take down PG, as well as queries just vanishing into the aether \nat times of heavy load resulting in hung threads in our Tomcat front \nend as it waits for something to come back that has disappeared).\n\n\n\nOn Nov 16, 2009, at 9:45 PM, Greg Smith wrote:\n\n> Yeah, the prudent thing to do in your situation is to issue a CYA \n> memo that says something like \"I think the hardware is OK, but due \n> to large number of bugs in PostgreSQL 8.0.4 on Windows it's easy for \n> the database to become corrupted anyway\", point toward http://www.postgresql.org/docs/8.4/static/release.html \n> to support that claim and note that 8.0.22 is the absolutely \n> minimum version anyone should be running, then CC everyone up the \n> management chain. You're using a version that considers your data \n> quite tasty and would like to make a snack of it at the first \n> opportunity that arises.\n\nMyself and the other guy responsible for the underlying hardware have \nalready gone down this route. The big bosses know our stance and know \nit isn't us preventing the upgrade. After that, there isn't too much \nmore I can do except sit back and shake my head each time something \ngoes wrong and I get sent on a wild goose chase to find any reason for \nthe failure OTHER than PG.\n\nReally it comes down to the DBMs have a firm stance of nothing \nchanges, ever. Me, I say bug fixes are released for a reason.\n\nMy understanding is, before I joined the company, they did an upgrade \nfrom 7 on Linux to 8 on Windows and got bit by some change in PG that \nbroke a bunch of code. After that, they have just refused to budge \nfrom the 8.0.4 version we are on and know the code works against. I \ndon't really have any details beyond that and asking for them tends to \ninvoke religious wars in house between the Linux/Open Source people \nand the Windows/Buy Everything people. So I've given up fighting, \ncovered my butt, and just do the best I can to keep things running.\n\n\nThanks again for the insights!\n\n-chris\n<www.mythtech.net>\n\n\n", "msg_date": "Mon, 16 Nov 2009 23:57:22 -0500", "msg_from": "cb <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is Diskeeper Automatic Mode safe? " }, { "msg_contents": "cb wrote:\n> My understanding is, before I joined the company, they did an upgrade \n> from 7 on Linux to 8 on Windows and got bit by some change in PG that \n> broke a bunch of code. After that, they have just refused to budge \n> from the 8.0.4 version we are on and know the code works against.\nYes; that's one of the reasons there was a major version number bump \nthere. That's a completely normal and expected issue to run into. A \nsimilar problem would happen if they tried to upgrade to 8.3 or later \nfrom 8.0--you can expect the app to break due to a large change made in 8.3.\n\nSounds to me like the app doesn't really work against the version you're \nrunning against now though, from the issues you described. Which brings \nus to the PostgreSQL patching philosophy, which they may not be aware \nof. Upgrades to later 8.0 releases will contain *nothing* but bug and \nsecurity fixes. The basic guideline for changes made as part of the \nsmall version number changes (8.0.1 to 8.0.2 for example) are that the \nbug must be more serious than the potential to cause a regression \nintroduced by messing with things. 
You shouldn't get anything by going \nto 8.0.22 but fixes to real problems. A behavior change that broke code \nwould be quite unexpected--the primary way you might run into one is by \nwriting code that expects buggy behavior that then breaks. That's not a \nvery common situation though, whereas the way they got bit before was \nbeyond common--as I said, it was expected to happen.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Tue, 17 Nov 2009 05:33:07 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is Diskeeper Automatic Mode safe?" }, { "msg_contents": "cb <[email protected]> wrote:\n> On Nov 16, 2009, at 8:31 PM, Tom Lane wrote:\n> \n>> Make sure you're not in the line of fire when (not if) that version\n>> eats your data. Particularly on Windows, insisting on not\n>> upgrading that version is unbelievably, irresponsibly stupid.\n>> There are a *large* number of known bugs.\n> \n> \n> I hear ya, and have agreed with you for a long while. There is a\n> fairly regular and constant fight in house over the issue of\n> upgrading. We get hit on a regular basis with problems that as far\n> as I know are bugs that have been fixed (transaction log rename\n> crashes that take down PG, as well as queries just vanishing into\n> the aether at times of heavy load resulting in hung threads in our\n> Tomcat front end as it waits for something to come back that has\n> disappeared).\n \nIf you could track down some unmodified 1971 Ford Pintos, you could\ngive them some perspective by having them drive those until they\nupgrade.\n \n-Kevin\n", "msg_date": "Tue, 17 Nov 2009 08:59:22 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is Diskeeper Automatic Mode safe?" }, { "msg_contents": "On Tue, Nov 17, 2009 at 7:59 AM, Kevin Grittner\n<[email protected]> wrote:\n> cb <[email protected]> wrote:\n>> On Nov 16, 2009, at 8:31 PM, Tom Lane wrote:\n>>\n>>> Make sure you're not in the line of fire when (not if) that version\n>>> eats your data.  Particularly on Windows, insisting on not\n>>> upgrading that version is unbelievably, irresponsibly stupid.\n>>> There are a *large* number of known bugs.\n>>\n>>\n>> I hear ya, and have agreed with you for a long while. There is a\n>> fairly regular and constant fight in house over the issue of\n>> upgrading. We get hit on a regular basis with problems that as far\n>> as I know are bugs that have been fixed (transaction log rename\n>> crashes that take down PG, as well as queries just vanishing into\n>> the aether at times of heavy load resulting in hung threads in our\n>> Tomcat front end as it waits for something to come back that has\n>> disappeared).\n>\n> If you could track down some unmodified 1971 Ford Pintos, you could\n> give them some perspective by having them drive those until they\n> upgrade.\n\nAnd they all get 1993 era Pentium 60s with 32 Megs of RAM running\nwindows 3.11 for workgroups and using the trumpet TCP stack.\nUpgrades, who needs 'em?!\n", "msg_date": "Tue, 17 Nov 2009 08:22:09 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is Diskeeper Automatic Mode safe?" }, { "msg_contents": "Greg Smith wrote:\n> cb wrote:\n>> My understanding is, before I joined the company, they did an upgrade \n>> from 7 on Linux to 8 on Windows and got bit by some change in PG that \n>> broke a bunch of code. 
After that, they have just refused to budge \n>> from the 8.0.4 version we are on and know the code works against.\n> Yes; that's one of the reasons there was a major version number bump \n> there. That's a completely normal and expected issue to run into. A \n> similar problem would happen if they tried to upgrade to 8.3 or later \n> from 8.0--you can expect the app to break due to a large change made in \n> 8.3.\n> \n> Sounds to me like the app doesn't really work against the version you're \n> running against now though, from the issues you described. Which brings \n> us to the PostgreSQL patching philosophy, which they may not be aware \n> of. Upgrades to later 8.0 releases will contain *nothing* but bug and \n> security fixes.\n\nTo elaborate on Greg's point: One of the cool things about Postgres \"minor\" releases (e.g. everything in the 8.0.*) series, is that you can backup your software, turn off Postgres, install the new version, and just fire it up again, and it works. Any problems? Just revert to the old version.\n\nIt's an easy sell to management. They can try it, confirm that none of the apps have broken, and if there are problems, you simple say \"oops\", and revert to the old version. If it works, you're the hero, if not, it's just a couple hours of your time.\n\nCraig\n", "msg_date": "Tue, 17 Nov 2009 07:51:40 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is Diskeeper Automatic Mode safe?" }, { "msg_contents": "On Mon, 2009-11-16 at 23:57 -0500, cb wrote:\n> On Nov 16, 2009, at 8:31 PM, Tom Lane wrote:\n> Myself and the other guy responsible for the underlying hardware have \n> already gone down this route. The big bosses know our stance and know \n> it isn't us preventing the upgrade. After that, there isn't too much \n> more I can do except sit back and shake my head each time something \n> goes wrong and I get sent on a wild goose chase to find any reason for \n> the failure OTHER than PG.\n\n\nWhat you need to do is stop the wild goose chases. If problem is you PG\nversion, no amount of investigation into other areas is going to change\nthat. Your company is simply wasting money by ignoring this and blindly\nhoping that the problem will be something else.\n\nIt can be a difficult battle, but it can be won.\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n\n", "msg_date": "Tue, 17 Nov 2009 12:10:18 -0500", "msg_from": "Brad Nicholson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is Diskeeper Automatic Mode safe?" } ]
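For reference, a quick way to confirm which minor release a server is actually running before and after that kind of drop-in binary swap (the version strings below are only illustrative; real output names the build platform and compiler):

SELECT version();
-- e.g.  PostgreSQL 8.0.22 on i686-pc-mingw32, compiled by GCC ...
SHOW server_version;
-- e.g.  8.0.22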
[ { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nHello,\n\nwe are facing a performance regression regarding certain NOT EXISTS\nclauses when moving from 8.3.8 to 8.4.1. It is my understanding that the\nplaner treats LEFT JOINs and NOT EXISTS equally with antijoin in 8.4,\nbut this is causing an issue for us.\n\nHere is the table and index definition:\n\nantijoin=# \\d a\n Table \"public.a\"\n Column | Type | Modifiers\n- --------+---------+-----------\n a_id | integer | not null\n a_oid | integer |\n b_fk | integer |\nIndexes:\n \"a_pkey\" PRIMARY KEY, btree (a_id)\n \"idx_a_oid\" btree (a_oid)\n\nantijoin=# \\d b\n Table \"public.b\"\n Column | Type | Modifiers\n- --------+---------+-----------\n b_id | integer | not null\n c_id | integer |\n b_fk | integer |\n b_date | date |\nIndexes:\n \"b_pkey\" PRIMARY KEY, btree (b_id)\n \"idx_b_b_date\" btree (b_date)\n \"idx_b_fk\" btree (b_fk)\n \"idx_c_id\" btree (c_id)\n\nantijoin=# \\d c\n Table \"public.c\"\n Column | Type | Modifiers\n- --------+---------+-----------\n c_id | integer | not null\n c_bool | boolean |\nIndexes:\n \"c_pkey\" PRIMARY KEY, btree (c_id)\n\n\nThe statement in question is the following:\n\nselect a_id from a\nwhere a_oid = 5207146\nand (not exists(\n select b.b_id\n from b join c on b.c_id=c.c_id\n where a.b_fk=b.b_fk\n and b.b_date>now())\n);\n\n\nTable statistics:\nantijoin=# select count(*) from a;\n count\n- ---------\n 3249915\n(1 row)\n\nantijoin=# select count(*) from b;\n count\n- ----------\n 30616125\n(1 row)\n\nantijoin=# select count(*) from c;\n count\n- -------\n 261\n(1 row)\n\n\nThe execution plan for 8.3:\n\nQUERY PLAN\n-\n----------------------------------------------------------------------------------------------------------------------------\n Index Scan using idx_a_oid on a (cost=0.00..323.38 rows=1 width=4)\n(actual time=22.155..22.156 rows=1 loops=1)\n Index Cond: (a_oid = 5207146)\n Filter: (NOT (subplan))\n SubPlan\n -> Nested Loop (cost=0.00..314.76 rows=1 width=4) (actual\ntime=0.113..0.113 rows=0 loops=1)\n Join Filter: (b.c_id = c.c_id)\n -> Index Scan using idx_b_fk on b (cost=0.00..306.88 rows=1\nwidth=8) (actual time=0.111..0.111 rows=0 loops=1)\n Index Cond: ($0 = b_fk)\n Filter: (b_date > now())\n -> Seq Scan on c (cost=0.00..4.61 rows=261 width=4) (never\nexecuted)\n Total runtime: 22.197 ms\n(11 rows)\n\n\nThe execution plan for 8.4:\n\nQUERY PLAN\n\n-\n--------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Anti Join (cost=3253.47..182470.42 rows=1 width=4) (actual\ntime=377.362..377.370 rows=1 loops=1)\n Join Filter: (a.b_fk = b.b_fk)\n -> Index Scan using idx_a_oid on a (cost=0.00..8.62 rows=1 width=8)\n(actual time=0.019..0.025 rows=1 loops=1)\n Index Cond: (a_oid = 5207146)\n -> Hash Join (cost=3253.47..180297.30 rows=173159 width=4) (actual\ntime=137.360..336.169 rows=187509 loops=1)\n Hash Cond: (b.c_id = c.c_id)\n -> Bitmap Heap Scan on b (cost=3245.59..177908.50 rows=173159\nwidth=8) (actual time=137.144..221.287 rows=187509 loops=1)\n Recheck Cond: (b_date > now())\n -> Bitmap Index Scan on idx_b_b_date\n(cost=0.00..3202.30 rows=173159 width=0) (actual time=135.152..135.152\nrows=187509 loops=1)\n Index Cond: (b_date > now())\n -> Hash (cost=4.61..4.61 rows=261 width=4) (actual\ntime=0.189..0.189 rows=261 loops=1)\n -> Seq Scan on c (cost=0.00..4.61 rows=261 width=4)\n(actual time=0.008..0.086 rows=261 loops=1)\n Total runtime: 377.451 ms\n(13 
rows)\n\nThe hardware is a 4 way Quad Core2 96GB box, both databases configured\nwith the values:\n\nshared_buffers=32GB\nwork_mem=128MB\neffective_cache_size=48GB\n\nDefault statistics target is 200, all tables are freshly vacuum analyzed.\nThe system is x86_64 with postgres compiled from source.\n\nAs you can see the 8.4 run is 16 times slower. It was even worse before\nwe added the index idx_b_b_date which we didn't have initially.\nIs there anything we can do about this issue? Do you need more information?\n\n- --\nRegards,\n\n Wiktor Wodecki\n\n net mobile AG, Zollhof 17, 40221 Duesseldorf, Germany\n 923B DCF8 070C 9FDD 5E05 9AE3 E923 5A35 182C 9783\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.9 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/\n\niEYEARECAAYFAksCoygACgkQ6SNaNRgsl4PpKwCguGSDd2ehmVXM6mzzLWABEOnR\nWWcAoM7PnSUyHGr0tLymFLhJuO0JtpZ5\n=Oq8F\n-----END PGP SIGNATURE-----\n", "msg_date": "Tue, 17 Nov 2009 14:20:40 +0100", "msg_from": "Wiktor Wodecki <[email protected]>", "msg_from_op": true, "msg_subject": "Performance regression 8.3.8 -> 8.4.1 with NOT EXISTS" }, { "msg_contents": "Wiktor Wodecki <[email protected]> writes:\n> As you can see the 8.4 run is 16 times slower. It was even worse before\n> we added the index idx_b_b_date which we didn't have initially.\n> Is there anything we can do about this issue? Do you need more information?\n\nYou could prevent flattening of the EXISTS subquery by adding an OFFSET\n0 or some such to it. A real fix involves being able to handle nestloop\nindexscans where the parameter comes from more than one join level up;\nthere's been some discussion about that but it's not going to happen\nin 8.4.x.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 17 Nov 2009 11:30:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance regression 8.3.8 -> 8.4.1 with NOT EXISTS " }, { "msg_contents": "usual answer - use LEFT JOIN luke.", "msg_date": "Wed, 18 Nov 2009 09:17:44 +0100", "msg_from": "Grzegorz Jaśkiewicz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance regression 8.3.8 -> 8.4.1 with NOT EXISTS" } ]
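A sketch of the OFFSET 0 workaround Tom Lane describes above, applied to the original query from this thread (table and column names as posted; untested). The OFFSET 0 stops the planner from flattening the EXISTS subquery into an anti join, so it should fall back to a correlated subplan similar to the 8.3 plan:

select a_id from a
where a_oid = 5207146
and (not exists(
 select b.b_id
 from b join c on b.c_id=c.c_id
 where a.b_fk=b.b_fk
 and b.b_date>now()
 offset 0)
);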
[ { "msg_contents": "Hello everbody,\n\nI�m doing some tests with a large table about 7 milions tuples.\n\nSo, I need to retrieve only the last value for some key. That key has \nabout 20.000 tuples in this table.\n\nSELECT field1\nFROM table_7milions\nWHERE field1 = 'my_key'\nORDER BY field1 DESC\nLIMIT 1\n\nThe statistics tables shows the postgres read about 782656 block from \ndisk for the table and more 24704 blocks from disk for the index.\nA simple select is reading about 98 MB from disk and putting into shared \nmemory.\n\nSo I did some tests like that:\n\n-- I have created a partial table for that key\nSELECT *\nINTO part_table\nFROM table_7milions\nWHERE field1 = 'my_key'\n\n-- Now I do the same select on the same 20.000 tuples, but in the \npart_table\nSELECT field1\nFROM part_table\nWHERE field1 = 'my_key'\nORDER BY field1 desc\nLIMIT 1\n\nNow the statistics shows the postgres read 54016 blocks from disk, only \nfor the table becouse it doesn�t have a index.\nThe same select is reading about 6 MB from disk and putting into shared \nmemory.\n\nI�m thinking It hapens because in the 7 millions tables, the same 8k \nblock has diferent records with different keys, so only a few records \nwith 'my_key' is retrieved when I read a 8k block.\nIn the part_table, all records stored in a 8k block have 'my_key', so \nIt�s much optimized.\n\nMy doubt, there is a way to defrag my 7 millions table to put all \nrecords with the same key in the same 8k block?\n\nHow can I do that?\nIf there is not, I think it�s a good idea for the next versions.\n\nThank you,\n\nWaldomiro Caraiani\n", "msg_date": "Wed, 18 Nov 2009 14:22:30 -0200", "msg_from": "Waldomiro <[email protected]>", "msg_from_op": true, "msg_subject": "Too much blocks read" }, { "msg_contents": "On Wed, 18 Nov 2009, Waldomiro wrote:\n> So, I need to retrieve only the last value for some key. That key has about \n> 20.000 tuples in this table.\n>\n> SELECT field1\n> FROM table_7milions\n> WHERE field1 = 'my_key'\n> ORDER BY field1 DESC\n> LIMIT 1\n\nWhat's the point of this query? You are forcing Postgresql to read in all \nthe rows where field1 = 'my_key', so that they can be sorted, but the sort \nwill be completely unpredictable because all the values will be the same. \nIf you wanted to grab any row, then remove the ORDER BY, and it will just \nreturn the first one it finds.\n\nMatthew\n\n-- \n The best way to accelerate a Microsoft product is at 9.8 metres per second\n per second.\n - Anonymous\n", "msg_date": "Wed, 18 Nov 2009 16:23:48 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Too much blocks read" }, { "msg_contents": "Waldomiro wrote:\n> ...\n> I�m thinking It hapens because in the 7 millions tables, the same 8k \n> block has diferent records with different keys, so only a few records \n> with 'my_key' is retrieved when I read a 8k block.\n> In the part_table, all records stored in a 8k block have 'my_key', so \n> It�s much optimized.\n> \n> My doubt, there is a way to defrag my 7 millions table to put all \n> records with the same key in the same 8k block?\n\nRead about the \"CLUSTER ON index-name\" SQL command. 
It does exactly what you're asking.\n\nCraig\n", "msg_date": "Wed, 18 Nov 2009 08:29:53 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Too much blocks read" }, { "msg_contents": "In response to Waldomiro :\n> I?m thinking It hapens because in the 7 millions tables, the same 8k \n> block has diferent records with different keys, so only a few records \n> with 'my_key' is retrieved when I read a 8k block.\n> In the part_table, all records stored in a 8k block have 'my_key', so \n> It?s much optimized.\n> \n> My doubt, there is a way to defrag my 7 millions table to put all \n> records with the same key in the same 8k block?\n> \n> How can I do that?\n\nCLUSTER your table:\nhttp://www.postgresql.org/docs/current/static/sql-cluster.html\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99\n", "msg_date": "Wed, 18 Nov 2009 17:30:44 +0100", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Too much blocks read" } ]
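To make the CLUSTER suggestion above concrete, a minimal sketch using the table and column from this thread (the index name is invented; the USING form is available from 8.3 on, older releases spell it CLUSTER indexname ON tablename). Note that CLUSTER rewrites the whole table under an exclusive lock and does not maintain the ordering for rows inserted later, so it has to be repeated periodically:

CREATE INDEX idx_table_7milions_field1 ON table_7milions (field1);
CLUSTER table_7milions USING idx_table_7milions_field1;
ANALYZE table_7milions;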
[ { "msg_contents": "Hello,\n\nI've inherited some very...interestingly...\ndesigned tables, and am trying to figure out how to make them usable. I've\ngot an ugly hack in place, but it will not use an index properly, and I'm\nhoping someone will be able to point me in the right direction.\n\nProduction is running 8.1.3, but I'm testing in 8.3.3. I know that's not\ngood, but I'm seeing the exact same problem in both, so hopefully fixing it\nin one will fix the other.\n\nAll tables/functions/views are included at the bottom, somewhat truncated to\nreduce length/repetition.\n\nThe table in question (though not the only one with this problem) has a\nseries of 24 column pairs per row, one holding a code and the other a\nvalue. Any code/value combo could be populated in any of these fields (the\ncodes identify the type of value). The row is keyed into based upon an id\nnumber/qualifier pair. So, for a single id number/qualifier, there can be\nfrom 0 to 24 populated pairs. We need to go in for a single key and pull a\nlist of all codes/values. Hopefully that makes sense.\n\nI created a set-returning function that would pull in the row for a specific\nnumber/qualifier combination, check each code to see if it was null/empty,\nand if not it would return a record containing the code/value.\n\nFor various reasons I needed to create a view based upon this. Due to\npostgres not liking having set-returning pl/pgsql functions in select\nstatements, the only way that I could get the view to work was to create a\npl/sql wrapper that simply pulls the results of the prior pl/pgsql function.\n\nI have the view working, and if I pull straight from the view it uses the\nindex properly (on id_nbr, id_qfr). However, if I try to join to another\ntable, based upon the indexed fields, I get a sequential scan. This is not\nideal at all. I know a lot of this is bad practice and ugly, but I need to\nget something that will work.\n\nAny ideas? 
I'm willing to rework any and all as far as views/functions are\nconcerned, redesigning the tables is sadly not an option at this time.\n\n\nUgly table:\n\nCREATE TABLE value_codes\n(\n id_nbr integer NOT NULL,\n id_qfr character(1) NOT NULL,\n val_1_cd_1 character varying(30),\n val_1_amt_1 numeric(10,2),\n val_1_cd_2 character varying(30),\n val_1_amt_2 numeric(10,2),\n ...\n val_2_cd_12 character varying(30),\n val_2_amt_12 numeric(10,2),\n CONSTRAINT value_codes_pkey PRIMARY KEY (id_nbr, id_qfr)\n)\nWITH (\n OIDS=TRUE\n);\n\n\n\nJoined table:\n\nCREATE TABLE main_table\n(\n id_nbr integer NOT NULL,\n id_qfr character(1) NOT NULL,\n create_dt character(8),\n create_tm character(8),\n CONSTRAINT main_table_pkey PRIMARY KEY (id_nbr, id_qfr)\n)\nWITH (\n OIDS=FALSE\n);\n\nCREATE INDEX main_table_create_dt_index\n ON main_table\n USING btree\n (create_dt);\n\n\n\nInitial function:\n\nCREATE OR REPLACE FUNCTION get_value_codes(IN fun_id_nbr integer,\n IN fun_id_qfr character,\n OUT value_code character varying,\n OUT value_amount numeric)\n RETURNS SETOF record AS\n$BODY$\ndeclare\n current_row record;\nbegin\n\n select val_1_cd_1,\n val_1_amt_1,\n val_1_cd_2,\n val_1_amt_2,\n ...\n val_2_cd_12,\n val_2_amt_12\n into current_row\n from value_codes\n where id_nbr = fun_id_nbr\n and id_qfr = fun_id_qfr;\n\n if\n current_row.val_1_cd_1 is not null\n and current_row.val_1_cd_1 != ''\n then\n value_code := current_row.val_1_cd_1;\n value_amount := current_row.val_1_amt_1;\n\n return next;\n end if;\n ...\n if\n current_row.val_2_cd_12 is not null\n and current_row.val_2_cd_12 != ''\n then\n value_code := current_row.val_2_cd_12;\n value_amount := current_row.val_2_amt_12;\n\n return next;\n end if;\n\n return;\nend;\n$BODY$\n LANGUAGE 'plpgsql' VOLATILE\n COST 100\n ROWS 10;\n\n\n\nWrapper function:\n\nCREATE OR REPLACE FUNCTION get_value_codes_wrapper(IN id_nbr integer,\n IN id_qfr character,\n OUT value_code character varying,\n OUT value_amount numeric)\n RETURNS SETOF record AS\n$BODY$\n SELECT * FROM get_value_codes($1, $2);\n$BODY$\n LANGUAGE 'sql' VOLATILE\n COST 100\n ROWS 10;\n\n\n\nView:\n\nCREATE OR REPLACE VIEW value_codes_view AS\n SELECT value_codes.id_nbr,\n value_codes.id_qfr,\n (get_value_codes_wrapper(value_codes.id_nbr,\nvalue_codes.id_qfr)).value_code AS value_code,\n (get_value_codes_wrapper(value_codes.id_nbr,\nvalue_codes.id_qfr)).value_amount AS value_amount\n FROM value_codes;\n\n\n\nSimple query Explained:\n\nexplain analyze select * from value_codes_view where id_nbr >= 90000000;\n\nIndex Scan using value_codes_pkey on value_codes (cost=0.00..128.72 rows=53\nwidth=6) (actual time=17.593..172.031 rows=15 loops=1)\n Index Cond: (id_nbr >= 90000000)\nTotal runtime: 172.141 ms\n\n\nJoin query explained:\n\nexplain analyze select * from main_table, value_codes_view\nwhere create_dt >= '20091001'\nand main_table.id_nbr = value_codes_view.id_nbr\nand main_table.id_qfr = value_codes_view.id_qfr;\n\nHash Join (cost=24.38..312425.40 rows=1 width=97) (actual\ntime=220062.607..220295.870 rows=1 loops=1)\n Hash Cond: ((value_codes.id_nbr = main_table.id_nbr) AND\n(value_codes.id_qfr = main_table.id_qfr))\n -> Seq Scan on value_codes (cost=0.00..297676.77 rows=535427 width=6)\n(actual time=15.846..219553.511 rows=138947 loops=1)\n -> Hash (cost=21.47..21.47 rows=194 width=24) (actual time=0.455..0.455\nrows=53 loops=1)\n -> Index Scan using main_table_create_dt_index on main_table\n(cost=0.00..21.47 rows=194 width=24) (actual time=0.033..0.243 rows=53\nloops=1)\n Index Cond: 
(create_dt >= '20091001'::bpchar)\nTotal runtime: 220296.173 ms\n\nHello,I've inherited some very...interestingly...designed\ntables, and am trying to figure out how to make them usable.  I've got\nan ugly hack in place, but it will not use an index properly, and I'm\nhoping someone will be able to point me in the right direction.\nProduction is running 8.1.3, but I'm testing in 8.3.3.  I know\nthat's not good, but I'm seeing the exact same problem in both, so\nhopefully fixing it in one will fix the other.All tables/functions/views are included at the bottom, somewhat truncated to reduce length/repetition.\nThe table in question (though not the only one with this problem)\nhas a series of 24 column pairs per row, one holding a code and the other a\nvalue.  Any code/value combo could be populated in any of these fields\n(the codes identify the type of value).  The row is keyed into based\nupon an id number/qualifier pair.  So, for a single id\nnumber/qualifier, there can be from 0 to 24 populated pairs.  We need\nto go in for a single key and pull a list of all codes/values. \nHopefully that makes sense.\nI created a set-returning function that would pull in the row for a\nspecific number/qualifier combination, check each code to see if it was\nnull/empty, and if not it would return a record containing the\ncode/value.\nFor various reasons I needed to create a view based upon this.  Due\nto postgres not liking having set-returning pl/pgsql functions in\nselect statements, the only way that I could get the view to work was\nto create a pl/sql wrapper that simply pulls the results of the prior\npl/pgsql function.\nI have the view working, and if I pull straight from the view it\nuses the index properly (on id_nbr, id_qfr).  However, if I try to join\nto another table, based upon the indexed fields, I get a sequential\nscan.  This is not ideal at all.  I know a lot of this is bad practice\nand ugly, but I need to get something that will work.\nAny ideas?  I'm willing to rework any and all as far as\nviews/functions are concerned, redesigning the tables is sadly not an\noption at this time.Ugly table:CREATE TABLE value_codes(  id_nbr integer NOT NULL,\n  id_qfr character(1) NOT NULL,  val_1_cd_1 character varying(30),  val_1_amt_1 numeric(10,2),  val_1_cd_2 character varying(30),  val_1_amt_2 numeric(10,2),  ...  val_2_cd_12 character varying(30),\n\n  val_2_amt_12 numeric(10,2),  CONSTRAINT value_codes_pkey PRIMARY KEY (id_nbr, id_qfr))WITH (  OIDS=TRUE);Joined table:CREATE TABLE main_table(  id_nbr integer NOT NULL,\n\n  id_qfr character(1) NOT NULL,  create_dt character(8),  create_tm character(8),  CONSTRAINT main_table_pkey PRIMARY KEY (id_nbr, id_qfr))WITH (  OIDS=FALSE);CREATE INDEX main_table_create_dt_index\n\n  ON main_table  USING btree  (create_dt);Initial function:CREATE OR REPLACE FUNCTION get_value_codes(IN fun_id_nbr integer,     IN fun_id_qfr character,     OUT value_code character varying, \n\n    OUT value_amount numeric)  RETURNS SETOF record AS$BODY$declare    current_row    record;begin    select    val_1_cd_1,        val_1_amt_1,        val_1_cd_2,        val_1_amt_2,\n\n        ...        
val_2_cd_12,        val_2_amt_12    into     current_row    from     value_codes    where   id_nbr = fun_id_nbr        and id_qfr = fun_id_qfr;    if         current_row.val_1_cd_1 is not null \n\n        and current_row.val_1_cd_1 != ''    then         value_code := current_row.val_1_cd_1;        value_amount := current_row.val_1_amt_1;        return next;    end if;    ...\n    if \n        current_row.val_2_cd_12 is not null         and current_row.val_2_cd_12 != ''    then         value_code := current_row.val_2_cd_12;        value_amount := current_row.val_2_amt_12;\n\n        return next;    end if;    return;end;$BODY$  LANGUAGE 'plpgsql' VOLATILE  COST 100  ROWS 10;Wrapper function:CREATE OR REPLACE FUNCTION get_value_codes_wrapper(IN id_nbr integer, \n\n    IN id_qfr character,     OUT value_code character varying,     OUT value_amount numeric)  RETURNS SETOF record AS$BODY$    SELECT * FROM get_value_codes($1, $2);$BODY$  LANGUAGE 'sql' VOLATILE\n\n  COST 100  ROWS 10;View:CREATE OR REPLACE VIEW value_codes_view AS  SELECT value_codes.id_nbr,       value_codes.id_qfr,       (get_value_codes_wrapper(value_codes.id_nbr, value_codes.id_qfr)).value_code AS value_code, \n\n      (get_value_codes_wrapper(value_codes.id_nbr, value_codes.id_qfr)).value_amount AS value_amount   FROM value_codes;Simple query Explained:explain analyze select * from value_codes_view where id_nbr >= 90000000;\nIndex Scan using value_codes_pkey on value_codes \n(cost=0.00..128.72 rows=53 width=6) (actual time=17.593..172.031\nrows=15 loops=1)  Index Cond: (id_nbr >= 90000000)Total runtime: 172.141 msJoin query explained:\nexplain analyze select * from main_table, value_codes_viewwhere create_dt >= '20091001'and main_table.id_nbr = value_codes_view.id_nbrand main_table.id_qfr = value_codes_view.id_qfr;Hash Join  (cost=24.38..312425.40 rows=1 width=97) (actual time=220062.607..220295.870 rows=1 loops=1)\n\n  Hash Cond: ((value_codes.id_nbr = main_table.id_nbr) AND (value_codes.id_qfr = main_table.id_qfr)) \n->  Seq Scan on value_codes  (cost=0.00..297676.77 rows=535427\nwidth=6) (actual time=15.846..219553.511 rows=138947 loops=1)\n  ->  Hash  (cost=21.47..21.47 rows=194 width=24) (actual time=0.455..0.455 rows=53 loops=1)       \n->  Index Scan using main_table_create_dt_index on main_table \n(cost=0.00..21.47 rows=194 width=24) (actual time=0.033..0.243 rows=53\nloops=1)\n              Index Cond: (create_dt >= '20091001'::bpchar)Total runtime: 220296.173 ms", "msg_date": "Thu, 19 Nov 2009 12:06:06 -0500", "msg_from": "Jonathan Foy <[email protected]>", "msg_from_op": true, "msg_subject": "View based upon function won't use index on joins" }, { "msg_contents": "How about\n\nCREATE OR REPLACE VIEW value_codes_view AS\nselect * from (\n SELECT value_codes.id_nbr,\n value_codes.id_qfr,\n (ARRAY[val_1_cd_1, ... , val_2_cd_12])[i] as value_code,\n (ARRAY[val_1_amt_1, ... , val_2_amt_12])[i] as value_amount,\n FROM value_codes, generate_series(1,24) i) a\nwhere value_code is not null and value_code != '';\n?\n\nHow aboutCREATE OR REPLACE VIEW value_codes_view AS select * from ( SELECT value_codes.id_nbr,       value_codes.id_qfr,       (ARRAY[val_1_cd_1, ... , val_2_cd_12])[i] as value_code,      (ARRAY[val_1_amt_1, ... 
, val_2_amt_12])[i] as value_amount,\n\n   FROM value_codes, generate_series(1,24) i) awhere value_code is not null and value_code != '';?", "msg_date": "Fri, 20 Nov 2009 12:30:33 +0200", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: View based upon function won't use index on joins" }, { "msg_contents": "This seems to result in the same problem; should I attempt to pull for a\nspecific id_nbr/id_qfr, postgres uses the index without a problem. If I try\nto join the two tables/views however, it insists on doing a sequential scan\n(actually two in this case) and will not use the index. Any other\nideas/explanations?\n\nThat being said, I probably need to look into arrays more. I haven't used\nthem at all in my relatively brief experience with postgres. More research!\n\n2009/11/20 Віталій Тимчишин <[email protected]>\n\n> How about\n>\n>\n> CREATE OR REPLACE VIEW value_codes_view AS\n> select * from (\n>\n> SELECT value_codes.id_nbr,\n> value_codes.id_qfr,\n> (ARRAY[val_1_cd_1, ... , val_2_cd_12])[i] as value_code,\n> (ARRAY[val_1_amt_1, ... , val_2_amt_12])[i] as value_amount,\n> FROM value_codes, generate_series(1,24) i) a\n> where value_code is not null and value_code != '';\n> ?\n>\n\nThis seems to result in the same problem; should I attempt to pull for a specific id_nbr/id_qfr, postgres uses the index without a problem. If I try to join the two tables/views however, it insists on doing a sequential scan (actually two in this case) and will not use the index.  Any other ideas/explanations?\nThat being said, I probably need to look into arrays more.  I haven't used them at all in my relatively brief experience with postgres.  More research!2009/11/20 Віталій Тимчишин <[email protected]>\nHow aboutCREATE OR REPLACE VIEW value_codes_view AS select * from (\n SELECT value_codes.id_nbr,       value_codes.id_qfr,       (ARRAY[val_1_cd_1, ... , val_2_cd_12])[i] as value_code,      (ARRAY[val_1_amt_1, ... , val_2_amt_12])[i] as value_amount,\n\n   FROM value_codes, generate_series(1,24) i) awhere value_code is not null and value_code != '';?", "msg_date": "Fri, 20 Nov 2009 10:01:35 -0500", "msg_from": "Jonathan Foy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: View based upon function won't use index on joins" }, { "msg_contents": "20 листопада 2009 р. 17:01 Jonathan Foy <[email protected]> написав:\n\n> This seems to result in the same problem; should I attempt to pull for a\n> specific id_nbr/id_qfr, postgres uses the index without a problem. If I try\n> to join the two tables/views however, it insists on doing a sequential scan\n> (actually two in this case) and will not use the index. Any other\n> ideas/explanations?\n>\n\nHave you tried to do same (join) when not using the viewes or converting\ncolumns into records? May be the problem is not in conversion, but in\nsomething simplier, like statistics or index bloat?\n\nBest regards, Vitalii Tymchyshyn\n\n20 листопада 2009 р. 17:01 Jonathan Foy <[email protected]> написав:\nThis seems to result in the same problem; should I attempt to pull for a specific id_nbr/id_qfr, postgres uses the index without a problem. If I try to join the two tables/views however, it insists on doing a sequential scan (actually two in this case) and will not use the index.  Any other ideas/explanations?\nHave you tried to do same (join) when not using the viewes or converting columns into records? 
May be the problem is not in conversion, but in something simplier, like statistics or index bloat?\nBest regards, Vitalii Tymchyshyn", "msg_date": "Fri, 20 Nov 2009 18:34:09 +0200", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: View based upon function won't use index on joins" }, { "msg_contents": "I don't think so. I actually dumped the tables involved into stripped down\nversions of themselves in a new database for testing, so the data involved\nshould be completely fresh. I ran a vacuum analyze after the dump of\ncourse.\n\nJust for paranoia's sake though I did do the following:\n\nexplain analyze select id_nbr, id_qfr,\nval_1_cd_1,\n val_1_cd_2,\n ...\n val_2_amt_12\nfrom value_codes\nwhere main_table.create_dt >= '20091001'\nand main_table.id_nbr = value_codes.id_nbr\nand main_table.id_qfr = value_codes.id_qfr\n\nwith the following results\n\n\"Nested Loop (cost=0.00..1592.17 rows=132 width=150) (actual\ntime=0.093..1.075 rows=4 loops=1)\"\n\" -> Index Scan using main_table_create_dt_index on main_table\n(cost=0.00..21.47 rows=194 width=6) (actual time=0.035..0.249 rows=53\nloops=1)\"\n\" Index Cond: (create_dt >= '20091001'::bpchar)\"\n\" -> Index Scan using value_codes_pkey on value_codes (cost=0.00..8.08\nrows=1 width=150) (actual time=0.007..0.007 rows=0 loops=53)\"\n\" Index Cond: ((value_codes.id_nbr = main_table.id_nbr) AND\n(value_codes.id_qfr = main_table.id_qfr))\"\n\"Total runtime: 1.279 ms\"\n\n\nI'm stumped. I'm starting to think that I'm trying to get postgres to do\nsomething that it just doesn't do. Shy of just throwing a trigger in the\ntable to actually populate a second table with the same data solely for\nreporting purposes, which I hate to do for obvious reasons, I don't know\nwhat else to do. And this is only one example of this situation in the\ndatabases that I'm dealing with, I was hoping to come up with a more generic\nsolution that I could apply in any number of locations.\n\nI do very much appreciate the responses...I've been gradually getting deeper\nand deeper into postgres, and am still very much learning as I go. All\nadvice is very helpful.\n\nThanks..\n\n2009/11/20 Віталій Тимчишин <[email protected]>\n\n>\n>\n> 20 листопада 2009 р. 17:01 Jonathan Foy <[email protected]> написав:\n>\n> This seems to result in the same problem; should I attempt to pull for a\n>> specific id_nbr/id_qfr, postgres uses the index without a problem. If I try\n>> to join the two tables/views however, it insists on doing a sequential scan\n>> (actually two in this case) and will not use the index. Any other\n>> ideas/explanations?\n>>\n>\n> Have you tried to do same (join) when not using the viewes or converting\n> columns into records? May be the problem is not in conversion, but in\n> something simplier, like statistics or index bloat?\n>\n> Best regards, Vitalii Tymchyshyn\n>\n\nI don't think so. I actually dumped the tables involved into stripped down versions of themselves in a new database for testing, so the data involved should be completely fresh.  I ran a vacuum analyze after the dump of course.\nJust for paranoia's sake though I did do the following:explain analyze select id_nbr, id_qfr, val_1_cd_1,        val_1_cd_2,        ...        
val_2_amt_12from value_codeswhere main_table.create_dt >= '20091001'\nand main_table.id_nbr = value_codes.id_nbrand main_table.id_qfr = value_codes.id_qfrwith the following results\"Nested Loop  (cost=0.00..1592.17 rows=132 width=150) (actual time=0.093..1.075 rows=4 loops=1)\"\n\"  ->  Index Scan using main_table_create_dt_index on main_table  (cost=0.00..21.47 rows=194 width=6) (actual time=0.035..0.249 rows=53 loops=1)\"\"        Index Cond: (create_dt >= '20091001'::bpchar)\"\n\"  ->  Index Scan using value_codes_pkey on value_codes  (cost=0.00..8.08 rows=1 width=150) (actual time=0.007..0.007 rows=0 loops=53)\"\"        Index Cond: ((value_codes.id_nbr = main_table.id_nbr) AND (value_codes.id_qfr = main_table.id_qfr))\"\n\"Total runtime: 1.279 ms\"I'm stumped.  I'm starting to think that I'm trying to get postgres to do something that it just doesn't do.  Shy of just throwing a trigger in the table to actually populate a second table with the same data solely for reporting purposes, which I hate to do for obvious reasons, I don't know what else to do.  And this is only one example of this situation in the databases that I'm dealing with, I was hoping to come up with a more generic solution that I could apply in any number of locations.\nI do very much appreciate the responses...I've been gradually getting deeper and deeper into postgres, and am still very much learning as I go.  All advice is very helpful.Thanks..\n2009/11/20 Віталій Тимчишин <[email protected]>\n20 листопада 2009 р. 17:01 Jonathan Foy <[email protected]> написав:\n\nThis seems to result in the same problem; should I attempt to pull for a specific id_nbr/id_qfr, postgres uses the index without a problem. If I try to join the two tables/views however, it insists on doing a sequential scan (actually two in this case) and will not use the index.  Any other ideas/explanations?\nHave you tried to do same (join) when not using the viewes or converting columns into records? May be the problem is not in conversion, but in something simplier, like statistics or index bloat?\nBest regards, Vitalii Tymchyshyn", "msg_date": "Fri, 20 Nov 2009 13:45:35 -0500", "msg_from": "Jonathan Foy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: View based upon function won't use index on joins" }, { "msg_contents": "2009/11/20 Jonathan Foy <[email protected]>:\n> Shy of just throwing a trigger in the\n> table to actually populate a second table with the same data solely for\n> reporting purposes,\n\nThat's what I would do in your situation, FWIW. Query optimization is\na hard problem even under the best of circumstances; getting the\nplanner to DTRT with a crazy schema is - well, really hard.\n\n...Robert\n", "msg_date": "Mon, 23 Nov 2009 11:47:31 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: View based upon function won't use index on joins" } ]
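A rough sketch of the trigger-maintained reporting table Robert suggests, reusing get_value_codes() from earlier in the thread. The flat table, index and trigger names here are invented, an initial backfill and a DELETE trigger would still be needed, and this is untested:

CREATE TABLE value_codes_flat
(
  id_nbr integer NOT NULL,
  id_qfr character(1) NOT NULL,
  value_code character varying(30),
  value_amount numeric(10,2)
);

CREATE INDEX value_codes_flat_key ON value_codes_flat (id_nbr, id_qfr);

CREATE OR REPLACE FUNCTION value_codes_flat_sync() RETURNS trigger AS
$BODY$
begin
    -- drop the old flattened rows for this key, then rebuild them
    delete from value_codes_flat
    where id_nbr = new.id_nbr and id_qfr = new.id_qfr;

    insert into value_codes_flat (id_nbr, id_qfr, value_code, value_amount)
    select new.id_nbr, new.id_qfr, value_code, value_amount
    from get_value_codes(new.id_nbr, new.id_qfr);

    return new;
end;
$BODY$
  LANGUAGE 'plpgsql';

CREATE TRIGGER value_codes_flat_sync_trg
  AFTER INSERT OR UPDATE ON value_codes
  FOR EACH ROW EXECUTE PROCEDURE value_codes_flat_sync();

Reporting queries can then join main_table to value_codes_flat directly on (id_nbr, id_qfr), which gives the planner an ordinary indexable join instead of a set-returning function hidden inside a view.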
[ { "msg_contents": "Are the FSM parameters for each database, or the entire Postgres system? In other words, if I have 100 databases, do I need to increase max_fsm_pages and max_fsm_relations by a factor of 100, or keep them the same as if I just have one database?\n\nI suspect they're per-database, i.e. as I add databases, I don't have to increase the FSM parameters, but the documentation isn't 100% clear on this point.\n\nThanks,\nCraig\n", "msg_date": "Thu, 19 Nov 2009 10:12:30 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": true, "msg_subject": "FSM - per database or per installation?" }, { "msg_contents": "Craig James wrote:\n> Are the FSM parameters for each database, or the entire Postgres\n> system? In other words, if I have 100 databases, do I need to increase\n> max_fsm_pages and max_fsm_relations by a factor of 100, or keep them the\n> same as if I just have one database?\n> \n> I suspect they're per-database, i.e. as I add databases, I don't have to\n> increase the FSM parameters, but the documentation isn't 100% clear on\n> this point.\n\nIt's per cluster, ie *not* per-database.\n\nThe parameter is gone in 8.4, BTW.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 19 Nov 2009 20:33:16 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FSM - per database or per installation?" }, { "msg_contents": "Heikki Linnakangas wrote:\n> Craig James wrote:\n>> Are the FSM parameters for each database, or the entire Postgres\n>> system? In other words, if I have 100 databases, do I need to increase\n>> max_fsm_pages and max_fsm_relations by a factor of 100, or keep them the\n>> same as if I just have one database?\n>>\n>> I suspect they're per-database, i.e. as I add databases, I don't have to\n>> increase the FSM parameters, but the documentation isn't 100% clear on\n>> this point.\n> \n> It's per cluster, ie *not* per-database.\n\nHmmm ... it seems I have an impossible problem. I have ~250 databases each with about 2500 relations (as in \"select count(1) from pg_class where relname not like 'pg_%'\"). That makes roughly 625,000 relations.\n\nBut ... for max_fsm_pages, the Postgres manual says, \"This setting must be at least 16 * max_fsm_relations. The default is chosen by initdb depending on the amount of available memory, and can range from 20k to 200k pages.\"\n\nSo max_fsm_pages should be 16*625000, or 10,000,000 ... except that the limit is 200,000. Or is it only the *default* that can be 200,000 max, but you can override and set it to any number you like?\n\nIt appears that Postgres 8.3 and earlier can't do garbage collection on a configuration like mine. Do I misunderstand something?\n\n> The parameter is gone in 8.4, BTW.\n\nBoth max_fsm_relations and max_fsm_pages?\n\nThanks,\nCraig\n\n", "msg_date": "Wed, 23 Dec 2009 17:38:48 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FSM - per database or per installation?" }, { "msg_contents": "On Wed, Dec 23, 2009 at 6:38 PM, Craig James <[email protected]> wrote:\n> Heikki Linnakangas wrote:\n>>\n>> Craig James wrote:\n>>>\n>>> Are the FSM parameters for each database, or the entire Postgres\n>>> system?  In other words, if I have 100 databases, do I need to increase\n>>> max_fsm_pages and max_fsm_relations by a factor of 100, or keep them the\n>>> same as if I just have one database?\n>>>\n>>> I suspect they're per-database, i.e. 
as I add databases, I don't have to\n>>> increase the FSM parameters, but the documentation isn't 100% clear on\n>>> this point.\n>>\n>> It's per cluster, ie *not* per-database.\n>\n> Hmmm ... it seems I have an impossible problem.  I have ~250 databases each\n> with about 2500 relations (as in \"select count(1) from pg_class where\n> relname not like 'pg_%'\").  That makes roughly 625,000 relations.\n>\n> But ... for max_fsm_pages, the Postgres manual says, \"This setting must be\n> at least 16 * max_fsm_relations. The default is chosen by initdb depending\n> on the amount of available memory, and can range from 20k to 200k pages.\"\n>\n> So max_fsm_pages should be 16*625000, or 10,000,000 ... except that the\n> limit is 200,000.  Or is it only the *default* that can be 200,000 max, but\n> you can override and set it to any number you like?\n\nNO! that's not the max (if it was I would be in serious trouble.)\nThat's the max that you'll see done by initdb when creating the\ncluster.\n\nWe run 10M fsm pages on our servers, and use about 2.5M of that.\n", "msg_date": "Wed, 23 Dec 2009 19:03:23 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FSM - per database or per installation?" }, { "msg_contents": "Craig James wrote:\n> Heikki Linnakangas wrote:\n\n> >The parameter is gone in 8.4, BTW.\n> \n> Both max_fsm_relations and max_fsm_pages?\n\nYes, both are gone.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Wed, 23 Dec 2009 23:07:15 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FSM - per database or per installation?" }, { "msg_contents": "On 20/11/2009 2:33 AM, Heikki Linnakangas wrote:\n> Craig James wrote:\n>> Are the FSM parameters for each database, or the entire Postgres\n>> system? In other words, if I have 100 databases, do I need to increase\n>> max_fsm_pages and max_fsm_relations by a factor of 100, or keep them the\n>> same as if I just have one database?\n>>\n>> I suspect they're per-database, i.e. as I add databases, I don't have to\n>> increase the FSM parameters, but the documentation isn't 100% clear on\n>> this point.\n>\n> It's per cluster, ie *not* per-database.\n>\n> The parameter is gone in 8.4, BTW.\n\nSee:\n\n http://www.postgresql.org/docs/8.4/static/release-8-4.html#AEN95067\n\nfor why they've been removed, which boils down to \"PostgreSQL manages \nthe fsm automatically now and no longer requires all that RAM to do it, \neither\".\n\nThanks Heikki - the fsm _really_ simplify admin and remove a bunch of \ncommon gotchas for Pg users.\n\n--\nCraig Ringer\n", "msg_date": "Thu, 24 Dec 2009 11:32:47 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FSM - per database or per installation?" } ]
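Because the free space map is sized for the whole cluster, the relation count that matters is the sum over all databases. A rough way to check the current settings and estimate the requirement (run the count in each database, or multiply a typical per-database count by the number of databases; plain tables, TOAST tables and indexes all take FSM slots):

SHOW max_fsm_relations;
SHOW max_fsm_pages;

-- per database:
SELECT count(*) FROM pg_class WHERE relkind IN ('r', 't', 'i');

On 8.3, the closing lines of a database-wide VACUUM VERBOSE also report how many page slots and relations the free space map currently needs, which is the easiest way to see whether the configured limits are actually being exceeded.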
[ { "msg_contents": "Hi All,\n\nI have a stats collection system where I collect stats at specific\nintervals (from network monitoring nodes), and stuff them into a\nPostgreSQL DB. To make make the retrieval faster, I'm using a\npartitioning scheme as follows:\n\nstats_300: data gathered at 5 mins, child tables named stats_300_t1_t2\n(where t2 - t1 = 2 hrs), i.e. 12 tables in one day\nstats_3600: data gathered / calculated over 1 hour, child tables\nsimilar to the above - stats_3600_t1_t2, where (t2 - t1) is 2 days\n(i.e. 15 tables a month)\nstats_86400: data gathered / calculated over 1 day, stored as\nstats_86400_t1_t2 where (t2 - t1) is 30 days (i.e. 12 tables a year).\n\nThe child tables have 4 indexes each (including a unique index, also\nused for CLUSTER). No indexes are defined on the parent tables. Data\ninsert / load happens directly to the child table (no stored procs\ninvolved).\n\nI'm running into the error \"ERROR: out of shared memory HINT: You\nmight need to increase max_locks_per_transaction. \". Looking back, it\nseems acceptable to have max_locks in the thousands (with a\ncorresponding shared_buffers setting so I don't overflow SHMMAX).\nHowever, what I find strange is that I only have 32 tables so far\n(some at 5-min, some at 1-hour). I'm doing some data preloading, and\neven that ran into this problem. I'm running this on a shared server\nwith 4GB total RAM, so I don't want PG to use too much. (Eventually,\nthe system is designed to have\n\nI tried increasing the max_locks_per_transaction, but that just seems\nto delay the inevitable.\n\nAny ideas what I might be doing wrong? If this may be a programmatic\nissue, I'm using Python PygreSQL to load the data as prepared\nstatements. I have one connection to the DB, create and release a\ncursor, and commit transactions when I'm done.\n\n--- begin postgresql.conf ---\ndata_directory = '/data/pg'\nhba_file = '/etc/pg_hba.conf'\nident_file = '/etc/pg_ident.conf'\nexternal_pid_file = '/data/pg/8.4-main.pid'\nport = 5432\nmax_connections = 8\nunix_socket_directory = '/tmp'\nssl = false\nshared_buffers = 128MB # used to be 500\nwork_mem = 64MB\nmaintenance_work_mem = 64MB\nwal_buffers = 1MB\ncheckpoint_segments = 30\ncheckpoint_timeout = 15min\neffective_cache_size = 1024MB\ndefault_statistics_target = 800\nconstraint_exclusion = on\nlog_destination = 'syslog'\nsyslog_facility = 'LOCAL1'\nsyslog_ident = 'postgres'\nclient_min_messages = error\nlog_min_messages = error\nlog_line_prefix = '%t '\nlog_temp_files = 0\ndatestyle = 'iso, mdy'\nlc_messages = 'C'\nlc_monetary = 'C'\nlc_numeric = 'C'\nlc_time = 'C'\ndefault_text_search_config = 'pg_catalog.english'\nmax_locks_per_transaction = 8000 # Originally 500, tried 1k and 2k also\n\nThanks\nHrishikesh\n", "msg_date": "Thu, 19 Nov 2009 17:22:56 -0800", "msg_from": "\n =?UTF-8?B?SHJpc2hpa2VzaCAo4KS54KWD4KS34KWA4KSV4KWH4KS2IOCkruClh+CkueClh+CkguCkpuCksw==?=\n\t=?UTF-8?B?4KWHKQ==?= <[email protected]>", "msg_from_op": true, "msg_subject": "Partitions and max_locks_per_transaction" }, { "msg_contents": "=?UTF-8?B?SHJpc2hpa2VzaCAo4KS54KWD4KS34KWA4KSV4KWH4KS2IOCkruClh+CkueClh+CkguCkpuCksw==?= =?UTF-8?B?4KWHKQ==?= <[email protected]> writes:\n> To make make the retrieval faster, I'm using a\n> partitioning scheme as follows:\n\n> stats_300: data gathered at 5 mins, child tables named stats_300_t1_t2\n> (where t2 - t1 = 2 hrs), i.e. 
12 tables in one day\n> stats_3600: data gathered / calculated over 1 hour, child tables\n> similar to the above - stats_3600_t1_t2, where (t2 - t1) is 2 days\n> (i.e. 15 tables a month)\n> stats_86400: data gathered / calculated over 1 day, stored as\n> stats_86400_t1_t2 where (t2 - t1) is 30 days (i.e. 12 tables a year).\n\nSo you've got, um, something less than a hundred rows in any one child\ntable? This is carrying partitioning to an insane degree, and your\nperformance is NOT going to be improved by it.\n\nI'd suggest partitioning on boundaries that will give you order of a\nmillion rows per child. That could be argued an order of magnitude or\ntwo either way, but what you've got is well outside the useful range.\n\n> I'm running into the error \"ERROR: out of shared memory HINT: You\n> might need to increase max_locks_per_transaction.\n\nNo surprise given the number of tables and indexes you're forcing\nthe system to deal with ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 20 Nov 2009 02:08:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitions and max_locks_per_transaction " }, { "msg_contents": "It was Thursday 19 November 2009 11:08:10 pm that the wise Tom Lane thus \nwrote:\n> <[email protected]> writes:\n> > To make make the retrieval faster, I'm using a\n> > partitioning scheme as follows:\n> >\n> > stats_300: data gathered at 5 mins, child tables named stats_300_t1_t2\n> > (where t2 - t1 = 2 hrs), i.e. 12 tables in one day\n> > stats_3600: data gathered / calculated over 1 hour, child tables\n> > similar to the above - stats_3600_t1_t2, where (t2 - t1) is 2 days\n> > (i.e. 15 tables a month)\n> > stats_86400: data gathered / calculated over 1 day, stored as\n> > stats_86400_t1_t2 where (t2 - t1) is 30 days (i.e. 12 tables a year).\n> \n> So you've got, um, something less than a hundred rows in any one child\n> table? This is carrying partitioning to an insane degree, and your\n> performance is NOT going to be improved by it.\n\nSorry I forgot to mention - in the \"normal\" case, each of those tables will \nhave a few hundred thousand records, and in the worst case (the tables store \ninfo on up to 2000 endpoints) it can be around 5 million.\n\nAlso, the partitioning is not final yet (we might move it to 6 hours / 12 \nhours per partition) - which is why I need to run the load test :)\n\n> I'd suggest partitioning on boundaries that will give you order of a\n> million rows per child. That could be argued an order of magnitude or\n> two either way, but what you've got is well outside the useful range.\n> \n> > I'm running into the error \"ERROR: out of shared memory HINT: You\n> > might need to increase max_locks_per_transaction.\n> \n> No surprise given the number of tables and indexes you're forcing\n> the system to deal with ...\n\nHow many locks per table/index does PG require? Even with my current state \n(<50 tables, < 250 (tables + indexes)) is it reasonable to expect 2000 locks \nto run out?\n\nThanks,\nHrishi\n", "msg_date": "Fri, 20 Nov 2009 08:42:11 -0800", "msg_from": "Hrishikesh Mehendale <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitions and max_locks_per_transaction" } ]
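For what it's worth, each table, index and TOAST table touched in a transaction holds at least one entry in the shared lock table until commit, and that table has room for roughly max_locks_per_transaction * (max_connections + max_prepared_transactions) entries cluster-wide. A sketch of how to watch how many locks the load job really accumulates, run from the loading session while its transaction is still open:

SELECT count(*) FROM pg_locks WHERE pid = pg_backend_pid();

-- or broken down, to see what is being locked
SELECT locktype, mode, count(*)
FROM pg_locks
WHERE pid = pg_backend_pid()
GROUP BY locktype, mode
ORDER BY count(*) DESC;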
[ { "msg_contents": "Dear All,\n\nI've just joined this list, so let me first thank you in advance for \nyour hospitality.\n\nI'm having lots of trouble with variously slow running queries on a \nproduction system. I've tried all the \"obvious\" fixes: changing the \nquery planner, checking for indexing, autovacuum, making sure the thing \nhas heaps of memory (20GB), running on solid state disks etc.\n\n\n1. Is there any way to debug or trace a running query? I think I have \nall the logging options turned on, but I'd like to see something like:\n \"Currently reading 3452 rows from table_x, at 0.2 us per row\" or\nwhatever, being really, really verbose in the logfiles.\n\nLikewise, is there any way to check whether, for example, postgres is \nrunning out of work memory?\n\n\n\n2. Is there any way, whatsoever, to get any kind of \"progress bar\" for a \nrunning query? I know these things have terrible faults, but anything \nmonotonic would be better than running completely blind.\n\n[There's a wonderful paper here:\nhttp://pages.cs.wisc.edu/~naughton/includes/papers/multiquery.pdf\nwhich seems to have got 90% of the way there, but appears to have been \nabandoned as it didn't get all 100% of the way]\n\n\nThe operations people in the warehouse are currently going crazy because \n we can't ever answer the question \"when will this query complete?\". I \nknow it's hard to do accurately, but knowing the difference between \"5 \nseconds to go\" and \"5 hours to go\" would be so fantastically useful.\n\nThanks,\n\nRichard\n\n\n\nP.S. Sometimes, some queries seem to benefit from being cancelled and \nthen immediately re-started. As there are being run in a transaction, I \ncan't see how this could make a difference. Am I missing anything \nobvious? Occasionally, a re-start of postgresql-8.4l itself seems to help.\n\n\n", "msg_date": "Fri, 20 Nov 2009 05:32:45 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres query completion status?" }, { "msg_contents": "Richard --\n\n You might post the results of \"EXPLAIN ANALYZE <your SQL here>;\" ... be sure to run it in a transaction if you want to be able roll it back. Perhaps try \"EXPLAIN <your SQL>;\" first as it is faster, but EXPLAIN ANALYZE shows what the planner is doing.\n\nYou wrote:\n\n\n\n> \n> P.S. Sometimes, some queries seem to benefit from being cancelled and then immediately\n> re-started. As there are being run in a transaction, I can't see how this could make a difference.\n> Am I missing anything obvious? Occasionally, a re-start of postgresql-8.4l itself seems to help.\n\nThis may be the result of caching of the desired rows, either by PostgreSQL or by your operating system. The rollback wouldn't effect this -- the rows are already in memory and not on disk waiting to be grabbed -- much faster on all subsequent queries.\n\nHTH,\n\nGreg Williamson\n\n\n-- Sent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n \n", "msg_date": "Thu, 19 Nov 2009 22:07:08 -0800 (PST)", "msg_from": "Greg Williamson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres query completion status?" }, { "msg_contents": "Thanks for your help. This issue splits into 2 bits:\n\n1. Fixing specific queries.\n\n2. 
Finding out when a specific running query is going to complete.\n(At the moment, this is the bit I really need to know).\n\n\nGreg Williamson wrote:\n> Richard --\n> \n> You might post the results of \"EXPLAIN ANALYZE <your SQL here>;\" ... be sure to run it in a transaction if you want to be able roll it back. Perhaps try \"EXPLAIN <your SQL>;\" first as it is faster, but EXPLAIN ANALYZE shows what the planner is doing.\n> \n\nThe offending query (simplified to just do a select - which is the slow\nbit) is:\n\n\n-------------\nSELECT ( core.demand.qty - viwcs.wave_end_demand.qty_remaining ) FROM\ncore.demand, viwcs.previous_wave LEFT OUTER JOIN viwcs.wave_end_demand\nUSING ( wid ) WHERE core.demand.id = viwcs.wave_end_demand.demand_id;\n------------\n\n\nOver the last few weeks, this has gradually slowed down from 6 minutes\nto about 6.5, then last night it took 25, and today it's taken an hour\nalready and still not completed. The system hasn't been doing anything\nspecial in the last 2 days.\n\n\n\nHere's EXPLAIN (Explain analyze will take too long!)\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=885367.03..1123996.87 rows=8686 width=12)\n -> Merge Join (cost=885367.03..1115452.17 rows=8688 width=16)\n Merge Cond: ((core.demand.target_id =\nwave_genreorders_map.target_id) AND (core.demand.material_id =\ncore.material.id))\n -> Index Scan using demand_target_id_key on demand\n(cost=0.00..186520.46 rows=3800715 width=24)\n -> Sort (cost=885364.61..893425.30 rows=3224275 width=24)\n Sort Key: wave_genreorders_map.target_id, core.material.id\n -> Hash Join (cost=511934.12..536811.73 rows=3224275\nwidth=24)\n Hash Cond: (core.material.tag =\n(product_info_sku.sid)::text)\n -> Append (cost=0.00..10723.27 rows=689377 width=28)\n -> Seq Scan on material\n(cost=0.00..5474.75 rows=397675 width=21)\n -> Seq Scan on container material\n(cost=0.00..5248.52 rows=291702 width=37)\n -> Hash (cost=506657.25..506657.25 rows=422149\nwidth=42)\n -> Hash Join (cost=474610.85..506657.25\nrows=422149 width=42)\n Hash Cond: ((wave_gol.sid)::text =\n(product_info_sku.sid)::text)\n -> Merge Left Join\n(cost=463919.35..487522.78 rows=422149 width=29)\n Merge Cond:\n(((wave_gol.wid)::text = (du_report_sku.wid)::text) AND\n((wave_gol.storeorderid)::text = (du_report_sku.storeorderid)::text) AND\n((wave_gol.genreorderid)::text = (du_report_sku.genreorderid)::text))\n Join Filter:\n((wave_gol.sid)::text = (du_report_sku.sid)::text)\n -> Merge Join\n(cost=183717.70..197229.24 rows=422149 width=44)\n Merge Cond:\n(((wave_genreorders_map.wid)::text = (wave_gol.wid)::text) AND\n((wave_genreorders_map.storeorderid)::text =\n(wave_gol.storeorderid)::text) AND\n((wave_genreorders_map.genreorderid)::text = (wave_gol.genreorderid)::text))\n -> Index Scan using\n\"wave_genreorders_map_ERR_GENREORDERID_EXISTS\" on wave_genreorders_map\n(cost=0.00..4015.36 rows=116099 width=27)\n -> Sort\n(cost=183717.70..184818.90 rows=440483 width=47)\n Sort Key:\nwave_gol.wid, wave_gol.storeorderid, wave_gol.genreorderid\n -> Nested Loop\n(cost=9769.36..142425.22 rows=440483 width=47)\n -> Index Scan\nusing \"wave_rxw_ERR_WID_EXISTS\" on wave_rxw (cost=0.00..7.08 rows=1\nwidth=11)\n Filter:\nis_previous\n -> Bitmap\nHeap Scan on wave_gol (cost=9769.36..136912.11 rows=440483 width=36)\n 
Recheck\nCond: ((wave_gol.wid)::text = (wave_rxw.wid)::text)\n ->\nBitmap Index Scan on \"wave_gol_ERR_SID_EXISTS\" (cost=0.00..9659.24\nrows=440483 width=0)\n\nIndex Cond: ((wave_gol.wid)::text = (wave_rxw.wid)::text)\n -> Sort\n(cost=280201.66..281923.16 rows=688602 width=300)\n Sort Key:\ndu_report_sku.wid, du_report_sku.storeorderid, du_report_sku.genreorderid\n -> HashAggregate\n(cost=197936.75..206544.27 rows=688602 width=36)\n -> Seq Scan on\ndu_report_sku (cost=0.00..111861.61 rows=6886011 width=36)\n -> Hash (cost=5681.22..5681.22\nrows=400822 width=13)\n -> Seq Scan on product_info_sku\n (cost=0.00..5681.22 rows=400822 width=13)\n -> Index Scan using demand_pkey on demand (cost=0.00..0.97 rows=1\nwidth=12)\n Index Cond: (core.demand.id = core.demand.id)\n(37 rows)\n\n--------------------------------------------------\n\n\n\n> You wrote:\n> \n> \n> \n>> P.S. Sometimes, some queries seem to benefit from being cancelled and then immediately\n>> re-started. As there are being run in a transaction, I can't see how this could make a difference.\n>> Am I missing anything obvious? Occasionally, a re-start of postgresql-8.4l itself seems to help.\n> \n> This may be the result of caching of the desired rows, either by PostgreSQL or by your operating system. The rollback wouldn't effect this -- the rows are already in memory and not on \ndisk waiting to be grabbed -- much faster on all subsequent queries.\n\n\nYes...but the data should already be in RAM. We've got 20 GB of it,\n(Postgres is given 5GB, and the effective_cache_size is 10GB); the\ndataset size for the relevant part should only be about 100 MB at the most.\n\nAlso we're using solid-state disks (Intel X25-E), and iotop shows that\nthe disk access rate isn't the problem; the CPU is pegged at 100% though.\n\nIt seems to be that the query-planner is doing something radically\ndifferent.\n\n\nRichard\n\n", "msg_date": "Fri, 20 Nov 2009 06:32:31 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres query completion status?" }, { "msg_contents": "\n> Greg Williamson wrote:\n>> Richard --\n>>\n>> You might post the results of \"EXPLAIN ANALYZE <your SQL here>;\" ... \n>> be sure to run it in a transaction if you want to be able roll it \n>> back. Perhaps try \"EXPLAIN <your SQL>;\" first as it is faster, but \n>> EXPLAIN ANALYZE shows what the planner is doing.\n>>\n> \n\nIs there any way I can gather some information by tracing the query \nthat's currently actually running?\n\nstrace doesn't help much, but is there anything else I can do?\n\nAs far as I know, the only tools that exist are\n pg_stat_activity, top, and iotop\nHave I missed one?\n\nThanks,\n\nRichard\n", "msg_date": "Fri, 20 Nov 2009 06:50:14 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres query completion status?" }, { "msg_contents": "> \n> Greg Williamson wrote:\n>> Richard --\n>>\n>> You might post the results of \"EXPLAIN ANALYZE <your SQL here>;\" ... \n>> be sure to run it in a transaction if you want to be able roll it \n>> back. Perhaps try \"EXPLAIN <your SQL>;\" first as it is faster, but \n>> EXPLAIN ANALYZE shows what the planner is doing.\n\n\nHere's something very very odd.\nExplain Analyze has now run, in about 4 minutes. 
(result below)\n\nHowever, I'd be willing to swear that the last time I ran explain on \nthis query about half an hour ago, the final 2 lines were sequential scans.\n\nSo, I've just terminated the real job (which uses this select for an \nupdate) after 77 minutes of fruitless cpu-hogging, and re-started it....\n\n...This time, the same job ran through in 24 minutes.\n[This is running exactly the same transaction on exactly the same data!]\n\n\nRichard\n\n\n\n\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=885367.03..1123996.87 rows=8686 width=12) (actual \ntime=248577.879..253168.466 rows=347308 loops=1)\n -> Merge Join (cost=885367.03..1115452.17 rows=8688 width=16) \n(actual time=248577.834..252092.536 rows=347308 loops=1)\n Merge Cond: ((core.demand.target_id = \nwave_genreorders_map.target_id) AND (core.demand.material_id = \ncore.material.id))\n -> Index Scan using demand_target_id_key on demand \n(cost=0.00..186520.46 rows=3800715 width=24) (actual \ntime=0.031..2692.661 rows=3800715 loops=1)\n -> Sort (cost=885364.61..893425.30 rows=3224275 width=24) \n(actual time=248577.789..248659.751 rows=347308 loops=1)\n Sort Key: wave_genreorders_map.target_id, core.material.id\n Sort Method: quicksort Memory: 39422kB\n -> Hash Join (cost=511934.12..536811.73 rows=3224275 \nwidth=24) (actual time=247444.988..248263.151 rows=347308 loops=1)\n Hash Cond: (core.material.tag = \n(product_info_sku.sid)::text)\n -> Append (cost=0.00..10723.27 rows=689377 \nwidth=28) (actual time=0.008..177.076 rows=690647 loops=1)\n -> Seq Scan on material \n(cost=0.00..5474.75 rows=397675 width=21) (actual time=0.008..59.234 \nrows=395551 loops=1)\n -> Seq Scan on container material \n(cost=0.00..5248.52 rows=291702 width=37) (actual time=0.008..52.844 \nrows=295096 loops=1)\n -> Hash (cost=506657.25..506657.25 rows=422149 \nwidth=42) (actual time=247444.555..247444.555 rows=347308 loops=1)\n -> Hash Join (cost=474610.85..506657.25 \nrows=422149 width=42) (actual time=182224.904..247282.711 rows=347308 \nloops=1)\n Hash Cond: ((wave_gol.sid)::text = \n(product_info_sku.sid)::text)\n -> Merge Left Join \n(cost=463919.35..487522.78 rows=422149 width=29) (actual \ntime=182025.065..246638.762 rows=347308 loops=1)\n Merge Cond: \n(((wave_gol.wid)::text = (du_report_sku.wid)::text) AND \n((wave_gol.storeorderid)::text = (du_report_sku.storeorderid)::text) AND \n((wave_gol.genreorderid)::text = (du_report_sku.genreorderid)::text))\n Join Filter: \n((wave_gol.sid)::text = (du_report_sku.sid)::text)\n -> Merge Join \n(cost=183717.70..197229.24 rows=422149 width=44) (actual \ntime=859.551..1506.515 rows=347308 loops=1)\n Merge Cond: \n(((wave_genreorders_map.wid)::text = (wave_gol.wid)::text) AND \n((wave_genreorders_map.storeorderid)::text = \n(wave_gol.storeorderid)::text) AND \n((wave_genreorders_map.genreorderid)::text = (wave_gol.genreorderid)::text))\n -> Index Scan using \n\"wave_genreorders_map_ERR_GENREORDERID_EXISTS\" on wave_genreorders_map \n(cost=0.00..4015.36 rows=116099 width=27) (actual time=0.018..23.599 \nrows=116099 loops=1)\n -> Sort \n(cost=183717.70..184818.90 rows=440483 width=47) (actual \ntime=782.102..813.753 rows=347308 loops=1)\n Sort Key: \nwave_gol.wid, wave_gol.storeorderid, wave_gol.genreorderid\n Sort Method: \nquicksort Memory: 39422kB\n -> 
Nested Loop \n(cost=9769.36..142425.22 rows=440483 width=47) (actual \ntime=33.668..138.668 rows=347308 loops=1)\n -> Index Scan \nusing \"wave_rxw_ERR_WID_EXISTS\" on wave_rxw (cost=0.00..7.08 rows=1 \nwidth=11) (actual time=0.021..0.031 rows=1 loops=1)\n Filter: \nis_previous\n -> Bitmap \nHeap Scan on wave_gol (cost=9769.36..136912.11 rows=440483 width=36) \n(actual time=33.628..75.091 rows=347308 loops=1)\n Recheck \nCond: ((wave_gol.wid)::text = (wave_rxw.wid)::text)\n -> \nBitmap Index Scan on \"wave_gol_ERR_SID_EXISTS\" (cost=0.00..9659.24 \nrows=440483 width=0) (actual time=33.104..33.104 rows=347308 loops=1)\n \nIndex Cond: ((wave_gol.wid)::text = (wave_rxw.wid)::text)\n -> Sort \n(cost=280201.66..281923.16 rows=688602 width=300) (actual \ntime=177511.806..183486.593 rows=41317448 loops=1)\n Sort Key: \ndu_report_sku.wid, du_report_sku.storeorderid, du_report_sku.genreorderid\n Sort Method: external \nsort Disk: 380768kB\n -> HashAggregate \n(cost=197936.75..206544.27 rows=688602 width=36) (actual \ntime=7396.426..11224.839 rows=6282564 loops=1)\n -> Seq Scan on \ndu_report_sku (cost=0.00..111861.61 rows=6886011 width=36) (actual \ntime=0.006..573.419 rows=6897682 loops=1)\n -> Hash (cost=5681.22..5681.22 \nrows=400822 width=13) (actual time=199.422..199.422 rows=400737 loops=1)\n -> Seq Scan on product_info_sku \n (cost=0.00..5681.22 rows=400822 width=13) (actual time=0.004..78.357 \nrows=400737 loops=1)\n -> Index Scan using demand_pkey on demand (cost=0.00..0.97 rows=1 \nwidth=12) (actual time=0.002..0.003 rows=1 loops=347308)\n Index Cond: (core.demand.id = core.demand.id)\n Total runtime: 253455.603 ms\n(41 rows)\n\n\n\n\n", "msg_date": "Fri, 20 Nov 2009 07:03:04 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres query completion status?" }, { "msg_contents": "Richard Neill wrote:\n> As far as I know, the only tools that exist are\n> pg_stat_activity, top, and iotop\n> Have I missed one?\nThe ui for pgTop might be easier for what you're trying to do: \nhttp://pgfoundry.org/projects/pgtop/\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Fri, 20 Nov 2009 06:56:41 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres query completion status?" }, { "msg_contents": "2009/11/20 Richard Neill <[email protected]>\n\n>\n>> Greg Williamson wrote:\n>>\n>>> Richard --\n>>>\n>>> You might post the results of \"EXPLAIN ANALYZE <your SQL here>;\" ... be\n>>> sure to run it in a transaction if you want to be able roll it back. Perhaps\n>>> try \"EXPLAIN <your SQL>;\" first as it is faster, but EXPLAIN ANALYZE shows\n>>> what the planner is doing.\n>>>\n>>\n>\n> Here's something very very odd.\n> Explain Analyze has now run, in about 4 minutes. 
(result below)\n>\n> However, I'd be willing to swear that the last time I ran explain on this\n> query about half an hour ago, the final 2 lines were sequential scans.\n>\n> So, I've just terminated the real job (which uses this select for an\n> update) after 77 minutes of fruitless cpu-hogging, and re-started it....\n>\n> ...This time, the same job ran through in 24 minutes.\n> [This is running exactly the same transaction on exactly the same data!]\n>\n>\n> Richard\n>\n>\n>\nIt looks like your statistics are way out of sync with the real data.\n\n> Nested Loop (cost=885367.03..1123996.87 rows=8686 width=12) (actual\ntime=248577.879..253168.466 rows=347308 loops=1)\n\nThis shows that it thinks there will be 8,686 rows, but actually traverses\n347,308.\n\nHave you manually run a VACUUM on these tables? Preferrably a full one if\nyou can. I notice that you appear ot have multiple sorts going on. Are all\nof those actually necessary for your output? Also consider using partial or\nmulticolumn indexes where useful.\n\nAnd which version of PostgreSQL are you using?\n\nThom\n\n2009/11/20 Richard Neill <[email protected]>\n\n\nGreg Williamson wrote:\n\nRichard --\n\n You might post the results of \"EXPLAIN ANALYZE <your SQL here>;\" ... be sure to run it in a transaction if you want to be able roll it back. Perhaps try \"EXPLAIN <your SQL>;\" first as it is faster, but EXPLAIN ANALYZE shows what the planner is doing.\n\n\n\nHere's something very very odd.\nExplain Analyze has now run, in about 4 minutes.  (result below)\n\nHowever, I'd be willing to swear that the last time I ran explain on this query about half an hour ago, the final 2 lines were sequential scans.\n\nSo, I've just terminated the real job (which uses this select for an update) after 77 minutes of fruitless cpu-hogging, and re-started it....\n\n...This time, the same job ran through in 24 minutes.\n[This is running exactly the same transaction on exactly the same data!]\n\n\nRichard\n\nIt looks like your statistics are way out of sync with the real data.> Nested Loop  (cost=885367.03..1123996.87 rows=8686 width=12) (actual time=248577.879..253168.466 rows=347308 loops=1)\nThis shows that it thinks there will be 8,686 rows, but actually traverses 347,308.Have you manually run a VACUUM on these tables?  Preferrably a full one if you can.  I notice that you appear ot have multiple sorts going on.  Are all of those actually necessary for your output?  Also consider using partial or multicolumn indexes where useful.\nAnd which version of PostgreSQL are you using?Thom", "msg_date": "Fri, 20 Nov 2009 12:13:03 +0000", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres query completion status?" }, { "msg_contents": "Sorry for top-posting -- challenged mail client.\n\nThom's suggestion that the estimates are off seems like a useful line of inquiry, but ANALYZE is what builds statistics. If it is not run often enough the planner will base its idea of what a good plan is on bad data. So ANALYZE <table name>; is your friend. You may need to change the statistics for the tables in question if there are odd distributions of data -- as Thom asked -- which version of PostgreSQL ?\n\nStay away from VACUUM FULL ! It will block other activity and will be horribly slow on large tables. 
It will get rid of bloat but there may be better ways of doing that depending on what version you are using and what you maintenance window looks like.\n\nHTH,\n\nGreg W.\n\n\n\n\n________________________________\nFrom: Thom Brown <[email protected]>\nTo: Richard Neill <[email protected]>\nCc: Greg Williamson <[email protected]>; [email protected]\nSent: Fri, November 20, 2009 4:13:03 AM\nSubject: Re: [PERFORM] Postgres query completion status?\n\n\n2009/11/20 Richard Neill <[email protected]>\n\n>\n>\n>>>>Greg Williamson wrote:\n>>\n>>>>>Richard --\n>>>\n>>>>>> You might post the results of \"EXPLAIN ANALYZE <your SQL here>;\" ... be sure to run it in a transaction if you want to be able roll it back. Perhaps try \"EXPLAIN <your SQL>;\" first as it is faster, but EXPLAIN ANALYZE shows what the planner is doing.\n>>>\n>\n>\n>Here's something very very odd.\n>>Explain Analyze has now run, in about 4 minutes. (result below)\n>\n>>However, I'd be willing to swear that the last time I ran explain on this query about half an hour ago, the final 2 lines were sequential scans.\n>\n>>So, I've just terminated the real job (which uses this select for an update) after 77 minutes of fruitless cpu-hogging, and re-started it....\n>\n>>...This time, the same job ran through in 24 minutes.\n>>[This is running exactly the same transaction on exactly the same data!]\n>\n>\n>>Richard\n>\n>\n>\n\nIt looks like your statistics are way out of sync with the real data.\n\n> Nested Loop (cost=885367.03..1123996.87 rows=8686 width=12) (actual time=248577.879..253168.466 rows=347308 loops=1)\n\nThis shows that it thinks there will be 8,686 rows, but actually traverses 347,308.\n\nHave you manually run a VACUUM on these tables? Preferrably a full one if you can. I notice that you appear ot have multiple sorts going on. Are all of those actually necessary for your output? Also consider using partial or multicolumn indexes where useful.\n\nAnd which version of PostgreSQL are you using?\n\nThom\n\n\n\n \nSorry for top-posting -- challenged mail client.Thom's suggestion that the estimates are off seems like a useful line of inquiry, but ANALYZE is what builds statistics. If it is not run often enough the planner will base its idea of what a good plan is on bad data. So ANALYZE <table name>; is your friend. You may need to change the statistics for the tables in question if there are odd distributions of data -- as Thom asked -- which version of PostgreSQL ?Stay away from VACUUM FULL ! It will block other activity and will be horribly slow on large tables. It will get rid of bloat but there may be better ways of doing that depending on what version you are using and what you maintenance window looks like.HTH,Greg W.From: Thom Brown <[email protected]>To: Richard Neill <[email protected]>Cc: Greg Williamson <[email protected]>; [email protected]: Fri, November 20, 2009 4:13:03 AMSubject: Re: [PERFORM] Postgres query completion status?\n2009/11/20 Richard Neill <[email protected]>\n\n\nGreg Williamson wrote:\n\nRichard --\n\n You might post the results of \"EXPLAIN ANALYZE <your SQL here>;\" ... be sure to run it in a transaction if you want to be able roll it back. Perhaps try \"EXPLAIN <your SQL>;\" first as it is faster, but EXPLAIN ANALYZE shows what the planner is doing.\n\n\n\nHere's something very very odd.\nExplain Analyze has now run, in about 4 minutes.  
(result below)\n\nHowever, I'd be willing to swear that the last time I ran explain on this query about half an hour ago, the final 2 lines were sequential scans.\n\nSo, I've just terminated the real job (which uses this select for an update) after 77 minutes of fruitless cpu-hogging, and re-started it....\n\n...This time, the same job ran through in 24 minutes.\n[This is running exactly the same transaction on exactly the same data!]\n\n\nRichard\n\nIt looks like your statistics are way out of sync with the real data.> Nested Loop  (cost=885367.03..1123996.87 rows=8686 width=12) (actual time=248577.879..253168.466 rows=347308 loops=1)\nThis shows that it thinks there will be 8,686 rows, but actually traverses 347,308.Have you manually run a VACUUM on these tables?  Preferrably a full one if you can.  I notice that you appear ot have multiple sorts going on.  Are all of those actually necessary for your output?  Also consider using partial or multicolumn indexes where useful.\nAnd which version of PostgreSQL are you using?Thom", "msg_date": "Fri, 20 Nov 2009 05:15:58 -0800 (PST)", "msg_from": "Greg Williamson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres query completion status?" }, { "msg_contents": "\n\nThom Brown wrote:\n >\n> It looks like your statistics are way out of sync with the real data.\n> \n> > Nested Loop (cost=885367.03..1123996.87 rows=8686 width=12) (actual \n> time=248577.879..253168.466 rows=347308 loops=1)\n> \n> This shows that it thinks there will be 8,686 rows, but actually \n> traverses 347,308.\n\nYes, I see what you mean.\n\n> \n> Have you manually run a VACUUM on these tables? Preferrably a full one \n> if you can. \n\nEvery night, it runs Vacuum verbose analyze on the entire database. We \nalso have the autovacuum daemon enabled (in the default config).\n\nAbout 2 weeks ago, I ran cluster followed by vacuum full - which seemed \nto help more than I'd expect.\n\n[As I understand it, the statistics shouldn't change very much from day \nto day, as long as the database workload remains roughly constant. What \nwe're actually doing is running a warehouse sorting books - so from one \nday to the next the particular book changes, but the overall statistics \nbasically don't.]\n\n\nI notice that you appear ot have multiple sorts going on.\n> Are all of those actually necessary for your output? \n\nI think so. I didn't actually write all of this, so I can't be certain.\n\nAlso consider\n> using partial or multicolumn indexes where useful.\n> \n\nAlready done that. The query was originally pretty quick, with a few \nweeks worth of data, but not now. (after a few months). The times don't \nrise gradually, but have a very sudden knee.\n\n> And which version of PostgreSQL are you using?\n\n8.4.1, including this patch:\nhttp://archives.postgresql.org/pgsql-bugs/2009-10/msg00118.php\n\n\nRichard\n", "msg_date": "Fri, 20 Nov 2009 19:16:54 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres query completion status?" 
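
One low-risk check, given that the join estimate (8,686 rows) is so far from the actual 347,308: confirm that ANALYZE is really reaching the tables in the posted plan, and re-analyse them by hand if not. A minimal sketch, assuming 8.4's pg_stat_user_tables columns; the table names are taken from the plan above and may need schema qualification:

    SELECT schemaname, relname, last_analyze, last_autoanalyze, n_live_tup, n_dead_tup
      FROM pg_stat_user_tables
     WHERE relname IN ('demand', 'wave_gol', 'du_report_sku', 'wave_genreorders_map');

    ANALYZE core.demand;   -- repeat for any table whose statistics look stale
    ANALYZE wave_gol;      -- schema-qualify as appropriate

If last_analyze is recent everywhere and the estimates are still off by this much, raising the per-column statistics target on the join keys is the next knob worth trying.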
}, { "msg_contents": "2009/11/20 Richard Neill <[email protected]>\n\n>\n>\n> Thom Brown wrote:\n> >\n>\n>> It looks like your statistics are way out of sync with the real data.\n>>\n>> > Nested Loop (cost=885367.03..1123996.87 rows=8686 width=12) (actual\n>> time=248577.879..253168.466 rows=347308 loops=1)\n>>\n>> This shows that it thinks there will be 8,686 rows, but actually traverses\n>> 347,308.\n>>\n>\n> Yes, I see what you mean.\n>\n>\n>\n>> Have you manually run a VACUUM on these tables? Preferrably a full one if\n>> you can.\n>>\n>\n> Every night, it runs Vacuum verbose analyze on the entire database. We also\n> have the autovacuum daemon enabled (in the default config).\n>\n> About 2 weeks ago, I ran cluster followed by vacuum full - which seemed to\n> help more than I'd expect.\n>\n> [As I understand it, the statistics shouldn't change very much from day to\n> day, as long as the database workload remains roughly constant. What we're\n> actually doing is running a warehouse sorting books - so from one day to the\n> next the particular book changes, but the overall statistics basically\n> don't.]\n>\n>\n>\n> I notice that you appear ot have multiple sorts going on.\n>\n>> Are all of those actually necessary for your output?\n>>\n>\n> I think so. I didn't actually write all of this, so I can't be certain.\n>\n>\n> Also consider\n>\n>> using partial or multicolumn indexes where useful.\n>>\n>>\n> Already done that. The query was originally pretty quick, with a few weeks\n> worth of data, but not now. (after a few months). The times don't rise\n> gradually, but have a very sudden knee.\n>\n>\n> And which version of PostgreSQL are you using?\n>>\n>\n> 8.4.1, including this patch:\n> http://archives.postgresql.org/pgsql-bugs/2009-10/msg00118.php\n>\n>\n> Richard\n>\n>\n>\nOkay, have you tried monitoring the connections to your database?\n\nTry: select * from pg_stat_activity;\n\nAnd this to see current backend connections:\n\nSELECT pg_stat_get_backend_pid(s.backendid) AS procpid,\n pg_stat_get_backend_activity(s.backendid) AS current_query\n FROM (SELECT pg_stat_get_backend_idset() AS backendid) AS s;\n\nIt might also help if you posted your postgresql.conf too.\n\nThom\n\n2009/11/20 Richard Neill <[email protected]>\n\n\nThom Brown wrote:\n >\n\nIt looks like your statistics are way out of sync with the real data.\n\n > Nested Loop  (cost=885367.03..1123996.87 rows=8686 width=12) (actual time=248577.879..253168.466 rows=347308 loops=1)\n\nThis shows that it thinks there will be 8,686 rows, but actually traverses 347,308.\n\n\nYes, I see what you mean.\n\n\n\nHave you manually run a VACUUM on these tables?  Preferrably a full one if you can.  \n\n\nEvery night, it runs Vacuum verbose analyze on the entire database. We also have the autovacuum daemon enabled (in the default config).\n\nAbout 2 weeks ago, I ran cluster followed by vacuum full - which seemed to help more than I'd expect.\n\n[As I understand it, the statistics shouldn't change very much from day to day, as long as the database workload remains roughly constant. What we're actually doing is running a warehouse sorting books - so from one day to the next the particular book changes, but the overall statistics basically don't.]\n\n\n\nI notice that you appear ot have multiple sorts going on.\n\nAre all of those actually necessary for your output?  \n\n\nI think so. I didn't actually write all of this, so I can't be certain.\n\nAlso consider\n\nusing partial or multicolumn indexes where useful.\n\n\n\nAlready done that. 
The query was originally pretty quick, with a few weeks worth of data, but not now. (after a few months). The times don't rise gradually, but have a very sudden knee.\n\n\nAnd which version of PostgreSQL are you using?\n\n\n8.4.1, including this patch:\nhttp://archives.postgresql.org/pgsql-bugs/2009-10/msg00118.php\n\n\nRichardOkay, have you tried monitoring the connections to your database?Try: select * from pg_stat_activity;\nAnd this to see current backend connections:SELECT pg_stat_get_backend_pid(s.backendid) AS procpid,       pg_stat_get_backend_activity(s.backendid) AS current_query\n    FROM (SELECT pg_stat_get_backend_idset() AS backendid) AS s;It might also help if you posted your postgresql.conf too.Thom", "msg_date": "Fri, 20 Nov 2009 19:39:53 +0000", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres query completion status?" }, { "msg_contents": "Thom Brown wrote:\n\n> \n> Okay, have you tried monitoring the connections to your database?\n> \n> Try: select * from pg_stat_activity;\n\nTried that - it's very useful as far as it goes. I can see that in most \ncases, the DB is running just the one query.\n\nWhat I really want to know is, how far through that query has it got?\n(For example, if the query is an update, then surely it knows how many \nrows have been updated, and how many are yet to go).\n\n> \n> And this to see current backend connections:\n> \n> SELECT pg_stat_get_backend_pid(s.backendid) AS procpid,\n> pg_stat_get_backend_activity(s.backendid) AS current_query\n> FROM (SELECT pg_stat_get_backend_idset() AS backendid) AS s;\n> \n\nThis looks identical to just some of the columns from pg_stat_activity.\n\n\n> It might also help if you posted your postgresql.conf too.\n\nBelow (have removed the really non-interesting bits).\n\nThanks,\n\nRichard\n\n\n\n> \n> Thom\n\n\n#------------------------------------------------------------------------------\n# CONNECTIONS AND AUTHENTICATION\n#------------------------------------------------------------------------------\n\nmax_connections = 500 # (change requires restart)\n\n#------------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#------------------------------------------------------------------------------\n\n# - Memory -\n\nshared_buffers = 4500MB # min 128kB\n # (change requires restart)\ntemp_buffers = 64MB # min 800kB\n#max_prepared_transactions = 0 # zero disables the feature\n # (change requires restart)\n# Note: Increasing max_prepared_transactions costs ~600 bytes of shared\n# memory per transaction slot, plus lock space (see\n# max_locks_per_transaction).\n# It is not advisable to set max_prepared_transactions nonzero unless you\n# actively intend to use prepared transactions.\n\nwork_mem = 256MB # min 64kB\nmaintenance_work_mem = 256MB # min 1MB\nmax_stack_depth = 4MB # min 100kB\n\n# - Kernel Resource Usage -\n\n#max_files_per_process = 1000 # min 25\n # (change requires restart)\n#shared_preload_libraries = '' # (change requires restart)\n\n# - Cost-Based Vacuum Delay -\n\n#vacuum_cost_delay = 0ms # 0-100 milliseconds\n#vacuum_cost_page_hit = 1 # 0-10000 credits\n#vacuum_cost_page_miss = 10 # 0-10000 credits\n#vacuum_cost_page_dirty = 20 # 0-10000 credits\n#vacuum_cost_limit = 200 # 1-10000 credits\n\n# - Background Writer -\n\n#bgwriter_delay = 200ms # 10-10000ms between rounds\n#bgwriter_lru_maxpages = 100 # 0-1000 max buffers written/round\n#bgwriter_lru_multiplier = 2.0 # 0-10.0 multipler on buffers 
\nscanned/round\n\n# - Asynchronous Behavior -\n\n#effective_io_concurrency = 1 # 1-1000. 0 disables prefetching\n\n\n#------------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#------------------------------------------------------------------------------\n\n# - Settings -\n\n#fsync = on # turns forced synchronization \non or off\n#synchronous_commit = on # immediate fsync at commit\n#wal_sync_method = fsync # the default is the first option\n # supported by the operating \nsystem:\n # open_datasync\n # fdatasync\n # fsync\n # fsync_writethrough\n # open_sync\n#full_page_writes = on # recover from partial page writes\nwal_buffers = 2MB # min 32kB\n # (change requires restart)\n#wal_writer_delay = 200ms # 1-10000 milliseconds\n\ncommit_delay = 50000 # range 0-100000, in microseconds\ncommit_siblings = 5 # range 1-1000\n\n# - Checkpoints -\n\ncheckpoint_segments = 64 # in logfile segments, min 1, \n16MB each (was safe value of 4)\n#checkpoint_timeout = 5min # range 30s-1h\n#checkpoint_completion_target = 0.5 # checkpoint target duration, \n0.0 - 1.0\n#checkpoint_warning = 30s # 0 disables\n\n# - Archiving -\n\n#archive_mode = off # allows archiving to be done\n # (change requires restart)\n#archive_command = '' # command to use to archive a logfile \nsegment\n#archive_timeout = 0 # force a logfile segment switch after this\n # number of seconds; 0 disables\n\n\n#------------------------------------------------------------------------------\n# QUERY TUNING\n#------------------------------------------------------------------------------\n\n# - Planner Method Configuration -\n\n#enable_bitmapscan = on\n#enable_hashagg = on\n#enable_hashjoin = on\n#enable_indexscan = on\n#enable_mergejoin = on\n#enable_nestloop = on\n#enable_seqscan = on\n#enable_sort = on\n#enable_tidscan = on\n\n# - Planner Cost Constants -\n\n#seq_page_cost = 1 # measured on an arbitrary scale\n#random_page_cost = 4 # same scale as above\n#seq_page_cost = 0.25 # use 0.25, 0.75 for normal\n#random_page_cost = 0.75 # but 1 and 4 for wave-deactivate.\nseq_page_cost = 0.5 # It looks as though 0.5 and 2 \n(exactly)\nrandom_page_cost = 2 # will work for both problems. 
\n(very brittle fix!)\n#cpu_tuple_cost = 0.01 # same scale as above\n#cpu_index_tuple_cost = 0.005 # same scale as above\n#cpu_operator_cost = 0.0025 # same scale as above\neffective_cache_size = 10000MB\n\n# - Genetic Query Optimizer -\n\n#geqo = on\ngeqo_threshold = 12\ngeqo_effort = 10 # range 1-10\n#geqo_pool_size = 0 # selects default based on effort\n#geqo_generations = 0 # selects default based on effort\n#geqo_selection_bias = 2.0 # range 1.5-2.0\n\n# - Other Planner Options -\n\ndefault_statistics_target = 1000 # range 1-10000\n#constraint_exclusion = partition # on, off, or partition\n#cursor_tuple_fraction = 0.1 # range 0.0-1.0\n#from_collapse_limit = 8\n#join_collapse_limit = 8 # 1 disables collapsing of explicit\n # JOIN clauses\n\n\n#------------------------------------------------------------------------------\n# ERROR REPORTING AND LOGGING\n#------------------------------------------------------------------------------\n\n\n# - When to Log -\n\n#client_min_messages = notice\n\n#log_error_verbosity = default # terse, default, or verbose \nmessages\n\n#log_min_error_statement = error\n\nlog_min_duration_statement = 80\n\n# - What to Log -\n\n#debug_print_parse = off\n#debug_print_rewritten = off\n#debug_print_plan = off\n#debug_pretty_print = on\n#log_checkpoints = off\n#log_connections = off\n#log_disconnections = off\n#log_duration = off\n#log_hostname = off\nlog_line_prefix = '%t '\n\n#log_lock_waits = off # log lock waits >= deadlock_timeout\n#log_statement = 'none' # none, ddl, mod, all\n#log_temp_files = -1 # log temporary files equal or \nlarger\n # than the specified size in \nkilobytes;\n # -1 disables, 0 logs all temp \nfiles\n#log_timezone = unknown # actually, defaults to TZ \nenvironment\n # setting\n\n\n#------------------------------------------------------------------------------\n# RUNTIME STATISTICS\n#------------------------------------------------------------------------------\n\n# - Query/Index Statistics Collector -\n\n#track_activities = on\n#track_counts = on\n#track_functions = none # none, pl, all\n#track_activity_query_size = 1024\n#update_process_title = on\n#stats_temp_directory = 'pg_stat_tmp'\n\n\n# - Statistics Monitoring -\n\n#log_parser_stats = off\n#log_planner_stats = off\n#log_executor_stats = off\n#log_statement_stats = off\n\n\n#------------------------------------------------------------------------------\n# AUTOVACUUM PARAMETERS\n#------------------------------------------------------------------------------\n\n#autovacuum = on # Enable autovacuum subprocess? 
\n 'on'\nautovacuum = on # requires track_counts \nto also be on.\nlog_autovacuum_min_duration = 1000 # -1 disables, 0 logs all \nactions and\n # their durations, > 0 logs only\n # actions running at least this \nnumber\n # of milliseconds.\n#autovacuum_max_workers = 3 # max number of autovacuum \nsubprocesses\n#autovacuum_naptime = 1min # time between autovacuum runs\n#autovacuum_vacuum_threshold = 50 # min number of row updates before\n # vacuum\n#autovacuum_analyze_threshold = 50 # min number of row updates before\n # analyze\n#autovacuum_vacuum_scale_factor = 0.2 # fraction of table size before \nvacuum\n#autovacuum_analyze_scale_factor = 0.1 # fraction of table size before \nanalyze\n#autovacuum_freeze_max_age = 200000000 # maximum XID age before forced \nvacuum\n # (change requires restart)\n#autovacuum_vacuum_cost_delay = 20ms # default vacuum cost delay for\n # autovacuum, in milliseconds;\n # -1 means use vacuum_cost_delay\n#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for\n # autovacuum, -1 means use\n # vacuum_cost_limit\n\n", "msg_date": "Fri, 20 Nov 2009 20:14:15 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres query completion status?" }, { "msg_contents": "2009/11/20 Richard Neill <[email protected]>\n>\n>\n> It might also help if you posted your postgresql.conf too.\n>>\n>\n> Below (have removed the really non-interesting bits).\n>\n> Thanks,\n>\n> Richard\n>\n>\n> I can't actually see anything in your config that would cause this problem.\n:/\n\nAs for seeing the progress of an update, I would have thought due to the\natomic nature of updates, only the transaction in which the update is\nrunning would have visibility of the as-yet uncommitted updates.\n\nThom\n\n2009/11/20 Richard Neill <[email protected]>\n\n\n\nIt might also help if you posted your postgresql.conf too.\n\n\nBelow (have removed the really non-interesting bits).\n\nThanks,\n\nRichard\n\nI can't actually see anything in your config that would cause this problem. :/As for seeing the progress of an update, I would have thought due to the atomic nature of updates, only the transaction in which the update is running would have visibility of the as-yet uncommitted updates.\nThom", "msg_date": "Fri, 20 Nov 2009 20:35:58 +0000", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres query completion status?" }, { "msg_contents": "\n\nThom Brown wrote:\n> 2009/11/20 Richard Neill <[email protected] <mailto:[email protected]>>\n> \n> \n> It might also help if you posted your postgresql.conf too.\n> \n> \n> Below (have removed the really non-interesting bits).\n> \n> Thanks,\n> \n> Richard\n> \n> \n> I can't actually see anything in your config that would cause this \n> problem. :/\n> \n> As for seeing the progress of an update, I would have thought due to the \n> atomic nature of updates, only the transaction in which the update is \n> running would have visibility of the as-yet uncommitted updates.\n> \n\nYes, but surely the postmaster itself (and any administrative user) \nshould be able to find this out.\n\nWhat I need for slow queries is some kind of progress bar. Any estimate \n(no matter how poor, or non-linear) of the query progress, or time \nremaining would be invaluable.\n\nRichard\n", "msg_date": "Fri, 20 Nov 2009 20:45:10 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres query completion status?" 
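
There is no way to watch an UPDATE's progress directly from another session, but because each updated row gets a new version written, the table's on-disk size usually grows while a large UPDATE runs. Repeatedly sampling the size gives a very rough progress proxy — a sketch only, using core.demand purely as an example name from this thread:

    SELECT now() AS sampled_at,
           pg_size_pretty(pg_total_relation_size('core.demand')) AS total_size;
    -- re-run every few seconds from another psql session

If the table had a lot of reusable free space from earlier vacuuming, the size may barely move, so treat a flat reading with caution.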
}, { "msg_contents": " \n\n> -----Mensaje original-----\n> De: Richard Neill\n> \n> \n> max_connections = 500 # (change requires restart)\n> work_mem = 256MB # min 64kB\n\nNot that it has to do with your current problem but this combination could\nbog your server if enough clients run sorted queries simultaneously.\nYou probably should back on work_mem at least an order of magnitude.\n\n\n", "msg_date": "Fri, 20 Nov 2009 18:07:28 -0300", "msg_from": "\"Fernando Hevia\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres query completion status?" }, { "msg_contents": "\n\nFernando Hevia wrote:\n> \n> \n>> -----Mensaje original-----\n>> De: Richard Neill\n>>\n>>\n>> max_connections = 500 # (change requires restart)\n>> work_mem = 256MB # min 64kB\n> \n> Not that it has to do with your current problem but this combination could\n> bog your server if enough clients run sorted queries simultaneously.\n> You probably should back on work_mem at least an order of magnitude.\n> \n\nWhat's the correct way to configure this?\n\n* We have one client which needs to run really big transactions \n(therefore needs the work memory).\n\n* We also have about 200 clients which run always very small, short queries.\n\nRichard\n", "msg_date": "Fri, 20 Nov 2009 21:12:15 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres query completion status?" }, { "msg_contents": " \n\n> -----Mensaje original-----\n> De: Richard Neill\n> \n> Fernando Hevia wrote:\n> > \n> > \n> >> -----Mensaje original-----\n> >> De: Richard Neill\n> >>\n> >>\n> >> max_connections = 500 # (change requires restart)\n> >> work_mem = 256MB # min 64kB\n> > \n> > Not that it has to do with your current problem but this \n> combination \n> > could bog your server if enough clients run sorted queries \n> simultaneously.\n> > You probably should back on work_mem at least an order of magnitude.\n> > \n> \n> What's the correct way to configure this?\n> \n> * We have one client which needs to run really big \n> transactions (therefore needs the work memory).\n> \n> * We also have about 200 clients which run always very small, \n> short queries.\n> \n> Richard\n> \n\nSet the default value at postgresql.conf much lower, probably 4MB.\nAnd just before running any big transaction raise it for \nthe current session only issuing a:\n set work_mem = '256MB';\n\nRegards,\nFernando.\n\n", "msg_date": "Fri, 20 Nov 2009 18:32:07 -0300", "msg_from": "\"Fernando Hevia\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres query completion status?" 
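
One way to square those two requirements is to keep the server-wide work_mem small and raise it only where the big transaction runs, either per role or per transaction. A sketch — the role name batch_writer is purely illustrative:

    ALTER ROLE batch_writer SET work_mem = '256MB';  -- picked up at that role's next connection

    -- or, scoped to the single large transaction:
    BEGIN;
    SET LOCAL work_mem = '256MB';
    -- ... the big UPDATE / SELECT ...
    COMMIT;   -- SET LOCAL reverts automatically at commit or rollback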
}, { "msg_contents": ">>> max_connections = 500                   # (change requires restart)\n>>> work_mem = 256MB                                # min 64kB\n>>\n>> Not that it has to do with your current problem but this combination could\n>> bog your server if enough clients run sorted queries simultaneously.\n>> You probably should back on work_mem at least an order of magnitude.\n>>\n>\n> What's the correct way to configure this?\n>\n> * We have one client which needs to run really big transactions (therefore\n> needs the work memory).\n>\n\nYou can set the work_mem for the specific user (like \"set work_mem to\nx\") at the begginning of the session.\n\nHere are some things I noticed (it is more like shooting in the dark,\nbut still...)\n\nthe expensive part is this:\n -> Sort\n(cost=280201.66..281923.16 rows=688602 width=300) (actual\ntime=177511.806..183486.593 rows=41317448 loops=1)\n\n Sort Key:\ndu_report_sku.wid, du_report_sku.storeorderid,\ndu_report_sku.genreorderid\n\n Sort Method: external\nsort Disk: 380768kB\n -> HashAggregate\n(cost=197936.75..206544.27 rows=688602 width=36) (actual\ntime=7396.426..11224.839 rows=6282564 loops=1)\n -> Seq Scan on\ndu_report_sku (cost=0.00..111861.61 rows=6886011 width=36) (actual\ntime=0.006..573.419 rows=6897682 loops=1)\n\n\n(it is pretty confusing that the HashAggregate reports ~6M rows, but\nthe sort does 41M rows, but maybe I can not read this).\nAnyway, I think that if You up the work_mem for this query to 512M,\nthe sort will be in memory, an thus plenty faster.\n\nAlso, You say You are experiencing unstable query plans, and this may\nmean that geqo is kicking in (but Your query seems too simple for\nthat, even considering the views involved). A quick way to check that\nwould be to run explain <the query> a coule tens of times, and check\nif the plans change. If they do, try upping geqo_threshold.\n\nYou have seq_page_cost 4 times larger than random_page_cost. You say\nYou are on SSD, so there is no random access penalty. Try setting them\nequal.\n\nYour plan is full of merge-joins, some indices may be in order. Merge\njoin is a kind of \"last-chance\" plan.\n\nthe query is :\nSELECT ( core.demand.qty - viwcs.wave_end_demand.qty_remaining ) FROM\ncore.demand, viwcs.previous_wave LEFT OUTER JOIN viwcs.wave_end_demand\nUSING ( wid ) WHERE core.demand.id = viwcs.wave_end_demand.demand_id;\n\nIsn`t the left join equivalent to an inner join, since in where You\nare comparing values from the outer side of the join? If they come out\nnulls, they will get discarded anyway...\n\nI hope You find some of this useful.\n\nGreetings\nMarcin\n", "msg_date": "Fri, 20 Nov 2009 22:38:14 +0100", "msg_from": "marcin mank <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres query completion status?" }, { "msg_contents": "Justin Pitts wrote:\n> Set work_mem in postgresql.conf down to what the 200 clients need, which \n> sounds to me like the default setting.\n> \n> In the session which needs more work_mem, execute:\n> SET SESSION work_mem TO '256MB'\n\nIsn't that terribly ugly? It seems to me less hackish to rely on the \nmany clients not to abuse work_mem (as we know exactly what query they \nwill run, we can be sure it won't happen).\n\nIt's a shame that the work_mem parameter is a per-client one, rather \nthan a single big pool.\n\nRichard\n", "msg_date": "Fri, 20 Nov 2009 21:39:49 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres query completion status?" 
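
As a concrete version of the experiment suggested above: raise work_mem for one session only, re-run the problem statement under EXPLAIN ANALYZE, and check whether the big sort still reports "external sort Disk: ..." or has switched to an in-memory quicksort. A sketch, using the query quoted earlier in the thread:

    SET work_mem = '512MB';   -- this session only
    EXPLAIN ANALYZE
    SELECT ( core.demand.qty - viwcs.wave_end_demand.qty_remaining )
      FROM core.demand, viwcs.previous_wave
      LEFT OUTER JOIN viwcs.wave_end_demand USING ( wid )
     WHERE core.demand.id = viwcs.wave_end_demand.demand_id;
    RESET work_mem;

Enabling log_temp_files in postgresql.conf is the longer-term way to spot sorts and hashes that spill to disk.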
}, { "msg_contents": "Richard Neill wrote:\n> Am I missing something though, or is this project dormant, without \n> having released any files?\n\nMy bad--gave you the wrong url. \nhttp://git.postgresql.org/gitweb?p=pg_top.git;a=summary has the project \nI meant to point you toward.\n\n> What I really want to know is, how far through that query has it got?\n> (For example, if the query is an update, then surely it knows how many \n> rows have been updated, and how many are yet to go).\nI understand what you want. The reason you're not getting any \nsuggestions is because that just isn't exposed in PostgreSQL yet. \nClients ask for queries to be run, eventually they get rows of results \nback, but there's no notion of how many they're going to get in advance \nor how far along they are in executing the query's execution plan. \nThere's a couple of academic projects that have started exposing more of \nthe query internals, but I'm not aware of anyone who's even started \nmoving in the direction of what you'd need to produce a progress bar.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Fri, 20 Nov 2009 19:07:24 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres query completion status?" }, { "msg_contents": "Richard Neill wrote:\n> Likewise, is there any way to check whether, for example, postgres is \n> running out of work memory?\nIt doesn't work like that; it's not an allocation. What happens is that \nthe optimizer estimates how much memory a sort is going to need, and \nthen uses work_mem to decide whether that is something it can do in RAM \nor something that needs to be done via a more expensive disk-based \nsorting method. You can tell if it's not set high enough by toggling on \nlog_temp_files and watching when those get created--those appear when \nsorts bigger than work_mem need to be done.\n\n> commit_delay = 50000 # range 0-100000, in microseconds\n> commit_siblings = 5 # range 1-1000\n\nRandom note: that is way too high of a value for commit_delay. It's \nunlikely to be helping you, and might be hurting sometimes. The whole \ncommit_delay feature is quite difficult to tune correctly, and is really \nonly useful for situations where there's really heavy writing going on \nand you want to carefully tweak write chunking size. The useful range \nfor commit_delay is very small even in that situation, 50K is way too \nhigh. I'd recommend changing this back to the default, if you're not at \nthe point where you're running your own benchmarks to prove the \nparameter is useful to you it's not something you should try to adjust.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Fri, 20 Nov 2009 19:18:09 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres query completion status?" }, { "msg_contents": "Thanks very much for your help so far.\n> \n> (it is pretty confusing that the HashAggregate reports ~6M rows, but\n> the sort does 41M rows, but maybe I can not read this).\n> Anyway, I think that if You up the work_mem for this query to 512M,\n> the sort will be in memory, an thus plenty faster.\n\nTried this (with work_mem 2GB). 
It seems to make a difference, but not \nenough: the query time is about halved (from 220 sec to 120 sec)\n\n> \n> Also, You say You are experiencing unstable query plans, and this may\n> mean that geqo is kicking in (but Your query seems too simple for\n> that, even considering the views involved). A quick way to check that\n> would be to run explain <the query> a coule tens of times, and check\n> if the plans change. If they do, try upping geqo_threshold.\n\nIt's not unstable from one run to the next; it's unstable from one day \nto the next (more on this later)\n\n> \n> You have seq_page_cost 4 times larger than random_page_cost. You say\n> You are on SSD, so there is no random access penalty. Try setting them\n> equal.\n> \n\nAgain, experimentally, it seems to be non-equal. I didn't benchmark \nthis, but the random access tests done by TomsHardware et al suggest a \nfactor 2.5 penalty for random access vs sequential. This is very much \nbetter than rotational disks, but still significant.\n\n\n> Your plan is full of merge-joins, some indices may be in order. Merge\n> join is a kind of \"last-chance\" plan.\n> \n\nI think the fix here is going to be to do more work at write-time and \nless at read-time. i.e. rather than having really complex views, we'll \ngenerate some extra tables, and keep them synchronized with triggers.\n\n\nRichard\n\n\n", "msg_date": "Sun, 22 Nov 2009 15:10:11 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres query completion status?" }, { "msg_contents": "\n\nJustin Pitts wrote:\n> I don't know if I would call it \"terribly\" ugly. Its not especially \n> pretty, but it affords the needed degree of twiddling to get the job \n> done. Relying on the clients is fine - if you can. I suspect the vast \n> majority of DBAs would find that notion unthinkable. The usual result of \n> a memory overrun is a server crash.\n> \n\nIt's probably OK in this context: the multiple clients are all instances \nof the same perl script, running particular, pre-defined queries. So we \ncan trust them not to ask a really memory-intensive query.\n\nBesides which, if you can't trust the clients to ask sensible queries, \nwhy can you trust them to set their own work_mem values?\n\nRichard\n\n\n\n\n> On Nov 20, 2009, at 4:39 PM, Richard Neill wrote:\n> \n>> Justin Pitts wrote:\n>>> Set work_mem in postgresql.conf down to what the 200 clients need, \n>>> which sounds to me like the default setting.\n>>> In the session which needs more work_mem, execute:\n>>> SET SESSION work_mem TO '256MB'\n>>\n>> Isn't that terribly ugly? It seems to me less hackish to rely on the \n>> many clients not to abuse work_mem (as we know exactly what query they \n>> will run, we can be sure it won't happen).\n>>\n>> It's a shame that the work_mem parameter is a per-client one, rather \n>> than a single big pool.\n>>\n>> Richard\n>>\n>> -- \n>> Sent via pgsql-performance mailing list \n>> ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> \n", "msg_date": "Sun, 22 Nov 2009 15:10:17 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres query completion status?" }, { "msg_contents": "\nGreg Smith wrote:\n> Richard Neill wrote:\n>> Am I missing something though, or is this project dormant, without \n>> having released any files?\n> \n> My bad--gave you the wrong url. 
\n> http://git.postgresql.org/gitweb?p=pg_top.git;a=summary has the project \n> I meant to point you toward.\n\nWill try that out...\n\n> \n>> What I really want to know is, how far through that query has it got?\n>> (For example, if the query is an update, then surely it knows how many \n>> rows have been updated, and how many are yet to go).\n> I understand what you want. The reason you're not getting any \n> suggestions is because that just isn't exposed in PostgreSQL yet. \n> Clients ask for queries to be run, eventually they get rows of results \n> back, but there's no notion of how many they're going to get in advance \n> or how far along they are in executing the query's execution plan. \n> There's a couple of academic projects that have started exposing more of \n> the query internals, but I'm not aware of anyone who's even started \n> moving in the direction of what you'd need to produce a progress bar.\n> \n\nIs there any internal table (similar to pg_stat_activity) I can look at?\n\nRichard\n\n", "msg_date": "Sun, 22 Nov 2009 15:14:22 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres query completion status?" } ]
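
On the question of what can be inspected while a query runs: there is no built-in view that exposes how far through its plan a statement has got, but pg_stat_activity does show how long each backend has been on its current statement and in its current transaction, which is often the most useful approximation available. A sketch, assuming the 8.3/8.4 column names:

    SELECT procpid,
           now() - query_start AS running_for,
           now() - xact_start  AS in_transaction_for,
           waiting,
           current_query
      FROM pg_stat_activity
     WHERE current_query <> '<IDLE>'
     ORDER BY query_start;

The waiting column at least distinguishes a backend that is blocked on a lock from one that is simply still working.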
[ { "msg_contents": "\nHi all,\n\nI'm experiencing a strange behavior with my postgresql 8.3:\nperformance is degrading after 3/4 days of running time but if I\njust restart it performance returns back to it's normal value..\nIn normal conditions the postgres process uses about 3% of cpu time\nbut when is in \"degraded\" conditions it can use up to 25% of cpu time.\nThe load of my server is composed of many INSERTs on a table, and\nmany UPDATEs and SELECT on another table, no DELETEs.\nI tried to run vacuum by the pg_maintenance script (Debian Lenny)\nbut it doesn't help. (I have autovacuum off).\n\nSo, my main question is.. how can just a plain simple restart of postgres\nrestore the original performance (3% cpu time)?\nI can post my postgresql.conf if needed.\nThank you for your help,\n\n-- \nLorenzo\n", "msg_date": "Fri, 20 Nov 2009 10:43:40 +0100", "msg_from": "Lorenzo Allegrucci <[email protected]>", "msg_from_op": true, "msg_subject": "Strange performance degradation" }, { "msg_contents": "In response to Lorenzo Allegrucci :\n> \n> Hi all,\n> \n> I'm experiencing a strange behavior with my postgresql 8.3:\n> performance is degrading after 3/4 days of running time but if I\n> just restart it performance returns back to it's normal value..\n> In normal conditions the postgres process uses about 3% of cpu time\n> but when is in \"degraded\" conditions it can use up to 25% of cpu time.\n> The load of my server is composed of many INSERTs on a table, and\n> many UPDATEs and SELECT on another table, no DELETEs.\n> I tried to run vacuum by the pg_maintenance script (Debian Lenny)\n> but it doesn't help. (I have autovacuum off).\n\nBad idea. Really.\n\n> \n> So, my main question is.. how can just a plain simple restart of postgres\n> restore the original performance (3% cpu time)?\n\nYou should enable autovacuum.\n\nAnd you should run vacuum verbose manually and see the output.\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99\n", "msg_date": "Fri, 20 Nov 2009 11:23:22 +0100", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange performance degradation" }, { "msg_contents": "A. Kretschmer wrote:\n> In response to Lorenzo Allegrucci :\n>> Hi all,\n>>\n>> I'm experiencing a strange behavior with my postgresql 8.3:\n>> performance is degrading after 3/4 days of running time but if I\n>> just restart it performance returns back to it's normal value..\n>> In normal conditions the postgres process uses about 3% of cpu time\n>> but when is in \"degraded\" conditions it can use up to 25% of cpu time.\n>> The load of my server is composed of many INSERTs on a table, and\n>> many UPDATEs and SELECT on another table, no DELETEs.\n>> I tried to run vacuum by the pg_maintenance script (Debian Lenny)\n>> but it doesn't help. (I have autovacuum off).\n> \n> Bad idea. Really.\n\nWhy running vacuum by hand is a bad idea?\nvacuum doesn't solve anyway, it seems only a plain restart stops the\nperformance degradation.\n\n>> So, my main question is.. 
how can just a plain simple restart of postgres\n>> restore the original performance (3% cpu time)?\n> \n> You should enable autovacuum.\n> \n> And you should run vacuum verbose manually and see the output.\n\nbelow is the output of vacuum analyze verbose\n(NOTE: I've already run vacuum this morning, this is a second run)\n\nDETAIL: A total of 58224 page slots are in use (including overhead).\n58224 page slots are required to track all free space.\nCurrent limits are: 2000000 page slots, 1000 relations, using 11784 kB.\n", "msg_date": "Fri, 20 Nov 2009 12:00:28 +0100", "msg_from": "Lorenzo Allegrucci <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange performance degradation" }, { "msg_contents": "Lorenzo Allegrucci <lorenzo.allegrucci 'at' forinicom.it> writes:\n\n> A. Kretschmer wrote:\n>> In response to Lorenzo Allegrucci :\n>>> Hi all,\n>>>\n>>> I'm experiencing a strange behavior with my postgresql 8.3:\n>>> performance is degrading after 3/4 days of running time but if I\n>>> just restart it performance returns back to it's normal value..\n>>> In normal conditions the postgres process uses about 3% of cpu time\n>>> but when is in \"degraded\" conditions it can use up to 25% of cpu time.\n>>> The load of my server is composed of many INSERTs on a table, and\n>>> many UPDATEs and SELECT on another table, no DELETEs.\n>>> I tried to run vacuum by the pg_maintenance script (Debian Lenny)\n>>> but it doesn't help. (I have autovacuum off).\n>>\n>> Bad idea. Really.\n>\n> Why running vacuum by hand is a bad idea?\n\nIt's rather turning autovacuum off which is a bad idea.\n\n> vacuum doesn't solve anyway, it seems only a plain restart stops the\n> performance degradation.\n\nNotice: normally, restarting doesn't help for vacuum-related\nproblems.\n\nYour degradation might come from a big request being intensive on\nPG's and OS's caches, resulting in data useful to other requests\ngetting farther (but it should get back to normal if the big\nrequest is not performed again). And btw, 25% is far from 100% so\nresponse time should be the same if there are no other factors;\nyou should rather have a look at IOs (top, vmstat, iostat) during\nproblematic time. How do you measure your degradation, btw?\n\n>>> So, my main question is.. how can just a plain simple restart of postgres\n>>> restore the original performance (3% cpu time)?\n>>\n>> You should enable autovacuum.\n>>\n>> And you should run vacuum verbose manually and see the output.\n>\n> below is the output of vacuum analyze verbose\n> (NOTE: I've already run vacuum this morning, this is a second run)\n>\n> DETAIL: A total of 58224 page slots are in use (including overhead).\n> 58224 page slots are required to track all free space.\n> Current limits are: 2000000 page slots, 1000 relations, using 11784 kB.\n\nWhich means your FSM settings look fine; but doesn't mean your\ndatabase is not bloated (and with many UPDATEs and no correct\nvacuuming, it should be bloated). One way to know is to restore a\nrecent backup, issue VACUUM VERBOSE on a table known to be large\nand regularly UPDATE's/DELETE'd on both databases (in production,\nand on the restore) and compare the reported number of pages\nneeded. 
The difference is the potential benefit of running VACUUM\nFULL (or CLUSTER) in production (once your DB is bloated, a\nnormal VACUUM doesn't remove the bloat).\n\n db_production=# VACUUM VERBOSE table;\n [...]\n INFO: \"table\": found 408 removable, 64994 nonremovable row versions in 4395 pages\n \n db_restored=# VACUUM VERBOSE table;\n [...]\n INFO: \"table\": found 0 removable, 64977 nonremovable row versions in 628 pages\n\nIn that 628/4395 example, we have 85% bloat in production.\n\n-- \nGuillaume Cottenceau\n", "msg_date": "Fri, 20 Nov 2009 12:17:07 +0100", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange performance degradation" }, { "msg_contents": "Is there any idle connections exists ? \n\n\n--\nThanks \nSam Jas\n\n\n--- On Fri, 20/11/09, Lorenzo Allegrucci <[email protected]> wrote:\n\nFrom: Lorenzo Allegrucci <[email protected]>\nSubject: [GENERAL] Strange performance degradation\nTo: [email protected]\nCc: [email protected]\nDate: Friday, 20 November, 2009, 9:43 AM\n\n\nHi all,\n\nI'm experiencing a strange behavior with my postgresql 8.3:\nperformance is degrading after 3/4 days of running time but if I\njust restart it performance returns back to it's normal value..\nIn normal conditions the postgres process uses about 3% of cpu time\nbut when is in \"degraded\" conditions it can use up to 25% of cpu time.\nThe load of my server is composed of many INSERTs on a table, and\nmany UPDATEs and SELECT on another table, no DELETEs.\nI tried to run vacuum by the pg_maintenance script (Debian Lenny)\nbut it doesn't help. (I have autovacuum off).\n\nSo, my main question is.. how can just a plain simple restart of postgres\nrestore the original performance (3% cpu time)?\nI can post my postgresql.conf if needed.\nThank you for your help,\n\n-- Lorenzo\n\n-- Sent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n\n\n\n The INTERNET now has a personality. YOURS! See your Yahoo! Homepage. http://in.yahoo.com/\nIs there any idle connections exists ? --Thanks Sam Jas--- On Fri, 20/11/09, Lorenzo Allegrucci <[email protected]> wrote:From: Lorenzo Allegrucci <[email protected]>Subject: [GENERAL] Strange performance degradationTo: [email protected]: [email protected]: Friday, 20 November, 2009, 9:43 AMHi all,I'm experiencing a strange behavior with my postgresql 8.3:performance is degrading after 3/4 days of running time but if Ijust restart it performance returns back to it's normal value..In normal conditions the postgres process uses about 3% of cpu timebut when is in\n \"degraded\" conditions it can use up to 25% of cpu time.The load of my server is composed of many INSERTs on a table, andmany UPDATEs and SELECT on another table, no DELETEs.I tried to run vacuum by the pg_maintenance script (Debian Lenny)but it doesn't help. (I have autovacuum off).So, my main question is.. how can just a plain simple restart of postgresrestore the original performance (3% cpu time)?I can post my postgresql.conf if needed.Thank you for your help,-- Lorenzo-- Sent via pgsql-general mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-general\n \nThe INTERNET now has a personality. YOURS! See your Yahoo! 
Homepage.", "msg_date": "Fri, 20 Nov 2009 17:48:14 +0530 (IST)", "msg_from": "Sam Jas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange performance degradation" }, { "msg_contents": "2009/11/20 Lorenzo Allegrucci <[email protected]>:\n>\n> Hi all,\n>\n> I'm experiencing a strange behavior with my postgresql 8.3:\n> performance is degrading after 3/4 days of running time but if I\n> just restart it performance returns back to it's normal value..\n> In normal conditions the postgres process uses about 3% of cpu time\n> but when is in \"degraded\" conditions it can use up to 25% of cpu time.\n> The load of my server is composed of many INSERTs on a table, and\n> many UPDATEs and SELECT on another table, no DELETEs.\n> I tried to run vacuum by the pg_maintenance script (Debian Lenny)\n> but it doesn't help. (I have autovacuum off).\n\nI had a similar problem: I did a large delete, and then a selct which\n\"covered\" the previous rows.\nIt took ages, because the index still had those deleted rows.\nPossibly the same happens with update.\n\nTry this:\nvacuum analyse\nreindex database ....\n(your database name instead of ...)\n\nor, rather do this table by table:\nvacuum analyse ....\nreindex table ...\n\n\nAutovacuum is a generally good thing.\n\n> So, my main question is.. how can just a plain simple restart of postgres\n> restore the original performance (3% cpu time)?\n\nthere were probably some long transactions running. Stopping postgres\neffectively kills them off.\n\n> I can post my postgresql.conf if needed.\n> Thank you for your help,\n>\n> --\n> Lorenzo\n>\n> --\n> Sent via pgsql-general mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-general\n>\n\n\n\n-- \nBrian Modra Land line: +27 23 5411 462\nMobile: +27 79 69 77 082\n5 Jan Louw Str, Prince Albert, 6930\nPostal: P.O. Box 2, Prince Albert 6930\nSouth Africa\nhttp://www.zwartberg.com/\n", "msg_date": "Fri, 20 Nov 2009 16:06:37 +0200", "msg_from": "Brian Modra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange performance degradation" }, { "msg_contents": "Lorenzo Allegrucci <[email protected]> writes:\n> So, my main question is.. how can just a plain simple restart of postgres\n> restore the original performance (3% cpu time)?\n\nAre you killing off any long-running transactions when you restart?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 20 Nov 2009 09:49:59 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange performance degradation " }, { "msg_contents": "On Fri, 20 Nov 2009, Lorenzo Allegrucci wrote:\n> performance is degrading...\n\n> In normal conditions the postgres process uses about 3% of cpu time\n> but when is in \"degraded\" conditions it can use up to 25% of cpu time.\n\nYou don't really give enough information to determine what is going on \nhere. This could be one of two situations:\n\n1. You have a constant incoming stream of short-lived requests at a \nconstant rate, and Postgres is taking eight times as much CPU to service \nit as normal. You're looking at CPU usage in aggregate over long periods \nof time. In this case, we should look at long running transactions and \nother slowdown possibilities.\n\n2. You are running a complex query, and you look at top and see that \nPostgres uses eight times as much CPU as when it has been freshly started. 
\nIn this case, the \"performance degradation\" could actually be that the \ndata is more in cache, and postgres is able to process it eight times \n*faster*. Restarting Postgres kills the cache and puts you back at square \none.\n\nWhich of these is it?\n\nMatthew\n\n-- \n Reality is that which, when you stop believing in it, doesn't go away.\n -- Philip K. Dick\n", "msg_date": "Fri, 20 Nov 2009 15:08:00 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Strange performance degradation" }, { "msg_contents": "Sam Jas wrote:\n> \n> Is there any idle connections exists ?\n\nI didn't see any, I'll look better next time.\n", "msg_date": "Fri, 20 Nov 2009 21:24:38 +0100", "msg_from": "Lorenzo Allegrucci <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange performance degradation" }, { "msg_contents": "Brian Modra wrote:\n> I had a similar problem: I did a large delete, and then a selct which\n> \"covered\" the previous rows.\n> It took ages, because the index still had those deleted rows.\n> Possibly the same happens with update.\n> \n> Try this:\n> vacuum analyse\n> reindex database ....\n> (your database name instead of ...)\n> \n> or, rather do this table by table:\n> vacuum analyse ....\n> reindex table ...\n> \n> \n> Autovacuum is a generally good thing.\n> \n>> So, my main question is.. how can just a plain simple restart of postgres\n>> restore the original performance (3% cpu time)?\n> \n> there were probably some long transactions running. Stopping postgres\n> effectively kills them off.\n\nI'll try that, thanks for your help Brian.\n", "msg_date": "Fri, 20 Nov 2009 21:26:10 +0100", "msg_from": "Lorenzo Allegrucci <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange performance degradation" }, { "msg_contents": "Tom Lane wrote:\n> Lorenzo Allegrucci <[email protected]> writes:\n>> So, my main question is.. how can just a plain simple restart of postgres\n>> restore the original performance (3% cpu time)?\n> \n> Are you killing off any long-running transactions when you restart?\n\nAfter three days of patient waiting it looks like the common\n'<IDLE> in transaction' problem..\n\n[sorry for >80 cols]\n\n19329 ? S 15:54 /usr/lib/postgresql/8.3/bin/postgres -D /var/lib/postgresql/8.3/main -c config_file=/etc/postgresql/8.3/main/postgresql.conf\n19331 ? Ss 3:40 \\_ postgres: writer process\n19332 ? Ss 0:42 \\_ postgres: wal writer process\n19333 ? Ss 15:01 \\_ postgres: stats collector process\n19586 ? Ss 114:00 \\_ postgres: forinicom weadmin [local] idle\n20058 ? Ss 0:00 \\_ postgres: forinicom weadmin [local] idle\n13136 ? Ss 0:00 \\_ postgres: forinicom weadmin 192.168.4.253(43721) idle in transaction\n\nMy app is a Django webapp, maybe there's some bug in the Django+psycopg2 stack?\n\nAnyway, how can I get rid those \"idle in transaction\" processes?\nCan I just kill -15 them or is there a less drastic way to do it?\n", "msg_date": "Mon, 23 Nov 2009 21:46:41 +0100", "msg_from": "Lorenzo Allegrucci <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange performance degradation" }, { "msg_contents": "In response to Lorenzo Allegrucci <[email protected]>:\n\n> Tom Lane wrote:\n> > Lorenzo Allegrucci <[email protected]> writes:\n> >> So, my main question is.. 
how can just a plain simple restart of postgres\n> >> restore the original performance (3% cpu time)?\n> > \n> > Are you killing off any long-running transactions when you restart?\n> \n> After three days of patient waiting it looks like the common\n> '<IDLE> in transaction' problem..\n> \n> [sorry for >80 cols]\n> \n> 19329 ? S 15:54 /usr/lib/postgresql/8.3/bin/postgres -D /var/lib/postgresql/8.3/main -c config_file=/etc/postgresql/8.3/main/postgresql.conf\n> 19331 ? Ss 3:40 \\_ postgres: writer process\n> 19332 ? Ss 0:42 \\_ postgres: wal writer process\n> 19333 ? Ss 15:01 \\_ postgres: stats collector process\n> 19586 ? Ss 114:00 \\_ postgres: forinicom weadmin [local] idle\n> 20058 ? Ss 0:00 \\_ postgres: forinicom weadmin [local] idle\n> 13136 ? Ss 0:00 \\_ postgres: forinicom weadmin 192.168.4.253(43721) idle in transaction\n> \n> My app is a Django webapp, maybe there's some bug in the Django+psycopg2 stack?\n> \n> Anyway, how can I get rid those \"idle in transaction\" processes?\n> Can I just kill -15 them or is there a less drastic way to do it?\n\nConnections idle in transaction do not cause performance problems simply\nby being there, at least not when there are so few.\n\nIf you -TERM them, any uncommitted data will be rolled back, which may\nnot be what you want. Don't -KILL them, that will upset the postmaster.\n\nMy answer to your overarching question is that you need to dig deeper to\nfind the real cause of your problem, you're just starting to isolate it.\nTry turning full query logging on and track what those connections are\nactually doing.\n\n-- \nBill Moran\nhttp://www.potentialtech.com\nhttp://people.collaborativefusion.com/~wmoran/\n", "msg_date": "Mon, 23 Nov 2009 16:05:17 -0500", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange performance degradation" }, { "msg_contents": "Bill Moran <[email protected]> writes:\n> In response to Lorenzo Allegrucci <[email protected]>:\n>> Tom Lane wrote:\n>>> Are you killing off any long-running transactions when you restart?\n\n>> Anyway, how can I get rid those \"idle in transaction\" processes?\n>> Can I just kill -15 them or is there a less drastic way to do it?\n\n> Connections idle in transaction do not cause performance problems simply\n> by being there, at least not when there are so few.\n\nThe idle transaction doesn't eat resources in itself. What it does do\nis prevent VACUUM from reclaiming dead rows that are recent enough that\nthey could still be seen by the idle transaction. The described\nbehavior sounds to me like other transactions are wasting lots of cycles\nscanning through dead-but-not-yet-reclaimed rows. There are some other\nthings that also get slower as the window between oldest and newest\nactive XID gets wider.\n\n(8.4 alleviates this problem in many cases, but the OP said he was\nrunning 8.3.)\n\n> If you -TERM them, any uncommitted data will be rolled back, which may\n> not be what you want. Don't -KILL them, that will upset the postmaster.\n\n-TERM isn't an amazingly safe thing either in 8.3. 
Don't you have a way\nto kill the client-side sessions?\n\n> My answer to your overarching question is that you need to dig deeper to\n> find the real cause of your problem, you're just starting to isolate it.\n\nAgreed, what you really want to do is find and fix the transaction leak\non the client side.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 23 Nov 2009 16:26:48 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange performance degradation " }, { "msg_contents": "You may use connection pooling for \"idle connections\" like pgbouncer or pgpool. Following link will give you details about pgbouncer & pgpool. \n\nhttps://developer.skype.com/SkypeGarage/DbProjects/PgBouncer\nhttp://pgpool.projects.postgresql.org/pgpool-II/doc/tutorial-en.html\n\n\nHope it may help you!!! \n\n\n--\nThanks \nSam Jas\n\n\n--- On Mon, 23/11/09, Tom Lane <[email protected]> wrote:\n\nFrom: Tom Lane <[email protected]>\nSubject: Re: [GENERAL] Strange performance degradation\nTo: \"Bill Moran\" <[email protected]>\nCc: \"Lorenzo Allegrucci\" <[email protected]>, [email protected], [email protected]\nDate: Monday, 23 November, 2009, 9:26 PM\n\nBill Moran <[email protected]> writes:\n> In response to Lorenzo Allegrucci <[email protected]>:\n>> Tom Lane wrote:\n>>> Are you killing off any long-running transactions when you restart?\n\n>> Anyway, how can I get rid those \"idle in transaction\" processes?\n>> Can I just kill -15 them or is there a less drastic way to do it?\n\n> Connections idle in transaction do not cause performance problems simply\n> by being there, at least not when there are so few.\n\nThe idle transaction doesn't eat resources in itself.  What it does do\nis prevent VACUUM from reclaiming dead rows that are recent enough that\nthey could still be seen by the idle transaction.  The described\nbehavior sounds to me like other transactions are wasting lots of cycles\nscanning through dead-but-not-yet-reclaimed rows.  There are some other\nthings that also get slower as the window between oldest and newest\nactive XID gets wider.\n\n(8.4 alleviates this problem in many cases, but the OP said he was\nrunning 8.3.)\n\n> If you -TERM them, any uncommitted data will be rolled back, which may\n> not be what you want.  Don't -KILL them, that will upset the postmaster.\n\n-TERM isn't an amazingly safe thing either in 8.3.  Don't you have a way\nto kill the client-side sessions?\n\n> My answer to your overarching question is that you need to dig deeper to\n> find the real cause of your problem, you're just starting to isolate it.\n\nAgreed, what you really want to do is find and fix the transaction leak\non the client side.\n\n            regards, tom lane\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n\n\n\n The INTERNET now has a personality. YOURS! See your Yahoo! Homepage. http://in.yahoo.com/\nYou may use connection pooling for \"idle connections\" like pgbouncer or pgpool. Following link will give you details about pgbouncer & pgpool. https://developer.skype.com/SkypeGarage/DbProjects/PgBouncerhttp://pgpool.projects.postgresql.org/pgpool-II/doc/tutorial-en.htmlHope it may help you!!! 
--Thanks Sam Jas--- On Mon, 23/11/09, Tom Lane <[email protected]> wrote:From: Tom Lane <[email protected]>Subject: Re: [GENERAL] Strange performance degradationTo: \"Bill Moran\" <[email protected]>Cc: \"Lorenzo Allegrucci\" <[email protected]>, [email protected], [email protected]: Monday, 23 November, 2009,\n 9:26 PMBill Moran <[email protected]> writes:> In response to Lorenzo Allegrucci <[email protected]>:>> Tom Lane wrote:>>> Are you killing off any long-running transactions when you restart?>> Anyway, how can I get rid those \"idle in transaction\" processes?>> Can I just kill -15 them or is there a less drastic way to do it?> Connections idle in transaction do not cause performance problems simply> by being there, at least not when there are so few.The idle transaction doesn't eat resources in itself.  What it does dois prevent VACUUM from reclaiming dead rows that are recent enough thatthey could\n still be seen by the idle transaction.  The describedbehavior sounds to me like other transactions are wasting lots of cyclesscanning through dead-but-not-yet-reclaimed rows.  There are some otherthings that also get slower as the window between oldest and newestactive XID gets wider.(8.4 alleviates this problem in many cases, but the OP said he wasrunning 8.3.)> If you -TERM them, any uncommitted data will be rolled back, which may> not be what you want.  Don't -KILL them, that will upset the postmaster.-TERM isn't an amazingly safe thing either in 8.3.  Don't you have a wayto kill the client-side sessions?> My answer to your overarching question is that you need to dig deeper to> find the real cause of your problem, you're just starting to isolate it.Agreed, what you really want to do is find and fix the transaction leakon the client\n side.            regards, tom lane-- Sent via pgsql-general mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-general\n \nThe INTERNET now has a personality. YOURS! See your Yahoo! Homepage.", "msg_date": "Tue, 24 Nov 2009 15:20:35 +0530 (IST)", "msg_from": "Sam Jas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange performance degradation" }, { "msg_contents": "On Mon, 23 Nov 2009, Lorenzo Allegrucci wrote:\n> Anyway, how can I get rid those \"idle in transaction\" processes?\n> Can I just kill -15 them or is there a less drastic way to do it?\n\nAre you crazy? Sure, if you want to destroy all of the changes made to the \ndatabase in that transaction and thoroughly confuse the client \napplication, you can send a TERM signal to a backend, but the consequences \nto your data are on your own head.\n\nFix the application, don't tell Postgres to stop being a decent database.\n\nMatthew\n\n-- \n I would like to think that in this day and age people would know better than\n to open executables in an e-mail. I'd also like to be able to flap my arms\n and fly to the moon. 
-- Tim Mullen\n", "msg_date": "Tue, 24 Nov 2009 11:14:12 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Strange performance degradation" }, { "msg_contents": "IMHO the client application is already confused and it's in Prod.\nShouldn't he perhaps terminate/abort the IDLE connections in Prod and\nwork on correcting the problem so it doesn't occur in Dev/Test??\n\n\nOn 11/24/09, Matthew Wakeling <[email protected]> wrote:\n> On Mon, 23 Nov 2009, Lorenzo Allegrucci wrote:\n>> Anyway, how can I get rid those \"idle in transaction\" processes?\n>> Can I just kill -15 them or is there a less drastic way to do it?\n>\n> Are you crazy? Sure, if you want to destroy all of the changes made to the\n> database in that transaction and thoroughly confuse the client\n> application, you can send a TERM signal to a backend, but the consequences\n> to your data are on your own head.\n>\n> Fix the application, don't tell Postgres to stop being a decent database.\n>\n> Matthew\n>\n> --\n> I would like to think that in this day and age people would know better\n> than\n> to open executables in an e-mail. I'd also like to be able to flap my arms\n> and fly to the moon. -- Tim Mullen\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Tue, 24 Nov 2009 10:10:17 -0500", "msg_from": "Denis Lussier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Strange performance degradation" }, { "msg_contents": "Matthew Wakeling wrote:\n> On Mon, 23 Nov 2009, Lorenzo Allegrucci wrote:\n>> Anyway, how can I get rid those \"idle in transaction\" processes?\n>> Can I just kill -15 them or is there a less drastic way to do it?\n> \n> Are you crazy? Sure, if you want to destroy all of the changes made to \n> the database in that transaction and thoroughly confuse the client \n> application, you can send a TERM signal to a backend, but the \n> consequences to your data are on your own head.\n\nI'm not crazy, it was just a question..\nAnyway, problem solved in the Django application.\n\n", "msg_date": "Tue, 24 Nov 2009 16:32:25 +0100", "msg_from": "Lorenzo Allegrucci <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Strange performance degradation" }, { "msg_contents": "On Tue, 24 Nov 2009, Denis Lussier wrote:\n> IMHO the client application is already confused and it's in Prod.\n> Shouldn't he perhaps terminate/abort the IDLE connections in Prod and\n> work on correcting the problem so it doesn't occur in Dev/Test??\n\nThe problem is, the connection isn't just IDLE - it is idle IN \nTRANSACTION. This means that there is quite possibly some data that has \nbeen modified in that transaction. If you kill the backend, then that will \nautomatically roll back the transaction, and all of those changes would be \nlost.\n\nI agree that correcting the problem in dev/test is the priority, but I \nwould be very cautious about killing transactions in production. You don't \nknow what data is uncommitted. The safest thing to do may be to bounce the \napplication, rather than Postgres.\n\nMatthew\n\n-- \n All of this sounds mildly turgid and messy and confusing... but what the\n heck. 
That's what programming's all about, really\n -- Computer Science Lecturer\n", "msg_date": "Tue, 24 Nov 2009 15:41:09 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Strange performance degradation" }, { "msg_contents": "Lorenzo Allegrucci escribi�:\n> Matthew Wakeling wrote:\n>> On Mon, 23 Nov 2009, Lorenzo Allegrucci wrote:\n>>> Anyway, how can I get rid those \"idle in transaction\" processes?\n>>> Can I just kill -15 them or is there a less drastic way to do it?\n>>\n>> Are you crazy? Sure, if you want to destroy all of the changes made \n>> to the database in that transaction and thoroughly confuse the client \n>> application, you can send a TERM signal to a backend, but the \n>> consequences to your data are on your own head.\n>\n> I'm not crazy, it was just a question..\n> Anyway, problem solved in the Django application.\n>\n>\nMatthew replied to you of that way because this is not a good manner to \ndo this, not fot thr fact that you are crazy.\n\nYou can find better ways to do this.\n\nRegards\n", "msg_date": "Tue, 24 Nov 2009 15:31:36 -0500", "msg_from": "\"Ing. Marcos Ortiz Valmaseda\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Strange performance degradation" }, { "msg_contents": "Bouncing the app will roll back the transactions. If there were any\npending updates/inserts, wouldn't he be able to see them in one of the\nsystem tables...\n\n\nOn 11/24/09, Matthew Wakeling <[email protected]> wrote:\n> On Tue, 24 Nov 2009, Denis Lussier wrote:\n>> IMHO the client application is already confused and it's in Prod.\n>> Shouldn't he perhaps terminate/abort the IDLE connections in Prod and\n>> work on correcting the problem so it doesn't occur in Dev/Test??\n>\n> The problem is, the connection isn't just IDLE - it is idle IN\n> TRANSACTION. This means that there is quite possibly some data that has\n> been modified in that transaction. If you kill the backend, then that will\n> automatically roll back the transaction, and all of those changes would be\n> lost.\n>\n> I agree that correcting the problem in dev/test is the priority, but I\n> would be very cautious about killing transactions in production. You don't\n> know what data is uncommitted. The safest thing to do may be to bounce the\n> application, rather than Postgres.\n>\n> Matthew\n>\n> --\n> All of this sounds mildly turgid and messy and confusing... but what the\n> heck. That's what programming's all about, really\n> -- Computer Science Lecturer\n>\n", "msg_date": "Tue, 24 Nov 2009 18:47:16 -0500", "msg_from": "Denis Lussier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Strange performance degradation" }, { "msg_contents": "On Tue, 24 Nov 2009, Denis Lussier wrote:\n> Bouncing the app will roll back the transactions.\n\nDepends on the application. Some certainly use a shutdown hook to flush \ndata out to a database cleanly.\n\nObviously if you kill -9 it, then all bets are off.\n\nMatthew\n\n-- \n Software suppliers are trying to make their software packages more\n 'user-friendly'.... Their best approach, so far, has been to take all\n the old brochures, and stamp the words, 'user-friendly' on the cover.\n -- Bill Gates\n", "msg_date": "Wed, 25 Nov 2009 11:07:36 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Strange performance degradation" } ]
[ { "msg_contents": "Richard Neill wrote:\n \n> SELECT ( core.demand.qty - viwcs.wave_end_demand.qty_remaining )\n> FROM\n> core.demand,\n> viwcs.previous_wave\n> LEFT OUTER JOIN viwcs.wave_end_demand USING ( wid )\n> WHERE core.demand.id = viwcs.wave_end_demand.demand_id;\n \nFor comparison, how does this do?:\n \nSELECT (core.demand.qty - viwcs.wave_end_demand.qty_remaining)\n FROM core.demand,\n JOIN viwcs.previous_wave\n ON (core.demand.id = viwcs.wave_end_demand.demand_id)\n LEFT OUTER JOIN viwcs.wave_end_demand USING (wid);\n \n-Kevin\n\n", "msg_date": "Fri, 20 Nov 2009 09:24:17 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres query completion status?" }, { "msg_contents": "\nKevin Grittner wrote:\n> Richard Neill wrote:\n> \n>> SELECT ( core.demand.qty - viwcs.wave_end_demand.qty_remaining )\n>> FROM\n>> core.demand,\n>> viwcs.previous_wave\n>> LEFT OUTER JOIN viwcs.wave_end_demand USING ( wid )\n>> WHERE core.demand.id = viwcs.wave_end_demand.demand_id;\n> \n> For comparison, how does this do?:\n> \n> SELECT (core.demand.qty - viwcs.wave_end_demand.qty_remaining)\n> FROM core.demand\n> JOIN viwcs.previous_wave\n> ON (core.demand.id = viwcs.wave_end_demand.demand_id)\n> LEFT OUTER JOIN viwcs.wave_end_demand USING (wid);\n> \n\n\nThanks for your help,\n\nUnfortunately, it just complains:\n\nERROR: missing FROM-clause entry for table \"wave_end_demand\"\nLINE 4: ON (core.demand.id = viwcs.wave_end_demand.demand_id)\n\nIncidentally, I don't think that this particular re-ordering will make\nmuch difference: viwcs.previous_wave is a table with a single row, and 3\ncolumns in it. Here are the bits of schema, if they're helpful.\n\n\n View \"viwcs.wave_end_demand\"\n Column | Type | Modifiers\n---------------+-----------------------+-----------\n wid | character varying(10) |\n storeorderid | character varying(30) |\n genreorderid | character varying(30) |\n target_id | bigint |\n sid | character varying(30) |\n material_id | bigint |\n demand_id | bigint |\n eqa | integer |\n aqu | bigint |\n qty_remaining | bigint |\nView definition:\n SELECT wave_gol.wid, wave_gol.storeorderid, wave_gol.genreorderid,\nwave_genreorders_map.target_id, wave_gol.sid,\nproduct_info_sku_map.material_id, demand.id AS demand_id, wave_gol.eqa,\nCOALESCE(du_report_sku_sum.aqu, 0::bigint) AS aqu, wave_gol.eqa -\nCOALESCE(du_report_sku_sum.aqu, 0::bigint) AS qty_remaining\n FROM viwcs.wave_gol\n LEFT JOIN viwcs.wave_genreorders_map USING (wid, storeorderid,\ngenreorderid)\n LEFT JOIN viwcs.product_info_sku_map USING (sid)\n LEFT JOIN core.demand USING (target_id, material_id)\n LEFT JOIN ( SELECT du_report_sku.wid, du_report_sku.storeorderid,\ndu_report_sku.genreorderid, du_report_sku.sid, sum(du_report_sku.aqu) AS aqu\n FROM viwcs.du_report_sku\n GROUP BY du_report_sku.wid, du_report_sku.storeorderid,\ndu_report_sku.genreorderid, du_report_sku.sid) du_report_sku_sum USING\n(wid, storeorderid, genreorderid, sid);\n\n\n\n View \"viwcs.previous_wave\"\n Column | Type | Modifiers\n--------+-----------------------+-----------\n wid | character varying(10) |\nView definition:\n SELECT wave_rxw.wid\n FROM viwcs.wave_rxw\n WHERE wave_rxw.is_previous;\n\n\n\n\n Table \"core.demand\"\n Column | Type | Modifiers\n-------------+---------+--------------------------------\n id | bigint | not null default core.new_id()\n target_id | bigint | not null\n material_id | bigint | not null\n qty | integer | not null\n benefit | integer | not null default 0\nIndexes:\n 
\"demand_pkey\" PRIMARY KEY, btree (id)\n \"demand_target_id_key\" UNIQUE, btree (target_id, material_id)\n \"demand_material_id\" btree (material_id)\n \"demand_target_id\" btree (target_id)\nForeign-key constraints:\n \"demand_material_id_fkey\" FOREIGN KEY (material_id) REFERENCES\ncore.__material_id(id)\n \"demand_target_id_fkey\" FOREIGN KEY (target_id) REFERENCES\ncore.waypoint(id)\nReferenced by:\n TABLE \"core.inventory\" CONSTRAINT \"inventory_demand_id_fkey\"\nFOREIGN KEY (demand_id) REFERENCES core.demand(id)\n\n\n\n\n\n\nThanks,\n\nRichard\n\n", "msg_date": "Fri, 20 Nov 2009 19:00:20 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres query completion status?" } ]
[ { "msg_contents": "Hi,\n\nMy PostgreSQL server has two CPUs (OS: Fedora 11), each with 4 cores. Total\nis 8cores. Now I have several clients running at the same time to do insert\nand update on the same table, each client having its own connection. I have\nmade two testing with clients running in parallel to load 20M data in\ntotal. Each testing, the data is split evenly by the client number such that\neach client only loads a piece of data.\n\n1) Long transaction: A client does the commit at the end of loading. Result:\nEach postgres consumes 95% CPU. The more clients run in parallel, the slower\nthe total runing time is (when 8 clients, it is slowest). However, I expect\nthe more clients run in parallel, it should be faster to load all the data.\n\n2) Short transaction: I set the clients to do a commit on loading every 500\nrecords. Results: Each postgres consumes about 50%CPU. Now the total\nrunning is as what i have expected; the more clients run in parallel, the\nfaster it is (when 8 clients, it is fastest).\n\nCould anybody help to why when I do the long transaction with 8 clients, it\nis slowest? How can I solve this problem? As I don't want to use the 2), in\nwhich I have to set the commit size each time.\n\nThanks a lot!!\n\n-Afancy\n\nHi,My PostgreSQL server has two CPUs (OS: Fedora 11), each with 4 cores. Total is 8cores.  Now I have several clients running at the same time to do insert and update on the same table, each client having its own connection.  I have made  two testing with  clients running in parallel to load 20M data in total. Each testing, the data is split evenly by the client number such that each client only loads a piece of data. \n1) Long transaction: A client does the commit at the end of loading. Result: Each postgres consumes 95% CPU. The more clients run in parallel, the slower the total runing time is (when 8 clients, it is slowest). However, I expect the more clients run in parallel, it should be faster to load all the data.\n2) Short transaction: I set the clients to do a commit on loading every 500 records. Results:  Each postgres consumes about 50%CPU. Now the total running is as what i have expected; the more clients run in parallel, the faster it is (when 8 clients, it is fastest). \nCould anybody help to why when I do the long transaction with 8 clients, it is slowest? How can I solve this problem?  As I don't want to use the 2), in which I have to set the commit size each time. Thanks a lot!!\n-Afancy", "msg_date": "Sat, 21 Nov 2009 23:56:51 +0100", "msg_from": "afancy <[email protected]>", "msg_from_op": true, "msg_subject": "Performance degrade running on multicore computer" }, { "msg_contents": "afancy <[email protected]> writes:\n> My PostgreSQL server has two CPUs (OS: Fedora 11), each with 4 cores. Total\n> is 8cores. Now I have several clients running at the same time to do insert\n> and update on the same table, each client having its own connection. I have\n> made two testing with clients running in parallel to load 20M data in\n> total. Each testing, the data is split evenly by the client number such that\n> each client only loads a piece of data.\n\nWhat exactly are you doing when you \"load data\"? 
There are some code\npaths that are slower if they have to examine not-yet-committed tuples,\nand your report sounds a bit like that might be what's happening.\nBut with so few details (not even a Postgres version number :-()\nit's difficult to be sure of anything.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 21 Nov 2009 18:13:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance degrade running on multicore computer " }, { "msg_contents": "Hi,\n\nI am using the PostgreSQL 8.4. What is the code path? After a row is\ninserted to the table, it will update the fields of \"validfrom\", and\n\"validto\". Followings are the table structure, data, and the performance\ndata:\n\nxiliu=# \\d page\n              Table \"pyetlexa.page\"\n     Column      |       Type        | Modifiers\n-----------------+-------------------+-----------\n pageid          | integer           | not null\n url             | character varying |\n size            | integer           |\n validfrom       | date              |\n validto         | date              |\n version         | integer           |\n domainid        | integer           |\n serverversionid | integer           |\nIndexes:\n    \"page_pkey\" PRIMARY KEY, btree (pageid)\n    \"url_version_idx\" btree (url, version DESC)\n\nHere is the data in this table:\nhttp://imagebin.ca/img/KyxMDIKq.png\n\n\nHere is the performance data by \"top\":\nhttp://imagebin.ca/img/2ssw4wEQ.png\n\n\n\nRegards,\n\nafancy\n\nOn Sun, Nov 22, 2009 at 12:13 AM, Tom Lane <[email protected]> wrote:\n\n> afancy <[email protected]> writes:\n> > My PostgreSQL server has two CPUs (OS: Fedora 11), each with 4 cores.\n> Total\n> > is 8cores. Now I have several clients running at the same time to do\n> insert\n> > and update on the same table, each client having its own connection. I\n> have\n> > made two testing with clients running in parallel to load 20M data in\n> > total. Each testing, the data is split evenly by the client number such\n> that\n> > each client only loads a piece of data.\n>\n> What exactly are you doing when you \"load data\"? There are some code\n> paths that are slower if they have to examine not-yet-committed tuples,\n> and your report sounds a bit like that might be what's happening.\n> But with so few details (not even a Postgres version number :-()\n> it's difficult to be sure of anything.\n>\n> \t\t\tregards, tom lane\n>\n\n
", "msg_date": "Sun, 22 Nov 2009 10:02:20 +0100", "msg_from": "afancy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance degrade running on multicore computer" }, { "msg_contents": "On 01/-10/-28163 11:59 AM, afancy wrote:\n> Hi,\n>\n> My PostgreSQL server has two CPUs (OS: Fedora 11), each with 4 cores.\n> Total is 8cores. Now I have several clients running at the same time\n> to do insert and update on the same table, each client having its own\n> connection. I have made two testing with clients running in\n> parallel to load 20M data in total. Each testing, the data is split\n> evenly by the client number such that each client only loads a piece\n> of data.\n>\n> 1) Long transaction: A client does the commit at the end of loading.\n> Result: Each postgres consumes 95% CPU. The more clients run in\n> parallel, the slower the total runing time is (when 8 clients, it is\n> slowest). However, I expect the more clients run in parallel, it\n> should be faster to load all the data.\n>\n> 2) Short transaction: I set the clients to do a commit on loading\n> every 500 records. Results: Each postgres consumes about 50%CPU. Now\n> the total running is as what i have expected; the more clients run in\n> parallel, the faster it is (when 8 clients, it is fastest).\n>\n> Could anybody help to why when I do the long transaction with 8\n> clients, it is slowest? How can I solve this problem? As I don't want\n> to use the 2), in which I have to set the commit size each time.\n>\n> Thanks a lot!!\n>\n> -Afancy\n>\n\nSince you have 2 cpus, you may want to try setting the processor\naffinity for postgres (server and client programs) to the 4 cores on one\nof the cpus (taskset command on linux). Here's an excerpt from a\nmodified /etc/init.d/postgresql:\n\n    $SU -l postgres -c \"taskset -c 4-7 $PGENGINE/postmaster -p '$PGPORT' -D '$PGDATA' ${PGOPTS} &\" >> \"$PGLOG\" 2>&1 < /dev/null \n\n\nThanks to Greg Smith to pointing this out when we had a similar issue\nw/a 2-cpu server.\nNB: This was with postgresql 8.3. Don't know if 8.4+ has built-in\nprocessor affinity.\n\n(Apologies in advance for the email formatting.)\n", "msg_date": "Mon, 23 Nov 2009 11:10:55 -0800", "msg_from": "Dave Youatt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance degrade running on multicore computer" } ]
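A small follow-up sketch for the long-transaction case above (standard 8.4 catalog columns; untested against this particular workload): while the eight loaders are running slowly, check whether they are actually blocked on each other's locks rather than doing useful work.

    -- show ungranted lock requests and what the waiting backend is running
    SELECT w.pid AS waiting_pid, w.locktype, w.mode, a.current_query
      FROM pg_locks w
      JOIN pg_stat_activity a ON a.procpid = w.pid
     WHERE NOT w.granted;

If this returns rows during the slowdown, the bottleneck is contention on the shared table (or its unique-key checks) rather than raw CPU.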
[ { "msg_contents": "Hi all,\n\n(Sorry, I know this is a repeat, but if you're using message threads,\nthe previous one was a reply to an OLD subject.)\n\nThe query below is fairly fast if the commented sub-select is\ncommented, but once I included that column, it takes over 10 minutes to\nreturn results. Can someone shed some light on it? I was able to redo\nthe query using left joins instead, and it only marginally increased\nresult time. This is an application (Quasar by Linux Canada) I can't\nchange the query in, so want to see if there's a way to tune the\ndatabase for it to perform faster. Application developer says that\nSybase is able to run this same query with the price column included\nwith only marginal increase in time.\n\n\nselect item.item_id,item_plu.number,item.description,\n(select dept.name from dept where dept.dept_id = item.dept_id)\n-- ,(select price from item_price\n-- where item_price.item_id = item.item_id\n-- and item_price.zone_id = 'OUsEaRcAA3jQrg42WHUm8A'\n-- and item_price.price_type = 0\n-- and item_price.size_name = item.sell_size)\nfrom item join item_plu on item.item_id = item_plu.item_id and\nitem_plu.seq_num = 0\nwhere item.inactive_on is null and exists (select item_num.number from\nitem_num\nwhere item_num.item_id = item.item_id)\nand exists (select stocked from item_store where stocked = 'Y'\nand item_store.item_id = item.item_id)\n\n\nExplain analyze without price column:\n QUERY\nPLAN \n--------------------------------------------------------------------------------------------------------------------------------------------\n Hash Semi Join (cost=1563.82..13922.00 rows=10659 width=102) (actual\ntime=165.988..386.737 rows=10669 loops=1) \n Hash Cond: (item.item_id =\nitem_store.item_id) \n\n -> Hash Join (cost=1164.70..2530.78 rows=10659 width=148) (actual\ntime=129.804..222.008 rows=10669 loops=1) \n Hash Cond: (item.item_id =\nitem_plu.item_id) \n\n -> Hash Join (cost=626.65..1792.86 rows=10661 width=93)\n(actual time=92.930..149.267 rows=10669 loops=1) \n Hash Cond: (item.item_id =\nitem_num.item_id) \n\n -> Seq Scan on item (cost=0.00..882.67 rows=10665\nwidth=70) (actual time=0.006..17.706 rows=10669 loops=1) \n Filter: (inactive_on IS\nNULL) \n\n -> Hash (cost=493.39..493.39 rows=10661 width=23)\n(actual time=92.872..92.872 rows=10672 loops=1) \n -> HashAggregate (cost=386.78..493.39 rows=10661\nwidth=23) (actual time=59.193..75.303 rows=10672 loops=1) \n -> Seq Scan on item_num (cost=0.00..339.22\nrows=19022 width=23) (actual time=0.007..26.013 rows=19040 loops=1)\n -> Hash (cost=404.76..404.76 rows=10663 width=55) (actual\ntime=36.835..36.835 rows=10672 loops=1) \n -> Seq Scan on item_plu (cost=0.00..404.76 rows=10663\nwidth=55) (actual time=0.010..18.609 rows=10672 loops=1) \n Filter: (seq_num =\n0) \n\n -> Hash (cost=265.56..265.56 rows=10685 width=23) (actual\ntime=36.123..36.123 rows=10672\nloops=1) \n -> Seq Scan on item_store (cost=0.00..265.56 rows=10685\nwidth=23) (actual time=0.015..17.959 rows=10672 loops=1) \n Filter: (stocked =\n'Y'::bpchar) \n\n SubPlan\n1 \n\n -> Seq Scan on dept (cost=0.00..1.01 rows=1 width=32) (actual\ntime=0.002..0.004 rows=1 loops=10669) \n Filter: (dept_id =\n$0) \n\n Total runtime: 401.560\nms \n\n(21 rows)\n\n\nExplain with price column:\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Hash Semi Join (cost=1563.82..4525876.70 rows=10659 width=106) (actual\ntime=171.186..20863.887 rows=10669 
loops=1)\n Hash Cond: (item.item_id = item_store.item_id)\n -> Hash Join (cost=1164.70..2530.78 rows=10659 width=152) (actual\ntime=130.025..236.528 rows=10669 loops=1)\n Hash Cond: (item.item_id = item_plu.item_id)\n -> Hash Join (cost=626.65..1792.86 rows=10661 width=97)\n(actual time=92.780..158.514 rows=10669 loops=1)\n Hash Cond: (item.item_id = item_num.item_id)\n -> Seq Scan on item (cost=0.00..882.67 rows=10665\nwidth=74) (actual time=0.008..18.836 rows=10669 loops=1)\n Filter: (inactive_on IS NULL)\n -> Hash (cost=493.39..493.39 rows=10661 width=23)\n(actual time=92.727..92.727 rows=10672 loops=1)\n -> HashAggregate (cost=386.78..493.39 rows=10661\nwidth=23) (actual time=59.064..75.243 rows=10672 loops=1)\n -> Seq Scan on item_num (cost=0.00..339.22\nrows=19022 width=23) (actual time=0.009..26.287 rows=19040 loops=1)\n -> Hash (cost=404.76..404.76 rows=10663 width=55) (actual\ntime=37.206..37.206 rows=10672 loops=1)\n -> Seq Scan on item_plu (cost=0.00..404.76 rows=10663\nwidth=55) (actual time=0.011..18.823 rows=10672 loops=1)\n Filter: (seq_num = 0)\n -> Hash (cost=265.56..265.56 rows=10685 width=23) (actual\ntime=36.395..36.395 rows=10672 loops=1)\n -> Seq Scan on item_store (cost=0.00..265.56 rows=10685\nwidth=23) (actual time=0.015..18.120 rows=10672 loops=1)\n Filter: (stocked = 'Y'::bpchar)\n SubPlan 1\n -> Seq Scan on dept (cost=0.00..1.01 rows=1 width=32) (actual\ntime=0.002..0.004 rows=1 loops=10669)\n Filter: (dept_id = $0)\n SubPlan 2\n -> Seq Scan on item_price (cost=0.00..423.30 rows=1 width=8)\n(actual time=1.914..1.914 rows=0 loops=10669)\n Filter: ((item_id = $1) AND (zone_id =\n'OUsEaRcAA3jQrg42WHUm8A'::bpchar) AND (price_type = 0) AND\n((size_name)::text = ($2)::text))\n Total runtime: 20879.388 ms\n(24 rows)\n\n\n", "msg_date": "Sat, 21 Nov 2009 23:13:47 -0600", "msg_from": "Mark Dueck <[email protected]>", "msg_from_op": true, "msg_subject": "sub-select makes query take too long - unusable" }, { "msg_contents": "Hello,\n\nSubPlan 2\n -> Seq Scan on item_price (cost=0.00..423.30 rows=1 width=8)\n(actual time=1.914..1.914 rows=0 loops=10669)\n Filter: ((item_id = $1) AND (zone_id =\n'OUsEaRcAA3jQrg42WHUm8A'::bpchar) AND (price_type = 0) AND\n((size_name)::text = ($2)::text))\n\nThis means that, for every one of 10669 output rows, DB scanned whole\nitem_price table, spending 20.4 of 20.8 secs there. Do you have any\nindexes there? Especially, on item_id column.\n\nBest regards,\nSergey Aleynikov\n", "msg_date": "Sun, 22 Nov 2009 16:06:18 +0800", "msg_from": "Sergey Aleynikov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sub-select makes query take too long - unusable" } ]
[ { "msg_contents": "I have a table with a number of columns.\n \nI perform \n \nSelect * \nfrom table\norder by a,b\n \nThere is an index on a,b which is clustered (as well as indexes on a and b\nalone).\nI have issued the cluster and anyalze commands.\n \nNevertheless, PostgreSQL performs a Sequential Scan on the table and then\nperforms a sort.\n\nAm I missing something?\n \nJonathan Blitz\n\n\n\n\n\nI \nhave a table with a number of columns.\n \nI \nperform \n \nSelect * \nfrom table\norder by a,b\n \nThere is an index on a,b which is clustered (as well as indexes \non a and b alone).\nI \nhave issued the cluster and anyalze commands.\n \nNevertheless, PostgreSQL performs a Sequential Scan on the table and \nthen performs a sort.\nAm I missing something?\n \nJonathan Blitz", "msg_date": "Sun, 22 Nov 2009 14:50:51 +0200", "msg_from": "Jonathan Blitz <[email protected]>", "msg_from_op": true, "msg_subject": "Why is the query not using the index for sorting?" }, { "msg_contents": "2009/11/22 Jonathan Blitz <[email protected]>\n\n> I have a table with a number of columns.\n>\n> I perform\n>\n> Select *\n> from table\n> order by a,b\n>\n> There is an index on a,b which is clustered (as well as indexes on a and b\n> alone).\n> I have issued the cluster and anyalze commands.\n>\n> Nevertheless, PostgreSQL performs a Sequential Scan on the table and then\n> performs a sort.\n> Am I missing something?\n>\n> Jonathan Blitz\n>\n\nIt depends on firstly the size of the table, and also the distribution of\ndata in columns a and b. If the stats for that table knows that the table\nhas a natural order (i.e. they happen to be in roughly the order you've\nasked for them in), or the table isn't big enough to warrant using an index,\nthen it won't bother using one. It will pick whichever it believes to be\nthe most efficient method.\n\nRegards\n\nThom\n\n2009/11/22 Jonathan Blitz <[email protected]>\n\nI \nhave a table with a number of columns.\n \nI \nperform \n \nSelect * \nfrom table\norder by a,b\n \nThere is an index on a,b which is clustered (as well as indexes \non a and b alone).\nI \nhave issued the cluster and anyalze commands.\n \nNevertheless, PostgreSQL performs a Sequential Scan on the table and \nthen performs a sort.\nAm I missing something?\n \nJonathan Blitz\nIt depends on firstly the size of the table, and also the distribution of data in columns a and b.  If the stats for that table knows that the table has a natural order (i.e. they happen to be in roughly the order you've asked for them in), or the table isn't big enough to warrant using an index, then it won't bother using one.  It will pick whichever it believes to be the most efficient method.\nRegardsThom", "msg_date": "Sun, 22 Nov 2009 13:10:26 +0000", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is the query not using the index for sorting?" }, { "msg_contents": "On 22/11/2009 8:50 PM, Jonathan Blitz wrote:\n> I have a table with a number of columns.\n> \n> I perform\n> \n> Select *\n> from table\n> order by a,b\n> \n> There is an index on a,b which is clustered (as well as indexes on a and\n> b alone).\n> I have issued the cluster and anyalze commands.\n> \n> Nevertheless, PostgreSQL performs a Sequential Scan on the table and\n> then performs a sort.\n\nPostgreSQL's query planner probably thinks it'll be faster to read the\npages off the disk sequentially then sort them in memory. 
To use an\nindex instead, Pg would have to read the whole index from disk\n(sequentially) then fetch all the pages off the disk in a probably\nnear-random order. So it'd be doing more disk I/O, and much more of it\nwould be random I/O, which is a LOT slower.\n\nSo Pg does it the fast way, reading the table into memory then sorting\nit there.\n\nThe most important thing to understand is that sometimes, a sequential\nscan is just the fastest way to do the job.\n\nI suspect you're working on the assumption that Pg can get all the data\nit needs from the index, so it doesn't need to read the tables proper.\nIn some other database systems this *might* be possible if you had an\nindex on fields \"a\" and \"b\" and issued a \"select a,b from table\" instead\nof a \"select *\". PostgreSQL, though, can not do this. PostgreSQL's\nindexes do not contain all the information required to return values\nfrom queries, only enough information to find the places in the main\ntables where those values are to be found.\n\nIf you want to know more and understand why that's the case, search for\nthe phrase \"covered index\" and the words \"index visibility\". Suffice it\nto say that there are pretty good reasons why it works how it does, and\nthere would be very large downsides to changing how it works as well as\nlarge technical problems to solve to even make it possible. It's to do\nwith the trade-off between update/insert/delete speeds and query speeds,\nthe cost of \"fatter\" indexes taking longer to read from disk, and lots more.\n\nBy the way, if you want to test out different query plans for a query to\nsee which way is faster, you can use the \"enable_\" parameters like\n\"enable_seqscan\", \"enable_hashjoin\" etc to control how PostgreSQL\nperforms queries. There's *LOTS* to be learned about this in the mailing\nlist archives. You should also read the following page:\n\n http://www.postgresql.org/docs/current/static/runtime-config-query.html\n\nbut understand that the planner method configuration parameters are\nintended mostly for testing and performance analysis, not for production\nuse.\n\nIf you find a query that's lots faster with a particular enable_\nparameter set to \"off\", try increasing your statistics targets on the\ntables / columns of interest, re-ANALYZEing, and re-testing. See these\npages re statistics:\n\nhttp://www.postgresql.org/docs/current/static/using-explain.html\nhttp://www.postgresql.org/docs/current/static/planner-stats.html\nhttp://www.postgresql.org/docs/current/static/planner-stats-details.html\n\nIf after increasing your stats targets the planner still picks a vastly\nslower plan, consider posting to the mailing list with the full output\nof \"EXPLAIN ANALYZE SELECT myquery....\", the full exact text of your\nquery, and your table schema as shown by \"\\d tablename\" in psql. Someone\nmay be able to help you or at least explain why it's happening.\n\n--\nCraig Ringer\n", "msg_date": "Sun, 22 Nov 2009 21:25:17 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is the query not using the index for sorting?" }, { "msg_contents": "Many thanks.\nI'll give it a try and see what happens. 
\n\n-----Original Message-----\nFrom: Craig Ringer [mailto:[email protected]] \nSent: Sunday, November 22, 2009 3:25 PM\nTo: Jonathan Blitz\nCc: [email protected]\nSubject: Re: [PERFORM] Why is the query not using the index for sorting?\n\nOn 22/11/2009 8:50 PM, Jonathan Blitz wrote:\n> I have a table with a number of columns.\n> \n> I perform\n> \n> Select *\n> from table\n> order by a,b\n> \n> There is an index on a,b which is clustered (as well as indexes on a \n> and b alone).\n> I have issued the cluster and anyalze commands.\n> \n> Nevertheless, PostgreSQL performs a Sequential Scan on the table and \n> then performs a sort.\n\nPostgreSQL's query planner probably thinks it'll be faster to read the pages\noff the disk sequentially then sort them in memory. To use an index instead,\nPg would have to read the whole index from disk\n(sequentially) then fetch all the pages off the disk in a probably\nnear-random order. So it'd be doing more disk I/O, and much more of it would\nbe random I/O, which is a LOT slower.\n\nSo Pg does it the fast way, reading the table into memory then sorting it\nthere.\n\nThe most important thing to understand is that sometimes, a sequential scan\nis just the fastest way to do the job.\n\nI suspect you're working on the assumption that Pg can get all the data it\nneeds from the index, so it doesn't need to read the tables proper.\nIn some other database systems this *might* be possible if you had an index\non fields \"a\" and \"b\" and issued a \"select a,b from table\" instead of a\n\"select *\". PostgreSQL, though, can not do this. PostgreSQL's indexes do not\ncontain all the information required to return values from queries, only\nenough information to find the places in the main tables where those values\nare to be found.\n\nIf you want to know more and understand why that's the case, search for the\nphrase \"covered index\" and the words \"index visibility\". Suffice it to say\nthat there are pretty good reasons why it works how it does, and there would\nbe very large downsides to changing how it works as well as large technical\nproblems to solve to even make it possible. It's to do with the trade-off\nbetween update/insert/delete speeds and query speeds, the cost of \"fatter\"\nindexes taking longer to read from disk, and lots more.\n\nBy the way, if you want to test out different query plans for a query to see\nwhich way is faster, you can use the \"enable_\" parameters like\n\"enable_seqscan\", \"enable_hashjoin\" etc to control how PostgreSQL performs\nqueries. There's *LOTS* to be learned about this in the mailing list\narchives. You should also read the following page:\n\n http://www.postgresql.org/docs/current/static/runtime-config-query.html\n\nbut understand that the planner method configuration parameters are intended\nmostly for testing and performance analysis, not for production use.\n\nIf you find a query that's lots faster with a particular enable_ parameter\nset to \"off\", try increasing your statistics targets on the tables / columns\nof interest, re-ANALYZEing, and re-testing. 
See these pages re statistics:\n\nhttp://www.postgresql.org/docs/current/static/using-explain.html\nhttp://www.postgresql.org/docs/current/static/planner-stats.html\nhttp://www.postgresql.org/docs/current/static/planner-stats-details.html\n\nIf after increasing your stats targets the planner still picks a vastly\nslower plan, consider posting to the mailing list with the full output of\n\"EXPLAIN ANALYZE SELECT myquery....\", the full exact text of your query, and\nyour table schema as shown by \"\\d tablename\" in psql. Someone may be able to\nhelp you or at least explain why it's happening.\n\n--\nCraig Ringer\nNo virus found in this incoming message.\nChecked by AVG - www.avg.com\nVersion: 9.0.709 / Virus Database: 270.14.76/2517 - Release Date: 11/21/09\n21:41:00\n\n", "msg_date": "Sun, 22 Nov 2009 15:34:31 +0200", "msg_from": "Jonathan Blitz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is the query not using the index for sorting?" }, { "msg_contents": "On Sun, 22 Nov 2009, Jonathan Blitz wrote:\n> I have a table with a number of columns.\n>  \n> I perform\n>  \n> Select *\n> from table\n> order by a,b\n>  \n> There is an index on a,b which is clustered (as well as indexes on a and b alone).\n> I have issued the cluster and anyalze commands.\n\nDid you analyse *after* creating the index and clustering, or before?\n\nMatthew\n\n-- \n [About NP-completeness] These are the problems that make efficient use of\n the Fairy Godmother. -- Computer Science Lecturer", "msg_date": "Mon, 23 Nov 2009 11:00:09 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is the query not using the index for sorting?" }, { "msg_contents": "Definitely after.\n\nJonathan \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Matthew\nWakeling\nSent: Monday, November 23, 2009 1:00 PM\nTo: Jonathan Blitz\nCc: [email protected]\nSubject: Re: [PERFORM] Why is the query not using the index for sorting?\n\nOn Sun, 22 Nov 2009, Jonathan Blitz wrote:\n> I have a table with a number of columns.\n>  \n> I perform\n>  \n> Select *\n> from table\n> order by a,b\n>  \n> There is an index on a,b which is clustered (as well as indexes on a and b\nalone).\n> I have issued the cluster and anyalze commands.\n\nDid you analyse *after* creating the index and clustering, or before?\n\nMatthew\n\n--\n [About NP-completeness] These are the problems that make efficient use of\n the Fairy Godmother. -- Computer Science Lecturer\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\nNo virus found in this incoming message.\nChecked by AVG - www.avg.com\nVersion: 9.0.709 / Virus Database: 270.14.76/2517 - Release Date: 11/22/09\n21:40:00\n\n", "msg_date": "Mon, 23 Nov 2009 16:10:57 +0200", "msg_from": "Jonathan Blitz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is the query not using the index for sorting?" } ]
[ { "msg_contents": "Dear All,\n\nThanks for your help earlier with the previous question. I wonder if I \nmight ask another.\n\n\nWe have various queries that need to run, of which I'm going to focus on \n2, \"vox\" and \"du_report\".\n\nBoth of them are extremely sensitive to the precise values of \nrandom_page_cost and seq_page_cost. Experimentally, I've used:\n\n A: seq_page_cost = 0.25; random_page_cost = 0.75\n B: seq_page_cost = 0.5; random_page_cost = 2\n C: seq_page_cost = 1; random_page_cost = 4\n\n(and a few in between).\n\n\nIf I pick the wrong one, then either vox becomes 2 orders of magnitude \nslower (22ms -> 3.5 seconds), or du_report becomes 10x slower. I can't \nuse the same setting for both.\n\nSo, as a very ugly hack, I've tuned the sweet spots for each query.\nVox normally sits at B; du_report at C.\n\n\nNow, the real killer is that the position of that sweet spot changes \nover time as the DB ages over a few days (even though autovacuum is on).\n\nWorse still, doing a cluster of most of the tables and vacuum full \nanalyze made most of the queries respond much better, but the vox \nquery became very slow again, until I set it to A (which, a few days \nago, did not work well).\n\n\n* Why is the query planner so precisely sensitive to the combination of \npage costs and time since last vacuum full?\n\n* Why is it that what improves one query can make another get so much worse?\n\n* Is there any way I can nail the query planner to a particular query \nplan, rather than have it keep changing its mind?\n\n* Is it normal to keep having to tune the query-planner's settings, or \nshould it be possible to set it once, and leave it?\n\n\nTuning this feels rather like adjusting several old radios, which are \nexceptionally finicky about the precise settings, having a very sharp \nresonance peak (in different places), and which drift out of tune at \ndifferent rates. I must be doing something wrong, but what?\n\nThanks for your advice,\n\nRichard\n\n\n", "msg_date": "Sun, 22 Nov 2009 15:31:19 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Query times change by orders of magnitude as DB ages" }, { "msg_contents": "Hello,\n\n> * Is there any way I can nail the query planner to a particular query plan,\n> rather than have it keep changing its mind?\n\nAll these setting leads to choosing different plans. If you have small\nnumber of complex sensitive queires, you can run explain on them with\ncorrect settings, then re-order query (joins, subselects) according to\ngiven query plan, and, before running it, call\n\nset local join_collapse_limit = 1;\nset local from_collapse_limit = 1;\n\nThis will prevent joins/subselects reordering inside current\ntransaction block, leading to consistent plans. But that gives no 100%\nguarantee for chosing, for example, hash join over nested loop.\n\nYou can, as noted in presiouse message, experiment with gego_*\nconstants - especially, lower geqo_threshold to catch better plans\n(but this can take many runs). 
Or, for production, set geqo=off - this\ncan dramatically increasy query planning, but results would be more\nconsistent.\n\n>Is it normal to keep having to tune the query-planner's settings, or should it be possible to >set it once, and leave it?\n\nI have collapse limits set for some complex reporting queries, and\nthink it's adequate solutuon.\n\n>Worse still, doing a cluster of most of the tables and vacuum full analyze made most of the queries >respond much better, but the vox query became very slow again, until I set it to A (which, a few days >ago, did not work well).\n\nIs your autovacuuming tuned correctly? For large tables, i set it\nrunning much more agressivly then in default install.\n\nBest regards,\nSergey Aleynikov\n", "msg_date": "Mon, 23 Nov 2009 00:13:46 +0800", "msg_from": "Sergey Aleynikov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query times change by orders of magnitude as DB ages" }, { "msg_contents": "On Sun, 22 Nov 2009, Richard Neill wrote:\n> Worse still, doing a cluster of most of the tables and vacuum full analyze\n\nWhy are you doing a vacuum full? That command is not meant to be used \nexcept in the most unusual of circumstances, as it causes bloat to \nindexes.\n\nIf you have run a cluster command, then running vacuum full will make the \ntable and index layout worse, not better.\n\nMatthew\n\n-- \n Riker: Our memory pathways have become accustomed to your sensory input.\n Data: I understand - I'm fond of you too, Commander. And you too Counsellor\n", "msg_date": "Mon, 23 Nov 2009 12:44:05 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query times change by orders of magnitude as DB ages" }, { "msg_contents": "On Sun, Nov 22, 2009 at 10:31 AM, Richard Neill <[email protected]> wrote:\n> Dear All,\n>\n> Thanks for your help earlier with the previous question. I wonder if I might\n> ask another.\n>\n>\n> We have various queries that need to run, of which I'm going to focus on 2,\n> \"vox\" and \"du_report\".\n>\n> Both of them are extremely sensitive to the precise values of\n> random_page_cost and seq_page_cost. Experimentally, I've used:\n>\n>  A:  seq_page_cost = 0.25;  random_page_cost = 0.75\n>  B:  seq_page_cost = 0.5;  random_page_cost = 2\n>  C: seq_page_cost = 1;  random_page_cost = 4\n>\n> (and a few in between).\n>\n>\n> If I pick the wrong one, then either vox becomes 2 orders of magnitude\n> slower (22ms -> 3.5 seconds), or du_report becomes 10x slower. I can't use\n> the same setting for both.\n>\n> So, as a very ugly hack, I've tuned the sweet spots for each query.\n> Vox normally sits at B; du_report at C.\n>\n>\n> Now, the real killer is that the position of that sweet spot changes over\n> time as the DB ages over a few days (even though autovacuum is on).\n>\n> Worse still, doing a cluster of most of the tables and vacuum full analyze\n> made most of the queries respond much better, but the vox query became very\n> slow again, until I set it to A (which, a few days ago, did not work well).\n>\n>\n> *  Why is the query planner so precisely sensitive to the combination of\n> page costs and time since last vacuum full?\n\nIt sounds like your tables are getting bloated. If you have\nautovacuum turned on, this shouldn't be happening. What sort of\nworkload is this? 
What PG version?\n\n> * Why is it that what improves one query can make another get so much worse?\n\nBecause it changes the plan you get.\n\n> * Is there any way I can nail the query planner to a particular query plan,\n> rather than have it keep changing its mind?\n\nSee other responses.\n\n> * Is it normal to keep having to tune the query-planner's settings, or\n> should it be possible to set it once, and leave it?\n\nLeave it.\n\n...Robert\n", "msg_date": "Mon, 23 Nov 2009 13:11:32 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query times change by orders of magnitude as DB ages" }, { "msg_contents": "\n\nMatthew Wakeling wrote:\n> On Sun, 22 Nov 2009, Richard Neill wrote:\n>> Worse still, doing a cluster of most of the tables and vacuum full \n>> analyze\n> \n> Why are you doing a vacuum full? That command is not meant to be used \n> except in the most unusual of circumstances, as it causes bloat to indexes.\n\nWe'd left it too long, and the DB was reaching 90% of disk space. I \ndidn't realise that vacuum full was ever actively bad, only sometimes \nunneeded. I do now - thanks for the tip.\n\n> \n> If you have run a cluster command, then running vacuum full will make \n> the table and index layout worse, not better.\n> \n\nSo, having managed to bloat the indexes in this way, what can I do to \nfix it? Will a regular vacuum do the job?\n\nRichard\n", "msg_date": "Wed, 25 Nov 2009 12:10:42 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query times change by orders of magnitude as DB ages" }, { "msg_contents": "\n\nMatthew Wakeling wrote:\n> On Sun, 22 Nov 2009, Richard Neill wrote:\n>> Worse still, doing a cluster of most of the tables and vacuum full \n>> analyze\n> \n> Why are you doing a vacuum full? That command is not meant to be used \n> except in the most unusual of circumstances, as it causes bloat to indexes.\n\nWe'd left it too long, and the DB was reaching 90% of disk space. I\ndidn't realise that vacuum full was ever actively bad, only sometimes\nunneeded. I do now - thanks for the tip.\n\n> \n> If you have run a cluster command, then running vacuum full will make \n> the table and index layout worse, not better.\n> \n\nSo, having managed to bloat the indexes in this way, what can I do to\nfix it? Will a regular vacuum do the job?\n\nRichard\n\n", "msg_date": "Wed, 25 Nov 2009 12:11:09 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query times change by orders of magnitude as DB ages" }, { "msg_contents": "On Wed, 25 Nov 2009, Richard Neill wrote:\n>> On Sun, 22 Nov 2009, Richard Neill wrote:\n>>> Worse still, doing a cluster of most of the tables and vacuum full analyze\n>> \n>> Why are you doing a vacuum full? That command is not meant to be used \n>> except in the most unusual of circumstances, as it causes bloat to indexes.\n>\n> We'd left it too long, and the DB was reaching 90% of disk space. I\n> didn't realise that vacuum full was ever actively bad, only sometimes\n> unneeded. I do now - thanks for the tip.\n\nThe problem is that vacuum full does a full compact of the table, but it \nhas to update all the indexes as it goes. This makes it slow, and causes \nbloat to the indexes. There has been some discussion of removing the \ncommand or at least putting a big warning next to it.\n\n> So, having managed to bloat the indexes in this way, what can I do to\n> fix it? 
Will a regular vacuum do the job?\n\nIn fact, cluster is exactly the command you are looking for. It will drop \nthe indexes, do a complete table rewrite (in the correct order), and then \nrecreate all the indexes again.\n\nIn normal operation, a regular vacuum will keep the table under control, \nbut if you actually want to shrink the database files in exceptional \ncircumstances, then cluster is the tool for the job.\n\nMatthew\n\n-- \n Matthew: That's one of things about Cambridge - all the roads keep changing\n names as you walk along them, like Hills Road in particular.\n Sagar: Yes, Sidney Street is a bit like that too.\n Matthew: Sidney Street *is* Hills Road.\n", "msg_date": "Wed, 25 Nov 2009 12:18:53 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query times change by orders of magnitude as DB ages" }, { "msg_contents": "\n\nMatthew Wakeling wrote:\n> On Wed, 25 Nov 2009, Richard Neill wrote:\n>>> On Sun, 22 Nov 2009, Richard Neill wrote:\n>>>> Worse still, doing a cluster of most of the tables and vacuum full \n>>>> analyze\n> \n> In fact, cluster is exactly the command you are looking for. It will \n> drop the indexes, do a complete table rewrite (in the correct order), \n> and then recreate all the indexes again.\n> \n> In normal operation, a regular vacuum will keep the table under control, \n> but if you actually want to shrink the database files in exceptional \n> circumstances, then cluster is the tool for the job.\n> \n\nThanks - now I understand.\n\nIn terms of just index bloat, does a regular vacuum help?\n\nRichard\n", "msg_date": "Wed, 25 Nov 2009 12:22:40 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query times change by orders of magnitude as DB ages" }, { "msg_contents": "Sergey Aleynikov wrote:\n> Hello,\n> \n>> * Is there any way I can nail the query planner to a particular query plan,\n>> rather than have it keep changing its mind?\n> \n> All these setting leads to choosing different plans. If you have small\n> number of complex sensitive queires, you can run explain on them with\n> correct settings, then re-order query (joins, subselects) according to\n> given query plan, and, before running it, call\n> \n> set local join_collapse_limit = 1;\n> set local from_collapse_limit = 1;\n\nIt's a simple query, but using a complex view. So I can't really \nre-order it.\n\n> This will prevent joins/subselects reordering inside current\n> transaction block, leading to consistent plans. But that gives no 100%\n> guarantee for chosing, for example, hash join over nested loop.\n\nAre you saying that this means that the query planner frequently makes \nthe wrong choice here?\n\n> \n>> Worse still, doing a cluster of most of the tables and vacuum full analyze \n made most of the queries >respond much better, but the vox query \nbecame very slow again, until I set it to A (which, a few days >ago, did \nnot work well).\n> \n> Is your autovacuuming tuned correctly? 
For large tables, i set it\n> running much more agressivly then in default install.\n\nI hadn't changed it from the defaults; now I've changed it to:\n\nautovacuum_max_workers = 6\nautovacuum_vacuum_scale_factor = 0.002\nautovacuum_analyze_scale_factor = 0.001\n\nis that enough?\n\nThe DB isn't growing that much, but it does seem to need frequent \nvacuum/analyze.\n\n\nRichard\n\n", "msg_date": "Wed, 25 Nov 2009 12:27:28 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query times change by orders of magnitude as DB ages" }, { "msg_contents": "Richard Neill <[email protected]> wrote:\n \n> In terms of just index bloat, does a regular vacuum help?\n \nYou might want to use the REINDEX command to correct serious index\nbloat. A regular vacuum will make dead space available for re-use,\nbut won't eliminate bloat directly. (If run regularly, it will\nprevent bloat.)\n \n-Kevin\n", "msg_date": "Wed, 25 Nov 2009 10:26:15 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query times change by orders of magnitude as DB\n\t ages" }, { "msg_contents": "On Wed, Nov 25, 2009 at 4:26 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> Richard Neill <[email protected]> wrote:\n>\n> > In terms of just index bloat, does a regular vacuum help?\n>\n> You might want to use the REINDEX command to correct serious index\n> bloat. A regular vacuum will make dead space available for re-use,\n> but won't eliminate bloat directly. (If run regularly, it will\n> prevent bloat.)\n>\n> for that reason, it makes sense to actually partition your data - even tho\nyou don't see performance degradation because of data size, but purely\nbecause of nature of data.\nOther way, is to perform regular cluster && reindex - but this blocks\nrelations you are clustering..\n\n\n\n-- \nGJ\n\nOn Wed, Nov 25, 2009 at 4:26 PM, Kevin Grittner <[email protected]> wrote:\nRichard Neill <[email protected]> wrote:\n\n> In terms of just index bloat, does a regular vacuum help?\n\nYou might want to use the REINDEX command to correct serious index\nbloat.  A regular vacuum will make dead space available for re-use,\nbut won't eliminate bloat directly.  (If run regularly, it will\nprevent bloat.)\nfor that reason, it makes sense to actually partition your data - even tho you don't see performance degradation because of data size, but purely because of nature of data. \nOther way, is to perform regular cluster && reindex - but this blocks relations you are clustering.. -- GJ", "msg_date": "Wed, 25 Nov 2009 16:33:20 +0000", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query times change by orders of magnitude as DB ages" }, { "msg_contents": "Grzegorz Jaᅵkiewicz<[email protected]> wrote:\n \n> Other way, is to perform regular cluster && reindex\n \nIf you CLUSTER there is no reason to REINDEX; indexes are rebuilt by\nthe CLUSTER command.\n \nAlso, if you do a good job with regular VACUUMs, there isn't any bloat\nto fix. 
In that case a regular CLUSTER would only be needed if it was\nworth the cost to keep data physically organized in the index\nsequence.\n \n-Kevin\n", "msg_date": "Wed, 25 Nov 2009 10:58:20 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query times change by orders of magnitude as DB\n\t ages" }, { "msg_contents": "On Wed, Nov 25, 2009 at 4:58 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> Grzegorz Jaœkiewicz<[email protected]> wrote:\n>\n> > Other way, is to perform regular cluster && reindex\n>\n> If you CLUSTER there is no reason to REINDEX; indexes are rebuilt by\n> the CLUSTER command.\n>\n> Also, if you do a good job with regular VACUUMs, there isn't any bloat\n> to fix. In that case a regular CLUSTER would only be needed if it was\n> worth the cost to keep data physically organized in the index\n> sequence.\n>\n> the out of order data layout is primary reason for index bloat. And that\nhappens , and gets worse over time once data is more and more distributed.\n(\"random\" deletes, etc).\nThus suggestion of partitioning. I for one, hope in 8.5 we will get much\nmore user friendly partitioning interface - and we would no longer have to\nwrite custom triggers. Which is probably the only reason I am only going to\npartition a table only if it is really really really ... needed.\n\n\n\n\n-- \nGJ\n\nOn Wed, Nov 25, 2009 at 4:58 PM, Kevin Grittner <[email protected]> wrote:\nGrzegorz Jaœkiewicz<[email protected]> wrote:\n\n> Other way, is to perform regular cluster && reindex\n\nIf you CLUSTER there is no reason to REINDEX; indexes are rebuilt by\nthe CLUSTER command.\n\nAlso, if you do a good job with regular VACUUMs, there isn't any bloat\nto fix.  In that case a regular CLUSTER would only be needed if it was\nworth the cost to keep data physically organized in the index\nsequence.the out of order data layout is primary reason for index bloat. And that happens , and gets worse over time once data is more and more distributed. (\"random\" deletes, etc).\nThus suggestion of partitioning.  I for one, hope in 8.5 we will get much more user friendly partitioning interface - and we would no longer have to write custom triggers. Which is probably the only reason I am only going to partition a table only if it is really really really ... needed.\n-- GJ", "msg_date": "Wed, 25 Nov 2009 17:05:04 +0000", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query times change by orders of magnitude as DB ages" }, { "msg_contents": "On Wed, Nov 25, 2009 at 7:27 AM, Richard Neill <[email protected]> wrote:\n> Sergey Aleynikov wrote:\n>>\n>> Hello,\n>>\n>>> * Is there any way I can nail the query planner to a particular query\n>>> plan,\n>>> rather than have it keep changing its mind?\n>>\n>> All these setting leads to choosing different plans. If you have small\n>> number of complex sensitive queires, you can run explain on them with\n>> correct settings, then re-order query (joins, subselects) according to\n>> given query plan, and, before running it, call\n>>\n>> set local join_collapse_limit = 1;\n>> set local from_collapse_limit = 1;\n>\n> It's a simple query, but using a complex view. So I can't really re-order\n> it.\n\nAlmost all queries can be reordered to some degree, but you might have\nto inline the view into the main query to actually be able to do it.\nForcing a particular query plan in the manner described here is\ngenerally sort of a last resort, though. 
Usually you want to figure\nout how to tune things so that the query planner picks the right plan\nby itself - that's sort of the point of having a query planner...\n\n...Robert\n", "msg_date": "Wed, 25 Nov 2009 14:01:38 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query times change by orders of magnitude as DB ages" }, { "msg_contents": "On Wed, 25 Nov 2009, Grzegorz Jaśkiewicz wrote:\n> the out of order data layout is primary reason for index bloat. And that happens , and\n> gets worse over time once data is more and more distributed. (\"random\" deletes, etc).\n\nThat's not index bloat. Sure, having the table not in the same order as \nthe index will slow down an index scan, but that's a completely different \nproblem altogether.\n\nIndex bloat is caused by exactly the same mechanism as table bloat. The \nindex needs to have an entry for every row in the table that may be \nvisible by anyone. As with the table, it is not possible to \ndeterministically delete the rows as they become non-visible, so the \nindex (and the table) will be left with dead entries on delete and update. \nThe vacuum command performs garbage collection and marks these dead rows \nand index entries as free, so that some time in the future more data can \nbe written to those places.\n\nIndex bloat is when there is an excessive amount of dead space in an \nindex. It can be prevented by (auto)vacuuming regularly, but can only be \nreversed by REINDEX (or of course deleting the index, or adding loads of \nnew entries to fill up the dead space after vacuuming).\n\nMatthew\n\n-- \n for a in past present future; do\n for b in clients employers associates relatives neighbours pets; do\n echo \"The opinions here in no way reflect the opinions of my $a $b.\"\n done; done", "msg_date": "Thu, 26 Nov 2009 11:14:14 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query times change by orders of magnitude as DB ages" }, { "msg_contents": "Hello,\n\n2009/11/25 Richard Neill <[email protected]>:\n\n>It's a simple query, but using a complex view. So I can't really re-order it.\nView is inserted directly into your query by PG, and then reordered\naccording to from_collapse_limit. Probably, problems lies in the view?\nHow good is it performing? Or from_collapse_limit is _too low_, so\nview isn't expanded right?\n\n>Are you saying that this means that the query planner frequently makes the wrong choice here?\nLook at explain analyze. If on some step estimation from planner\ndiffers by (for start) two order of magnitude from what's really\nretrieved, then there's a wrong statistics count. But if, on every\nstep, estimation is not too far away from reality - you suffer from\nwhat i've described - planner can't reoder efficiently enough query.\nBecause of it happen sometimes - i suspect gego. Or wrong statistics.\n\n>I hadn't changed it from the defaults; now I've changed it to:\n> autovacuum_max_workers = 6\n> autovacuum_vacuum_scale_factor = 0.002\n> autovacuum_analyze_scale_factor = 0.001\n\nIf your tables are not >100mln rows, that's agressive enough. On\n100mln rows, this'd analyze table every 100k changed\n(inserted/updated/deleted) rows. Is this enough for you? Default on\nlarge tables are definatly too low. 
If you get now consistent times -\nthen you've been hit by wrong statistics.\n\nBest regards,\nSergey Aleynikov\n", "msg_date": "Thu, 26 Nov 2009 20:11:54 +0800", "msg_from": "Sergey Aleynikov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query times change by orders of magnitude as DB ages" }, { "msg_contents": "Hello,\n\n2009/11/25 Richard Neill <[email protected]>:\n\nAlso, if you find odd statistics of freshly analyzed table - try\nincreasing statistics target, using\nALTER TABLE .. ALTER COLUMN .. SET STATISTICS ...\n\nIf you're using defaults - it's again low for large tables. Start with\n200, for example.\n\nBest regards,\nSergey Aleynikov\n", "msg_date": "Thu, 26 Nov 2009 20:36:16 +0800", "msg_from": "Sergey Aleynikov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query times change by orders of magnitude as DB ages" }, { "msg_contents": "\n\nSergey Aleynikov wrote:\n> Hello,\n> \n> 2009/11/25 Richard Neill <[email protected]>:\n> \n> Also, if you find odd statistics of freshly analyzed table - try\n> increasing statistics target, using\n> ALTER TABLE .. ALTER COLUMN .. SET STATISTICS ...\n> \n> If you're using defaults - it's again low for large tables. Start with\n> 200, for example.\n\nThanks. I already had it set way up: 3000.\n\nIs there a good description of exactly what analyse does, and how?\n(in particular, what sort of statistics it gathers).\n\nRichard\n", "msg_date": "Thu, 26 Nov 2009 16:04:32 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query times change by orders of magnitude as DB ages" }, { "msg_contents": "\nOn 11/25/09 4:18 AM, \"Matthew Wakeling\" <[email protected]> wrote:\n> \n> The problem is that vacuum full does a full compact of the table, but it\n> has to update all the indexes as it goes. This makes it slow, and causes\n> bloat to the indexes. There has been some discussion of removing the\n> command or at least putting a big warning next to it.\n> \n\nFor tables without an index, you still need something. Vacuum full isn't\nthat bad here, but cluster has other advantages.\nIdeally, you could CLUSTER without using an index, maybe something like\nCLUSTER table using (column a, ...)\nTo order it by specific columns Or even simply\nCLUSTER using ()\nFor when you don't care about the order at all, and just want to compact the\nwhole thing to its proper size (including fillfactor) and most likely\ndefragmented too.\n\nAdditionally, I've found it very important to set fillfactor to something\nother than the default for tables that have lots of updates, especially if\nthere are bulk updates on non-indexed columns.\n\n\n", "msg_date": "Wed, 2 Dec 2009 19:15:37 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query times change by orders of magnitude as DB ages" } ]
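A minimal sketch of the maintenance commands discussed in the thread above. The table name t and index name t_pkey are placeholders rather than objects from the poster's schema, and the per-table autovacuum settings assume the 8.4 storage-parameter syntax:

-- Rewrite the table in index order; CLUSTER also rebuilds every index on it,
-- so a separate REINDEX afterwards is unnecessary.
CLUSTER t USING t_pkey;
ANALYZE t;

-- Rebuild just one bloated index without rewriting the table.
REINDEX INDEX t_pkey;

-- Per-table autovacuum tuning (8.4), more aggressive than the global
-- defaults for a large, heavily updated table.
ALTER TABLE t SET (autovacuum_vacuum_scale_factor = 0.02,
                   autovacuum_analyze_scale_factor = 0.01);

-- Leave free space in each page for updates, as suggested near the end of
-- the thread; this only affects pages written after the next table rewrite.
ALTER TABLE t SET (fillfactor = 90);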
[ { "msg_contents": "Question:\n\nIs an INSERT command with a SELECT statement in the RETURNING * parameter faster than say an INSERT and then a SELECT? Does the RETURNING * parameter simply amount to a normal SELECT command on the added rows? We need to basically insert a lot of rows as fast as possible, and get the ids that were added. The number of rows we are inserting is dynamic and is not of fixed length.\n\nThanks,\n-Jason\n\n\n----------------------------------\r\nCheck out the Barracuda Spam & Virus Firewall - offering the fastest\r\nvirus & malware protection in the industry: www.barracudanetworks.com/spam\r\n\n\nQuestion: Is an INSERT command with a SELECT statement in the RETURNING * parameter faster than say an INSERT and then a SELECT? Does the RETURNING * parameter simply amount to a normal SELECT command on the added rows? We need to basically insert a lot of rows as fast as possible, and get the ids that were added.  The number of rows we are inserting is dynamic and is not of fixed length. Thanks,-Jason \r\n---------------------------------- \r\nCheck out the Barracuda Spam & Virus Firewall - offering the fastest\r\nvirus & malware protection in the industry: www.barracudanetworks.com/spam", "msg_date": "Mon, 23 Nov 2009 12:53:10 -0800", "msg_from": "Jason Dictos <[email protected]>", "msg_from_op": true, "msg_subject": "Best possible way to insert and get returned ids" }, { "msg_contents": "On Mon, Nov 23, 2009 at 1:53 PM, Jason Dictos <[email protected]> wrote:\n> Question:\n>\n> Is an INSERT command with a SELECT statement in the RETURNING * parameter\n> faster than say an INSERT and then a SELECT? Does the RETURNING * parameter\n> simply amount to a normal SELECT command on the added rows? We need to\n> basically insert a lot of rows as fast as possible, and get the ids that\n> were added.  The number of rows we are inserting is dynamic and is not of\n> fixed length.\n\nWell, if you do an insert, then a select, how can you tell, with that\nselect, which rows you just inserted? how can you be sure they're not\nsomebody elses?\n\nInsert returning is fantastic for this type of thing. The beauty of\nit is that it returns a SET if you insert multiple rows. And, if\nyou've got two insert threads running, and one inserts to a sequence a\nset of rows with pk values of 10,11,13,15,18,20 while another thread\ninserts to the same table and creates a set of rows with pk values of\n12,14,16,17,19 then those are the two sets you'll get back with\nreturning.\n", "msg_date": "Mon, 23 Nov 2009 14:55:07 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best possible way to insert and get returned ids" }, { "msg_contents": "On Mon, Nov 23, 2009 at 3:53 PM, Jason Dictos <[email protected]> wrote:\n> Is an INSERT command with a SELECT statement in the RETURNING * parameter\n> faster than say an INSERT and then a SELECT? Does the RETURNING * parameter\n> simply amount to a normal SELECT command on the added rows? We need to\n> basically insert a lot of rows as fast as possible, and get the ids that\n> were added.  The number of rows we are inserting is dynamic and is not of\n> fixed length.\n\nWith INSERT ... RETURNING, you only make one trip to the heap, so I\nwould expect it to be faster. Plus, of course, it means you don't\nhave to worry about writing a WHERE clause that can identify the\nrow(s) you just added. 
It sounds like the right tool for your use\ncase.\n\n...Robert\n", "msg_date": "Wed, 25 Nov 2009 09:40:00 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best possible way to insert and get returned ids" } ]
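A small self-contained sketch of the INSERT ... RETURNING pattern recommended in the thread above; the items table and its columns are hypothetical, not the poster's schema:

CREATE TABLE items (
    id   bigserial PRIMARY KEY,
    name text NOT NULL
);

-- One statement, one round trip: the generated ids come back as a result set.
INSERT INTO items (name)
VALUES ('alpha'), ('beta'), ('gamma')
RETURNING id;

-- The same works when the rows come from a query rather than a VALUES list.
INSERT INTO items (name)
SELECT name FROM items WHERE name LIKE 'a%'
RETURNING id, name;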
[ { "msg_contents": "\nHi everybody,\n\nI've got two queries that needs optimizing. Actually, there are others, \nbut these are pretty representative.\n\nYou can see the queries and the corresponding plans at\n\nhttp://bulldog.duhs.duke.edu/~faheem/snpdb/opt.pdf\n\nor\n\nhttp://bulldog.duhs.duke.edu/~faheem/snpdb/opt.tex\n\nif you prefer text (latex file, effectively text in this case)\n\nThe background to this is at \nhttp://bulldog.duhs.duke.edu/~faheem/snpdb/diag.pdf\n\nIf more details are required, let me know and I can add them. I'd \nappreciate suggestions about how to make these queries go faster.\n\nPlease CC this email address on any replies.\n\n Regards, Faheem.\n", "msg_date": "Mon, 23 Nov 2009 17:47:15 -0500 (EST)", "msg_from": "Faheem Mitha <[email protected]>", "msg_from_op": true, "msg_subject": "query optimization" }, { "msg_contents": "2009/11/23 Faheem Mitha <[email protected]>\n\n>\n> Hi everybody,\n>\n> I've got two queries that needs optimizing. Actually, there are others, but\n> these are pretty representative.\n>\n> You can see the queries and the corresponding plans at\n>\n> http://bulldog.duhs.duke.edu/~faheem/snpdb/opt.pdf\n>\n> or\n>\n> http://bulldog.duhs.duke.edu/~faheem/snpdb/opt.tex\n>\n> if you prefer text (latex file, effectively text in this case)\n>\n> The background to this is at\n> http://bulldog.duhs.duke.edu/~faheem/snpdb/diag.pdf\n>\n> If more details are required, let me know and I can add them. I'd\n> appreciate suggestions about how to make these queries go faster.\n>\n> Please CC this email address on any replies.\n>\n> Regards, Faheem.\n>\n>\n>\nHi Faheem,\n\nThere appears to be a discrepancy between the 2 PDFs you provided. One says\nyou're using PostgreSQL 8.3, and the other shows you using common table\nexpressions, which are only available in 8.4+.\n\nThom\n\n2009/11/23 Faheem Mitha <[email protected]>\n\nHi everybody,\n\nI've got two queries that needs optimizing. Actually, there are others, but these are pretty representative.\n\nYou can see the queries and the corresponding plans at\n\nhttp://bulldog.duhs.duke.edu/~faheem/snpdb/opt.pdf\n\nor\n\nhttp://bulldog.duhs.duke.edu/~faheem/snpdb/opt.tex\n\nif you prefer text (latex file, effectively text in this case)\n\nThe background to this is at http://bulldog.duhs.duke.edu/~faheem/snpdb/diag.pdf\n\nIf more details are required, let me know and I can add them. I'd appreciate suggestions about how to make these queries go faster.\n\nPlease CC this email address on any replies.\n\n                                   Regards, Faheem.\nHi Faheem,There appears to be a discrepancy between the 2 PDFs you provided.  One says you're using PostgreSQL 8.3, and the other shows you using common table expressions, which are only available in 8.4+.\nThom", "msg_date": "Mon, 23 Nov 2009 23:25:24 +0000", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query optimization" }, { "msg_contents": "\n\nOn Mon, 23 Nov 2009, Thom Brown wrote:\n\n> Hi Faheem,\n> \n> There appears to be a discrepancy between the 2 PDFs you provided. �One \n> says you're using PostgreSQL 8.3, and the other shows you using common \n> table expressions, which are only available in 8.4+.\n\nYes, sorry. I'm using Postgresql 8.4. I guess I should go through diag.pdf \nand make sure all the information is current. 
Thanks for pointing out my \nerror.\n\n Regards, Faheem.", "msg_date": "Mon, 23 Nov 2009 18:49:18 -0500 (EST)", "msg_from": "Faheem Mitha <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query optimization" }, { "msg_contents": "On Tue, Nov 24, 2009 at 12:49 AM, Faheem Mitha <[email protected]> wrote:\n>\n> Yes, sorry. I'm using Postgresql 8.4. I guess I should go through diag.pdf\n> and make sure all the information is current. Thanks for pointing out my\n> error.\n>\n\nexcellent report!\n\nabout the copy problem: You seem to have created the primary key\nbefore doing the copy (at least that`s what the dump before copy\nsays). This is bad. Create it after the copy.\n\nGreetings\nMarcin\n", "msg_date": "Tue, 24 Nov 2009 00:52:30 +0100", "msg_from": "marcin mank <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query optimization" }, { "msg_contents": "How often are the tables you query from updated?\n\nRgds\nSebastian\n\nOn Tue, Nov 24, 2009 at 12:52 AM, marcin mank <[email protected]> wrote:\n\n> On Tue, Nov 24, 2009 at 12:49 AM, Faheem Mitha <[email protected]>\n> wrote:\n> >\n> > Yes, sorry. I'm using Postgresql 8.4. I guess I should go through\n> diag.pdf\n> > and make sure all the information is current. Thanks for pointing out my\n> > error.\n> >\n>\n> excellent report!\n>\n> about the copy problem: You seem to have created the primary key\n> before doing the copy (at least that`s what the dump before copy\n> says). This is bad. Create it after the copy.\n>\n> Greetings\n> Marcin\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHow often are the tables you query from updated?RgdsSebastianOn Tue, Nov 24, 2009 at 12:52 AM, marcin mank <[email protected]> wrote:\nOn Tue, Nov 24, 2009 at 12:49 AM, Faheem Mitha <[email protected]> wrote:\n\n>\n> Yes, sorry. I'm using Postgresql 8.4. I guess I should go through diag.pdf\n> and make sure all the information is current. Thanks for pointing out my\n> error.\n>\n\nexcellent report!\n\nabout the copy problem: You seem to have created the primary key\nbefore doing the copy (at least that`s what the dump before copy\nsays). This is bad. Create it after the copy.\n\nGreetings\nMarcin\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 24 Nov 2009 01:07:33 +0100", "msg_from": "=?ISO-8859-1?Q?Sebastian_J=F6rgensen?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query optimization" }, { "msg_contents": "\n\nOn Tue, 24 Nov 2009, Sebastian Jörgensen wrote:\n\n> How often are the tables you query from updated?\n\nQuite rarely. Once in a while. The large tables, eg. geno, are basically \nstatic.\n\n Regards, Faheem.\n\n> Rgds\n> Sebastian\n> \n> On Tue, Nov 24, 2009 at 12:52 AM, marcin mank <[email protected]> wrote:\n> On Tue, Nov 24, 2009 at 12:49 AM, Faheem Mitha <[email protected]> wrote:\n> >\n> > Yes, sorry. I'm using Postgresql 8.4. I guess I should go through diag.pdf\n> > and make sure all the information is current. Thanks for pointing out my\n> > error.\n> >\n> \n> excellent report!\n> \n> about the copy problem: You seem to have created the primary key\n> before doing the copy (at least that`s what the dump before copy\n> says). This is bad. 
Create it after the copy.\n> \n> Greetings\n> Marcin\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> \n> \n>", "msg_date": "Mon, 23 Nov 2009 23:49:41 -0500 (EST)", "msg_from": "Faheem Mitha <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query optimization" }, { "msg_contents": "On Mon, Nov 23, 2009 at 5:47 PM, Faheem Mitha <[email protected]> wrote:\n>\n> Hi everybody,\n>\n> I've got two queries that needs optimizing. Actually, there are others, but\n> these are pretty representative.\n>\n> You can see the queries and the corresponding plans at\n>\n> http://bulldog.duhs.duke.edu/~faheem/snpdb/opt.pdf\n>\n> or\n>\n> http://bulldog.duhs.duke.edu/~faheem/snpdb/opt.tex\n>\n> if you prefer text (latex file, effectively text in this case)\n>\n> The background to this is at\n> http://bulldog.duhs.duke.edu/~faheem/snpdb/diag.pdf\n>\n> If more details are required, let me know and I can add them. I'd appreciate\n> suggestions about how to make these queries go faster.\n>\n> Please CC this email address on any replies.\n\nI've found that a good way to approach optimizing queries of this type\nis to look at the EXPLAIN ANALYZE results and figure out which parts\nof the query are slow. Then simplify the rest of the query as much as\npossible without eliminating the slowness. Then try to figure out how\nto optimize the simplified query: rewrite the logic, add indices,\nchange the schema, etc. Lastly start adding the other bits back in.\n\nIt looks like the dedup_patient_anno CTE is part of your problem. Try\npulling that piece out and optimizing it separately. I wonder if that\ncould be rewritten to use SELECT DISTINCT ON (...) and whether that\nwould be any faster. If not, you might want to look at some way of\npre-marking the non-duplicate rows so that you don't have to recompute\nthat each time. Then you might be able to use the underlying table\ndirectly in the next CTE, which will usually permit better\noptimization, more use of indices, etc. It seems pretty unfortunate\nthat dedup_patient_anno joins against geno and then patient_geno does\nwhat appears to be the same join again. Is there some way to\neliminate that? If so it will probably help.\n\nOnce you've got those parts of the query as well-optimized as you can,\nadd the next pieces in and start hacking on those.\n\n...Robert\n", "msg_date": "Wed, 25 Nov 2009 12:27:22 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query optimization" }, { "msg_contents": "\nHi Robert,\n\nThanks very much for your suggestions.\n\nOn Wed, 25 Nov 2009, Robert Haas wrote:\n\n> On Mon, Nov 23, 2009 at 5:47 PM, Faheem Mitha <[email protected]> wrote:\n>>\n>> Hi everybody,\n>>\n>> I've got two queries that needs optimizing. Actually, there are others, \n>> but these are pretty representative.\n>>\n>> You can see the queries and the corresponding plans at\n>>\n>> http://bulldog.duhs.duke.edu/~faheem/snpdb/opt.pdf\n>>\n>> or\n>>\n>> http://bulldog.duhs.duke.edu/~faheem/snpdb/opt.tex\n>>\n>> if you prefer text (latex file, effectively text in this case)\n>>\n>> The background to this is at\n>> http://bulldog.duhs.duke.edu/~faheem/snpdb/diag.pdf\n>>\n>> If more details are required, let me know and I can add them. 
I'd appreciate\n>> suggestions about how to make these queries go faster.\n>>\n>> Please CC this email address on any replies.\n>\n> I've found that a good way to approach optimizing queries of this type\n> is to look at the EXPLAIN ANALYZE results and figure out which parts\n> of the query are slow. Then simplify the rest of the query as much as\n> possible without eliminating the slowness. Then try to figure out how\n> to optimize the simplified query: rewrite the logic, add indices,\n> change the schema, etc. Lastly start adding the other bits back in.\n\nGood strategy. Now I just have to understand EXPLAIN ANALYZE well enough \nto figure out which bits are slow. :-)\n\n> It looks like the dedup_patient_anno CTE is part of your problem. Try\n> pulling that piece out and optimizing it separately. I wonder if that\n> could be rewritten to use SELECT DISTINCT ON (...) and whether that\n> would be any faster.\n\nIsn't SELECT DISTINCT supposed to be evil, since in general the result is \nnot deterministic? I think I had SELECT DISTINCT earlier, and removed it \nbecause of that, with the help of Andrew (RhodiumToad on #postgresql) I \ndidn't compare the corresponding subqueries separately, so don't know what \nspeed difference this made.\n\n> If not, you might want to look at some way of pre-marking the \n> non-duplicate rows so that you don't have to recompute that each time.\n\nWhat are the options re pre-marking?\n\n> Then you might be able to use the underlying table directly in the next \n> CTE, which will usually permit better optimization, more use of indices, \n> etc. It seems pretty unfortunate that dedup_patient_anno joins against \n> geno and then patient_geno does what appears to be the same join again. \n> Is there some way to eliminate that? If so it will probably help.\n\nYou don't say whether you are looking at the PED or TPED query, so I'll \nassume PED. They are similar anyway.\n\nI see your point re the joins. You mean\n\nanno INNER JOIN geno\n\nfollowed by\n\ngeno INNER JOIN dedup_patient_anno\n\n? I think the point of the first join is to reduce the anno table based on \ninformation from the geno table. The result is basically a subset of the \nanno table with some potential duplication removed, which is then \nre-joined to the geno table. I agree this seems a bit suboptimal, and \nthere might be a better way to do this.\n\n> Once you've got those parts of the query as well-optimized as you can,\n> add the next pieces in and start hacking on those.\n\n Regards, Faheem.\n", "msg_date": "Wed, 25 Nov 2009 17:54:46 -0500 (EST)", "msg_from": "Faheem Mitha <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query optimization" }, { "msg_contents": "On Wed, Nov 25, 2009 at 5:54 PM, Faheem Mitha <[email protected]> wrote:\n>\n> Hi Robert,\n>\n> Thanks very much for your suggestions.\n>\n>>> Hi everybody,\n>>>\n>>> I've got two queries that needs optimizing. Actually, there are others,\n>>> but these are pretty representative.\n>>>\n>>> You can see the queries and the corresponding plans at\n>>>\n>>> http://bulldog.duhs.duke.edu/~faheem/snpdb/opt.pdf\n>>>\n>>> or\n>>>\n>>> http://bulldog.duhs.duke.edu/~faheem/snpdb/opt.tex\n>>>\n>>> if you prefer text (latex file, effectively text in this case)\n>>>\n>>> The background to this is at\n>>> http://bulldog.duhs.duke.edu/~faheem/snpdb/diag.pdf\n>>>\n>>> If more details are required, let me know and I can add them. 
I'd\n>>> appreciate\n>>> suggestions about how to make these queries go faster.\n>>>\n>>> Please CC this email address on any replies.\n>>\n>> I've found that a good way to approach optimizing queries of this type\n>> is to look at the EXPLAIN ANALYZE results and figure out which parts\n>> of the query are slow.  Then simplify the rest of the query as much as\n>> possible without eliminating the slowness.  Then try to figure out how\n>> to optimize the simplified query: rewrite the logic, add indices,\n>> change the schema, etc.  Lastly start adding the other bits back in.\n>\n> Good strategy. Now I just have to understand EXPLAIN ANALYZE well enough to\n> figure out which bits are slow. :-)\n\nWell, you basically just look for the big numbers. The \"actual\"\nnumbers are in ms, and each node includes the times for the things\nbeneath it, so usually my approach is to just look at lower and lower\nlevels of the tree (i.e. the parts that are more indented) until I\nfind the lowest level that is slow. Then I look at the query bits\npresented there to figure out which piece of the SQL it corresponds\nto.\n\nLooking at the estimates (which are not in ms or any other particular\nunit) can be helpful too, in that it can help you find places where\nthe planner thought it would be fast but it was actually slow. To do\nthis, look at the top level of the query and get a sense of what the\nratio between estimated-cost-units and actual-ms is. Then look for\nbig (order of magnitude) deviations from this throughout the plan.\nThose are places where you want to either gather better statistics, or\nrewrite the query so that it can make better use of statistics. The\nlatter is more of an art than a science - I or someone else on this\nlist can help you with it if we find a specific case to look at.\n\n>> It looks like the dedup_patient_anno CTE is part of your problem.  Try\n>> pulling that piece out and optimizing it separately.  I wonder if that\n>> could be rewritten to use SELECT DISTINCT ON (...) and whether that\n>> would be any faster.\n>\n> Isn't SELECT DISTINCT supposed to be evil, since in general the result is\n> not deterministic? I think I had SELECT DISTINCT earlier, and removed it\n> because of that, with the help of Andrew (RhodiumToad on #postgresql) I\n> didn't compare the corresponding subqueries separately, so don't know what\n> speed difference this made.\n\nWell, any method of DISTINCT-ifying is likely to be somewhat slow, but\nI've had good luck with SELECT DISTINCT ON (...) in the past, as\ncompared with other methods. YMMV - the only way to find out is to\nbenchmark it. I don't think it's non-deterministic if you order by\nthe DISTINCT-ON columns and enough extras to break any ties - you\nshould get the first one of each set.\n\n>> If not, you might want to look at some way of pre-marking the\n>> non-duplicate rows so that you don't have to recompute that each time.\n>\n> What are the options re pre-marking?\n\nWell, what I usually do is - if I'm going to do the same\ndistinct-ification frequently, I add an extra column (say, a boolean)\nand set it to true for all and only those rows which will pass the\ndistinct-ification filter. Then I can just say WHERE <that column\nname>.\n\n>> Then you might be able to use the underlying table directly in the next\n>> CTE, which will usually permit better optimization, more use of indices,\n>> etc.  It seems pretty unfortunate that dedup_patient_anno joins against geno\n>> and then patient_geno does what appears to be the same join again. 
Is there\n>> some way to eliminate that?  If so it will probably help.\n>\n> You don't say whether you are looking at the PED or TPED query, so I'll\n> assume PED. They are similar anyway.\n>\n> I see your point re the joins. You mean\n>\n> anno INNER JOIN geno\n>\n> followed by\n>\n> geno INNER JOIN dedup_patient_anno\n>\n> ? I think the point of the first join is to reduce the anno table based on\n> information from the geno table. The result is basically a subset of the\n> anno table with some potential duplication removed, which is then re-joined\n> to the geno table. I agree this seems a bit suboptimal, and there might be a\n> better way to do this.\n\nYeah, I didn't think about it in detail, but it looks like it should\nbe possible. Eliminating joins can sometimes have *dramatic* effects\non query performance, and it never hurts.\n\n...Robert\n", "msg_date": "Wed, 25 Nov 2009 18:26:50 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query optimization" }, { "msg_contents": "\n\nOn Wed, 25 Nov 2009, Robert Haas wrote:\n\n> On Wed, Nov 25, 2009 at 5:54 PM, Faheem Mitha <[email protected]> wrote:\n\n> Well, any method of DISTINCT-ifying is likely to be somewhat slow, but\n> I've had good luck with SELECT DISTINCT ON (...) in the past, as\n> compared with other methods. YMMV - the only way to find out is to\n> benchmark it. I don't think it's non-deterministic if you order by\n> the DISTINCT-ON columns and enough extras to break any ties - you\n> should get the first one of each set.\n\nRight, but adding enough extras to break ties is up to the user, and the \nlanguage doesn't guarantee anything, so it feels more fragile.\n\n>>> If not, you might want to look at some way of pre-marking the\n>>> non-duplicate rows so that you don't have to recompute that each time.\n>>\n>> What are the options re pre-marking?\n>\n> Well, what I usually do is - if I'm going to do the same\n> distinct-ification frequently, I add an extra column (say, a boolean)\n> and set it to true for all and only those rows which will pass the\n> distinct-ification filter. Then I can just say WHERE <that column\n> name>.\n\nYes, I see. The problem with is premarking is that the selection is \nsomewhat dynamic, in the sense that this depends on the idlink table, \nwhich depends on patient data, which can change.\n\n>>> Then you might be able to use the underlying table directly in the next\n>>> CTE, which will usually permit better optimization, more use of indices,\n>>> etc. �It seems pretty unfortunate that dedup_patient_anno joins against geno\n>>> and then patient_geno does what appears to be the same join again. Is there\n>>> some way to eliminate that? �If so it will probably help.\n>>\n>> You don't say whether you are looking at the PED or TPED query, so I'll\n>> assume PED. They are similar anyway.\n>>\n>> I see your point re the joins. You mean\n>>\n>> anno INNER JOIN geno\n>>\n>> followed by\n>>\n>> geno INNER JOIN dedup_patient_anno\n>>\n>> ? I think the point of the first join is to reduce the anno table based on\n>> information from the geno table. The result is basically a subset of the\n>> anno table with some potential duplication removed, which is then re-joined\n>> to the geno table. I agree this seems a bit suboptimal, and there might be a\n>> better way to do this.\n>\n> Yeah, I didn't think about it in detail, but it looks like it should\n> be possible. 
Eliminating joins can sometimes have *dramatic* effects\n> on query performance, and it never hurts.\n\nFailing all else, couldn't I smoosh together the two queries and do a \ntriple join? For reference, the two CTEs in question, from the PED query, \nare as follows.\n\n dedup_patient_anno AS\n ( SELECT *\n FROM\n (SELECT *,\n row_number() OVER(PARTITION BY anno.rsid ORDER BY \nanno.id)\n FROM anno\n INNER JOIN geno\n ON anno.id = geno.anno_id\n WHERE idlink_id =\n (SELECT MIN(id)\n FROM idlink\n )\n ) AS s\n WHERE row_number = '1'\n ),\n patient_geno AS\n ( SELECT geno.idlink_id AS idlink_id,\n geno.anno_id AS anno_id,\n geno.snpval_id AS snpval_id,\n allelea_id, alleleb_id\n FROM geno\n INNER JOIN dedup_patient_anno\n ON geno.anno_id = dedup_patient_anno.id\n ),\n\n Regards, Faheem.", "msg_date": "Fri, 27 Nov 2009 16:47:37 -0500 (EST)", "msg_from": "Faheem Mitha <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query optimization" }, { "msg_contents": "On Fri, Nov 27, 2009 at 4:47 PM, Faheem Mitha <[email protected]> wrote:\n>>>> If not, you might want to look at some way of pre-marking the\n>>>> non-duplicate rows so that you don't have to recompute that each time.\n>>>\n>>> What are the options re pre-marking?\n>>\n>> Well, what I usually do is - if I'm going to do the same\n>> distinct-ification frequently, I add an extra column (say, a boolean)\n>> and set it to true for all and only those rows which will pass the\n>> distinct-ification filter.  Then I can just say WHERE <that column\n>> name>.\n>\n> Yes, I see. The problem with is premarking is that the selection is somewhat\n> dynamic, in the sense that this depends on the idlink table, which depends\n> on patient data, which can change.\n\nYeah. For things like this I find you have to think hard about how to\norganize your schema so that you can optimize the queries you care\nabout. There are no \"just do this and it works\" solutions to\nperformance problems of this type. Still, many of them are solvable\nby making the right decisions elsewhere. Sometimes you can use\ntriggers to recompute your premarks when the data in the other table\nchanges. Another strategy is to keep a cache of precomputed results\nsomewhere. When the underlying data changes, you use triggers to\ninvalidate anything in the cache that might now be wrong, and set\nthings up so that it will be recomputed when next it is used. But in\neither case you have to figure out the right place to do the\ncomputation so that it gains you more than it saves you, and adjusting\nyour schema is often necessary.\n\n>>>> Then you might be able to use the underlying table directly in the next\n>>>> CTE, which will usually permit better optimization, more use of indices,\n>>>> etc.  It seems pretty unfortunate that dedup_patient_anno joins against\n>>>> geno\n>>>> and then patient_geno does what appears to be the same join again. Is\n>>>> there\n>>>> some way to eliminate that?  If so it will probably help.\n>>>\n>>> You don't say whether you are looking at the PED or TPED query, so I'll\n>>> assume PED. They are similar anyway.\n>>>\n>>> I see your point re the joins. You mean\n>>>\n>>> anno INNER JOIN geno\n>>>\n>>> followed by\n>>>\n>>> geno INNER JOIN dedup_patient_anno\n>>>\n>>> ? I think the point of the first join is to reduce the anno table based\n>>> on\n>>> information from the geno table. The result is basically a subset of the\n>>> anno table with some potential duplication removed, which is then\n>>> re-joined\n>>> to the geno table. 
I agree this seems a bit suboptimal, and there might\n>>> be a\n>>> better way to do this.\n>>\n>> Yeah, I didn't think about it in detail, but it looks like it should\n>> be possible.  Eliminating joins can sometimes have *dramatic* effects\n>> on query performance, and it never hurts.\n>\n> Failing all else, couldn't I smoosh together the two queries and do a triple\n> join? For reference, the two CTEs in question, from the PED query, are as\n> follows.\n>\n>    dedup_patient_anno AS\n>     ( SELECT *\n>     FROM\n>             (SELECT  *,\n>                      row_number() OVER(PARTITION BY anno.rsid ORDER BY\n> anno.id)\n>             FROM     anno\n>                      INNER JOIN geno\n>                      ON       anno.id = geno.anno_id\n>             WHERE    idlink_id        =\n>                      (SELECT MIN(id)\n>                      FROM    idlink\n>                      )\n>             ) AS s\n>     WHERE   row_number = '1'\n>     ),\n>     patient_geno AS\n>     ( SELECT geno.idlink_id AS idlink_id,\n>       geno.anno_id AS anno_id,\n>       geno.snpval_id AS snpval_id,\n>       allelea_id, alleleb_id\n>       FROM    geno\n>             INNER JOIN dedup_patient_anno\n>             ON      geno.anno_id = dedup_patient_anno.id\n>     ),\n\nIf that will give the same results, which I'm not immediately certain\nabout, then I highly recommend it. In general I would recommend only\nusing CTEs to express concepts that can't sensibly be expressed in\nother ways, not to beautify your queries. Keep in mind that joins can\nbe reordered and/or executed using different methods but most other\noperations can't be, so trying to get your joins together in one place\nis usually a good strategy, in my experience. And of course if that\nlets you reduce the total number of joins, that's even better.\n\n...Robert\n", "msg_date": "Mon, 30 Nov 2009 10:40:48 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query optimization" } ]
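A hedged sketch of the SELECT DISTINCT ON rewrite suggested in the thread above for the dedup_patient_anno CTE. It keeps only the anno columns, which is all the later patient_geno join appears to use, and it has not been checked against the real schema or data, so treat it as a starting point rather than a drop-in replacement:

SELECT DISTINCT ON (anno.rsid) anno.*
FROM   anno
       INNER JOIN geno ON anno.id = geno.anno_id
WHERE  geno.idlink_id = (SELECT MIN(id) FROM idlink)
ORDER  BY anno.rsid, anno.id;

Ordering by (rsid, id) makes the surviving row per rsid deterministic, which addresses the concern about SELECT DISTINCT raised earlier in the thread.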
[ { "msg_contents": "Dear all,\n The query is slow when executing in the stored procedure(it is taking around 1 minute). when executing as a sql it is taking 4 seconds.\nbasically i am selecting the varchar column which contain 4000 character. We have as iindex on the table. We have analyzed the table also. What could be the reason. How to improve it?\n\nThanks in Advance\nRam\n\n\n\n\n\n\nDear all,\n    The query is slow when executing \nin the stored procedure(it is taking around 1 minute). when executing as a sql \nit is taking 4 seconds.\nbasically i am selecting the varchar column which \ncontain 4000 character. We have as iindex on the table. We have analyzed the \ntable also. What could be the reason. How to improve it?\n \nThanks in Advance\nRam", "msg_date": "Tue, 24 Nov 2009 11:08:20 +0530", "msg_from": "\"ramasubramanian\" <[email protected]>", "msg_from_op": true, "msg_subject": "Query is slow when executing in procedure" }, { "msg_contents": "2009/11/24 ramasubramanian <[email protected]>:\n> Dear all,\n>     The query is slow when executing in the stored procedure(it is taking\n> around 1 minute). when executing as a sql it is taking 4 seconds.\n> basically i am selecting the varchar column which contain 4000 character. We\n> have as iindex on the table. We have analyzed the table also. What could be\n> the reason. How to improve it?\n\nHello\n\nuse a dynamic query - plpgsql uses prepared statements. It use plans\ngenerated without knowledge of real params. Sometime it should to do\nperformance problem. EXECUTE statement (in plpgsql) uses new plan for\nevery call (and generated with knowledge of real params) - so it is a\nsolution for you.\n\nhttp://www.postgresql.org/docs/8.4/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN\n\nRegards\nPavel Stehule\n\n\n\n>\n> Thanks in Advance\n> Ram\n", "msg_date": "Tue, 24 Nov 2009 07:10:54 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query is slow when executing in procedure" }, { "msg_contents": "In response to ramasubramanian :\n> Dear all,\n> The query is slow when executing in the stored procedure(it is taking\n> around 1 minute). when executing as a sql it is taking 4 seconds.\n> basically i am selecting the varchar column which contain 4000 character. We\n> have as iindex on the table. We have analyzed the table also. What could be the\n> reason. How to improve it?\n\nThe reason is hard to guess, because you don't provide enough\ninformations like the function code.\n\nMy guess:\n\nYou calls the function with a parameter, and the planner isn't able to\nchose a fast plan because he doesn't know the parameter. That's why he\nis choosen a seq-scan. You can rewrite your function to using dynamical\nexecute a string that contains your sql to force the planner search an\noptimal plan for your actual parameter.\n\nBut yes, that's only a wild guess (and sorry about my english...)\n\nPlease, show us the table and the function-code.\n\n\nRegards, Andreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99\n", "msg_date": "Tue, 24 Nov 2009 07:15:52 +0100", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query is slow when executing in procedure" }, { "msg_contents": "Thanks a lot Pavel . 
i will try it .\n\n----- Original Message ----- \nFrom: \"Pavel Stehule\" <[email protected]>\nTo: \"ramasubramanian\" <[email protected]>\nCc: <[email protected]>\nSent: Tuesday, November 24, 2009 11:40 AM\nSubject: Re: [PERFORM] Query is slow when executing in procedure\n\n\n2009/11/24 ramasubramanian <[email protected]>:\n> Dear all,\n> The query is slow when executing in the stored procedure(it is taking\n> around 1 minute). when executing as a sql it is taking 4 seconds.\n> basically i am selecting the varchar column which contain 4000 character. \n> We\n> have as iindex on the table. We have analyzed the table also. What could \n> be\n> the reason. How to improve it?\n\nHello\n\nuse a dynamic query - plpgsql uses prepared statements. It use plans\ngenerated without knowledge of real params. Sometime it should to do\nperformance problem. EXECUTE statement (in plpgsql) uses new plan for\nevery call (and generated with knowledge of real params) - so it is a\nsolution for you.\n\nhttp://www.postgresql.org/docs/8.4/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN\n\nRegards\nPavel Stehule\n\n\n\n>\n> Thanks in Advance\n> Ram\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n", "msg_date": "Tue, 24 Nov 2009 11:59:09 +0530", "msg_from": "\"ramasubramanian\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query is slow when executing in procedure" }, { "msg_contents": "Thanks a lot Kretschmer. i will try it .\n\nRegards,\nRam\n\n----- Original Message ----- \nFrom: \"A. Kretschmer\" <[email protected]>\nTo: <[email protected]>\nSent: Tuesday, November 24, 2009 11:45 AM\nSubject: Re: [PERFORM] Query is slow when executing in procedure\n\n\n> In response to ramasubramanian :\n>> Dear all,\n>> The query is slow when executing in the stored procedure(it is taking\n>> around 1 minute). when executing as a sql it is taking 4 seconds.\n>> basically i am selecting the varchar column which contain 4000 character. \n>> We\n>> have as iindex on the table. We have analyzed the table also. What could \n>> be the\n>> reason. How to improve it?\n>\n> The reason is hard to guess, because you don't provide enough\n> informations like the function code.\n>\n> My guess:\n>\n> You calls the function with a parameter, and the planner isn't able to\n> chose a fast plan because he doesn't know the parameter. That's why he\n> is choosen a seq-scan. You can rewrite your function to using dynamical\n> execute a string that contains your sql to force the planner search an\n> optimal plan for your actual parameter.\n>\n> But yes, that's only a wild guess (and sorry about my english...)\n>\n> Please, show us the table and the function-code.\n>\n>\n> Regards, Andreas\n> -- \n> Andreas Kretschmer\n> Kontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\n> GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\n", "msg_date": "Tue, 24 Nov 2009 11:59:48 +0530", "msg_from": "\"ramasubramanian\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query is slow when executing in procedure" } ]
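One way to apply the EXECUTE advice from the thread above, sketched with a hypothetical table and function name; RETURN QUERY EXECUTE ... USING needs PostgreSQL 8.4 or later:

CREATE TABLE docs (
    id       serial PRIMARY KEY,
    owner_id integer NOT NULL,
    body     varchar(4000)
);

CREATE OR REPLACE FUNCTION find_docs(p_owner integer)
RETURNS SETOF docs AS $$
BEGIN
    -- EXECUTE plans the statement at call time with the actual parameter
    -- value, instead of reusing the generic plan cached for the function.
    RETURN QUERY EXECUTE
        'SELECT * FROM docs WHERE owner_id = $1 ORDER BY id'
        USING p_owner;
END;
$$ LANGUAGE plpgsql;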
[ { "msg_contents": "Dear All.\n Can any one give me dynamic sql in postgres stored procedure using \"USING CLAUSE\"\nRegards,\nRam\n\n\n\n\n\n\nDear All.\n    Can any one give me dynamic sql \nin postgres stored procedure using \"USING CLAUSE\"\nRegards,\nRam", "msg_date": "Tue, 24 Nov 2009 16:55:56 +0530", "msg_from": "\"ramasubramanian\" <[email protected]>", "msg_from_op": true, "msg_subject": "Dynamic sql example" }, { "msg_contents": "2009/11/24 ramasubramanian <[email protected]>:\n> Dear All.\n>     Can any one give me dynamic sql in postgres stored procedure using\n> \"USING CLAUSE\"\n\nCREATE TABLE tab(a integer);\n\nCREATE OR REPLACE FUNCTION foo(_a integer)\nRETURNS void AS $$\nDECLARE r record;\nBEGIN\n FOR r IN EXECUTE 'SELECT * FROM tab WHERE a = $1' USING _a LOOP\n RAISE NOTICE '%', r.a;\n END LOOP;\nEND;\n$$ LANGUAGE plpgsql;\n\nregards\nPavel Stehule\n\n\n\n> Regards,\n> Ram\n", "msg_date": "Tue, 24 Nov 2009 13:42:33 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dynamic sql example" }, { "msg_contents": "Hi all,\n I have a table emp. using where condition can i get the result \nprioritized.\nTake the example below.\n\nselect ENAME,ORIG_SALARY from employee where (ename='Tom' and \norig_salary=2413)or(orig_salary=1234 )\n\nif the fist condition(ename='Tom' and orig_salary=2413) is satified then 10 \nrows will be returned, for the second condition (orig_salary=1234 ) there \nare 20 rows will be returned.\nThe order of display should be\n\nThe first 10 rows then\nnext 20 rows.\nThanks & Regards,\nRam\n\n\n", "msg_date": "Mon, 25 Jan 2010 15:06:25 +0530", "msg_from": "\"ramasubramanian\" <[email protected]>", "msg_from_op": true, "msg_subject": "Sql result b where condition" }, { "msg_contents": "In response to ramasubramanian :\n\nPlease, create a new mail for a new topic and don't hijack other\nthreads.\n\n\n> Hi all,\n> I have a table emp. using where condition can i get the result \n> prioritized.\n> Take the example below.\n> \n> select ENAME,ORIG_SALARY from employee where (ename='Tom' and \n> orig_salary=2413)or(orig_salary=1234 )\n> \n> if the fist condition(ename='Tom' and orig_salary=2413) is satified then 10 \n> rows will be returned, for the second condition (orig_salary=1234 ) there \n> are 20 rows will be returned.\n> The order of display should be\n> \n> The first 10 rows then\n> next 20 rows.\n> Thanks & Regards,\n> Ram\n\nFor instance:\n\nselect ENAME,ORIG_SALARY, 1 as my_order from employee where (ename='Tom' and\norig_salary=2413) union all select ENAME,ORIG_SALARY, 2 employee where\n(orig_salary=1234 ) order by my_order.\n\nother solution (untested):\n\nselect ENAME,ORIG_SALARY, case when (ename='Tom' and orig_salary=2413)\nthen 1 else 2 end as my_order from employee where (ename='Tom' and\norig_salary=2413)or(orig_salary=1234 ) order by my_order;\n\n\nRegards, Andreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99\n", "msg_date": "Mon, 25 Jan 2010 11:13:27 +0100", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sql result b where condition" }, { "msg_contents": "On Mon, 25 Jan 2010, A. 
Kretschmer wrote:\n> In response to ramasubramanian :\n>\n> Please, create a new mail for a new topic and don't hijack other\n> threads.\n\nEven more so - this isn't probably the right mailing list for generic sql \nhelp questions.\n\n>> select ENAME,ORIG_SALARY from employee where (ename='Tom' and\n>> orig_salary=2413)or(orig_salary=1234 )\n>>\n>> if the fist condition(ename='Tom' and orig_salary=2413) is satified then 10\n>> rows will be returned, for the second condition (orig_salary=1234 ) there\n>> are 20 rows will be returned.\n>> The order of display should be\n>>\n>> The first 10 rows then\n>> next 20 rows.\n\n> select ENAME,ORIG_SALARY, 1 as my_order from employee where (ename='Tom' and\n> orig_salary=2413) union all select ENAME,ORIG_SALARY, 2 employee where\n> (orig_salary=1234 ) order by my_order.\n\nOr just:\n\nselect ENAME,ORIG_SALARY from employee where (ename='Tom' and\norig_salary=2413)or(orig_salary=1234 ) ORDER BY orig_salary DESC\n\nas there is going to be only two values for orig_salary.\n\nMatthew\n\n-- \n The early bird gets the worm. If you want something else for breakfast, get\n up later.\n", "msg_date": "Mon, 25 Jan 2010 12:10:34 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sql result b where condition" }, { "msg_contents": "In response to Matthew Wakeling :\n> On Mon, 25 Jan 2010, A. Kretschmer wrote:\n> >In response to ramasubramanian :\n> >\n> >Please, create a new mail for a new topic and don't hijack other\n> >threads.\n> \n> Even more so - this isn't probably the right mailing list for generic sql \n> help questions.\n\nACK.\n\n> >select ENAME,ORIG_SALARY, 1 as my_order from employee where (ename='Tom' \n> >and\n> >orig_salary=2413) union all select ENAME,ORIG_SALARY, 2 employee where\n> >(orig_salary=1234 ) order by my_order.\n> \n> Or just:\n> \n> select ENAME,ORIG_SALARY from employee where (ename='Tom' and\n> orig_salary=2413)or(orig_salary=1234 ) ORDER BY orig_salary DESC\n> \n> as there is going to be only two values for orig_salary.\n\nhehe, yes, overseen that fact ;-)\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99\n", "msg_date": "Mon, 25 Jan 2010 13:24:00 +0100", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sql result b where condition" } ]
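Andreas's CASE sketch above was posted untested; spelled out in full against the employee table from the thread (column names as given there), it would look roughly like this — still only a sketch, since the real schema was not posted:

SELECT ename, orig_salary,
       CASE WHEN ename = 'Tom' AND orig_salary = 2413 THEN 1
            ELSE 2
       END AS my_order
FROM employee
WHERE (ename = 'Tom' AND orig_salary = 2413)
   OR (orig_salary = 1234)
ORDER BY my_order;

Unlike the simpler ORDER BY orig_salary DESC, this keeps working if the two conditions ever stop mapping to exactly two distinct salary values.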
[ { "msg_contents": "Hello,\nI've run in a severe performance problem with the following statement:\n\nDELETE FROM t1 WHERE t1.annotation_id IN (\n\tSELECT t2.annotation_id FROM t2)\n\nt1 contains about 48M record (table size is 5.8GB), while t2 contains about 60M\nrecord (total size 8.6GB). annotation_id is the PK in t1 but not in t2 (it's\nnot even unique, in fact there are duplicates - there are about 20M distinct\nannotation_id in this table). There are no FKs on either tables.\nI've killed the query after 14h(!) of runtime...\n\nI've reproduced the problem using a only the ids (extracted from the full\ntables) with the following schemas:\n\ntest2=# \\d t1\n Table \"public.t1\"\n Column | Type | Modifiers\n---------------+--------+-----------\n annotation_id | bigint | not null\nIndexes:\n \"t1_pkey\" PRIMARY KEY, btree (annotation_id)\n\ntest2=# \\d t2\n Table \"public.t2\"\n Column | Type | Modifiers\n---------------+--------+-----------\n annotation_id | bigint |\nIndexes:\n \"t2_idx\" btree (annotation_id)\n\nThe query above takes about 30 minutes to complete. The slowdown is not as\nsevere, but (IMHO) the behaviour is strange. On a win2k8 with 8.3.8 using\nprocexp I see the process churning the disk and using more memory until it hits\nsome limit (at about 1.8GB) then the IO slows down considerably. See this\nscreenshot[1].\nThis is exactly what happens with the full dataset.\n\nThis is the output of the explain:\n\ntest2=> explain analyze delete from t1 where annotation_id in (select annotation\n_id from t2);\n QUERY PLAN\n\n--------------------------------------------------------------------------------\n---------------------------------------------------------\n Hash Join (cost=1035767.26..2158065.55 rows=181605 width=6) (actual time=64339\n5.565..1832056.588 rows=26185953 loops=1)\n Hash Cond: (t1.annotation_id = t2.annotation_id)\n -> Seq Scan on t1 (cost=0.00..661734.12 rows=45874812 width=14) (actual tim\ne=0.291..179119.487 rows=45874812 loops=1)\n -> Hash (cost=1033497.20..1033497.20 rows=181605 width=8) (actual time=6433\n93.742..643393.742 rows=26185953 loops=1)\n -> HashAggregate (cost=1031681.15..1033497.20 rows=181605 width=8) (a\nctual time=571807.575..610178.552 rows=26185953 loops=1)\n -> Seq Scan on t2 (cost=0.00..879289.12 rows=60956812 width=8)\n(actual time=2460.595..480446.581 rows=60956812 loops=1)\n Total runtime: 2271122.474 ms\n(7 rows)\n\nTime: 2274723,284 ms\n\n\nAn identital linux machine (with 8.4.1) shows the same issue; with strace I see\na lots of seeks:\n\n% time seconds usecs/call calls errors syscall\n------ ----------- ----------- --------- --------- ----------------\n 90.37 0.155484 15 10601 read\n 9.10 0.015649 5216 3 fadvise64\n 0.39 0.000668 0 5499 write\n 0.15 0.000253 0 10733 lseek\n 0.00 0.000000 0 3 open\n 0.00 0.000000 0 3 close\n 0.00 0.000000 0 3 semop\n------ ----------- ----------- --------- --------- ----------------\n100.00 0.172054 26845 total\n\n(30s sample) \n\nBefore hitting the memory \"limit\" (AS on win2k8, unsure about Linux) the trace\nis the following:\n\n% time seconds usecs/call calls errors syscall\n------ ----------- ----------- --------- --------- ----------------\n100.00 0.063862 0 321597 read\n 0.00 0.000000 0 3 lseek\n 0.00 0.000000 0 76 mmap\n------ ----------- ----------- --------- --------- ----------------\n100.00 0.063862 321676 total\n\n\nThe machines have 8 cores (2 Xeon E5320), 8GB of RAM. 
Postgres data directory\nis on hardware (Dell PERC5) raid mirror, with the log on a separate array.\nOne machine is running linux 64bit (Debian/stable), the other win2k8 (32 bit).\n\nshared_buffers = 512MB\nwork_mem = 512MB\nmaintenance_work_mem = 1GB\ncheckpoint_segments = 16\nwal_buffers = 8MB\nfsync = off # Just in case... usually it's enabled\neffective_cache_size = 4096MB\n\n(the machine with win2k8 is running with a smaller shared_buffers - 16MB)\n\nAny idea on what's going wrong here?\n\nthanks,\nLuca\n[1] http://img10.imageshack.us/i/psql2.png/\n", "msg_date": "Tue, 24 Nov 2009 14:37:08 +0100", "msg_from": "Luca Tettamanti <[email protected]>", "msg_from_op": true, "msg_subject": "DELETE performance problem" }, { "msg_contents": "You may want to consider using partitioning. That way you can drop the\nappropriate partition and never have the overhead of a delete.\n\nJerry Champlin|Absolute Performance Inc.|Mobile: 303-588-2547\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Luca Tettamanti\nSent: Tuesday, November 24, 2009 6:37 AM\nTo: [email protected]\nSubject: [PERFORM] DELETE performance problem\n\nHello,\nI've run in a severe performance problem with the following statement:\n\nDELETE FROM t1 WHERE t1.annotation_id IN (\n\tSELECT t2.annotation_id FROM t2)\n\nt1 contains about 48M record (table size is 5.8GB), while t2 contains about\n60M\nrecord (total size 8.6GB). annotation_id is the PK in t1 but not in t2 (it's\nnot even unique, in fact there are duplicates - there are about 20M distinct\nannotation_id in this table). There are no FKs on either tables.\nI've killed the query after 14h(!) of runtime...\n\nI've reproduced the problem using a only the ids (extracted from the full\ntables) with the following schemas:\n\ntest2=# \\d t1\n Table \"public.t1\"\n Column | Type | Modifiers\n---------------+--------+-----------\n annotation_id | bigint | not null\nIndexes:\n \"t1_pkey\" PRIMARY KEY, btree (annotation_id)\n\ntest2=# \\d t2\n Table \"public.t2\"\n Column | Type | Modifiers\n---------------+--------+-----------\n annotation_id | bigint |\nIndexes:\n \"t2_idx\" btree (annotation_id)\n\nThe query above takes about 30 minutes to complete. The slowdown is not as\nsevere, but (IMHO) the behaviour is strange. On a win2k8 with 8.3.8 using\nprocexp I see the process churning the disk and using more memory until it\nhits\nsome limit (at about 1.8GB) then the IO slows down considerably. 
See this\nscreenshot[1].\nThis is exactly what happens with the full dataset.\n\nThis is the output of the explain:\n\ntest2=> explain analyze delete from t1 where annotation_id in (select\nannotation\n_id from t2);\n QUERY PLAN\n\n----------------------------------------------------------------------------\n----\n---------------------------------------------------------\n Hash Join (cost=1035767.26..2158065.55 rows=181605 width=6) (actual\ntime=64339\n5.565..1832056.588 rows=26185953 loops=1)\n Hash Cond: (t1.annotation_id = t2.annotation_id)\n -> Seq Scan on t1 (cost=0.00..661734.12 rows=45874812 width=14) (actual\ntim\ne=0.291..179119.487 rows=45874812 loops=1)\n -> Hash (cost=1033497.20..1033497.20 rows=181605 width=8) (actual\ntime=6433\n93.742..643393.742 rows=26185953 loops=1)\n -> HashAggregate (cost=1031681.15..1033497.20 rows=181605\nwidth=8) (a\nctual time=571807.575..610178.552 rows=26185953 loops=1)\n -> Seq Scan on t2 (cost=0.00..879289.12 rows=60956812\nwidth=8)\n(actual time=2460.595..480446.581 rows=60956812 loops=1)\n Total runtime: 2271122.474 ms\n(7 rows)\n\nTime: 2274723,284 ms\n\n\nAn identital linux machine (with 8.4.1) shows the same issue; with strace I\nsee\na lots of seeks:\n\n% time seconds usecs/call calls errors syscall\n------ ----------- ----------- --------- --------- ----------------\n 90.37 0.155484 15 10601 read\n 9.10 0.015649 5216 3 fadvise64\n 0.39 0.000668 0 5499 write\n 0.15 0.000253 0 10733 lseek\n 0.00 0.000000 0 3 open\n 0.00 0.000000 0 3 close\n 0.00 0.000000 0 3 semop\n------ ----------- ----------- --------- --------- ----------------\n100.00 0.172054 26845 total\n\n(30s sample) \n\nBefore hitting the memory \"limit\" (AS on win2k8, unsure about Linux) the\ntrace\nis the following:\n\n% time seconds usecs/call calls errors syscall\n------ ----------- ----------- --------- --------- ----------------\n100.00 0.063862 0 321597 read\n 0.00 0.000000 0 3 lseek\n 0.00 0.000000 0 76 mmap\n------ ----------- ----------- --------- --------- ----------------\n100.00 0.063862 321676 total\n\n\nThe machines have 8 cores (2 Xeon E5320), 8GB of RAM. Postgres data\ndirectory\nis on hardware (Dell PERC5) raid mirror, with the log on a separate array.\nOne machine is running linux 64bit (Debian/stable), the other win2k8 (32\nbit).\n\nshared_buffers = 512MB\nwork_mem = 512MB\nmaintenance_work_mem = 1GB\ncheckpoint_segments = 16\nwal_buffers = 8MB\nfsync = off # Just in case... usually it's enabled\neffective_cache_size = 4096MB\n\n(the machine with win2k8 is running with a smaller shared_buffers - 16MB)\n\nAny idea on what's going wrong here?\n\nthanks,\nLuca\n[1] http://img10.imageshack.us/i/psql2.png/\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n", "msg_date": "Tue, 24 Nov 2009 07:59:10 -0700", "msg_from": "\"Jerry Champlin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DELETE performance problem" }, { "msg_contents": "On Tue, Nov 24, 2009 at 3:59 PM, Jerry Champlin\n<[email protected]> wrote:\n> You may want to consider using partitioning.  That way you can drop the\n> appropriate partition and never have the overhead of a delete.\n\nHum, I don't think it's doable in my case; the partitioning is not\nknow a priori. 
First t1 is fully populated, then the data is loaded\nand manipulated by my application, the result is stored in t2; only\nthen I want to remove (part of) the data from t1.\n\nthanks,\nLuca\n", "msg_date": "Tue, 24 Nov 2009 16:14:46 +0100", "msg_from": "Luca Tettamanti <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DELETE performance problem" }, { "msg_contents": "2009/11/24 Luca Tettamanti <[email protected]>\n\n> On Tue, Nov 24, 2009 at 3:59 PM, Jerry Champlin\n> <[email protected]> wrote:\n> > You may want to consider using partitioning. That way you can drop the\n> > appropriate partition and never have the overhead of a delete.\n>\n> Hum, I don't think it's doable in my case; the partitioning is not\n> know a priori. First t1 is fully populated, then the data is loaded\n> and manipulated by my application, the result is stored in t2; only\n> then I want to remove (part of) the data from t1.\n>\n> thanks,\n> Luca\n>\n>\nIt's a shame there isn't a LIMIT option on DELETE so this can be done in\nsmall batches.\n\nThom\n\n2009/11/24 Luca Tettamanti <[email protected]>\n\nOn Tue, Nov 24, 2009 at 3:59 PM, Jerry Champlin\n<[email protected]> wrote:\n> You may want to consider using partitioning.  That way you can drop the\n> appropriate partition and never have the overhead of a delete.\n\nHum, I don't think it's doable in my case; the partitioning is not\nknow a priori. First t1 is fully populated, then the data is loaded\nand manipulated by my application, the result is stored in t2; only\nthen I want to remove (part of) the data from t1.\n\nthanks,\nLuca\nIt's a shame there isn't a LIMIT option on DELETE so this can be done in small batches.Thom", "msg_date": "Tue, 24 Nov 2009 15:19:23 +0000", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DELETE performance problem" }, { "msg_contents": "On Tue, Nov 24, 2009 at 3:19 PM, Thom Brown <[email protected]> wrote:\n\n> 2009/11/24 Luca Tettamanti <[email protected]>\n>\n> On Tue, Nov 24, 2009 at 3:59 PM, Jerry Champlin\n>> <[email protected]> wrote:\n>> > You may want to consider using partitioning. That way you can drop the\n>> > appropriate partition and never have the overhead of a delete.\n>>\n>> Hum, I don't think it's doable in my case; the partitioning is not\n>> know a priori. First t1 is fully populated, then the data is loaded\n>> and manipulated by my application, the result is stored in t2; only\n>> then I want to remove (part of) the data from t1.\n>>\n>> thanks,\n>> Luca\n>>\n>>\n> It's a shame there isn't a LIMIT option on DELETE so this can be done in\n> small batches.\n>\n\nyou sort of can do it, using PK on table as pointer. DELETE FROM foo USING\n... etc.\nwith subquery in using that will limit number of rows ;)\n\n\n\n>\n> Thom\n>\n\n\n\n-- \nGJ\n\nOn Tue, Nov 24, 2009 at 3:19 PM, Thom Brown <[email protected]> wrote:\n2009/11/24 Luca Tettamanti <[email protected]>\n\n\nOn Tue, Nov 24, 2009 at 3:59 PM, Jerry Champlin\n<[email protected]> wrote:\n> You may want to consider using partitioning.  That way you can drop the\n> appropriate partition and never have the overhead of a delete.\n\nHum, I don't think it's doable in my case; the partitioning is not\nknow a priori. 
First t1 is fully populated, then the data is loaded\nand manipulated by my application, the result is stored in t2; only\nthen I want to remove (part of) the data from t1.\n\nthanks,\nLuca\nIt's a shame there isn't a LIMIT option on DELETE so this can be done in small batches.you sort of can do it, using PK on table as pointer. DELETE FROM foo USING ... etc. \nwith subquery in using that will limit number of rows ;) \nThom\n-- GJ", "msg_date": "Tue, 24 Nov 2009 15:36:39 +0000", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DELETE performance problem" }, { "msg_contents": "On Tuesday 24 November 2009, Thom Brown <[email protected]> wrote:\n>\n> It's a shame there isn't a LIMIT option on DELETE so this can be done in\n> small batches.\n\ndelete from table where pk in (select pk from table where delete_condition \nlimit X);\n\n\n-- \n\"No animals were harmed in the recording of this episode. We tried but that \ndamn monkey was just too fast.\"\n", "msg_date": "Tue, 24 Nov 2009 08:07:59 -0800", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DELETE performance problem" }, { "msg_contents": " Even though the column in question is not unique on t2 could you not \nindex it? That should improve the performance of the inline query.\n\nAre dates applicable in any way? In some cases adding a date field, \npartitioning or indexing on that and adding where date>x days. That \ncan be an effective way to limit records searched.\n\nKris\n\nOn 24-Nov-09, at 9:59, \"Jerry Champlin\" <[email protected] \n > wrote:\n\n> You may want to consider using partitioning. That way you can drop \n> the\n> appropriate partition and never have the overhead of a delete.\n>\n> Jerry Champlin|Absolute Performance Inc.|Mobile: 303-588-2547\n>\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Luca \n> Tettamanti\n> Sent: Tuesday, November 24, 2009 6:37 AM\n> To: [email protected]\n> Subject: [PERFORM] DELETE performance problem\n>\n> Hello,\n> I've run in a severe performance problem with the following statement:\n>\n> DELETE FROM t1 WHERE t1.annotation_id IN (\n> SELECT t2.annotation_id FROM t2)\n>\n> t1 contains about 48M record (table size is 5.8GB), while t2 \n> contains about\n> 60M\n> record (total size 8.6GB). annotation_id is the PK in t1 but not in \n> t2 (it's\n> not even unique, in fact there are duplicates - there are about 20M \n> distinct\n> annotation_id in this table). There are no FKs on either tables.\n> I've killed the query after 14h(!) of runtime...\n>\n> I've reproduced the problem using a only the ids (extracted from the \n> full\n> tables) with the following schemas:\n>\n> test2=# \\d t1\n> Table \"public.t1\"\n> Column | Type | Modifiers\n> ---------------+--------+-----------\n> annotation_id | bigint | not null\n> Indexes:\n> \"t1_pkey\" PRIMARY KEY, btree (annotation_id)\n>\n> test2=# \\d t2\n> Table \"public.t2\"\n> Column | Type | Modifiers\n> ---------------+--------+-----------\n> annotation_id | bigint |\n> Indexes:\n> \"t2_idx\" btree (annotation_id)\n>\n> The query above takes about 30 minutes to complete. The slowdown is \n> not as\n> severe, but (IMHO) the behaviour is strange. On a win2k8 with 8.3.8 \n> using\n> procexp I see the process churning the disk and using more memory \n> until it\n> hits\n> some limit (at about 1.8GB) then the IO slows down considerably. 
See \n> this\n> screenshot[1].\n> This is exactly what happens with the full dataset.\n>\n> This is the output of the explain:\n>\n> test2=> explain analyze delete from t1 where annotation_id in (select\n> annotation\n> _id from t2);\n> QUERY \n> PLAN\n>\n> --- \n> --- \n> ----------------------------------------------------------------------\n> ----\n> ---------------------------------------------------------\n> Hash Join (cost=1035767.26..2158065.55 rows=181605 width=6) (actual\n> time=64339\n> 5.565..1832056.588 rows=26185953 loops=1)\n> Hash Cond: (t1.annotation_id = t2.annotation_id)\n> -> Seq Scan on t1 (cost=0.00..661734.12 rows=45874812 width=14) \n> (actual\n> tim\n> e=0.291..179119.487 rows=45874812 loops=1)\n> -> Hash (cost=1033497.20..1033497.20 rows=181605 width=8) (actual\n> time=6433\n> 93.742..643393.742 rows=26185953 loops=1)\n> -> HashAggregate (cost=1031681.15..1033497.20 rows=181605\n> width=8) (a\n> ctual time=571807.575..610178.552 rows=26185953 loops=1)\n> -> Seq Scan on t2 (cost=0.00..879289.12 rows=60956812\n> width=8)\n> (actual time=2460.595..480446.581 rows=60956812 loops=1)\n> Total runtime: 2271122.474 ms\n> (7 rows)\n>\n> Time: 2274723,284 ms\n>\n>\n> An identital linux machine (with 8.4.1) shows the same issue; with \n> strace I\n> see\n> a lots of seeks:\n>\n> % time seconds usecs/call calls errors syscall\n> ------ ----------- ----------- --------- --------- ----------------\n> 90.37 0.155484 15 10601 read\n> 9.10 0.015649 5216 3 fadvise64\n> 0.39 0.000668 0 5499 write\n> 0.15 0.000253 0 10733 lseek\n> 0.00 0.000000 0 3 open\n> 0.00 0.000000 0 3 close\n> 0.00 0.000000 0 3 semop\n> ------ ----------- ----------- --------- --------- ----------------\n> 100.00 0.172054 26845 total\n>\n> (30s sample)\n>\n> Before hitting the memory \"limit\" (AS on win2k8, unsure about Linux) \n> the\n> trace\n> is the following:\n>\n> % time seconds usecs/call calls errors syscall\n> ------ ----------- ----------- --------- --------- ----------------\n> 100.00 0.063862 0 321597 read\n> 0.00 0.000000 0 3 lseek\n> 0.00 0.000000 0 76 mmap\n> ------ ----------- ----------- --------- --------- ----------------\n> 100.00 0.063862 321676 total\n>\n>\n> The machines have 8 cores (2 Xeon E5320), 8GB of RAM. Postgres data\n> directory\n> is on hardware (Dell PERC5) raid mirror, with the log on a separate \n> array.\n> One machine is running linux 64bit (Debian/stable), the other win2k8 \n> (32\n> bit).\n>\n> shared_buffers = 512MB\n> work_mem = 512MB\n> maintenance_work_mem = 1GB\n> checkpoint_segments = 16\n> wal_buffers = 8MB\n> fsync = off # Just in case... 
usually it's enabled\n> effective_cache_size = 4096MB\n>\n> (the machine with win2k8 is running with a smaller shared_buffers - \n> 16MB)\n>\n> Any idea on what's going wrong here?\n>\n> thanks,\n> Luca\n> [1] http://img10.imageshack.us/i/psql2.png/\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 24 Nov 2009 19:47:46 -0500", "msg_from": "Kris Kewley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DELETE performance problem" }, { "msg_contents": "On Tue, Nov 24, 2009 at 2:37 PM, Luca Tettamanti <[email protected]> wrote:\n>         ->  HashAggregate  (cost=1031681.15..1033497.20 rows=181605 width=8) (a\n> ctual time=571807.575..610178.552 rows=26185953 loops=1)\n\n\nThis is Your problem. The system`s estimate for the number of distinct\nannotation_ids in t2 is wildly off.\n\nThe disk activity is almost certainly swapping (You can check it\niostat on the linux machine).\n\nCan You try \"analyze t2\" just before the delete quety? maybe try\nraising statistics target for the annotation_id column.\n\nIf all else fails, You may try \"set enable_hashagg to false\" just\nbefore the query.\n\nGreetings\nMarcin Mańk\n\n\nGreetings\nMarcin Mańk\n", "msg_date": "Wed, 25 Nov 2009 16:22:47 +0100", "msg_from": "marcin mank <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DELETE performance problem" }, { "msg_contents": "On Wed, Nov 25, 2009 at 04:22:47PM +0100, marcin mank wrote:\n> On Tue, Nov 24, 2009 at 2:37 PM, Luca Tettamanti <[email protected]> wrote:\n> > � � � � -> �HashAggregate �(cost=1031681.15..1033497.20 rows=181605 width=8) (a\n> > ctual time=571807.575..610178.552 rows=26185953 loops=1)\n> \n> \n> This is Your problem. The system`s estimate for the number of distinct\n> annotation_ids in t2 is wildly off.\n\nAh, I see.\n\n> The disk activity is almost certainly swapping (You can check it\n> iostat on the linux machine).\n\nNope, zero swap activity. Under Linux postgres tops up at about 4.4GB, leaving\n3.6GB of page cache (nothing else is running right now).\n\n> Can You try \"analyze t2\" just before the delete quety? 
maybe try\n> raising statistics target for the annotation_id column.\n\nI already tried, the estimation is still way off.\n\n> If all else fails, You may try \"set enable_hashagg to false\" just\n> before the query.\n\n Hash IN Join (cost=1879362.27..11080576.17 rows=202376 width=6) (actual time=250281.607..608638.141 rows=26185953 loops=1)\n Hash Cond: (t1.annotation_id = t2.annotation_id)\n -> Seq Scan on t1 (cost=0.00..661734.12 rows=45874812 width=14) (actual time=0.017..193661.353 rows=45874812 loops=1)\n -> Hash (cost=879289.12..879289.12 rows=60956812 width=8) (actual time=250271.012..250271.012 rows=60956812 loops=1)\n\t -> Seq Scan on t2 (cost=0.00..879289.12 rows=60956812 width=8) (actual time=0.023..178297.862 rows=60956812 loops=1)\n Total runtime: 900019.033 ms\n(6 rows)\n\nThis is after an analyze.\n\nThe alternative query suggested by Shrirang Chitnis:\n\nDELETE FROM t1 WHERE EXISTS (SELECT 1 FROM t2 WHERE t1.annotation_id = t2.annotation_id)\n\nperforms event better:\n\n Seq Scan on t1 (cost=0.00..170388415.89 rows=22937406 width=6) (actual time=272.625..561241.294 rows=26185953 loops=1)\n Filter: (subplan)\n SubPlan\n -> Index Scan using t2_idx on t2 (cost=0.00..1113.63 rows=301 width=0) (actual time=0.008..0.008 rows=1 loops=45874812)\n\t Index Cond: ($0 = annotation_id)\n Total runtime: 629426.014 ms\n(6 rows)\n\nWill try on the full data set.\n\nthanks,\nLuca\n", "msg_date": "Wed, 25 Nov 2009 17:13:09 +0100", "msg_from": "Luca Tettamanti <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DELETE performance problem" }, { "msg_contents": "On Wed, Nov 25, 2009 at 4:13 PM, Luca Tettamanti <[email protected]>wrote:\n\n>\n>\n> DELETE FROM t1 WHERE EXISTS (SELECT 1 FROM t2 WHERE t1.annotation_id =\n> t2.annotation_id)\n>\n> performs event better:\n>\n> Seq Scan on t1 (cost=0.00..170388415.89 rows=22937406 width=6) (actual\n> time=272.625..561241.294 rows=26185953 loops=1)\n> Filter: (subplan)\n> SubPlan\n> -> Index Scan using t2_idx on t2 (cost=0.00..1113.63 rows=301\n> width=0) (actual time=0.008..0.008 rows=1 loops=45874812)\n> Index Cond: ($0 = annotation_id)\n> Total runtime: 629426.014 ms\n> (6 rows)\n>\n> Have you tried:\nDELETE FROM t1 USING t2 WHERE t1.annotation_id = t2.annotation_id;\n\n?\n\n\n\n\n-- \nGJ\n\nOn Wed, Nov 25, 2009 at 4:13 PM, Luca Tettamanti <[email protected]> wrote:\n\n\nDELETE FROM t1 WHERE EXISTS (SELECT 1 FROM t2 WHERE t1.annotation_id = t2.annotation_id)\n\nperforms event better:\n\n Seq Scan on t1  (cost=0.00..170388415.89 rows=22937406 width=6) (actual time=272.625..561241.294 rows=26185953 loops=1)\n    Filter: (subplan)\n       SubPlan\n            ->  Index Scan using t2_idx on t2  (cost=0.00..1113.63 rows=301 width=0) (actual time=0.008..0.008 rows=1 loops=45874812)\n                       Index Cond: ($0 = annotation_id)\n Total runtime: 629426.014 ms\n(6 rows)\nHave you tried: DELETE FROM t1 USING t2 WHERE  t1.annotation_id = t2.annotation_id;?-- GJ", "msg_date": "Wed, 25 Nov 2009 16:16:00 +0000", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DELETE performance problem" } ]
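To tie the suggestions in this thread together, here is one way the delete could be run in bounded batches, combining Alan's LIMIT trick with the EXISTS form that ran fastest for Luca. Table names are as in the thread; the batch size of 100000 is an arbitrary starting point, not a recommendation.

-- Run repeatedly from the client until it reports DELETE 0;
-- a VACUUM on t1 between batches keeps the table from bloating.
DELETE FROM t1
WHERE annotation_id IN (
    SELECT annotation_id
    FROM t1
    WHERE EXISTS (SELECT 1 FROM t2
                  WHERE t2.annotation_id = t1.annotation_id)
    LIMIT 100000
);

-- Grzegorz's join form, for comparison (unbatched); duplicate
-- annotation_ids in t2 are harmless here, each matching t1 row
-- is deleted once.
DELETE FROM t1 USING t2
WHERE t1.annotation_id = t2.annotation_id;

The batching works because annotation_id is t1's primary key, so each pass removes up to 100000 distinct rows and the loop is guaranteed to terminate.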
[ { "msg_contents": "\nWe're about to purchase a new server to store some of our old databases, \nand I was wondering if someone could advise me on a RAID card. We want to \nmake a 6-drive SATA RAID array out of 2TB drives, and it will be RAID 5 or \n6 because there will be zero write traffic. The priority is stuffing as \nmuch storage into a small 2U rack as possible, with performance less \nimportant. We will be running Debian Linux.\n\nPeople have mentioned Areca as making good RAID controllers. We're looking \nat the \"Areca ARC-1220 PCI-Express x8 SATA II\" as a possibility. Does \nanyone have an opinion on whether it is a turkey or a star?\n\nAnother possibility is a 3-ware card of some description.\n\nThanks in advance,\n\nMatthew\n\n-- \n Now you see why I said that the first seven minutes of this section will have\n you looking for the nearest brick wall to beat your head against. This is\n why I do it at the end of the lecture - so I can run.\n -- Computer Science lecturer\n", "msg_date": "Tue, 24 Nov 2009 17:23:10 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "RAID card recommendation" }, { "msg_contents": "Matthew Wakeling wrote:\n> \n> We're about to purchase a new server to store some of our old databases, \n> and I was wondering if someone could advise me on a RAID card. We want \n> to make a 6-drive SATA RAID array out of 2TB drives, and it will be RAID \n> 5 or 6 because there will be zero write traffic. The priority is \n> stuffing as much storage into a small 2U rack as possible, with \n> performance less important. We will be running Debian Linux.\n> \n> People have mentioned Areca as making good RAID controllers. We're \n> looking at the \"Areca ARC-1220 PCI-Express x8 SATA II\" as a possibility. \n> Does anyone have an opinion on whether it is a turkey or a star?\n> \n> Another possibility is a 3-ware card of some description.\n> \n\nDo you actually need a RAID card at all? It's just another point of \nfailure: the Linux software raid (mdadm) is pretty good.\n\nAlso, be very wary of RAID5 for an array that size. It is highly \nprobable that, if one disk has failed, then during the recovery process, \nyou may lose a second disk. The unrecoverable error rate on standard \ndisks is about 1 in 10^14 bits; your disk array is 10^11 bits in size...\n\nWe got bitten by this....\n\nRichard\n\n", "msg_date": "Tue, 24 Nov 2009 17:39:27 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID card recommendation" }, { "msg_contents": "On Nov 24, 2009, at 9:23 AM, Matthew Wakeling wrote:\n\n> We're about to purchase a new server to store some of our old \n> databases, and I was wondering if someone could advise me on a RAID \n> card. We want to make a 6-drive SATA RAID array out of 2TB drives, \n> and it will be RAID 5 or 6 because there will be zero write traffic. \n> The priority is stuffing as much storage into a small 2U rack as \n> possible, with performance less important. We will be running Debian \n> Linux.\n>\n> People have mentioned Areca as making good RAID controllers. We're \n> looking at the \"Areca ARC-1220 PCI-Express x8 SATA II\" as a \n> possibility. Does anyone have an opinion on whether it is a turkey \n> or a star?\n\nWe've used that card and have been quite happy with it. 
Looking \nthrough the release notes for firmware upgrades can be pretty worrying \n(\"you needed to fix what?!\"), but we never experienced any problems \nourselves, and its not like 3ware release notes are any different.\n\nBut the main benefits of a RAID card are a write cache and easy hot \nswap. It sounds like you don't need a write cache. Can you be happy \nwith the kernel's hotswap ability?\nOn Nov 24, 2009, at 9:23 AM, Matthew Wakeling wrote:We're about to purchase a new server to store some of our old databases, and I was wondering if someone could advise me on a RAID card. We want to make a 6-drive SATA RAID array out of 2TB drives, and it will be RAID 5 or 6 because there will be zero write traffic. The priority is stuffing as much storage into a small 2U rack as possible, with performance less important. We will be running Debian Linux.People have mentioned Areca as making good RAID controllers. We're looking at the \"Areca ARC-1220 PCI-Express x8 SATA II\" as a possibility. Does anyone have an opinion on whether it is a turkey or a star?We've used that card and have been quite happy with it. Looking through the release notes for firmware upgrades can be pretty worrying (\"you needed to fix what?!\"), but we never experienced any problems ourselves, and its not like 3ware release notes are any different.But the main benefits of a RAID card are a write cache and easy hot swap. It sounds like you don't need a write cache. Can you be happy with the kernel's hotswap ability?", "msg_date": "Tue, 24 Nov 2009 09:55:00 -0800", "msg_from": "Ben Chobot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID card recommendation" }, { "msg_contents": "\n----- \"Richard Neill\" <[email protected]> escreveu:\n\n> Matthew Wakeling wrote:\n> > \n> > We're about to purchase a new server to store some of our old\n> databases, \n> > and I was wondering if someone could advise me on a RAID card. We\n> want \n> > to make a 6-drive SATA RAID array out of 2TB drives, and it will be\n> RAID \n> > 5 or 6 because there will be zero write traffic. The priority is \n> > stuffing as much storage into a small 2U rack as possible, with \n> > performance less important. We will be running Debian Linux.\n> > \n> > People have mentioned Areca as making good RAID controllers. We're \n> > looking at the \"Areca ARC-1220 PCI-Express x8 SATA II\" as a\n> possibility. \n> > Does anyone have an opinion on whether it is a turkey or a star?\n> > \n> > Another possibility is a 3-ware card of some description.\n> > \n> \n> Do you actually need a RAID card at all? It's just another point of \n> failure: the Linux software raid (mdadm) is pretty good.\n> \n> Also, be very wary of RAID5 for an array that size. It is highly \n> probable that, if one disk has failed, then during the recovery\n> process, \n> you may lose a second disk. The unrecoverable error rate on standard \n> disks is about 1 in 10^14 bits; your disk array is 10^11 bits in\n> size...\n> \n> We got bitten by this....\n> \n> Richard\n\nLinux kernel software RAID is fully supported in Debian Lenny, is quite cheap to implement and powerful.\nI would avoid SATA disks but it's just me. 
SAS controllers and disks are expensive but worth every penny spent on them.\n\nPrefer RAID 1+0 over RAID 5 not only because of the risk of failure of a second disk, but I have 3 cases of performance issues caused by RAID 5.\nIt's said that performance is not the problem but think twice because a good application tends to scale fast to several users.\nOf course, keep a good continuous backup strategy of your databases and don't trust just the mirroring of disks in a RAID fashion.\n\nFlavio Henrique A. Gurgel\nConsultor -- 4Linux\ntel. 55-11-2125.4765\nfax. 55-11-2125.4777\nwww.4linux.com.br\n\n", "msg_date": "Tue, 24 Nov 2009 15:59:27 -0200 (BRST)", "msg_from": "\"Gurgel, Flavio\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID card recommendation" }, { "msg_contents": "On Tue, Nov 24, 2009 at 10:23 AM, Matthew Wakeling <[email protected]> wrote:\n>\n> We're about to purchase a new server to store some of our old databases, and\n> I was wondering if someone could advise me on a RAID card. We want to make a\n> 6-drive SATA RAID array out of 2TB drives, and it will be RAID 5 or 6\n> because there will be zero write traffic. The priority is stuffing as much\n> storage into a small 2U rack as possible, with performance less important.\n> We will be running Debian Linux.\n>\n> People have mentioned Areca as making good RAID controllers. We're looking\n> at the \"Areca ARC-1220 PCI-Express x8 SATA II\" as a possibility. Does anyone\n> have an opinion on whether it is a turkey or a star?\n\nWe run a 12xx series on our office server in RAID-6 over 8 1TB 7200RPM\nserver class SATA drives. Our production server runs the 1680 on top\nof 16 15k5 seagates in RAID-10. The performance difference between\nthese two are enormous. Things that take minutes on the production\nserver can take hours on the office server. Production handles\n1.5Million users, office handles 20 or 30 users.\n\nI've been really happy with the reliability of the 12xx card here at\nwork. 100% uptime for a year, that machine goes down for kernel\nupdates and only that. But it's not worked that hard all day\neveryday, so I can't compare its reliability with production in\nRAID-10 which has had one drive fail the week it was delivered and\nnone since in 400+days. We have two hot spares there.\n\n> Another possibility is a 3-ware card of some description.\n\nThey get good reviews as well. Both manufacturers have their \"star\"\nperformers, and their \"utility\" or work group class controllers. For\nwhat you're doing the areca 12xx or 3ware 95xx series should do fine.\n\nAs far as drives go we've been really happy with WD of late, they make\nlarge enterprise class SATA drives that don't pull a lot of power\n(green series) and fast SATA drives that pull a bit more but are\nfaster (black series). 
We've used both and are quite happy with each.\n We use a pair of blacks to build slony read slaves and they're very\nfast, with write speeds of ~100MB/second and read speeds double that\nin linux under sw RAID-1\n", "msg_date": "Tue, 24 Nov 2009 12:13:03 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID card recommendation" }, { "msg_contents": "\r\nSince I'm currently looking at upgrading my own database server, maybe some\nof the experts can give a comment on one of the following controllers:\n\n- Promise Technology Supertrak ES4650 + additional BBU\n- Adaptec RAID 5405 SGL/256 SATA/SAS + additional BBU\n- Adaptec RAID 5405Z SGL/512 SATA/SAS\n\nMy personal favourite currently is the 5405Z, since it does not require \nregular battery replacements and because it has 512MB of cache.\n\nSince my server only has room for four disks, I'd choose the following\none:\n\n- Seagate Cheetah 15K.6 147GB SAS\n\nDrives would be organized as RAID-0 for fast access, I do not need \nterabytes of storage.\n\nThe database currently is about 150 GB in size (including indexes), the\nmain table having a bit less than 1 billion rows (maximum will be about 2\nbillion) and getting about 10-20 million updates per day, so update speed\nis critical.\n\nCurrently the database is running on a mdadm raid-0 with four S-ATA drives \n(7.2k rpm), which was ok when the database was half this size...\n\nOperating System is Gentoo Linux 2.6.31-r1 on a Fujitsu Siemens Primergy\n200 S2 (2xXEON @ 1.6 GHz) with 4 GB of RAM (which also would be increased\nto its maximum of 8 GB during the above update)\n\n\n-- \nJochen Erwied | home: [email protected] +49-208-38800-18, FAX: -19\nSauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX: -50\nD-45470 Muelheim | mobile: [email protected] +49-173-5404164\n\n", "msg_date": "Tue, 24 Nov 2009 20:28:04 +0100", "msg_from": "Jochen Erwied <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID card recommendation" }, { "msg_contents": "On Tue, Nov 24, 2009 at 12:28 PM, Jochen Erwied\n<[email protected]> wrote:\n>\n> Since I'm currently looking at upgrading my own database server, maybe some\n> of the experts can give a comment on one of the following controllers:\n>\n> - Promise Technology Supertrak ES4650 + additional BBU\n> - Adaptec RAID 5405 SGL/256 SATA/SAS + additional BBU\n> - Adaptec RAID 5405Z SGL/512 SATA/SAS\n>\n> My personal favourite currently is the 5405Z, since it does not require\n> regular battery replacements and because it has 512MB of cache.\n\nHave you searched the -performance archives for references to them?\nI'm not that familiar with Adaptec RAID controllers. 
Not requiring a\nbattery check / replacement is nice.\n\n> Since my server only has room for four disks, I'd choose the following\n> one:\n>\n> - Seagate Cheetah 15K.6 147GB SAS\n\nWe use the older gen 15k.5 and have been very happy with them.\nNowadays it seems the fastest Seagates and Hitachis own the market for\nsuper fast drives.\n\n> Drives would be organized as RAID-0 for fast access, I do not need\n> terabytes of storage.\n\nSo, you're willing (or forced by economics) to suffer downtime due to\ndrive failure every so often.\n\n> The database currently is about 150 GB in size (including indexes), the\n> main table having a bit less than 1 billion rows (maximum will be about 2\n> billion) and getting about 10-20 million updates per day, so update speed\n> is critical.\n\nSo, assuming this means an 8 hour work day for ~20M rows, you're\nlooking at around 700 per second.\n\n> Currently the database is running on a mdadm raid-0 with four S-ATA drives\n> (7.2k rpm), which was ok when the database was half this size...\n>\n> Operating System is Gentoo Linux 2.6.31-r1 on a Fujitsu Siemens Primergy\n> 200 S2 (2xXEON @ 1.6 GHz) with 4 GB of RAM (which also would be increased\n> to its maximum of 8 GB during the above update)\n\nI'd definitely test the heck out of whatever RAID card you're buying\nto make sure it performs well enough. For some loads and against some\nHW RAID cards, SW RAID might be the winner.\n\nAnother option might be a JBOD box attached to the machine that holds\n12 or so 2.5\" 15k like the hitachi ultrastar 147G 2.5\" drives. This\nsounds like a problem you need to be able to throw a lot of drives at\nat one time. Is it likely to grow much after this?\n", "msg_date": "Tue, 24 Nov 2009 13:05:28 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID card recommendation" }, { "msg_contents": "Gurgel, Flavio escribió:\n> ----- \"Richard Neill\" <[email protected]> escreveu:\n>\n> \n>> Matthew Wakeling wrote:\n>> \n>>> We're about to purchase a new server to store some of our old\n>>> \n>> databases, \n>> \n>>> and I was wondering if someone could advise me on a RAID card. We\n>>> \n>> want \n>> \n>>> to make a 6-drive SATA RAID array out of 2TB drives, and it will be\n>>> \n>> RAID \n>> \n>>> 5 or 6 because there will be zero write traffic. The priority is \n>>> stuffing as much storage into a small 2U rack as possible, with \n>>> performance less important. We will be running Debian Linux.\n>>>\n>>> People have mentioned Areca as making good RAID controllers. We're \n>>> looking at the \"Areca ARC-1220 PCI-Express x8 SATA II\" as a\n>>> \n>> possibility. \n>> \n>>> Does anyone have an opinion on whether it is a turkey or a star?\n>>>\n>>> Another possibility is a 3-ware card of some description.\n>>>\n>>> \n>> Do you actually need a RAID card at all? It's just another point of \n>> failure: the Linux software raid (mdadm) is pretty good.\n>>\n>> Also, be very wary of RAID5 for an array that size. It is highly \n>> probable that, if one disk has failed, then during the recovery\n>> process, \n>> you may lose a second disk. The unrecoverable error rate on standard \n>> disks is about 1 in 10^14 bits; your disk array is 10^11 bits in\n>> size...\n>>\n>> We got bitten by this....\n>>\n>> Richard\n>> \n>\n> Linux kernel software RAID is fully supported in Debian Lenny, is quite cheap to implement and powerful.\n> I would avoid SATA disks but it's just me. 
SAS controllers and disks are expensive but worth every penny spent on them.\n>\n> Prefer RAID 1+0 over RAID 5 not only because of the risk of failure of a second disk, but I have 3 cases of performance issues caused by RAID 5.\n> It's said that performance is not the problem but think twice because a good application tends to scale fast to several users.\n> Of course, keep a good continuous backup strategy of your databases and don't trust just the mirroring of disks in a RAID fashion.\n>\n> Flavio Henrique A. Gurgel\n> Consultor -- 4Linux\n> tel. 55-11-2125.4765\n> fax. 55-11-2125.4777\n> www.4linux.com.br\n>\n>\n> \nDo you expose that performance issued caused by RAID 5? Because this is \none of our solutions here on my country to save the data of our \nPostgreSQL database. Which model do you recommend ? RAID 0,RAID 1, RAID \n5 or RAID 10?\n", "msg_date": "Tue, 24 Nov 2009 15:37:38 -0500", "msg_from": "\"Ing. Marcos Ortiz Valmaseda\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID card recommendation" }, { "msg_contents": "On Tue, Nov 24, 2009 at 1:37 PM, Ing. Marcos Ortiz Valmaseda\n<[email protected]> wrote:\n> Do you expose that performance issued caused by RAID 5? Because this is one\n> of our solutions here on my country to save the data of our PostgreSQL\n> database. Which model do you recommend ? RAID 0,RAID 1, RAID 5 or RAID 10?\n\nRAID-1 or RAID-10 are the default, mostly safe choices.\n\nFor disposable dbs, RAID-0 is fine.\n\nFor very large dbs with very little writing and mostly reading and on\na budget, RAID-6 is ok.\n\nIn most instances I never recommend RAID-5 anymore.\n", "msg_date": "Tue, 24 Nov 2009 13:42:19 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID card recommendation" }, { "msg_contents": "Tuesday, November 24, 2009, 9:05:28 PM you wrote:\n\n> Have you searched the -performance archives for references to them?\n> I'm not that familiar with Adaptec RAID controllers. Not requiring a\n> battery check / replacement is nice.\n\nEither I searched for the wrong terms, or there isn't really that much\nreference on RAID-controllers on this list. Aberdeen is menthioned once and\nlooks interesting, but I didn't find a reseller in Germany. As far as I see\nfrom the list, Promise and Adaptec both seem to be not too bad choices.\n\n> So, you're willing (or forced by economics) to suffer downtime due to\n> drive failure every so often.\n\nI haven't experienced any downtime due to a disk failure for quite a while \nnow (call me lucky), although I had a really catastrophic experience with a \nRAID-5 some time ago (1 drive crashed, the second one during rebuild :-()\n\nBut for this application losing one day of updates is not a big deal, and \ndowntime isn't either. It's a long running project of mine, with growing \nstorage needs, but not with 100% of integrity or uptime.\n\n> So, assuming this means an 8 hour work day for ~20M rows, you're\n> looking at around 700 per second.\n\nIt's an automated application running 24/7, so I require 'only' about \n200-250 updates per second.\n\n> I'd definitely test the heck out of whatever RAID card you're buying\n> to make sure it performs well enough. 
For some loads and against some\n> HW RAID cards, SW RAID might be the winner.\n\nWell, I haven't got so much opportunities to test out different kind of \nhardware, so I have to rely on experience or reports.\n\n> Another option might be a JBOD box attached to the machine that holds\n> 12 or so 2.5\" 15k like the hitachi ultrastar 147G 2.5\" drives. This\n> sounds like a problem you need to be able to throw a lot of drives at\n> at one time. Is it likely to grow much after this?\n\nJBOD in an external casing would be an alternative, especially when using \nan external case. And no, the database will not grow too much after \nreaching its final size.\n\nBut looking at the prices for anything larger than 4+1 drives in an\nexternal casing is not funny at all :-(\n\n-- \nJochen Erwied | home: [email protected] +49-208-38800-18, FAX: -19\nSauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX: -50\nD-45470 Muelheim | mobile: [email protected] +49-173-5404164\n\n", "msg_date": "Tue, 24 Nov 2009 21:59:04 +0100", "msg_from": "Jochen Erwied <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID card recommendation" }, { "msg_contents": "\n----- \"Scott Marlowe\" <[email protected]> escreveu:\n\n> On Tue, Nov 24, 2009 at 1:37 PM, Ing. Marcos Ortiz Valmaseda\n> <[email protected]> wrote:\n> > Do you expose that performance issued caused by RAID 5? Because this\n> is one\n> > of our solutions here on my country to save the data of our\n> PostgreSQL\n> > database. Which model do you recommend ? RAID 0,RAID 1, RAID 5 or\n> RAID 10?\n> \n> RAID-1 or RAID-10 are the default, mostly safe choices.\n> \n> For disposable dbs, RAID-0 is fine.\n> \n> For very large dbs with very little writing and mostly reading and on\n> a budget, RAID-6 is ok.\n> \n> In most instances I never recommend RAID-5 anymore.\n\nI would never recommend RAID-5 for database customers (any database system), some of the current ones are using it and the worst nightmares in disk performance are related to RAID-5.\nAs Scott said, RAID-1 is safe, RAID-0 is fast (and accept more request load too), RAID-10 is a great combination of both worlds.\n\nFlavio Henrique A. Gurgel\nConsultor -- 4Linux\ntel. 55-11-2125.4765\nfax. 55-11-2125.4777\nwww.4linux.com.br\n", "msg_date": "Tue, 24 Nov 2009 19:08:12 -0200 (BRST)", "msg_from": "\"Gurgel, Flavio\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID card recommendation" }, { "msg_contents": "On Tue, Nov 24, 2009 at 1:59 PM, Jochen Erwied\n<[email protected]> wrote:\n> Tuesday, November 24, 2009, 9:05:28 PM you wrote:\n>\n>> Have you searched the -performance archives for references to them?\n>> I'm not that familiar with Adaptec RAID controllers.  Not requiring a\n>> battery check / replacement is nice.\n>\n> Either I searched for the wrong terms, or there isn't really that much\n> reference on RAID-controllers on this list. Aberdeen is menthioned once and\n> looks interesting, but I didn't find a reseller in Germany. As far as I see\n> from the list, Promise and Adaptec both seem to be not too bad choices.\n\nAberdeen is the builder I use. They'll put any card in you want\n(within reason) including our preference here, Areca. Perhaps you\nmeant Areca?\n\n>> So, assuming this means an 8 hour work day for ~20M rows, you're\n>> looking at around 700 per second.\n>\n> It's an automated application running 24/7, so I require 'only' about\n> 200-250 updates per second.\n\nOh, much better. 
A decent hardware RAID controller with battery\nbacked cache could handle that load with a pair of spinning 15k drives\nin RAID-1 probably.\n\n>> Another option might be a JBOD box attached to the machine that holds\n>> 12 or so 2.5\" 15k like the hitachi ultrastar 147G 2.5\" drives.  This\n>> sounds like a problem you need to be able to throw a lot of drives at\n>> at one time.  Is it likely to grow much after this?\n>\n> JBOD in an external casing would be an alternative, especially when using\n> an external case. And no, the database will not grow too much after\n> reaching its final size.\n\nYeah, if it's not gonna grow a lot more after the 2B rows, then you\nprobably won't need an external case.\n", "msg_date": "Tue, 24 Nov 2009 14:34:00 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID card recommendation" }, { "msg_contents": "The problem with RAID-5 or RAID-6 is not the normal speed operation, it's\nthe degraded performance when there is a drive failure. This includes\nread-only scenarios. A DB server getting any kind of real use will\neffectively appear to be down to client apps if it loses a drive from that\nRAID set.\n\nBasically, think of RAID-5/6 as RAID-0 but with much slower writes, and a\nway to recover the data without going to backup tapes if there is a disc\nloss. It is NOT a solution for staying up in case of a failure.\n\nPresumably, there is a business reason that you're thinking of using\nRAID-5/6 with hardware RAID and maybe a hot spare, rather than software\nRAID-0 which would save you 2-3 spindles of formatted capacity, plus the\ncost of the RAID card. Whatever that reason is, it's also a reason to use\nRAID-10.\n\nIf you absolutely need it to fit in 2U of rack space, you can get a 2U\nserver with a bunch of 2.5\" spindles and with 24x 500GB SATA you can get the\nsame formatted size with RAID-10; or you can use an external SAS expander to\nput additional 3.5\" drives in another enclosure.\n\nIf we're taking rackmount server RAID card votes, I've had good experiences\nwith the LSI 8888 under Linux.\n\nCheers\nDave\n\nOn Tue, Nov 24, 2009 at 11:23 AM, Matthew Wakeling <[email protected]>wrote:\n\n>\n> We're about to purchase a new server to store some of our old databases,\n> and I was wondering if someone could advise me on a RAID card. We want to\n> make a 6-drive SATA RAID array out of 2TB drives, and it will be RAID 5 or 6\n> because there will be zero write traffic. The priority is stuffing as much\n> storage into a small 2U rack as possible, with performance less important.\n> We will be running Debian Linux.\n>\n> People have mentioned Areca as making good RAID controllers. We're looking\n> at the \"Areca ARC-1220 PCI-Express x8 SATA II\" as a possibility. Does anyone\n> have an opinion on whether it is a turkey or a star?\n>\n> Another possibility is a 3-ware card of some description.\n>\n> Thanks in advance,\n>\n> Matthew\n>\n> --\n> Now you see why I said that the first seven minutes of this section will\n> have\n> you looking for the nearest brick wall to beat your head against. This is\n> why I do it at the end of the lecture - so I can run.\n> -- Computer Science lecturer\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nThe problem with RAID-5 or RAID-6 is not the normal speed operation, it's the degraded performance when there is a drive failure. 
This includes read-only scenarios. A DB server getting any kind of real use will effectively appear to be down to client apps if it loses a drive from that RAID set.\nBasically, think of RAID-5/6 as RAID-0 but with much slower writes, and a way to recover the data without going to backup tapes if there is a disc loss. It is NOT a solution for staying up in case of a failure.\nPresumably, there is a business reason that you're thinking of using RAID-5/6 with hardware RAID and maybe a hot spare, rather than software RAID-0 which would save you 2-3 spindles of formatted capacity, plus the cost of the RAID card. Whatever that reason is, it's also a reason to use RAID-10.\nIf you absolutely need it to fit in 2U of rack space, you can get a 2U\nserver with a bunch of 2.5\" spindles and with 24x 500GB SATA you can get the\nsame formatted size with RAID-10; or you can use an external SAS expander to put additional 3.5\" drives in another enclosure.If we're taking rackmount server RAID card votes, I've had good experiences with the LSI 8888 under Linux.\nCheersDave\nOn Tue, Nov 24, 2009 at 11:23 AM, Matthew Wakeling <[email protected]> wrote:\n\nWe're about to purchase a new server to store some of our old databases, and I was wondering if someone could advise me on a RAID card. We want to make a 6-drive SATA RAID array out of 2TB drives, and it will be RAID 5 or 6 because there will be zero write traffic. The priority is stuffing as much storage into a small 2U rack as possible, with performance less important. We will be running Debian Linux.\n\nPeople have mentioned Areca as making good RAID controllers. We're looking at the \"Areca ARC-1220 PCI-Express x8 SATA II\" as a possibility. Does anyone have an opinion on whether it is a turkey or a star?\n\nAnother possibility is a 3-ware card of some description.\n\nThanks in advance,\n\nMatthew\n\n-- \nNow you see why I said that the first seven minutes of this section will have\nyou looking for the nearest brick wall to beat your head against. This is\nwhy I do it at the end of the lecture - so I can run.\n                                       -- Computer Science lecturer\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 24 Nov 2009 15:54:52 -0600", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID card recommendation" }, { "msg_contents": "Tuesday, November 24, 2009, 10:34:00 PM you wrote:\n\n> Aberdeen is the builder I use. They'll put any card in you want\n> (within reason) including our preference here, Areca. Perhaps you\n> meant Areca?\n\nI knew Areca only for their internal arrays (which one of our customers\nuses for his 19\" systems), but did not know they manufacture their own\ncontrollers. Added the ARC-1212+BBU to my wishlist :-)\n\n\n-- \nJochen Erwied | home: [email protected] +49-208-38800-18, FAX: -19\nSauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX: -50\nD-45470 Muelheim | mobile: [email protected] +49-173-5404164\n\n", "msg_date": "Wed, 25 Nov 2009 00:02:40 +0100", "msg_from": "Jochen Erwied <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID card recommendation" }, { "msg_contents": "\n\n\n\n\n\nJochen Erwied wrote:\n\nTuesday, November 24, 2009, 10:34:00 PM you wrote:\n\n \n\nAberdeen is the builder I use. They'll put any card in you want\n(within reason) including our preference here, Areca. 
Perhaps you\nmeant Areca?\n \n\n\nI knew Areca only for their internal arrays (which one of our customers\nuses for his 19\" systems), but did not know they manufacture their own\ncontrollers. Added the ARC-1212+BBU to my wishlist :-)\n\n\nFor what it's worth I'm using the Adaptec 5445Z on my new server (don't\nhave Postgre running on it) and have been happy with it.  For storage\non my servers I use\nhttp://www.pc-pitstop.com/sas_cables_enclosures/scsase16.asp which has\nan Areca ARC-8020 expander in it. With the 5445Z I use the 4 internal\nports for a fast RAID0 \"working array\" with 450G Seagate 15k6 drives\nand the external goes to the 16 drive enclosure through the expander. \nWith 16 drives you have a lot of possibilities for configuring arrays. \nI have another server with an Adaptec 52445 (don't have Postgre running\non it either) connected to two of the 16 drive enclosures and am happy\nwith it.  I'm running Postgre on my workstation that has an Adaptec\n52445 hooked up to two EnhanceBox-E8MS\n(http://www.enhance-tech.com/products/desktop/E8_Series.html).  I have\n8 ST373455SS drives in my tower and 8 in the EnhanceBox so my database\nis running off 16 drives in RAID5.  Everyone complains about RAID5 but\nit works for me in my situation.  Very very rarely am I waiting on the\ndisks when running queries. The other EnhanceBox has 8 ST31000640SS\ndrives in RAID5 just for backup images.  All 24 drives run off the\n52445 and again, I've been satisfied with it.  I've also been happy\nwith the Enhance Technology products.  Sorry for being so long but just\nwanted to put a plug in for the Adaptec cards and let you know about\nthe external options.  The 5 series cards are a huge improvement over\nthe 3 series. I had a 3805 and wasn't that impressed.  It's actually\nsitting on my shelf now collecting dust.\n\nBob\n\n\n\n\n", "msg_date": "Tue, 24 Nov 2009 17:49:31 -0600", "msg_from": "Robert Schnabel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID card recommendation" }, { "msg_contents": "Matthew Wakeling wrote:\n> People have mentioned Areca as making good RAID controllers. We're \n> looking at the \"Areca ARC-1220 PCI-Express x8 SATA II\" as a \n> possibility. Does anyone have an opinion on whether it is a turkey or \n> a star?\nPerformance should be OK but not great compared with some of the newer \nalternatives (this design is a few years old now). The main issue I've \nhad with this series of cards is that the command-line tools are very \nhit or miss. See \nhttp://notemagnet.blogspot.com/2008/08/linux-disk-failures-areca-is-not-so.html \nfor a long commentary about the things I was disappointed by on the \nsimilar ARC-1210 once I actually ran into a drive failure on one. As \nScott points out there, they have other cards with a built-in management \nNIC that allows an alternate management path, and I believe those have \nbetter performance too.\n\n> Another possibility is a 3-ware card of some description.\nI've put a fair number of 9690SA cards in systems with little to \ncomplain about. 
Performance was reasonable as long as you make sure to \ntweak the read-ahead: http://www.3ware.com/kb/article.aspx?id=11050 \nIgnore most of the rest of their advice on that page though--for \nexample, increasing vm.dirty_background_ratio and vm.dirty_ratio is an \nawful idea for PostgreSQL use, where if anything you want to decrease \nthe defaults.\n\nAlso, while they claim you can connect SAS drives to these cards, they \ndon't support sending SMART commands to them and support seemed pretty \nlimited overall for them. Stick with plain on SATA ones.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Tue, 24 Nov 2009 21:29:55 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID card recommendation" }, { "msg_contents": "Scott Marlowe wrote:\n> As far as drives go we've been really happy with WD of late, they make\n> large enterprise class SATA drives that don't pull a lot of power\n> (green series) and fast SATA drives that pull a bit more but are\n> faster (black series).\nBe careful to note the caveat that you need their *enterprise class* \ndrives. When you run into an error on their regular consumer drives, \nthey get distracted for a while trying to cover the whole thing up, in a \nway that's exactly the opposite of the behavior you want for a RAID \nconfiguration. I have a regular consumer WD drive that refuses to admit \nthat it has a problem such that I can RMA it, but that always generates \nan error if I rewrite the whole drive. The behavior of the firmware is \ndownright shameful. As cheap consumer drives go, I feel like WD has \npulled ahead of everybody else on performance and possibly even actual \nreliability, but the error handling of their firmware is so bad I'm \nstill using Seagate drives--when those fail, as least they're honest \nabout it.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Tue, 24 Nov 2009 21:35:48 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID card recommendation" }, { "msg_contents": "Jochen Erwied wrote:\n> - Promise Technology Supertrak ES4650 + additional BBU\n> - Adaptec RAID 5405 SGL/256 SATA/SAS + additional BBU\n> - Adaptec RAID 5405Z SGL/512 SATA/SAS\n> \nI've never seen a Promise controller that had a Linux driver you would \nwant to rely on under any circumstances. Adaptec used to have seriously \nbad Linux drivers too. I've gotten the impression they've cleaned up \ntheir act considerably the last few years, but they've been on my list \nof hardware to shun for so long I haven't bothered investigating. \nEasier to just buy from a company that has always cared about good Linux \nsupport, like 3ware. In any case, driver quality is what you want to \nresearch before purchasing any of these; doesn't matter how fast the \ncards are if they crash or corrupt your data.\n\nWhat I like to do is look at what companies who sell high-quality \nproduction servers with Linux preinstalled and see what hardware they \ninclude. 
You can find a list of vendors people here like at \nhttp://wiki.postgresql.org/wiki/SCSI_vs._IDE/SATA_Disks#Helpful_vendors_of_SATA_RAID_systems \n\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Tue, 24 Nov 2009 21:44:55 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID card recommendation" }, { "msg_contents": "On Tue, Nov 24, 2009 at 7:35 PM, Greg Smith <[email protected]> wrote:\n> Scott Marlowe wrote:\n>>\n>> As far as drives go we've been really happy with WD of late, they make\n>> large enterprise class SATA drives that don't pull a lot of power\n>> (green series) and fast SATA drives that pull a bit more but are\n>> faster (black series).\n>\n> Be careful to note the caveat that you need their *enterprise class* drives.\n>  When you run into an error on their regular consumer drives, they get\n> distracted for a while trying to cover the whole thing up, in a way that's\n> exactly the opposite of the behavior you want for a RAID configuration.  I\n> have a regular consumer WD drive that refuses to admit that it has a problem\n> such that I can RMA it, but that always generates an error if I rewrite the\n> whole drive.  The behavior of the firmware is downright shameful.  As cheap\n> consumer drives go, I feel like WD has pulled ahead of everybody else on\n> performance and possibly even actual reliability, but the error handling of\n> their firmware is so bad I'm still using Seagate drives--when those fail, as\n> least they're honest about it.\n\nWhen I inquired earlier this summer about using the consumer WDs in a\nnew server I was told rather firmly by my sales guy \"uhm, no\". They\nput the enterprise drives through the wringer before he said they\nseemed ok. They have been great, both green and black series. For\nwhat they are, big SATA drives in RAID-6 or RAID-10 they're quite\ngood. Moderate to quite good performers at a reasonable price.\n", "msg_date": "Tue, 24 Nov 2009 19:46:26 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID card recommendation" }, { "msg_contents": "--- On Tue, 24/11/09, Scott Marlowe <[email protected]> wrote:\n\n> Jochen Erwied\n> <[email protected]>\n> wrote:\n> >\n> > Since I'm currently looking at upgrading my own\n> database server, maybe some\n> > of the experts can give a comment on one of the\n> following controllers:\n> >\n> > - Promise Technology Supertrak ES4650 + additional\n> BBU\n> > - Adaptec RAID 5405 SGL/256 SATA/SAS + additional BBU\n> > - Adaptec RAID 5405Z SGL/512 SATA/SAS\n> >\n> > My personal favourite currently is the 5405Z, since it\n> does not require\n> > regular battery replacements and because it has 512MB\n> of cache.\n> \n> Have you searched the -performance archives for references\n> to them?\n> I'm not that familiar with Adaptec RAID controllers. 
\n> Not requiring a\n> battery check / replacement is nice.\n> \n\nWe've been running Adaptec 5805s for the past year and I've been pretty happy, I think they have the same dual core IOP348 as the Areca 1680s.\n\nI've a bunch of 5805Zs on my desk ready to go in some new servers too (that means more perc6 cards to chuck on my smash pile) and I'm excited to see how they go; I feer the unknown a bit though, and I'm not sure the sight big capacitors is reassuruing me...\n\nOnly problem I've seen is one controller periodically report it's too hot, but I suspect that may be something to do with the server directly above it having fanless power supplies.\n\n\n \n", "msg_date": "Wed, 25 Nov 2009 11:09:32 +0000 (GMT)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID card recommendation" }, { "msg_contents": "Greg Smith wrote:\n> Jochen Erwied wrote:\n>> - Promise Technology Supertrak ES4650 + additional BBU\n>> - Adaptec RAID 5405 SGL/256 SATA/SAS + additional BBU\n>> - Adaptec RAID 5405Z SGL/512 SATA/SAS\n>> \n> I've never seen a Promise controller that had a Linux driver you would \n> want to rely on under any circumstances...Easier to just buy from a \n> company that has always cared about good Linux support, like 3ware.\n+1\n\nI haven't tried Promise recently, but last time I did I determined that \nthey got the name because they \"Promise\" the Linux driver for your card \nwill be available real-soon-now. Actually got strung along for a couple \nmonths before calling my supplier and telling him to swap it out for a \n3ware. The 3ware \"just works\". I currently have a couple dozen Linux \nservers, including some PostgreSQL machines, running the 3ware cards.\n\nCheers,\nSteve\n", "msg_date": "Wed, 25 Nov 2009 08:45:46 -0800", "msg_from": "Steve Crawford <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID card recommendation" }, { "msg_contents": "Steve Crawford wrote:\n> Greg Smith wrote:\n>> Jochen Erwied wrote:\n>>> - Promise Technology Supertrak ES4650 + additional BBU\n>>> \n>> I've never seen a Promise controller that had a Linux driver you would\n>> want to rely on under any circumstances...\n> +1\n> \n> I haven't tried Promise recently, but last time I did I determined that\n> they got the name because they \"Promise\" the Linux driver for your card\n> will be available real-soon-now. \n\nOne more data point, it's not confidence inspiring that google turns up\nPromise Technologies customers that are quite vocal about suing them.\n\nhttp://www.carbonite.com/blog/post/2009/03/Further-clarification-on-our-lawsuit-against-Promise-Technologies.aspx\n\n", "msg_date": "Thu, 26 Nov 2009 20:49:47 -0800", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID card recommendation" }, { "msg_contents": "\nOn 11/24/09 11:13 AM, \"Scott Marlowe\" <[email protected]> wrote:\n\n\n> \n> They get good reviews as well. Both manufacturers have their \"star\"\n> performers, and their \"utility\" or work group class controllers. For\n> what you're doing the areca 12xx or 3ware 95xx series should do fine.\n> \n\n-1 to 3ware's SATA solutions\n\n3ware 95xx and 96xx had performance somewhere between PERC 5 (horrid) and\nPERC 6 (mediocre) when I tested them with large SATA drives with RAID 10.\nHaven't tried raid 6 or 5. 
Haven't tried the \"SA\" model that supports SAS.\nWhen a competing card (Areca or Adaptec) gets 3x the sequential throughput\non an 8 disk RAID 10 and only catches up to be 60% the speed after heavy\ntuning of readahead value, there's something wrong.\nRandom access throughput doesn't suffer like that however -- but its nice\nwhen the I/O can sequential scan faser than postgres can read the tuples.\n\n\n> As far as drives go we've been really happy with WD of late, they make\n> large enterprise class SATA drives that don't pull a lot of power\n> (green series) and fast SATA drives that pull a bit more but are\n> faster (black series). We've used both and are quite happy with each.\n> We use a pair of blacks to build slony read slaves and they're very\n> fast, with write speeds of ~100MB/second and read speeds double that\n> in linux under sw RAID-1\n> \n\n\n", "msg_date": "Tue, 1 Dec 2009 17:37:38 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID card recommendation" }, { "msg_contents": "Scott Carey wrote:\n> On 11/24/09 11:13 AM, \"Scott Marlowe\" <[email protected]> wrote:\n>\n>\n> \n>> They get good reviews as well. Both manufacturers have their \"star\"\n>> performers, and their \"utility\" or work group class controllers. For\n>> what you're doing the areca 12xx or 3ware 95xx series should do fine.\n>> \n>\n> -1 to 3ware's SATA solutions\n>\n> 3ware 95xx and 96xx had performance somewhere between PERC 5 (horrid) and\n> PERC 6 (mediocre) when I tested them with large SATA drives with RAID 10.\n> Haven't tried raid 6 or 5. Haven't tried the \"SA\" model that supports SAS.\n> When a competing card (Areca or Adaptec) gets 3x the sequential throughput\n> on an 8 disk RAID 10 and only catches up to be 60% the speed after heavy\n> tuning of readahead value, there's something wrong.\n> Random access throughput doesn't suffer like that however -- but its nice\n> when the I/O can sequential scan faser than postgres can read the tuples.\n> \nWhat operating system?\n\nI am running under FreeBSD with 96xx series and am getting EXCELLENT\nperformance. Under Postgres 8.4.x on identical hardware except for the\ndisk controller, I am pulling a literal 3x the iops on the same disks\nthat I do with the Adaptec (!)\n\nI DID note that under Linux the same hardware was a slug. \n\nHmmmmm...\n\n\n-- Karl", "msg_date": "Tue, 01 Dec 2009 20:08:53 -0600", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID card recommendation" }, { "msg_contents": "Scott Carey wrote:\n> 3ware 95xx and 96xx had performance somewhere between PERC 5 (horrid) and\n> PERC 6 (mediocre) when I tested them with large SATA drives with RAID 10.\n> Haven't tried raid 6 or 5. Haven't tried the \"SA\" model that supports SAS\nThe only models I've tested and recommended lately are exactly those \nthough. The 9690SA is the earliest 3ware card I've mentioned as seeming \nto have reasonable performance. The 95XX cards are certainly much \nslower than similar models from, say, Areca. I've never had one of the \nearlier 96XX models to test. 
Now you've got me wondering what the \ndifference between the earlier and current 96XX models really is.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Tue, 01 Dec 2009 21:49:40 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID card recommendation" }, { "msg_contents": "\n\n\nOn 12/1/09 6:08 PM, \"Karl Denninger\" <[email protected]> wrote:\n\n> Scott Carey wrote:\n>> \n>> On 11/24/09 11:13 AM, \"Scott Marlowe\" <[email protected]>\n>> <mailto:[email protected]> wrote:\n>> \n>> \n>> \n>> \n>>> \n>>> They get good reviews as well. Both manufacturers have their \"star\"\n>>> performers, and their \"utility\" or work group class controllers. For\n>>> what you're doing the areca 12xx or 3ware 95xx series should do fine.\n>>> \n>>> \n>> \n>> \n>> -1 to 3ware's SATA solutions\n>> \n>> 3ware 95xx and 96xx had performance somewhere between PERC 5 (horrid) and\n>> PERC 6 (mediocre) when I tested them with large SATA drives with RAID 10.\n>> Haven't tried raid 6 or 5. Haven't tried the \"SA\" model that supports SAS.\n>> When a competing card (Areca or Adaptec) gets 3x the sequential throughput\n>> on an 8 disk RAID 10 and only catches up to be 60% the speed after heavy\n>> tuning of readahead value, there's something wrong.\n>> Random access throughput doesn't suffer like that however -- but its nice\n>> when the I/O can sequential scan faser than postgres can read the tuples.\n>> \n> What operating system?\n> \n> I am running under FreeBSD with 96xx series and am getting EXCELLENT\n> performance. Under Postgres 8.4.x on identical hardware except for the disk\n> controller, I am pulling a literal 3x the iops on the same disks that I do\n> with the Adaptec (!)\n> \n> I DID note that under Linux the same hardware was a slug.\n> \n> Hmmmmm...\n> \n\nLinux, Centos 5.3. Drivers/OS can certainly make a big difference.\n\n> \n> -- Karl\n> \n\n", "msg_date": "Mon, 7 Dec 2009 11:04:02 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID card recommendation" }, { "msg_contents": "\n\n\nOn 12/1/09 6:49 PM, \"Greg Smith\" <[email protected]> wrote:\n\n> Scott Carey wrote:\n>> 3ware 95xx and 96xx had performance somewhere between PERC 5 (horrid) and\n>> PERC 6 (mediocre) when I tested them with large SATA drives with RAID 10.\n>> Haven't tried raid 6 or 5. Haven't tried the \"SA\" model that supports SAS\n> The only models I've tested and recommended lately are exactly those\n> though. The 9690SA is the earliest 3ware card I've mentioned as seeming\n> to have reasonable performance. The 95XX cards are certainly much\n> slower than similar models from, say, Areca. I've never had one of the\n> earlier 96XX models to test. Now you've got me wondering what the\n> difference between the earlier and current 96XX models really is.\n\n9650 was made by 3Ware, essentially a PCIe version of the 9550. The 9690SA\nwas from some sort of acquisition/merger. 
They are not the same product line\nat all.\n3Ware, IIRC, has its roots in ATA and SATA RAID.\n\n\nI gave up on them after the 9650 and 9550 experiences (on Linux) though.\n\n> \n> --\n> Greg Smith 2ndQuadrant Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.com\n> \n> \n\n", "msg_date": "Mon, 7 Dec 2009 11:10:13 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID card recommendation" }, { "msg_contents": "Scott Carey wrote:\n> On 12/1/09 6:49 PM, \"Greg Smith\" <[email protected]> wrote:\n>\n> \n>> Scott Carey wrote:\n>> \n>>> 3ware 95xx and 96xx had performance somewhere between PERC 5 (horrid) and\n>>> PERC 6 (mediocre) when I tested them with large SATA drives with RAID 10.\n>>> Haven't tried raid 6 or 5. Haven't tried the \"SA\" model that supports SAS\n>>> \n>> The only models I've tested and recommended lately are exactly those\n>> though. The 9690SA is the earliest 3ware card I've mentioned as seeming\n>> to have reasonable performance. The 95XX cards are certainly much\n>> slower than similar models from, say, Areca. I've never had one of the\n>> earlier 96XX models to test. Now you've got me wondering what the\n>> difference between the earlier and current 96XX models really is.\n>> \n>\n> 9650 was made by 3Ware, essentially a PCIe version of the 9550. The 9690SA\n> was from some sort of acquisition/merger. They are not the same product line\n> at all.\n> 3Ware, IIRC, has its roots in ATA and SATA RAID.\n>\n>\n> I gave up on them after the 9650 and 9550 experiences (on Linux) though.\n> \nMy experience under FreeBSD:\n\n1. The Adaptecs suck. 1/3rd to 1/2 the performance of....\n2. The 9650s 3ware boards, which under FreeBSD are quite fast.\n3. However, the Areca 1680-IX is UNBELIEVABLY fast. Ridiculously so in\nfact.\n\nI have a number of 9650s in service and have been happy with them under\nFreeBSD. Under Linux, however, they bite in comparison.\n\nThe Areca 1680 is not cheap. However, it comes with out-of-band\nmanagement (IP-KVM, direct SMTP and SNMP connectivity, etc) which is\nEXTREMELY nice, especially for colocated machines where you need a way\nin if things go horribly wrong.\n\nOne warning: I have had problems with the Areca under FreeBSD if you set\nup a passthrough (e.g. JBOD) disc, delete it from the config while\nrunning and then either accidentally touch the device nodes OR try to\nuse FreeBSD's \"camcontrol\" utility to tell it to pick up driver\nchanges. Either is a great way to panic the machine. \n\nAs such for RAID it's fine but use care if you need to be able to swap\nNON-RAID disks while the machine is operating (e.g. for backup purposes\n- run a dump then dismount it and pull the carrier) - it is dangerous to\nattempt this (the 3Ware card does NOT have this problem.) I am trying\nto figure out exactly what provokes this and if I can get around it at\nthis point (in the lab of course!)\n\nNo experience with the 9690 3Wares as of yet.\n\n-- Karl", "msg_date": "Mon, 07 Dec 2009 13:47:33 -0600", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID card recommendation" }, { "msg_contents": "Scott Carey wrote:\n> 9650 was made by 3Ware, essentially a PCIe version of the 9550. The 9690SA\n> was from some sort of acquisition/merger. They are not the same product line\n> at all.\n> \n3ware became a division of AMCC, which was then bought by LSI. 
The \n9590SA came out while they were a part of AMCC.\n\nI was under the impression that the differences between the 9650 and the \n9690SA were mainly related to adding SAS support, which was sort of a \nbridge addition rather than a fundamental change in the design of the \ncard. You'll often see people refer to \"9650/9690\" as if they're the \nsame card; they may never run the same firmware. They certainly always \nget firmware updates at the same time, and as part of the same download \npackage.\n\nAnother possibility for the difference between Scott's experience and \nmine is that I've only evaluated those particular cards recently, and \nthere seems to be evidence that 3ware did some major firmware overhauls \nin late 2008, i.e. \nhttp://unix.derkeiler.com/Mailing-Lists/FreeBSD/performance/2008-10/msg00005.html\n\nLet me try to summarize where things are at a little more clearly, with \nthe data accumulated during this long thread:\n\n-Areca: Usually the fastest around. Management tools are limited \nenough that you really want the version with the on-board management \nNIC. May require some testing to find a good driver version.\n\n-3ware: Performance on current models not as good as Areca, but with a \ngreat set of management tools (unless you're using SAS) and driver \nreliability. Exact magnitude of the performance gap with Areca is \nsomewhat controversial and may depend on OS--FreeBSD performance might \nbe better than Linux in particular. Older 3ware cards were really slow.\n\nOne of these days I need to wrangle up enough development cash to buy \ncurrent Areca and 3ware cards, an Intel SSD, and disappear into the lab \n(already plenty of drives here) until I've sorted this all out to my \nsatisfaction.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Mon, 07 Dec 2009 16:17:25 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID card recommendation" }, { "msg_contents": "Greg Smith wrote:\n> Let me try to summarize where things are at a little more clearly, with \n> the data accumulated during this long thread:\n> \n> -Areca: Usually the fastest around. Management tools are limited \n> enough that you really want the version with the on-board management \n> NIC. May require some testing to find a good driver version.\n> \n> -3ware: Performance on current models not as good as Areca, but with a \n> great set of management tools (unless you're using SAS) and driver \n> reliability. Exact magnitude of the performance gap with Areca is \n> somewhat controversial and may depend on OS--FreeBSD performance might \n> be better than Linux in particular. Older 3ware cards were really slow.\n> \n> One of these days I need to wrangle up enough development cash to buy \n> current Areca and 3ware cards, an Intel SSD, and disappear into the lab \n> (already plenty of drives here) until I've sorted this all out to my \n> satisfaction.\n\n... and do I hear you saying that no other vendor is worth considering? Just how far off are they?\n\nThanks,\nCraig\n\n", "msg_date": "Mon, 07 Dec 2009 13:53:45 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID card recommendation" }, { "msg_contents": "Greg Smith wrote:\n> Scott Carey wrote:\n>> 9650 was made by 3Ware, essentially a PCIe version of the 9550. The\n>> 9690SA\n>> was from some sort of acquisition/merger. 
They are not the same\n>> product line\n>> at all.\n>> \n> 3ware became a division of AMCC, which was then bought by LSI. The\n> 9590SA came out while they were a part of AMCC.\n>\n> I was under the impression that the differences between the 9650 and\n> the 9690SA were mainly related to adding SAS support, which was sort\n> of a bridge addition rather than a fundamental change in the design of\n> the card. You'll often see people refer to \"9650/9690\" as if they're\n> the same card; they may never run the same firmware. They certainly\n> always get firmware updates at the same time, and as part of the same\n> download package.\n>\n> Another possibility for the difference between Scott's experience and\n> mine is that I've only evaluated those particular cards recently, and\n> there seems to be evidence that 3ware did some major firmware\n> overhauls in late 2008, i.e.\n> http://unix.derkeiler.com/Mailing-Lists/FreeBSD/performance/2008-10/msg00005.html\n>\n>\n> Let me try to summarize where things are at a little more clearly,\n> with the data accumulated during this long thread:\n>\n> -Areca: Usually the fastest around. Management tools are limited\n> enough that you really want the version with the on-board management\n> NIC. May require some testing to find a good driver version.\n>\n> -3ware: Performance on current models not as good as Areca, but with\n> a great set of management tools (unless you're using SAS) and driver\n> reliability. Exact magnitude of the performance gap with Areca is\n> somewhat controversial and may depend on OS--FreeBSD performance might\n> be better than Linux in particular. Older 3ware cards were really slow.\n>\n> One of these days I need to wrangle up enough development cash to buy\n> current Areca and 3ware cards, an Intel SSD, and disappear into the\n> lab (already plenty of drives here) until I've sorted this all out to\n> my satisfaction.\nMost common SSDs will NOT come up on the 3ware cards at present. Not\nsure why as of yet - I've tried several.\n\nNot had the time to screw with them on the ARECA cards yet.\n\n-- Karl", "msg_date": "Mon, 07 Dec 2009 16:31:44 -0600", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID card recommendation" }, { "msg_contents": "Craig James wrote:\n> ... and do I hear you saying that no other vendor is worth \n> considering? Just how far off are they?\nI wasn't trying to summarize every possible possibility, just the \ncomplicated ones there's some debate over.\n\nWhat else is OK besides Areca and 3ware? HP's P800 is good, albeit not \nso easy to buy unless you're getting an HP system. The LSI Megaraid \nstuff and its close relative the Dell PERC6 are OK for some apps too; my \nintense hatred of Dell usually results in my forgetting about them. (As \nan example, \nhttp://en.wikipedia.org/wiki/ATX#Issues_with_Dell_power_supplies \ndocuments what I consider the worst design decision ever made by a PC \nmanufacturer)\n\nI don't think any of the other vendors on the market are viable for a \nLinux system due to driver issues and general low quality, which \nincludes Adaptec, Promise, Highpoint, and all the motherboard Fake RAID \nstuff from Silicon Image, Intel, Via, etc. 
I don't feel there's any \njustification for using those products instead of using a simple SATA \ncontroller and Linux software RAID in a PostgreSQL context.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Mon, 07 Dec 2009 17:43:20 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID card recommendation" }, { "msg_contents": "Karl Denninger wrote:\n> Most common SSDs will NOT come up on the 3ware cards at present. Not\n> sure why as of yet - I've tried several.\n> \nRight, and they're being rather weasly at \nhttp://www.3ware.com/kb/Article.aspx?id=15470 talking about it too.\n> Not had the time to screw with them on the ARECA cards yet.\n> \nI know the situation there is much better, like:\nhttp://hothardware.com/News/24-Samsung-SSDs-Linked-Together-for-2GBSec/\n\nSomebody at Newegg has said they got their Areca 1680 working with one \nof the Intel X-25 drives, but wasn't impressed by the write\nperformance of the result. Makes me wonder if the Areca card is messing \nwith the write cache of the drive.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Mon, 07 Dec 2009 17:57:52 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID card recommendation" }, { "msg_contents": "Greg Smith wrote:\n> Craig James wrote:\n>> ... and do I hear you saying that no other vendor is worth\n>> considering? Just how far off are they?\n> I wasn't trying to summarize every possible possibility, just the\n> complicated ones there's some debate over.\n>\n> What else is OK besides Areca and 3ware? HP's P800 is good, albeit\n> not so easy to buy unless you're getting an HP system. The LSI\n> Megaraid stuff and its close relative the Dell PERC6 are OK for some\n> apps too; my intense hatred of Dell usually results in my forgetting\n> about them. (As an example,\n> http://en.wikipedia.org/wiki/ATX#Issues_with_Dell_power_supplies\n> documents what I consider the worst design decision ever made by a PC\n> manufacturer)\n>\n> I don't think any of the other vendors on the market are viable for a\n> Linux system due to driver issues and general low quality, which\n> includes Adaptec, Promise, Highpoint, and all the motherboard Fake\n> RAID stuff from Silicon Image, Intel, Via, etc. I don't feel there's\n> any justification for using those products instead of using a simple\n> SATA controller and Linux software RAID in a PostgreSQL context.\nThe LSI Megaraid (and Intel's repackaging of it, among others) is\nreasonably good under FreeBSD.\n\nPerformance is slightly worse than the 3ware 95xx series boards, but not\nmaterially so.\n\nTheir CLI interface is \"interesting\" (it drops a log file in the working\ndirectly BY DEFAULT unless you tell it otherwise, among other things.) \n\n-- Karl", "msg_date": "Mon, 07 Dec 2009 17:12:31 -0600", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID card recommendation" } ]
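(A concrete sketch of the software RAID route mentioned at the end of that thread, for anyone weighing it against a hardware card: on Linux, a 6-drive RAID-10 set on a plain SATA controller is roughly the following -- the device names are placeholders and the read-ahead value is a starting point to benchmark, not a recommendation:

    mdadm --create /dev/md0 --level=10 --raid-devices=6 /dev/sd[b-g]
    blockdev --setra 16384 /dev/md0    # raise read-ahead, the same tweak discussed above for the 3ware cards

The blockdev read-ahead adjustment applies just as well to hardware RAID block devices.)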
[ { "msg_contents": "Dear All,\n\nThanks very much for your help so far. My understanding of PG is getting \na lot better!\n\nI wonder if I've understood analyze properly: I'm not sure I quite \nunderstand how specific the statistics gathered actually are.\n\n\nIn particular, what happens in the following case:\n 1. I start with have a table with 100 million rows, and column wid has\n linearly distributed values from 45-90. (wid is indexed)\n\n 2. I run vacuum analyze\n\n 3. I insert about 2 million rows, all of which have the new wid of 91.\n\n 4. I then do a select * WHERE wid = 91.\n\nHow smart is analyze? Will it actually say \"well, I've never seen 91 in \nthis table, because all the values only go up to 90, so you'd better do \na sequential scan\"?\n\n\n-----\n\nOn another note, I notice that if I ever manually run vacuum or analyze, \nthe performance of the database drops to the point where many of the \noperators get kicked out. Is there any way to run them \"nice\" ?\n\nWe need to maintain a response time of under 1 second all day for simple \nqueries (which usually run in about 22ms). But Vacuum or Analyze seem to \nlock up the system for a few minutes, during which other queries block \non them, although there is still plenty of CPU spare.\n\n-----\n\n\nAlso, I find that, even with the autovacuum daemon running, there was \none query last night that I had to terminate after an hour. In \ndesperation, I restarted postgres, let it take 15 mins to vacuum the \nentire DB, and then re-ran the query (in 8 minutes)\n\nAny ideas how I can troubleshoot this better? The database is only 30GB \nin total - it should (if my intuition is right) be impossible that any \nsimple select (even over a modestly complex view) should take longer \nthan a multiple of the time required to read all the data from disk?\n\n\n\nThanks very much,\n\nRichard\n\n\n\n\n\n\n\n\n", "msg_date": "Wed, 25 Nov 2009 12:34:26 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "How exactly does Analyze work?" }, { "msg_contents": "On Wednesday 25 November 2009 05:34:26 Richard Neill wrote:\n> Dear All,\n> \n> Thanks very much for your help so far. My understanding of PG is getting\n> a lot better!\n> \n> I wonder if I've understood analyze properly: I'm not sure I quite\n> understand how specific the statistics gathered actually are.\n> \n> \n> In particular, what happens in the following case:\n> 1. I start with have a table with 100 million rows, and column wid has\n> linearly distributed values from 45-90. (wid is indexed)\n> \n> 2. I run vacuum analyze\n> \n> 3. I insert about 2 million rows, all of which have the new wid of 91.\n> \n> 4. I then do a select * WHERE wid = 91.\n> \n> How smart is analyze? Will it actually say \"well, I've never seen 91 in\n> this table, because all the values only go up to 90, so you'd better do\n> a sequential scan\"?\n> \n> \n> -----\n> \n> On another note, I notice that if I ever manually run vacuum or analyze,\n> the performance of the database drops to the point where many of the\n> operators get kicked out. Is there any way to run them \"nice\" ?\n\nincreasing maintenance_work_mem to several GB (if you have the memory) will \nhelp\n\n> \n> We need to maintain a response time of under 1 second all day for simple\n> queries (which usually run in about 22ms). 
But Vacuum or Analyze seem to\n> lock up the system for a few minutes, during which other queries block\n> on them, although there is still plenty of CPU spare.\n> \n> -----\n> \n> \n> Also, I find that, even with the autovacuum daemon running, there was\n> one query last night that I had to terminate after an hour. In\n> desperation, I restarted postgres, let it take 15 mins to vacuum the\n> entire DB, and then re-ran the query (in 8 minutes)\n> \n> Any ideas how I can troubleshoot this better? The database is only 30GB\n> in total - it should (if my intuition is right) be impossible that any\n> simple select (even over a modestly complex view) should take longer\n> than a multiple of the time required to read all the data from disk?\n> \n> \n> \n> Thanks very much,\n> \n> Richard\n> \n", "msg_date": "Wed, 25 Nov 2009 08:22:01 -0700", "msg_from": "Kevin Kempter <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How exactly does Analyze work?" }, { "msg_contents": "Richard Neill <[email protected]> writes:\n> In particular, what happens in the following case:\n> 1. I start with have a table with 100 million rows, and column wid has\n> linearly distributed values from 45-90. (wid is indexed)\n\n> 2. I run vacuum analyze\n\n> 3. I insert about 2 million rows, all of which have the new wid of 91.\n\n> 4. I then do a select * WHERE wid = 91.\n\n> How smart is analyze? Will it actually say \"well, I've never seen 91 in \n> this table, because all the values only go up to 90, so you'd better do \n> a sequential scan\"?\n\nANALYZE is not magic. The system won't know that the 91's are there\nuntil you re-ANALYZE (either manually or automatically). In a case\nlike this I expect the planner would assume there are very few matching\nrows and go for an indexscan. That might still be the right thing given\nthis specific scenario (need to fetch 2% of the table), but it certainly\nwouldn't be if you had say half of the table matching the query.\nMoral: re-ANALYZE after any bulk load.\n\n> On another note, I notice that if I ever manually run vacuum or analyze, \n> the performance of the database drops to the point where many of the \n> operators get kicked out. Is there any way to run them \"nice\" ?\n\nSee vacuum_cost_delay.\n\n> We need to maintain a response time of under 1 second all day for simple \n> queries (which usually run in about 22ms). But Vacuum or Analyze seem to \n> lock up the system for a few minutes, during which other queries block \n> on them, although there is still plenty of CPU spare.\n\nIt sounds to me like you don't really have enough disk I/O bandwidth\nto meet your performance requirements. All the CPU in the world won't\nhelp you if you didn't spend any money on the disks :-(. You might be\nable to alleviate this with vacuum_cost_delay, but it's a band-aid.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 25 Nov 2009 10:22:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How exactly does Analyze work? " } ]
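(To make the advice in that thread concrete: the table and column names below are the ones from the example above, and the numbers are only illustrative:

    -- after the bulk load of the wid = 91 rows
    ANALYZE x (wid);        -- or simply ANALYZE x; to refresh statistics for all columns

    # postgresql.conf, to throttle the I/O done by manual VACUUM / ANALYZE
    vacuum_cost_delay = 20ms
    vacuum_cost_limit = 200

A manual ANALYZE of a single table is normally quick, since it only reads a sample of the rows rather than the whole table.)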
[ { "msg_contents": "Dear All,\n\nI'm wondering whether Vacuum/analyse (notably by the autovaccuum daemon) \nis responsible for some deadlocks/dropouts I'm seeing.\n\nOne particular table gets hit about 5 times a second (for single row \nupdates and inserts) + associated index changes. This is a very light \nload for the hardware; we have 7 CPU cores idling, and very little disk \nactivity. The query normally runs in about 20 ms.\n\nHowever, the query must always respond within 200ms, or userspace gets \nnasty errors. [we're routing books on a sorter machine, and the book \nmisses its exit opportunity]. Although this is a low load, it's a bit \nlike a heartbeat.\n\nThe question is, could the autovacuum daemon (running either in vacuum \nor in analyse mode) be taking out locks on this table that sometimes \ncause the query response time to go way up (exceeding 10 seconds)?\n\nI think I've set up autovacuum to do \"little and often\", using\n autovacuum_vacuum_cost_delay = 20ms\n autovacuum_vacuum_cost_limit = 20\nbut I'm not sure this is doing exactly what I think it is. In \nparticular, the system-wide I/O (and CPU) limit of autovacuum is \nnegligible, but it's possible that queries may be waiting on locks.\n\nIn particular, I want to make sure that the autovacuum daemon never \nholds any lock for more than about 50ms at a time. (or will release it \nimmediately if something else wants it)\n\nOr am I barking up the wrong tree entirely?\n\nThanks,\n\nRichard\n", "msg_date": "Thu, 26 Nov 2009 16:20:35 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Analyse without locking?" }, { "msg_contents": "On Thu, Nov 26, 2009 at 4:20 PM, Richard Neill <[email protected]> wrote:\n\n> Dear All,\n>\n> I'm wondering whether Vacuum/analyse (notably by the autovaccuum daemon) is\n> responsible for some deadlocks/dropouts I'm seeing.\n>\n> One particular table gets hit about 5 times a second (for single row\n> updates and inserts) + associated index changes. This is a very light load\n> for the hardware; we have 7 CPU cores idling, and very little disk activity.\n> The query normally runs in about 20 ms.\n>\n> However, the query must always respond within 200ms, or userspace gets\n> nasty errors. [we're routing books on a sorter machine, and the book misses\n> its exit opportunity]. Although this is a low load, it's a bit like a\n> heartbeat.\n>\n> The question is, could the autovacuum daemon (running either in vacuum or\n> in analyse mode) be taking out locks on this table that sometimes cause the\n> query response time to go way up (exceeding 10 seconds)?\n>\n> I think I've set up autovacuum to do \"little and often\", using\n> autovacuum_vacuum_cost_delay = 20ms\n> autovacuum_vacuum_cost_limit = 20\n>\n\nthose are basically thresholds. So in essence you are forcing your\nautovacuum to be active pretty often,\n\nAnd from what I can read here, you are looking for completely opposite\nbehaviour. Unless you think statistical image of your table will be\ncompletely invalid, after 20 modifications to it, which I am sure is not\ntrue.\n\n\n\n\n-- \nGJ\n\nOn Thu, Nov 26, 2009 at 4:20 PM, Richard Neill <[email protected]> wrote:\nDear All,\n\nI'm wondering whether Vacuum/analyse (notably by the autovaccuum daemon) is responsible for some deadlocks/dropouts I'm seeing.\n\nOne particular table gets hit about 5 times a second (for single row updates and inserts) + associated index changes. 
This is a very light load for the hardware; we have 7 CPU cores idling, and very little disk activity. The query normally runs in about 20 ms.\n\nHowever, the query must always respond within 200ms, or userspace gets nasty errors.  [we're routing books on a sorter machine, and the book misses its exit opportunity]. Although this is a low load, it's a bit like a heartbeat.\n\nThe question is, could the autovacuum daemon (running either in vacuum or in analyse mode) be taking out locks on this table that sometimes cause the query response time to go way up (exceeding 10 seconds)?\n\nI think I've set up autovacuum to do \"little and often\", using\n  autovacuum_vacuum_cost_delay = 20ms\n  autovacuum_vacuum_cost_limit = 20those are basically thresholds. So in essence you are forcing your autovacuum to be active pretty often, And from what I can read here, you are looking for completely opposite behaviour. Unless you think statistical image of your table will be completely invalid, after 20 modifications to it, which I am sure is not true.\n-- GJ", "msg_date": "Thu, 26 Nov 2009 16:26:30 +0000", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Analyse without locking?" }, { "msg_contents": "Richard Neill <[email protected]> writes:\n> I'm wondering whether Vacuum/analyse (notably by the autovaccuum daemon) \n> is responsible for some deadlocks/dropouts I'm seeing.\n\n> One particular table gets hit about 5 times a second (for single row \n> updates and inserts) + associated index changes. This is a very light \n> load for the hardware; we have 7 CPU cores idling, and very little disk \n> activity. The query normally runs in about 20 ms.\n\n> However, the query must always respond within 200ms, or userspace gets \n> nasty errors. [we're routing books on a sorter machine, and the book \n> misses its exit opportunity]. Although this is a low load, it's a bit \n> like a heartbeat.\n\n> The question is, could the autovacuum daemon (running either in vacuum \n> or in analyse mode) be taking out locks on this table that sometimes \n> cause the query response time to go way up (exceeding 10 seconds)?\n\nHmm. Autovacuum does sometimes take an exclusive lock. It is supposed\nto release it \"on demand\" but if I recall the details correctly, that\ncould involve a delay of about deadlock_timeout, or 1s by default.\nIt would be reasonable to reduce deadlock_timeout to 100ms to ensure\nyour external constraint is met.\n\nDelays of up to 10s would not be explained by that though. Do you have\nusage spikes of other types? I wonder in particular if you've got\ncheckpoints smoothed out enough.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Nov 2009 11:26:45 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Analyse without locking? " }, { "msg_contents": "On Thursday 26 November 2009 17:20:35 Richard Neill wrote:\n> Dear All,\n> \n> I'm wondering whether Vacuum/analyse (notably by the autovaccuum daemon)\n> is responsible for some deadlocks/dropouts I'm seeing.\n> \n> One particular table gets hit about 5 times a second (for single row\n> updates and inserts) + associated index changes. This is a very light\n> load for the hardware; we have 7 CPU cores idling, and very little disk\n> activity. The query normally runs in about 20 ms.\n> \n> However, the query must always respond within 200ms, or userspace gets\n> nasty errors. 
[we're routing books on a sorter machine, and the book\n> misses its exit opportunity]. Although this is a low load, it's a bit\n> like a heartbeat.\n> \n> The question is, could the autovacuum daemon (running either in vacuum\n> or in analyse mode) be taking out locks on this table that sometimes\n> cause the query response time to go way up (exceeding 10 seconds)?\n> \n> I think I've set up autovacuum to do \"little and often\", using\n> autovacuum_vacuum_cost_delay = 20ms\n> autovacuum_vacuum_cost_limit = 20\n> but I'm not sure this is doing exactly what I think it is. In\n> particular, the system-wide I/O (and CPU) limit of autovacuum is\n> negligible, but it's possible that queries may be waiting on locks.\n> \n> In particular, I want to make sure that the autovacuum daemon never\n> holds any lock for more than about 50ms at a time. (or will release it\n> immediately if something else wants it)\n> \n> Or am I barking up the wrong tree entirely?\nI would suggest enabling log_log_wait and setting deadlock_timeout to a low \nvalue - should give you more information.\n\nAndres\n", "msg_date": "Thu, 26 Nov 2009 18:26:11 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Analyse without locking?" }, { "msg_contents": "Richard Neill wrote:\n> Or am I barking up the wrong tree entirely?\nIf you haven't already tuned checkpoint behavior, it's more likely \nthat's causing a dropout than autovacuum. See the checkpoint_segments \nsection of http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server \nfor an intro.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Fri, 27 Nov 2009 06:51:33 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Analyse without locking?" }, { "msg_contents": "Greg Smith wrote:\n> Richard Neill wrote:\n>> Or am I barking up the wrong tree entirely?\n> If you haven't already tuned checkpoint behavior, it's more likely \n> that's causing a dropout than autovacuum. See the checkpoint_segments \n> section of http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server \n> for an intro.\n> \n\nGreg Smith wrote:\n > Richard Neill wrote:\n >> Or am I barking up the wrong tree entirely?\n > If you haven't already tuned checkpoint behavior, it's more likely\n > that's causing a dropout than autovacuum. See the checkpoint_segments\n > section of http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n > for an intro.\n >\n\nThanks - I did that already - it's currently\n checkpoint_segments = 64\n\nNow, I understand that increasing checkpoint_segments is generally a \ngood thing (subject to some limit), but doesn't that just mean that \ninstead of say a 1 second outage every minute, it's a 10 second outage \nevery 10 minutes?\n\nAlso, correct me if I'm wrong, but mere selects shouldn't cause any \naddition to the WAL. I'd expect that a simple row insert might require \nperhaps 1kB of disk writes(*), in which case we're looking at only a few \nkB/sec at most of writes in normal use.?\n\nIs it possible (or even sensible) to do a manual vacuum analyze with \nnice/ionice?\n\nRichard\n\n\n\n(*)A typical write should be about 80 Bytes of data, in terms of how \nmuch is actually being stored. 
I'm using the engineers' \"rule of 10\" \napproximation to call that 1kB, based on indexes, and incomplete pages.\n\n", "msg_date": "Sat, 28 Nov 2009 17:57:11 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Analyse without locking?" }, { "msg_contents": "Richard Neill <[email protected]> writes:\n> Now, I understand that increasing checkpoint_segments is generally a \n> good thing (subject to some limit), but doesn't that just mean that \n> instead of say a 1 second outage every minute, it's a 10 second outage \n> every 10 minutes?\n\nIn recent PG versions you can spread the checkpoint I/O out over a\nperiod of time, so it shouldn't be an \"outage\" at all, just background\nload. Other things being equal, a longer checkpoint cycle is better\nsince it improves the odds of being able to coalesce multiple changes\nto the same page into a single write. The limiting factor is your\nthreshold of pain on how much WAL-replay work would be needed to recover\nafter a crash.\n\n> Is it possible (or even sensible) to do a manual vacuum analyze with \n> nice/ionice?\n\nThere's no support for that in PG. You could try manually renice'ing\nthe backend that's running your VACUUM but I'm not sure how well it\nwould work; there are a number of reasons why it might be\ncounterproductive. Fooling with the vacuum_cost_delay parameters is the\nrecommended way to make a vacuum run slower and use less of the machine.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 28 Nov 2009 14:21:06 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Analyse without locking? " }, { "msg_contents": "Richard Neill wrote:\n> Now, I understand that increasing checkpoint_segments is generally a \n> good thing (subject to some limit), but doesn't that just mean that \n> instead of say a 1 second outage every minute, it's a 10 second outage \n> every 10 minutes?\nThat was the case in versions before 8.3. Now, the I/O is spread out \nover most of the next checkpoint's time period. So what actually \nhappens is that all the I/O that happens over 10 minutes will be spread \nout over the next five minutes of time. With the defaults, there's so \nlittle time between checkpoints under heavy writes that the spreading \ndoesn't have enough room to work, leading to higher write bursts.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Sat, 28 Nov 2009 17:28:39 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Analyse without locking?" }, { "msg_contents": "Thanks for your explanations.\n\nTom Lane wrote:\n> Richard Neill <[email protected]> writes:\n>> Now, I understand that increasing checkpoint_segments is generally a \n>> good thing (subject to some limit), but doesn't that just mean that \n>> instead of say a 1 second outage every minute, it's a 10 second outage \n>> every 10 minutes?\n> \n> In recent PG versions you can spread the checkpoint I/O out over a\n> period of time, so it shouldn't be an \"outage\" at all, just background\n> load. Other things being equal, a longer checkpoint cycle is better\n> since it improves the odds of being able to coalesce multiple changes\n> to the same page into a single write. The limiting factor is your\n> threshold of pain on how much WAL-replay work would be needed to recover\n> after a crash.\n\nThat makes sense. 
I think that 64 is sane - it means crash-recovery \ntakes less than 1 minute, yet we aren't seeing the warning that \ncheckpoints are too frequent.\n> \n>> Is it possible (or even sensible) to do a manual vacuum analyze with \n>> nice/ionice?\n> \n> There's no support for that in PG. You could try manually renice'ing\n> the backend that's running your VACUUM but I'm not sure how well it\n> would work; there are a number of reasons why it might be\n> counterproductive. Fooling with the vacuum_cost_delay parameters is the\n> recommended way to make a vacuum run slower and use less of the machine.\n\nI see why it might not work well - priority inversion etc.\n\nWhat I was trying to achieve is to say that vacuum can have all the \nspare idle CPU/IO that's available, but must *immediately* back off when \nsomething else needs the CPU/IO/Locks.\n\nFor example,\n nice -n 20 yes > /dev/null\n ionice -c 3 dd if=/dev/zero > tmp.del\n\nwill both get quite a lot of work done on a medium-loaded system (try \nthis on your own laptop), but have zero impact on other processes.\n\nOn the other hand, changing vacuum_cost_delay means that vacuum runs \nslowly even if the CPU is otherwise idle; yet it still impacts on the \nresponsiveness of some queries.\n\n\nRichard\n", "msg_date": "Sun, 29 Nov 2009 01:31:23 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Analyse without locking?" }, { "msg_contents": "Dear All,\n\nI'm still puzzled by this one - it looks like it's causing about 5% of \nqueries to rise in duration from ~300ms to 2-6 seconds.\n\nOn the other hand, the system never seems to be I/O bound. (we have at \nleast 25 MB/sec of write bandwidth, and use a small fraction of that \nnormally).\n\nHere's the typical checkpoint logs:\n\n2009-12-03 06:21:21 GMT LOG: checkpoint complete: wrote 12400 buffers \n(2.2%); 0 transaction log file(s) added, 0 removed, 12 recycled; \nwrite=149.883 s, sync=5.143 s, total=155.040 s\n\nWe're using 8.4.1, on ext4 with SSD. Is it possible that something \nexotic is occurring to do with write barriers (on by default in ext4, \nand we haven't changed this).\n\nPerhaps a low priority IO process for writing the previous WAL to disk \nis blocking a high-priority transaction (which is trying to write to the \nnew WAL). If the latter is trying to sync, could the large amount of \nlower priority IO be getting in the way thanks to write barriers?\n\nIf so, can I safely turn off write barriers?\n\nThanks,\n\nRichard\n\n\nP.S. Should I rename this thread?\n\n\n\n\nRichard Neill wrote:\n> Dear All,\n> \n> It definitely looks checkpoint-related - the checkpoint timeout is set \n> to 5 minutes, and here is a graph of our response time (in ms) over a 1 \n> hour period. The query is pretty much identical each time.\n> \n> Any ideas what I could do to make checkpoints not hurt performance like \n> this?\n> \n> Thanks,\n> \n> Richard\n> \n> \n> \n> Tom Lane wrote:\n>> Richard Neill <[email protected]> writes:\n>>> Now, I understand that increasing checkpoint_segments is generally a \n>>> good thing (subject to some limit), but doesn't that just mean that \n>>> instead of say a 1 second outage every minute, it's a 10 second \n>>> outage every 10 minutes?\n>>\n>> In recent PG versions you can spread the checkpoint I/O out over a\n>> period of time, so it shouldn't be an \"outage\" at all, just background\n>> load. 
Other things being equal, a longer checkpoint cycle is better\n>> since it improves the odds of being able to coalesce multiple changes\n>> to the same page into a single write. The limiting factor is your\n>> threshold of pain on how much WAL-replay work would be needed to recover\n>> after a crash.\n> \n> \n> ------------------------------------------------------------------------\n> \n", "msg_date": "Thu, 03 Dec 2009 06:23:12 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Analyse without locking?" }, { "msg_contents": "Richard Neill wrote:\n> On the other hand, the system never seems to be I/O bound. (we have at \n> least 25 MB/sec of write bandwidth, and use a small fraction of that \n> normally).\nI would bet that if you sampled vmstat or iostat every single second, \nyou'd discover there's a large burst in write speed for the same few \nseconds that queries are stuck. If you're averaging out the data over a \n5 second or longer period, you'll never see it--the spike will get lost \nin the average. You just can't monitor checkpoint spikes unless you're \nwatching I/O with an extremely tight time resolution. Watching the \n\"Writeback\" figure in /proc/meminfo is helpful too, that is where I \nnormally see everything jammed up.\n\n> Here's the typical checkpoint logs:\n> 2009-12-03 06:21:21 GMT LOG: checkpoint complete: wrote 12400 buffers \n> (2.2%); 0 transaction log file(s) added, 0 removed, 12 recycled; \n> write=149.883 s, sync=5.143 s, total=155.040 s\nSee that \"sync\" number there? That's your problem; while that sync \noperation is going on, everybody else is grinding to a halt waiting for \nit. Not a coincidence that the duration is about the same amount of \ntime that your queries are getting stuck. This example shows 12400 \nbuffers = 97MB of total data written. Since those writes are pretty \nrandom I/O, it's easily possible to get stuck for a few seconds waiting \nfor that much data to make it out to disk. You only gave the write \nphase a couple of minutes to spread things out over; meanwhile, Linux \nmay not even bother starting to write things out until 30 seconds into \nthat, so the effective time between when writes to disk start and when \nthe matching sync happens on your system is extremely small. That's not \ngood--you have to give that several minutes of breathing room if you \nwant to avoid checkpoint spikes.\n\n> We're using 8.4.1, on ext4 with SSD. Is it possible that something \n> exotic is occurring to do with write barriers (on by default in ext4, \n> and we haven't changed this).\n> Perhaps a low priority IO process for writing the previous WAL to disk \n> is blocking a high-priority transaction (which is trying to write to \n> the new WAL). If the latter is trying to sync, could the large amount \n> of lower priority IO be getting in the way thanks to write barriers?\n> If so, can I safely turn off write barriers?\nLinux is pretty dumb in general here. fsync operations will usually end \nup writing out way more of the OS buffer cache than they need to. And \nthe write cache can get quite big before pdflush decides it should \nactually do some work, the whole thing is optimized for throughput \nrather than latency. I don't really trust barriers at all, so I don't \nknow if there's some specific tuning you can do with those to improve \nthings. 
Your whole system is bleeding edge craziness IMHO--SSD, ext4, \nwrite barriers, all stuff that just doesn't work reliably yet far as I'm \nconcerned.\n\n...but that's not what you want to hear. When I can suggest that should \nhelp is increasing checkpoint_segments (>32), checkpoint_timeout (>=10 \nminutes), checkpoint_completion_target (0.9), and lowering the amount of \nwrites Linux will cache before it gets more aggressive about flushing \nthem. Those things will fight the root cause of the problem, by giving \nmore time between the \"write\" and \"sync\" phases of the checkpoint. It's \nok if \"write\" takes a long while, decreasing the \"sync\" number is your \ngoal you need to keep your eye on.\n\nI've written a couple of articles on this specific topic if you want \nmore background on the underlying issues, it's kind of heavy reading:\n\nhttp://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm\nhttp://www.westnet.com/~gsmith/content/linux-pdflush.htm\nhttp://notemagnet.blogspot.com/2008/08/linux-write-cache-mystery.html\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Thu, 03 Dec 2009 02:04:01 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Checkpoint spikes" }, { "msg_contents": "On Sat, Nov 28, 2009 at 6:57 PM, Richard Neill <[email protected]> wrote:\n> Greg Smith wrote:\n>>\n>> Richard Neill wrote:\n>>>\n>>> Or am I barking up the wrong tree entirely?\n>>\n>> If you haven't already tuned checkpoint behavior, it's more likely that's\n>> causing a dropout than autovacuum.  See the checkpoint_segments section of\n>> http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server for an intro.\n>>\n>\n> Greg Smith wrote:\n>> Richard Neill wrote:\n>>> Or am I barking up the wrong tree entirely?\n>> If you haven't already tuned checkpoint behavior, it's more likely\n>> that's causing a dropout than autovacuum.  See the checkpoint_segments\n>> section of http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n>> for an intro.\n>>\n>\n> Thanks - I did that already - it's currently\n>   checkpoint_segments = 64\n>\n> Now, I understand that increasing checkpoint_segments is generally a good\n> thing (subject to some limit), but doesn't that just mean that instead of\n> say a 1 second outage every minute, it's a 10 second outage every 10\n> minutes?\n>\n> Also, correct me if I'm wrong, but mere selects shouldn't cause any addition\n> to the WAL. 
I'd expect that a simple row insert might require perhaps 1kB of\n> disk writes(*), in which case we're looking at only a few kB/sec at most of\n> writes in normal use.?\n>\n> Is it possible (or even sensible) to do a manual vacuum analyze with\n> nice/ionice?\n\nthis is the job of autovacuum_vacuum_cost_delay and vacuum_cost_delay.\n\nAbout checkpoint, you may eventually set :\nsynchronous_commit = off\n\nPlease note that you may loose some queries if the server badly crash.\n(but that shouldn't cause database corruption like a fsync = off)\n\nIf you are running on linux, you could try to monitor (rrd is your\nfriend) /proc/meminfo and specifically the \"Dirty\" field.\n\nRead your syslog log to see if the checkpoint is a problem.\nHere is a sample of mine (cleaned) :\ncheckpoint complete: wrote 3117 buffers (1.2%); 0 transaction log\nfile(s) added, 0 removed, 3 recycled;\nwrite=280.213 s, sync=0.579 s, total=280.797 s\n\nThe more Dirty page (/proc/meminfo), the longer is your sync time.\nA high sync time can easily \"lock\" your server.\n\nTo reduce the dirty page, tune /proc/sys/vm/dirty_background_ratio\nI have set it to \"1\" on my 32GB servers.\n\nYou should also be carefull about all the other\n/proc/sys/vm/dirty_*\nAnd specifically /proc/sys/vm/dirty_ratio :\nMaximum percentage of total memory that can be filled with dirty pages\nbefore processes are forced to write dirty buffers themselves during\ntheir time slice instead of being allowed to do more writes.\nNote that all processes are blocked for writes when this happens, not\njust the one that filled the write buffers.\n\nAbout \"ionice\" : it only work with the CFQ I/O Scheduler.\nAnd CFQ is a very bad idea when using postgresql.\n\n-- \nLaurent \"ker2x\" Laborde\nSysadmin & DBA at http://www.over-blog.com/\n", "msg_date": "Thu, 3 Dec 2009 10:44:28 +0100", "msg_from": "Laurent Laborde <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Analyse without locking?" }, { "msg_contents": "Greg Smith wrote:\n> Richard Neill wrote:\n>> Here's the typical checkpoint logs:\n>> 2009-12-03 06:21:21 GMT LOG: checkpoint complete: wrote 12400 buffers\n>> (2.2%); 0 transaction log file(s) added, 0 removed, 12 recycled;\n>> write=149.883 s, sync=5.143 s, total=155.040 s\n> See that \"sync\" number there? That's your problem; while that sync\n> operation is going on, everybody else is grinding to a halt waiting for\n> it. Not a coincidence that the duration is about the same amount of\n> time that your queries are getting stuck. This example shows 12400\n> buffers = 97MB of total data written. Since those writes are pretty\n> random I/O, it's easily possible to get stuck for a few seconds waiting\n> for that much data to make it out to disk. You only gave the write\n> phase a couple of minutes to spread things out over; meanwhile, Linux\n> may not even bother starting to write things out until 30 seconds into\n> that, so the effective time between when writes to disk start and when\n> the matching sync happens on your system is extremely small. That's not\n> good--you have to give that several minutes of breathing room if you\n> want to avoid checkpoint spikes.\n\nI wonder how common this issue is? When we implemented spreading of the\nwrite phase, we had long discussions about spreading out the fsyncs too,\nbut in the end it wasn't done. 
Perhaps it is time to revisit that now\nthat 8.3 has been out for some time and people have experience with the\nload-distributed checkpoints.\n\nI'm not sure how the spreading of the fsync()s should work, it's hard to\nestimate how long each fsync() is going to take, for example, but surely\nsomething would be better than nothing.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 03 Dec 2009 13:27:26 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Checkpoint spikes" }, { "msg_contents": "Heikki Linnakangas wrote:\n> I wonder how common this issue is? When we implemented spreading of the\n> write phase, we had long discussions about spreading out the fsyncs too,\n> but in the end it wasn't done. Perhaps it is time to revisit that now\n> that 8.3 has been out for some time and people have experience with the\n> load-distributed checkpoints.\n> \nCirca 8.2, I ran into checkpoint problems all the time. With the \nspreading logic in 8.3, properly setup, the worst case is so improved \nthat I usually find something else more pressing to tune, rather than \nworry about the exact details of the sync process. It seems to have hit \nthe \"good enough\" point where it's hard to justify time for further \nimprovements when there are other things to work on. I'd still like to \nsee spread fsync happen one day, just hasn't been a priority for any \nsystems I have to improve lately.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Thu, 03 Dec 2009 15:57:31 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Checkpoint spikes" }, { "msg_contents": "Dear All,\n\nThanks for all your help so far. This page was particularly helpful:\nhttp://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm\n(does the advice for 8.3 apply unchanged to 8.4?)\n\nI'm still hitting issues with this though: sync is taking 7-10 seconds\nand I need to get it down to nearer 3.\n\nWe're running a lightly-loaded system which has a realtime response \nrequirement of 0.5 seconds all the time (with a few seconds permissible \noccasionally, but never exceeding 10).\n\nSo far, I've set checkpoint_segments to 128, timeout to 10min, and \ncompletion_target to 0.8. 
This helps, but not as much as I'd hoped.\n\nBut I haven't touched any of the other WAL or BG Writer settings.\n\nWhere should I look next?\n Should I be looking at the BG Writer settings,\n or should I look at the Linux VM configuration?\n (eg changing /proc/sys/vm/dirty_background_ratio from 5 to 1)\n\n Or would it be most useful to try to move the WAL to a different disk?\n\n\nLatest messages:\n\n# tail -f /var/log/postgresql/postgresql-8.4-main.log | grep check\n\n2009-12-08 09:12:00 GMT LOG: checkpoint starting: time\n2009-12-08 09:20:09 GMT LOG: checkpoint complete: wrote 51151 buffers \n(8.9%); 0 transaction log file(s) added, 0 removed, 23 recycled; \nwrite=479.669 s, sync=9.852 s, total=489.553 s\n\n2009-12-08 09:22:00 GMT LOG: checkpoint starting: time\n2009-12-08 09:30:07 GMT LOG: checkpoint complete: wrote 45772 buffers \n(7.9%); 0 transaction log file(s) added, 0 removed, 24 recycled; \nwrite=479.706 s, sync=7.337 s, total=487.120 s\n\n2009-12-08 09:32:00 GMT LOG: checkpoint starting: time\n2009-12-08 09:40:09 GMT LOG: checkpoint complete: wrote 47043 buffers \n(8.2%); 0 transaction log file(s) added, 0 removed, 22 recycled; \nwrite=479.744 s, sync=9.300 s, total=489.122 s\n\n2009-12-08 09:42:00 GMT LOG: checkpoint starting: time\n2009-12-08 09:50:07 GMT LOG: checkpoint complete: wrote 48210 buffers \n(8.4%); 0 transaction log file(s) added, 0 removed, 23 recycled; \nwrite=479.689 s, sync=7.707 s, total=487.416 s\n\n\nThanks a lot,\n\nRichard\n\n\n", "msg_date": "Tue, 08 Dec 2009 10:07:28 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Checkpoint spikes" }, { "msg_contents": "Dear All,\n\nThanks for all your help so far. This page was particularly helpful:\nhttp://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm\n(does the advice for 8.3 apply unchanged to 8.4?)\n\nI'm still hitting issues with this though: sync is taking 7-10 seconds\nand I need to get it down to nearer 3.\n\nWe're running a lightly-loaded system which has a realtime response\nrequirement of 0.5 seconds all the time (with a few seconds permissible\noccasionally, but never exceeding 10).\n\nSo far, I've set checkpoint_segments to 128, timeout to 10min, and\ncompletion_target to 0.8. 
This helps, but not as much as I'd hoped.\n\nBut I haven't touched any of the other WAL or BG Writer settings.\n\nWhere should I look next?\n Should I be looking at the BG Writer settings,\n or should I look at the Linux VM configuration?\n (eg changing /proc/sys/vm/dirty_background_ratio from 5 to 1)\n\n Or would it be most useful to try to move the WAL to a different disk?\n\n\nLatest messages:\n\n# tail -f /var/log/postgresql/postgresql-8.4-main.log | grep check\n\n2009-12-08 09:12:00 GMT LOG: checkpoint starting: time\n2009-12-08 09:20:09 GMT LOG: checkpoint complete: wrote 51151 buffers\n(8.9%); 0 transaction log file(s) added, 0 removed, 23 recycled;\nwrite=479.669 s, sync=9.852 s, total=489.553 s\n\n2009-12-08 09:22:00 GMT LOG: checkpoint starting: time\n2009-12-08 09:30:07 GMT LOG: checkpoint complete: wrote 45772 buffers\n(7.9%); 0 transaction log file(s) added, 0 removed, 24 recycled;\nwrite=479.706 s, sync=7.337 s, total=487.120 s\n\n2009-12-08 09:32:00 GMT LOG: checkpoint starting: time\n2009-12-08 09:40:09 GMT LOG: checkpoint complete: wrote 47043 buffers\n(8.2%); 0 transaction log file(s) added, 0 removed, 22 recycled;\nwrite=479.744 s, sync=9.300 s, total=489.122 s\n\n2009-12-08 09:42:00 GMT LOG: checkpoint starting: time\n2009-12-08 09:50:07 GMT LOG: checkpoint complete: wrote 48210 buffers\n(8.4%); 0 transaction log file(s) added, 0 removed, 23 recycled;\nwrite=479.689 s, sync=7.707 s, total=487.416 s\n\n\nThanks a lot,\n\nRichard\n\n\n\n", "msg_date": "Tue, 08 Dec 2009 10:08:02 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Checkpoint spikes" }, { "msg_contents": "Richard Neill <[email protected]> wrote:\n \n> So far, I've set checkpoint_segments to 128, timeout to 10min, and\n> completion_target to 0.8. This helps, but not as much as I'd\n> hoped.\n> \n> But I haven't touched any of the other WAL or BG Writer settings.\n> \n> Where should I look next?\n \nOn our web servers, where we had similar issues, we seem to be doing\nOK using:\n \nbgwriter_lru_maxpages = 1000\nbgwriter_lru_multiplier = 4.0\n \nThe other thing which can help this problem is keeping\nshared_buffers smaller than most people recommend. We use 512MB on\nour larger web server and 256MB on other servers. (Be sure to test\nwith your actual load, as this might or might not degrade overall\nperformance.)\n \n-Kevin\n", "msg_date": "Tue, 08 Dec 2009 09:05:04 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Checkpoint spikes" }, { "msg_contents": "Richard Neill wrote:\n> (does the advice for 8.3 apply unchanged to 8.4?)\nYes; no changes in this area for 8.4. The main things performance \nrelated that changed between 8.3 and 8.4 are:\n1) VACUUM free space management reimplemented so that the max_fsm_* \nparameters aren't needed anymore\n2) default_statistics_target now starts at 100 instead of 10\n\n> So far, I've set checkpoint_segments to 128, timeout to 10min, and\n> completion_target to 0.8. This helps, but not as much as I'd hoped.\nGood, if the problem is moving in the right direction you're making \nprogress.\n\n> But I haven't touched any of the other WAL or BG Writer settings.\n> Where should I look next?\n> Should I be looking at the BG Writer settings,\n> or should I look at the Linux VM configuration?\n> (eg changing /proc/sys/vm/dirty_background_ratio from 5 to 1)\nI would start by reducing dirty_background_ratio; as RAM sizes climb, \nthis keeps becoming a bigger issue. 
The whole disk flushing code \nfinally got a major overhaul in the 2.6.32 Linux kernel, I'm hoping this \nwhole class of problem was improved from the changes made.\n\nChanges to the background writer behavior will probably not work as \nyou'd expect. The first thing I'd try it in your situation turning it \noff altogether; it can be slightly counterproductive for reducing \ncheckpoint issues if they're really bad, which yours are. If that goes \nin the wrong direction, experimenting with increasing the maximum pages \nand the multiplier might be useful, I wouldn't bet on it helping through.\n\nAs Kevin already mentioned, reducing the size of the buffer cache can \nhelp too. That's worth trying if you're exhausted the other obvious \npossibilities.\n\n> Or would it be most useful to try to move the WAL to a different disk?\nOn Linux having the WAL on a separate disk can improve things much more \nthan you might expect, simply because of how brain-dead the filesystem \nfsync implementation is. Reducing the seeks for WAL traffic can help a \nlot too.\n\nIf you've lowered Linux's caching, tried some BGW tweaks, and moved the \nWAL to somewhere else, if latency is still high you may be facing a \nhardware upgrade to improve things. Sometimes these problems just \nrequire more burst write throughput (regardless of how good average \nperformance looks) and nothing else will substitute. Hopefully you'll \nfind a tuning solution before that though.\n\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Tue, 08 Dec 2009 21:05:40 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Checkpoint spikes" }, { "msg_contents": "On Wednesday 09 December 2009 03:05:40 Greg Smith wrote:\n> On Linux having the WAL on a separate disk can improve things much more\n> than you might expect, simply because of how brain-dead the filesystem\n> fsync implementation is. Reducing the seeks for WAL traffic can help a\n> lot too.\nNot using ext3's data=ordered helps massively already. \n\nAndres\n", "msg_date": "Wed, 9 Dec 2009 03:13:06 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Checkpoint spikes" } ]
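A compact sketch of the knobs discussed in this thread, for reference. The values are only the ones the posters mention for their own hardware (PostgreSQL 8.3/8.4 on Linux) and are starting points to test, not general recommendations:

# postgresql.conf -- spread checkpoint I/O out over more time
checkpoint_segments = 64                 # 32-128 used in this thread
checkpoint_timeout = 10min
checkpoint_completion_target = 0.8       # Greg Smith suggests up to 0.9
bgwriter_lru_maxpages = 1000             # Kevin Grittner's web-server settings
bgwriter_lru_multiplier = 4.0
#shared_buffers = 512MB                  # smaller than usual can shrink the sync burst
#synchronous_commit = off                # may lose the last few commits on a crash (no corruption)

# /etc/sysctl.conf -- make Linux start flushing dirty pages sooner
vm.dirty_background_ratio = 1            # value Laurent Laborde reports using on 32GB machines
# vm.dirty_ratio can be lowered as well; the thread names the knob but gives no value

Judge any change against the "sync=" figure in the checkpoint log lines and against Dirty/Writeback in /proc/meminfo, rather than against average throughput.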
[ { "msg_contents": "Hi,\n\nI am trying to run postgresql functions with threads by using OpenMP. \nI tried to parallelize slot_deform_tuple function(src/backend/access/ \ncommon/heaptuple.c) and added below lines to the code.\n\n#pragma omp parallel\n{\n\t#pragma omp sections\n\t{\n\t\t#pragma omp section\n\t\tvalues[attnum] = fetchatt(thisatt, tp + off);\n\n\t\t#pragma omp section\n\t\toff = att_addlength_pointer(off, thisatt->attlen, tp + off);\n\t}\n}\n\nDuring ./configure I saw the information message for heaptuple.c as \nbelow:\n\"OpenMP defined section was parallelized.\"\n\nBelow is the configure that I have run:\n./configure CC=\"/path/to/icc -openmp\" CFLAGS=\"-O2\" --prefix=/path/to/ \npgsql --bindir=/path/to/pgsql/bin --datadir=/path/to/pgsql/share -- \nsysconfdir=/path/to/pgsql/etc --libdir=/path/to/pgsql/lib -- \nincludedir=/path/to/pgsql/include --mandir=/path/to/pgsql/man --with- \npgport=65432 --with-readline --without-zlib\n\nAfter configure I ran gmake and gmake install and I saw \"PostgreSQL \ninstallation complete.\"\n\nWhen I begin to configure for initdb and run below command:\n /path/to/pgsql/bin/initdb -D /path/to/pgsql/data\n\nI get following error:\n\nThe files belonging to this database system will be owned by user \n\"reydan.cankur\".\nThis user must also own the server process.\n\nThe database cluster will be initialized with locale en_US.UTF-8.\nThe default database encoding has accordingly been set to UTF8.\nThe default text search configuration will be set to \"english\".\n\nfixing permissions on existing directory /path/to/pgsql/data ... ok\ncreating subdirectories ... ok\nselecting default max_connections ... 100\nselecting default shared_buffers ... 32MB\ncreating configuration files ... ok\ncreating template1 database in /path/to/pgsql/data/base/1 ... FATAL: \ncould not create unique index \"pg_type_typname_nsp_index\"\nDETAIL: Table contains duplicated values.\nchild process exited with exit code 1\ninitdb: removing contents of data directory \"/path/to/pgsql/data\"\n\nI could not get the point between initdb process and the change that I \nhave made.\nI need your help on solution of this issue.\n\nThanks in advance,\nReydan\n\n\n\n\nHi,I am trying to run postgresql functions with threads by using OpenMP. 
I tried to parallelize slot_deform_tuple function(src/backend/access/common/heaptuple.c) and added below lines to the code.#pragma omp parallel{ #pragma omp sections { #pragma omp section values[attnum] = fetchatt(thisatt, tp + off);  #pragma omp section off = att_addlength_pointer(off, thisatt->attlen, tp + off);  }}During ./configure I saw the information message for  heaptuple.c as below:\"OpenMP defined section was parallelized.\"Below is the configure that I have run:./configure CC=\"/path/to/icc -openmp\" CFLAGS=\"-O2\" --prefix=/path/to/pgsql --bindir=/path/to/pgsql/bin --datadir=/path/to/pgsql/share --sysconfdir=/path/to/pgsql/etc --libdir=/path/to/pgsql/lib --includedir=/path/to/pgsql/include --mandir=/path/to/pgsql/man --with-pgport=65432 --with-readline --without-zlibAfter configure I ran gmake and gmake install and I saw \"PostgreSQL installation complete.\"When I begin to configure for initdb and run below command: /path/to/pgsql/bin/initdb -D /path/to/pgsql/dataI get following error:The files belonging to this database system will be owned by user \"reydan.cankur\".This user must also own the server process.The database cluster will be initialized with locale en_US.UTF-8.The default database encoding has accordingly been set to UTF8.The default text search configuration will be set to \"english\".fixing permissions on existing directory /path/to/pgsql/data ... okcreating subdirectories ... okselecting default max_connections ... 100selecting default shared_buffers ... 32MBcreating configuration files ... okcreating template1 database in /path/to/pgsql/data/base/1 ... FATAL:  could not create unique index \"pg_type_typname_nsp_index\"DETAIL:  Table contains duplicated values.child process exited with exit code 1initdb: removing contents of data directory \"/path/to/pgsql/data\"I could not get the point between initdb process and the change that I have made.I need your help on solution of this issue.Thanks in advance,Reydan", "msg_date": "Sat, 28 Nov 2009 14:10:32 +0200", "msg_from": "Reydan Cankur <[email protected]>", "msg_from_op": true, "msg_subject": "OpenMP in PostgreSQL-8.4.0" }, { "msg_contents": "Sounds more like a school project than a proper performance question.\n\nOn 11/28/09, Reydan Cankur <[email protected]> wrote:\n> Hi,\n>\n> I am trying to run postgresql functions with threads by using OpenMP.\n> I tried to parallelize slot_deform_tuple function(src/backend/access/\n> common/heaptuple.c) and added below lines to the code.\n>\n> #pragma omp parallel\n> {\n> \t#pragma omp sections\n> \t{\n> \t\t#pragma omp section\n> \t\tvalues[attnum] = fetchatt(thisatt, tp + off);\n>\n> \t\t#pragma omp section\n> \t\toff = att_addlength_pointer(off, thisatt->attlen, tp + off);\n> \t}\n> }\n>\n> During ./configure I saw the information message for heaptuple.c as\n> below:\n> \"OpenMP defined section was parallelized.\"\n>\n> Below is the configure that I have run:\n> ./configure CC=\"/path/to/icc -openmp\" CFLAGS=\"-O2\" --prefix=/path/to/\n> pgsql --bindir=/path/to/pgsql/bin --datadir=/path/to/pgsql/share --\n> sysconfdir=/path/to/pgsql/etc --libdir=/path/to/pgsql/lib --\n> includedir=/path/to/pgsql/include --mandir=/path/to/pgsql/man --with-\n> pgport=65432 --with-readline --without-zlib\n>\n> After configure I ran gmake and gmake install and I saw \"PostgreSQL\n> installation complete.\"\n>\n> When I begin to configure for initdb and run below command:\n> /path/to/pgsql/bin/initdb -D /path/to/pgsql/data\n>\n> I get following error:\n>\n> The files belonging to this database 
system will be owned by user\n> \"reydan.cankur\".\n> This user must also own the server process.\n>\n> The database cluster will be initialized with locale en_US.UTF-8.\n> The default database encoding has accordingly been set to UTF8.\n> The default text search configuration will be set to \"english\".\n>\n> fixing permissions on existing directory /path/to/pgsql/data ... ok\n> creating subdirectories ... ok\n> selecting default max_connections ... 100\n> selecting default shared_buffers ... 32MB\n> creating configuration files ... ok\n> creating template1 database in /path/to/pgsql/data/base/1 ... FATAL:\n> could not create unique index \"pg_type_typname_nsp_index\"\n> DETAIL: Table contains duplicated values.\n> child process exited with exit code 1\n> initdb: removing contents of data directory \"/path/to/pgsql/data\"\n>\n> I could not get the point between initdb process and the change that I\n> have made.\n> I need your help on solution of this issue.\n>\n> Thanks in advance,\n> Reydan\n>\n>\n>\n>\n", "msg_date": "Sat, 28 Nov 2009 09:21:02 -0500", "msg_from": "Denis Lussier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OpenMP in PostgreSQL-8.4.0" }, { "msg_contents": "Reydan Cankur <[email protected]> writes:\n> I am trying to run postgresql functions with threads by using OpenMP. \n\nThis is pretty much doomed to failure. It's *certainly* doomed to\nfailure if you just hack up one area of the source code without dealing\nwith the backend's general lack of support for threading.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 28 Nov 2009 11:42:56 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OpenMP in PostgreSQL-8.4.0 " }, { "msg_contents": "You mean that backend does not support threading and everything I try \nis useless\nIs there a way to overcome this issue?\nIs there anything I can adjust on backend to enable threading?\nIs there any documentation to advise?\n\nBest Regards,\nReydan\n\n\nOn Nov 28, 2009, at 6:42 PM, Tom Lane wrote:\n\n> Reydan Cankur <[email protected]> writes:\n>> I am trying to run postgresql functions with threads by using OpenMP.\n>\n> This is pretty much doomed to failure. It's *certainly* doomed to\n> failure if you just hack up one area of the source code without \n> dealing\n> with the backend's general lack of support for threading.\n>\n> \t\t\tregards, tom lane\n\n", "msg_date": "Sat, 28 Nov 2009 23:00:42 +0200", "msg_from": "Reydan Cankur <[email protected]>", "msg_from_op": true, "msg_subject": "Re: OpenMP in PostgreSQL-8.4.0 " }, { "msg_contents": "Reydan Cankur wrote:\n> You mean that backend does not support threading and everything I try \n> is useless\n> Is there a way to overcome this issue?\n> Is there anything I can adjust on backend to enable threading?\n> Is there any documentation to advise?\n\nUh, \"no\" to all those questions. We offer client-side threading, but\nnot in the server.\n\n---------------------------------------------------------------------------\n\n\n> \n> Best Regards,\n> Reydan\n> \n> \n> On Nov 28, 2009, at 6:42 PM, Tom Lane wrote:\n> \n> > Reydan Cankur <[email protected]> writes:\n> >> I am trying to run postgresql functions with threads by using OpenMP.\n> >\n> > This is pretty much doomed to failure. 
It's *certainly* doomed to\n> > failure if you just hack up one area of the source code without \n> > dealing\n> > with the backend's general lack of support for threading.\n> >\n> > \t\t\tregards, tom lane\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Sun, 29 Nov 2009 08:05:31 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OpenMP in PostgreSQL-8.4.0" }, { "msg_contents": "So I am trying to understand that can anyone rewrite some functions in \npostgresql with OpenMP in order to increase performance.\ndoes this work?\n\nOn Nov 29, 2009, at 3:05 PM, Bruce Momjian wrote:\n\n> Reydan Cankur wrote:\n>> You mean that backend does not support threading and everything I try\n>> is useless\n>> Is there a way to overcome this issue?\n>> Is there anything I can adjust on backend to enable threading?\n>> Is there any documentation to advise?\n>\n> Uh, \"no\" to all those questions. We offer client-side threading, but\n> not in the server.\n>\n> ---------------------------------------------------------------------------\n>\n>\n>>\n>> Best Regards,\n>> Reydan\n>>\n>>\n>> On Nov 28, 2009, at 6:42 PM, Tom Lane wrote:\n>>\n>>> Reydan Cankur <[email protected]> writes:\n>>>> I am trying to run postgresql functions with threads by using \n>>>> OpenMP.\n>>>\n>>> This is pretty much doomed to failure. It's *certainly* doomed to\n>>> failure if you just hack up one area of the source code without\n>>> dealing\n>>> with the backend's general lack of support for threading.\n>>>\n>>> \t\t\tregards, tom lane\n>>\n>>\n>> -- \n>> Sent via pgsql-performance mailing list ([email protected] \n>> )\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>\n> -- \n> Bruce Momjian <[email protected]> http://momjian.us\n> EnterpriseDB http://enterprisedb.com\n>\n> + If your life is a hard drive, Christ can be your backup. +\n\n", "msg_date": "Sun, 29 Nov 2009 15:24:30 +0200", "msg_from": "Reydan Cankur <[email protected]>", "msg_from_op": true, "msg_subject": "Re: OpenMP in PostgreSQL-8.4.0" }, { "msg_contents": "Reydan Cankur <[email protected]> writes:\n> So I am trying to understand that can anyone rewrite some functions in \n> postgresql with OpenMP in order to increase performance.\n> does this work?\n\nNot without doing a truly vast amount of infrastructure work first.\nInfrastructure work that, by and large, would add cycles and lose\nperformance. 
So by the time you got to the point of being able to\ndo micro-optimizations like parallelizing individual functions, you'd\nhave dug a pretty large performance hole that you'd have to climb out\nof before showing any net benefit for all this work.\n\nIf you search the PG archives for discussions of threading you should\nfind lots and lots of prior material.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 29 Nov 2009 09:52:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OpenMP in PostgreSQL-8.4.0 " }, { "msg_contents": "On Sun, Nov 29, 2009 at 1:24 PM, Reydan Cankur <[email protected]> wrote:\n> So I am trying to understand that can anyone rewrite some functions in\n> postgresql with OpenMP in order to increase performance.\n> does this work?\n\nWell you have to check the code path you're parallelizing for any\nfunction calls which might manipulate any data structures and protect\nthose data structures with locks. That will be a huge job and\nintroduce extra overhead. If you try to find code which does nothing\nlike that you'll be limited to a few low-level pieces of code because\nPostgres goes to great lengths to be generic and allow\nuser-configurable code in lots of places. To give one example, the\nnatural place to introduce parallelism would be in the sorting\nroutines -- but the comparison routine is a data-type-specific\nfunction that users can specify at the SQL level and is allowed to do\nalmost anything.\n\nThen you'll have to worry about things like signal handlers. Anything\nbig enough to be worth parallelizing is going to have a\nCHECK_FOR_INTERRUPTS in it which you'll have to make sure gets\nreceived by and processed correctly, cancelling all threads and\nthrowing an error properly.\n\nCome to think of it you'll have to handle PG_TRY() and PG_THROW()\nproperly. That will mean if an error occurs in any thread you have to\nmake sure that you kill all the threads that have been spawned in that\nPG_TRY block and throw the correct error up.\n\nIncidentally I doubt heap_deformtuple is suitable for parallelization.\nIt loops over the tuple and the procesing for each field depends\ncompletely on the previous one. When you have that kind of chained\ndependency adding threads doesn't help. You need a loop somewhere\nwhere each iteration of the loop can be processed independently. You\nmight find such loops in the executor for things like hash joins or\nnested loops. But they will definitely involve user-defined functions\nand even i/o for each iteration of the loop so you'll definitely have\nto take precautions against the usual multi-threading dangers.\n\n-- \ngreg\n", "msg_date": "Sun, 29 Nov 2009 15:34:06 +0000", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OpenMP in PostgreSQL-8.4.0" } ]
[ { "msg_contents": "Regards to all the list.\nZFS, the new filesystem developed by the Solaris Development team and \nported to FreeBSD too, have many advantages that can do that all \nsysadmins are questioned\nabout if it is a good filesystem to the PostgreSQL installation.\nAny of you haved tested this filesystem like PostgreSQL installation fs?\nRegards.\n\n\n-- \n-------------------------------------\n\"TIP 4: No hagas 'kill -9' a postmaster\"\nIng. Marcos Lu�s Ort�z Valmaseda\nPostgreSQL System DBA \nCentro de Tecnolog�as de Almacenamiento y An�lis de Datos (CENTALAD)\nUniversidad de las Ciencias Inform�ticas\n\nLinux User # 418229\nhttp://www.postgresql-es.org\nhttp://www.postgresql.org\nhttp://www.planetpostgresql.org\n\n\n\n", "msg_date": "Sun, 29 Nov 2009 22:42:24 +0100", "msg_from": "=?ISO-8859-1?Q?=22Ing_=2E_Marcos_Lu=EDs_Ort=EDz_Valmaseda?=\n\t=?ISO-8859-1?Q?=22?= <[email protected]>", "msg_from_op": true, "msg_subject": "Any have tested ZFS like PostgreSQL installation filesystem?" }, { "msg_contents": "Ivan Voras escribió:\n> Ing . Marcos Luís Ortíz Valmaseda wrote:\n>> Regards to all the list.\n>> ZFS, the new filesystem developed by the Solaris Development team and \n>> ported to FreeBSD too, have many advantages that can do that all \n>> sysadmins are questioned\n>> about if it is a good filesystem to the PostgreSQL installation.\n>> Any of you haved tested this filesystem like PostgreSQL installation fs?\n>\n> It will work but as to if it is a good file system for databases, the \n> debate still goes on.\n>\n> Here are some links about ZFS and databases:\n>\n> http://blogs.sun.com/paulvandenbogaard/entry/postgresql_on_ufs_versus_zfs\n> http://blogs.sun.com/paulvandenbogaard/entry/running_postgresql_on_zfs_file \n>\n> http://blogs.sun.com/realneel/entry/mysql_innodb_zfs_best_practices\n> http://dev.mysql.com/tech-resources/articles/mysql-zfs.html\n> http://blogs.smugmug.com/don/2008/10/13/zfs-mysqlinnodb-compression-update/ \n>\n>\n> A separate issue (I think it is not explored enough in the above \n> links) is that ZFS writes data in a semi-continuous log, meaning there \n> are no in-place modifications of files (every such write is made on a \n> different place), which leads to heavy fragmentation. I don't think I \n> have seen a study of this particular effect. OTOH, it will only matter \n> if the DB usage pattern is sequential reads and lots of updates - and \n> even here it might be hidden by internal DB data fragmentation.\n>\n>\n>\nOK, thanks for the answers, I ´ll study the efects now. This tests was \nwith the FreeBSD-8.0 version?\n\nRegards.\n\n\n-- \n-------------------------------------\n\"TIP 4: No hagas 'kill -9' a postmaster\"\nIng. Marcos Luís Ortíz Valmaseda\nPostgreSQL System DBA \nCentro de Tecnologías de Almacenamiento y Anális de Datos (CENTALAD)\nUniversidad de las Ciencias Informáticas\n\nLinux User # 418229\nhttp://www.postgresql-es.org\nhttp://www.postgresql.org\nhttp://www.planetpostgresql.org\n\n\n\n", "msg_date": "Mon, 30 Nov 2009 07:27:09 +0100", "msg_from": "=?UTF-8?B?IkluZyAuIE1hcmNvcyBMdcOtcyBPcnTDrXogVmFsbWFzZWRhIg==?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any have tested ZFS like PostgreSQL installation filesystem?" }, { "msg_contents": "Ing . 
Marcos Luís Ortíz Valmaseda wrote:\n> Regards to all the list.\n> ZFS, the new filesystem developed by the Solaris Development team and \n> ported to FreeBSD too, have many advantages that can do that all \n> sysadmins are questioned\n> about if it is a good filesystem to the PostgreSQL installation.\n> Any of you haved tested this filesystem like PostgreSQL installation fs?\n\nIt will work but as to if it is a good file system for databases, the \ndebate still goes on.\n\nHere are some links about ZFS and databases:\n\nhttp://blogs.sun.com/paulvandenbogaard/entry/postgresql_on_ufs_versus_zfs\nhttp://blogs.sun.com/paulvandenbogaard/entry/running_postgresql_on_zfs_file\nhttp://blogs.sun.com/realneel/entry/mysql_innodb_zfs_best_practices\nhttp://dev.mysql.com/tech-resources/articles/mysql-zfs.html\nhttp://blogs.smugmug.com/don/2008/10/13/zfs-mysqlinnodb-compression-update/\n\nA separate issue (I think it is not explored enough in the above links) \nis that ZFS writes data in a semi-continuous log, meaning there are no \nin-place modifications of files (every such write is made on a different \nplace), which leads to heavy fragmentation. I don't think I have seen a \nstudy of this particular effect. OTOH, it will only matter if the DB \nusage pattern is sequential reads and lots of updates - and even here it \nmight be hidden by internal DB data fragmentation.\n\n\n", "msg_date": "Mon, 30 Nov 2009 13:00:47 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any have tested ZFS like PostgreSQL installation filesystem?" }, { "msg_contents": "Ing . Marcos Luís Ortíz Valmaseda wrote:\n> Ivan Voras escribió:\n>> Ing . Marcos Luís Ortíz Valmaseda wrote:\n>>> Regards to all the list.\n>>> ZFS, the new filesystem developed by the Solaris Development team and \n>>> ported to FreeBSD too, have many advantages that can do that all \n>>> sysadmins are questioned\n>>> about if it is a good filesystem to the PostgreSQL installation.\n>>> Any of you haved tested this filesystem like PostgreSQL installation fs?\n>>\n>> It will work but as to if it is a good file system for databases, the \n>> debate still goes on.\n>>\n>> Here are some links about ZFS and databases:\n>>\n>> http://blogs.sun.com/paulvandenbogaard/entry/postgresql_on_ufs_versus_zfs\n>> http://blogs.sun.com/paulvandenbogaard/entry/running_postgresql_on_zfs_file \n>>\n>> http://blogs.sun.com/realneel/entry/mysql_innodb_zfs_best_practices\n>> http://dev.mysql.com/tech-resources/articles/mysql-zfs.html\n>> http://blogs.smugmug.com/don/2008/10/13/zfs-mysqlinnodb-compression-update/ \n>>\n>>\n>> A separate issue (I think it is not explored enough in the above \n>> links) is that ZFS writes data in a semi-continuous log, meaning there \n>> are no in-place modifications of files (every such write is made on a \n>> different place), which leads to heavy fragmentation. I don't think I \n>> have seen a study of this particular effect. OTOH, it will only matter \n>> if the DB usage pattern is sequential reads and lots of updates - and \n>> even here it might be hidden by internal DB data fragmentation.\n>>\n>>\n>>\n> OK, thanks for the answers, I ´ll study the efects now. This tests was \n> with the FreeBSD-8.0 version?\n\nNo, AFAIK all of them were on some (and different) versions of \n(Open)Solaris.\n\n", "msg_date": "Mon, 30 Nov 2009 13:33:46 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any have tested ZFS like PostgreSQL installation filesystem?" } ]
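The fragmentation effect Ivan mentions (copy-on-write relocating every rewritten block) can be probed directly rather than guessed at. A rough sketch, assuming a scratch database on each candidate filesystem and identical hardware; the table and column names are made up for the test, and only the relative timings between filesystems mean anything:

-- build an update-heavy scratch table (roughly a few hundred MB)
CREATE TABLE frag_test (id integer PRIMARY KEY, payload text);
INSERT INTO frag_test
    SELECT g, repeat('x', 200) FROM generate_series(1, 2000000) g;

-- rewrite a large fraction of the rows a couple of times, then clean up
UPDATE frag_test SET payload = repeat('y', 200) WHERE id % 3 = 0;
UPDATE frag_test SET payload = repeat('z', 200) WHERE id % 2 = 0;
VACUUM ANALYZE frag_test;

-- time a sequential scan (\timing in psql), ideally after dropping the OS cache,
-- and compare the ZFS figure against the same run on UFS/ext3
SELECT count(*) FROM frag_test;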
[ { "msg_contents": "Friendly greetings !\nI use postgresql 8.3.6.\n\nhere is a few info about the table i'm querying :\n-------------------------------------------------------------\n- select count(*) from _article : 17301610\n- select count(*) from _article WHERE (_article.bitfield && getbit(0)) : 6729\n\n\nHere are both request with problems :\n--------------------------------------------------\n\nQUERY 1 : Very fast !\n-------------\n\nexplain SELECT *\nFROM _article\nWHERE (_article.bitfield && getbit(0))\nORDER BY _article.id ASC\nLIMIT 500;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------\n Limit (cost=66114.13..66115.38 rows=500 width=1114)\n -> Sort (cost=66114.13..66157.37 rows=17296 width=1114)\n Sort Key: id\n -> Bitmap Heap Scan on _article (cost=138.32..65252.29\nrows=17296 width=1114)\n Recheck Cond: (bitfield && B'1'::bit varying)\n -> Bitmap Index Scan on idx_article_bitfield\n(cost=0.00..134.00 rows=17296 width=0)\n Index Cond: (bitfield && B'1'::bit varying)\n\n\n\n\nQUERY 2 : Endless ... (more than 30mn... i stopped the query)\n-------------\n\nexplain SELECT *\nFROM _article\nWHERE (_article.bitfield && getbit(0))\nORDER BY _article.id ASC\nLIMIT 5;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------\n Limit (cost=0.00..2042.87 rows=5 width=1114)\n -> Index Scan using _article_pkey on _article\n(cost=0.00..7066684.46 rows=17296 width=1114)\n Filter: (bitfield && B'1'::bit varying)\n(3 rows)\n\n\nWith LIMIT 5 and LIMIT 500, the query plan are differents.\nPostgresql estimate that it can do a a simple index scan to find only 5 row.\nWith more than LIMIT ~400 it estimate that it's faster to do a more\ncomplex plan.\nand it make sense !\n\nThe problem is in the order by, of course.\nIf i remove the \"order by\" the LIMIT 5 is faster (0.044 ms) and do an\nindex scan.\nAt limit 500 (without order) it still use an index scan and it is\nslightly slower.\nAt limit 5000 (without order) it switch to a Bitmap Index Scan +\nBitmap Heap Scan and it's slower but acceptable (5.275 ms)\n\nWhy, with the \"QUERY 2\", postgresql doesn't estimate the cost of the\nSort/ORDER BY ?\nOf course, by ignoring the order, both query plan are right and the\nchoice for thoses differents plans totally make sense.\n\nBut... if the planner would be kind enough to considerate the cost of\nthe order by, it would certainly choose the Bitmap Index + Bitmap Heap\nscan for the limit 5.\nAnd not an index_scan pkey !\n\nI have set the statistics to 1000 for _article.bitfield, just in case\n(and ran a vacuum analyze), it doesn't change anything.\n\nIs that a bug ? any Idea ?\n\nThank you :)\n\n-- \nLaurent \"ker2x\" Laborde\nSysadmin & DBA at http://www.over-blog.com/\n", "msg_date": "Mon, 30 Nov 2009 17:54:03 +0100", "msg_from": "Laurent Laborde <[email protected]>", "msg_from_op": true, "msg_subject": "Cost of sort/order by not estimated by the query planner" }, { "msg_contents": "hummm.... 
Adding pgsql-perf :)\n\nOn Mon, Nov 30, 2009 at 5:54 PM, Laurent Laborde <[email protected]> wrote:\n> Friendly greetings !\n> I use postgresql 8.3.6.\n>\n> here is a few info about the table i'm querying :\n> -------------------------------------------------------------\n> - select count(*) from _article : 17301610\n> - select count(*) from _article WHERE (_article.bitfield && getbit(0)) : 6729\n>\n>\n> Here are both request with problems :\n> --------------------------------------------------\n>\n> QUERY 1 : Very fast !\n> -------------\n>\n> explain SELECT *\n> FROM   _article\n> WHERE (_article.bitfield && getbit(0))\n> ORDER BY _article.id ASC\n> LIMIT 500;\n>                                             QUERY PLAN\n> -----------------------------------------------------------------------------------------------------\n>  Limit  (cost=66114.13..66115.38 rows=500 width=1114)\n>   ->  Sort  (cost=66114.13..66157.37 rows=17296 width=1114)\n>         Sort Key: id\n>         ->  Bitmap Heap Scan on _article  (cost=138.32..65252.29\n> rows=17296 width=1114)\n>               Recheck Cond: (bitfield && B'1'::bit varying)\n>               ->  Bitmap Index Scan on idx_article_bitfield\n> (cost=0.00..134.00 rows=17296 width=0)\n>                     Index Cond: (bitfield && B'1'::bit varying)\n>\n>\n>\n>\n> QUERY 2 : Endless ... (more than 30mn... i stopped the query)\n> -------------\n>\n> explain SELECT *\n> FROM   _article\n> WHERE (_article.bitfield && getbit(0))\n> ORDER BY _article.id ASC\n> LIMIT 5;\n>                                           QUERY PLAN\n> -------------------------------------------------------------------------------------------------\n>  Limit  (cost=0.00..2042.87 rows=5 width=1114)\n>   ->  Index Scan using _article_pkey on _article\n> (cost=0.00..7066684.46 rows=17296 width=1114)\n>         Filter: (bitfield && B'1'::bit varying)\n> (3 rows)\n>\n>\n> With LIMIT 5 and LIMIT 500, the query plan are differents.\n> Postgresql estimate that it can do a a simple index scan to find only 5 row.\n> With more than LIMIT ~400 it estimate that it's faster to do a more\n> complex plan.\n> and it make sense !\n>\n> The problem is in the order by, of course.\n> If i remove the \"order by\" the LIMIT 5 is faster (0.044 ms) and do an\n> index scan.\n> At limit 500 (without order) it still use an index scan and it is\n> slightly slower.\n> At limit 5000 (without order) it switch to a Bitmap Index Scan +\n> Bitmap Heap Scan and it's slower but acceptable (5.275 ms)\n>\n> Why, with the \"QUERY 2\", postgresql doesn't estimate the cost of the\n> Sort/ORDER BY ?\n> Of course, by ignoring the order, both query plan are right and the\n> choice for thoses differents plans totally make sense.\n>\n> But... if the planner would be kind enough to considerate the cost of\n> the order by, it would certainly choose the Bitmap Index + Bitmap Heap\n> scan for the limit 5.\n> And not an index_scan pkey !\n>\n> I have set the statistics to 1000 for _article.bitfield, just in case\n> (and ran a vacuum analyze), it doesn't change anything.\n>\n> Is that a bug ? 
any Idea ?\n>\n> Thank you :)\n>\n> --\n> Laurent \"ker2x\" Laborde\n> Sysadmin & DBA at http://www.over-blog.com/\n>\n\n\n\n-- \nLaurent \"ker2x\" Laborde\nSysadmin & DBA at http://www.over-blog.com/\n", "msg_date": "Wed, 2 Dec 2009 12:13:35 +0100", "msg_from": "Laurent Laborde <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cost of sort/order by not estimated by the query planner" }, { "msg_contents": "On Wed, Dec 2, 2009 at 11:13 AM, Laurent Laborde <[email protected]> wrote:\n>>                                             QUERY PLAN\n>> -----------------------------------------------------------------------------------------------------\n>>  Limit  (cost=66114.13..66115.38 rows=500 width=1114)\n>>   ->  Sort  (cost=66114.13..66157.37 rows=17296 width=1114)\n>>         Sort Key: id\n>>         ->  Bitmap Heap Scan on _article  (cost=138.32..65252.29\n>> rows=17296 width=1114)\n>>               Recheck Cond: (bitfield && B'1'::bit varying)\n>>               ->  Bitmap Index Scan on idx_article_bitfield\n>> (cost=0.00..134.00 rows=17296 width=0)\n>>                     Index Cond: (bitfield && B'1'::bit varying)\n\n\nUhm, what kind of index is idx_article_bitfield?\n\n\n\n-- \ngreg\n", "msg_date": "Wed, 2 Dec 2009 12:42:16 +0000", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cost of sort/order by not estimated by the query planner" }, { "msg_contents": "On Wed, Dec 2, 2009 at 1:42 PM, Greg Stark <[email protected]> wrote:\n> On Wed, Dec 2, 2009 at 11:13 AM, Laurent Laborde <[email protected]> wrote:\n>>>                                             QUERY PLAN\n>>> -----------------------------------------------------------------------------------------------------\n>>>  Limit  (cost=66114.13..66115.38 rows=500 width=1114)\n>>>   ->  Sort  (cost=66114.13..66157.37 rows=17296 width=1114)\n>>>         Sort Key: id\n>>>         ->  Bitmap Heap Scan on _article  (cost=138.32..65252.29\n>>> rows=17296 width=1114)\n>>>               Recheck Cond: (bitfield && B'1'::bit varying)\n>>>               ->  Bitmap Index Scan on idx_article_bitfield\n>>> (cost=0.00..134.00 rows=17296 width=0)\n>>>                     Index Cond: (bitfield && B'1'::bit varying)\n>\n>\n> Uhm, what kind of index is idx_article_bitfield?\n\nMmm, i forgot about that ! It's in a GIN index.\n\"idx_article_bitfield\" gin (bitfield), tablespace \"indexspace\"\n\n\n-- \nLaurent \"ker2x\" Laborde\nSysadmin & DBA at http://www.over-blog.com/\n", "msg_date": "Wed, 2 Dec 2009 13:47:16 +0100", "msg_from": "Laurent Laborde <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cost of sort/order by not estimated by the query planner" }, { "msg_contents": "On Wed, Dec 2, 2009 at 11:13 AM, Laurent Laborde <[email protected]> wrote:\n>>                                           QUERY PLAN\n>> -------------------------------------------------------------------------------------------------\n>>  Limit  (cost=0.00..2042.87 rows=5 width=1114)\n>>   ->  Index Scan using _article_pkey on _article\n>> (cost=0.00..7066684.46 rows=17296 width=1114)\n>>         Filter: (bitfield && B'1'::bit varying)\n>\n\nAh, I missed this the first time around. It's scanning _article_pkey\nhere. Ie, it's scanning the table from the oldest to the newest\narticle assuming that the values wihch satisfy that constraint are\nevenly distributed and it'll find five of them pretty quickly. 
In\nreality there's a correlation between this bit being set and the value\nof _article.id and all the ones with it set are towards the end.\nPostgres doesn't have any statistics on how multiple columns are\nrelated yet so it can't know this.\n\nIf this is an important query you might try having an index on\n<bitfield,id> or a partial index on \"id where bitfield && B'1' \". The\nlatter sounds like what you really need\n\n-- \ngreg\n", "msg_date": "Wed, 2 Dec 2009 12:47:31 +0000", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cost of sort/order by not estimated by the query planner" }, { "msg_contents": "On Wed, Dec 2, 2009 at 1:47 PM, Greg Stark <[email protected]> wrote:\n> On Wed, Dec 2, 2009 at 11:13 AM, Laurent Laborde <[email protected]> wrote:\n>>>                                           QUERY PLAN\n>>> -------------------------------------------------------------------------------------------------\n>>>  Limit  (cost=0.00..2042.87 rows=5 width=1114)\n>>>   ->  Index Scan using _article_pkey on _article\n>>> (cost=0.00..7066684.46 rows=17296 width=1114)\n>>>         Filter: (bitfield && B'1'::bit varying)\n>>\n>\n> Ah, I missed this the first time around. It's scanning _article_pkey\n> here. Ie, it's scanning the table from the oldest to the newest\n> article assuming that the values wihch satisfy that constraint are\n> evenly distributed and it'll find five of them pretty quickly. In\n> reality there's a correlation between this bit being set and the value\n> of _article.id and all the ones with it set are towards the end.\n> Postgres doesn't have any statistics on how multiple columns are\n> related yet so it can't know this.\n>\n> If this is an important query you might try having an index on\n> <bitfield,id> or a partial index on \"id where bitfield && B'1' \". The\n> latter sounds like what you really need\n\nThere is, indeed, a lot of tricks and hacks.\nMaybe my question was too confusing.\n\nThe question is : why a limit 5 is much much slower than a limit 500 ?\n\nThe problem is in the \"order by\" and not \"finding enough the data that\nmatch the filter\".\nEven if it's not evenly distributed, the queries without \"order by\"\nare much much faster, EVEN when using the \"pkey query plan\".\n\nwithout \"order by\" using the bitmap -> fast\nwithout \"order by\" using the pkey index -> fast\nwith \"order by\" using the bitmap -> fast\nwith \"order by\" using the pkey index -> slowwwwwwwwwwwww\n\n-- \nLaurent \"ker2x\" Laborde\nSysadmin & DBA at http://www.over-blog.com/\n", "msg_date": "Wed, 2 Dec 2009 14:01:55 +0100", "msg_from": "Laurent Laborde <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cost of sort/order by not estimated by the query planner" }, { "msg_contents": "On Wed, Dec 2, 2009 at 8:01 AM, Laurent Laborde <[email protected]> wrote:\n> On Wed, Dec 2, 2009 at 1:47 PM, Greg Stark <[email protected]> wrote:\n>> On Wed, Dec 2, 2009 at 11:13 AM, Laurent Laborde <[email protected]> wrote:\n>>>>                                           QUERY PLAN\n>>>> -------------------------------------------------------------------------------------------------\n>>>>  Limit  (cost=0.00..2042.87 rows=5 width=1114)\n>>>>   ->  Index Scan using _article_pkey on _article\n>>>> (cost=0.00..7066684.46 rows=17296 width=1114)\n>>>>         Filter: (bitfield && B'1'::bit varying)\n>>>\n>>\n>> Ah, I missed this the first time around. It's scanning _article_pkey\n>> here. 
Ie, it's scanning the table from the oldest to the newest\n>> article assuming that the values wihch satisfy that constraint are\n>> evenly distributed and it'll find five of them pretty quickly. In\n>> reality there's a correlation between this bit being set and the value\n>> of _article.id and all the ones with it set are towards the end.\n>> Postgres doesn't have any statistics on how multiple columns are\n>> related yet so it can't know this.\n>>\n>> If this is an important query you might try having an index on\n>> <bitfield,id> or a partial index on \"id where bitfield && B'1' \". The\n>> latter sounds like what you really need\n>\n> There is, indeed, a lot of tricks and hacks.\n> Maybe my question was too confusing.\n>\n> The question is : why a limit 5 is much much slower than a limit 500 ?\n>\n> The problem is in the \"order by\" and not \"finding enough the data that\n> match the filter\".\n> Even if it's not evenly distributed, the queries without \"order by\"\n> are much much faster, EVEN when using the \"pkey query plan\".\n>\n> without \"order by\" using the bitmap -> fast\n> without \"order by\" using the pkey index -> fast\n> with \"order by\" using the bitmap -> fast\n> with \"order by\" using the pkey index -> slowwwwwwwwwwwww\n\nI'm confused. I think you've only shown us two query plans, so it's\nhard to judge what's going on here in the two cases you haven't shown.\n Also, you haven't shown the EXPLAIN ANALYZE output, so it's a bit\ntricky to judge what is really happening.\n\nHowever... as a general rule, the usual reason why the planner makes\nbad decisions with small LIMITs is that it overestimates the impact of\nthe startup cost. If one plan has a startup cost of 1 and a run cost\nof 100, and another plan has a startup cost of 0 and a run cost of\n1000000, the planner will pick the latter plan if a sufficiently small\nfraction of the rows are being fetched (less than a millionth of\nthem). It's easy for the estimates to be off by enough to make this\nis a bad decision, especially if using operations that the planner\ndoesn't have good estimates for (&& may be one such).\n\n...Robert\n", "msg_date": "Wed, 2 Dec 2009 08:17:41 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cost of sort/order by not estimated by the query\n\tplanner" }, { "msg_contents": "On Wed, Dec 2, 2009 at 2:17 PM, Robert Haas <[email protected]> wrote:\n>\n> I'm confused.  
I think you've only shown us two query plans, so it's\n> hard to judge what's going on here in the two cases you haven't shown.\n>  Also, you haven't shown the EXPLAIN ANALYZE output, so it's a bit\n> tricky to judge what is really happening.\n\nI will provide all the explain analyze.\nBut considering that the request with limit 5 take more than an half\nhour (i don't know how much exactly), it will take some times.\nSee you soon :)\n\n-- \nLaurent \"ker2x\" Laborde\nSysadmin & DBA at http://www.over-blog.com/\n", "msg_date": "Wed, 2 Dec 2009 14:20:30 +0100", "msg_from": "Laurent Laborde <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cost of sort/order by not estimated by the query\n\tplanner" }, { "msg_contents": "* without order by, limit 5 : 70ms\n----------------------------------\n explain analyze SELECT *\nFROM _article\nWHERE (_article.bitfield && getbit(0))\nLIMIT 5;\n\nQUERY PLAN :\nLimit (cost=0.00..20.03 rows=5 width=1109) (actual\ntime=70.190..70.265 rows=5 loops=1)\n -> Index Scan using idx_article_bitfield on _article\n(cost=0.00..69290.99 rows=17298 width=1109) (actual\ntime=70.188..70.260 rows=5 loops=1)\n Index Cond: (bitfield && B'1'::bit varying)\n Total runtime: 70.406 ms\n(4 rows)\n\n* without order by, limit 500 (same plan as above) : 371ms\n------------------------------------------------------------------\nexplain analyze SELECT *\nFROM _article\nWHERE (_article.bitfield && getbit(0))\nLIMIT 500;\n\nQUERY PLAN:\n Limit (cost=0.00..2002.86 rows=500 width=1109) (actual\ntime=0.087..371.257 rows=500 loops=1)\n -> Index Scan using idx_article_bitfield on _article\n(cost=0.00..69290.99 rows=17298 width=1109) (actual\ntime=0.086..371.075 rows=500 loops=1)\n Index Cond: (bitfield && B'1'::bit varying)\n Total runtime: 371.369 ms\n\n* without order by, limit 5000 (query plan changed) : 1307ms\n-------------------------------------------------------------------\n explain analyze SELECT *\nFROM _article\nWHERE (_article.bitfield && getbit(0))\nLIMIT 5000;\n\nQUERY PLAN :\n Limit (cost=138.34..18971.86 rows=5000 width=1109) (actual\ntime=53.782..1307.173 rows=5000 loops=1)\n -> Bitmap Heap Scan on _article (cost=138.34..65294.79 rows=17298\nwidth=1109) (actual time=53.781..1305.565 rows=5000 loops=1)\n Recheck Cond: (bitfield && B'1'::bit varying)\n -> Bitmap Index Scan on idx_article_bitfield\n(cost=0.00..134.01 rows=17298 width=0) (actual time=53.606..53.606\nrows=6743 loops=1)\n Index Cond: (bitfield && B'1'::bit varying)\n Total runtime: 1307.972 ms\n\n\nSo... *without* \"order by\", differents limit and different query plan\n: the queries are fast.\n\n* with order by, limit 5 :\n------------------------------\nexplain analyze SELECT *\nFROM _article\nWHERE (_article.bitfield && getbit(0))\nORDER BY _article.id ASC\nLIMIT 5;\n\nQUERY PLAN :\nMmmm.... the query is running since 2h ... 
waiting, waiting.\n\n\n* with order by, limit 500 : 546ms\n-------------------------------\nexplain analyze SELECT *\nFROM _article\nWHERE (_article.bitfield && getbit(0))\nORDER BY _article.id ASC\nLIMIT 500;\nQUERY PLAN :\n Limit (cost=66156.73..66157.98 rows=500 width=1109) (actual\ntime=545.671..545.900 rows=500 loops=1)\n -> Sort (cost=66156.73..66199.98 rows=17298 width=1109) (actual\ntime=545.670..545.766 rows=500 loops=1)\n Sort Key: id\n Sort Method: top-N heapsort Memory: 603kB\n -> Bitmap Heap Scan on _article (cost=138.34..65294.79\nrows=17298 width=1109) (actual time=1.059..541.359 rows=6729 loops=1)\n Recheck Cond: (bitfield && B'1'::bit varying)\n -> Bitmap Index Scan on idx_article_bitfield\n(cost=0.00..134.01 rows=17298 width=0) (actual time=0.922..0.922\nrows=6743 loops=1)\n Index Cond: (bitfield && B'1'::bit varying)\n Total runtime: 546.163 ms\n\n\nNow... with ordery by, different limit, different query plan, the\nlimit 5 query is insanly *SLOW* (while the limit 500 is super fast).\n\nWhat is think : The query planner do not consider the time taken by\nthe order by... which is *much* slower !!\n\n\n-- \nLaurent \"ker2x\" Laborde\nSysadmin & DBA at http://www.over-blog.com/\n", "msg_date": "Wed, 2 Dec 2009 16:32:44 +0100", "msg_from": "Laurent Laborde <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cost of sort/order by not estimated by the query\n\tplanner" }, { "msg_contents": "On Wed, Dec 2, 2009 at 10:32 AM, Laurent Laborde <[email protected]> wrote:\n> * without order by, limit 5 : 70ms\n> ----------------------------------\n>  explain analyze SELECT *\n> FROM   _article\n> WHERE (_article.bitfield && getbit(0))\n> LIMIT 5;\n>\n> QUERY PLAN  :\n> Limit  (cost=0.00..20.03 rows=5 width=1109) (actual\n> time=70.190..70.265 rows=5 loops=1)\n>   ->  Index Scan using idx_article_bitfield on _article\n> (cost=0.00..69290.99 rows=17298 width=1109) (actual\n> time=70.188..70.260 rows=5 loops=1)\n>         Index Cond: (bitfield && B'1'::bit varying)\n>  Total runtime: 70.406 ms\n> (4 rows)\n>\n> * without order by, limit 500 (same plan as above) : 371ms\n> ------------------------------------------------------------------\n> explain analyze SELECT *\n> FROM   _article\n> WHERE (_article.bitfield && getbit(0))\n> LIMIT 500;\n>\n> QUERY PLAN:\n>  Limit  (cost=0.00..2002.86 rows=500 width=1109) (actual\n> time=0.087..371.257 rows=500 loops=1)\n>   ->  Index Scan using idx_article_bitfield on _article\n> (cost=0.00..69290.99 rows=17298 width=1109) (actual\n> time=0.086..371.075 rows=500 loops=1)\n>         Index Cond: (bitfield && B'1'::bit varying)\n>  Total runtime: 371.369 ms\n>\n> * without order by, limit 5000 (query plan changed) : 1307ms\n> -------------------------------------------------------------------\n>  explain analyze SELECT *\n> FROM   _article\n> WHERE (_article.bitfield && getbit(0))\n> LIMIT 5000;\n>\n> QUERY PLAN :\n>  Limit  (cost=138.34..18971.86 rows=5000 width=1109) (actual\n> time=53.782..1307.173 rows=5000 loops=1)\n>   ->  Bitmap Heap Scan on _article  (cost=138.34..65294.79 rows=17298\n> width=1109) (actual time=53.781..1305.565 rows=5000 loops=1)\n>         Recheck Cond: (bitfield && B'1'::bit varying)\n>         ->  Bitmap Index Scan on idx_article_bitfield\n> (cost=0.00..134.01 rows=17298 width=0) (actual time=53.606..53.606\n> rows=6743 loops=1)\n>               Index Cond: (bitfield && B'1'::bit varying)\n>  Total runtime: 1307.972 ms\n>\n>\n> So... 
*without* \"order by\", differents limit and different query plan\n> : the queries are fast.\n>\n> * with order by, limit 5 :\n> ------------------------------\n> explain analyze SELECT *\n> FROM   _article\n> WHERE (_article.bitfield && getbit(0))\n> ORDER BY _article.id ASC\n> LIMIT 5;\n>\n> QUERY PLAN :\n> Mmmm.... the query is running since 2h ... waiting, waiting.\n>\n>\n> * with order by, limit 500 : 546ms\n> -------------------------------\n> explain analyze SELECT *\n> FROM   _article\n> WHERE (_article.bitfield && getbit(0))\n> ORDER BY _article.id ASC\n> LIMIT 500;\n> QUERY PLAN :\n>  Limit  (cost=66156.73..66157.98 rows=500 width=1109) (actual\n> time=545.671..545.900 rows=500 loops=1)\n>   ->  Sort  (cost=66156.73..66199.98 rows=17298 width=1109) (actual\n> time=545.670..545.766 rows=500 loops=1)\n>         Sort Key: id\n>         Sort Method:  top-N heapsort  Memory: 603kB\n>         ->  Bitmap Heap Scan on _article  (cost=138.34..65294.79\n> rows=17298 width=1109) (actual time=1.059..541.359 rows=6729 loops=1)\n>               Recheck Cond: (bitfield && B'1'::bit varying)\n>               ->  Bitmap Index Scan on idx_article_bitfield\n> (cost=0.00..134.01 rows=17298 width=0) (actual time=0.922..0.922\n> rows=6743 loops=1)\n>                     Index Cond: (bitfield && B'1'::bit varying)\n>  Total runtime: 546.163 ms\n>\n>\n> Now... with ordery by, different limit, different query plan, the\n> limit 5 query is insanly *SLOW* (while the limit 500 is super fast).\n>\n> What is think : The query planner do not consider the time taken by\n> the order by... which is *much* slower !!\n\nThat is certainly not the case. If the query planner did not consider\nthe time required to perform a sort, well, that would have been fixed\na lot sooner than now. The problem real problem here is exactly what\nI said upthread. Without order-by, the query planner picks an\nindex-scan or a bitmap-index-scan and just runs it until it gets\nenough rows to satisfy the LIMIT. No problem. With order-by, it has\nto make a decision: should it fetch ALL the rows that satisfy the\nbitfield condition, sort them by article ID, and then pick the top\nfive? Or should it instead use the index on article ID to start\nretrieving the lowest-numbered article IDs and hope to find 5 that\nsatisfy the bitfield condition before it goes through too many rows?\n\nThe answer depends on how frequently the bitfield condition will be\nsatisfied. If most rows in the table satisfy the bitfield condition,\nthen the second plan is better; if very few do, the first plan is\nbetter. Somewhat more subtly, the plan also depends on the LIMIT.\nThe first plan requires almost the same amount of work for a small\nlimit as it does for a large one - you still have to find ALL the rows\nthat match the bitfield condition and sort them. Then you return a\nlarger or smaller number of rows from the result of the sort depending\non the LIMIT. But the amount of work that the second plan requires\nvaries dramatically depending on the LIMIT. If the LIMIT is only\none-hundredth as large (5 instead of 500), then the second plan\nfigures to have to scan only one one-hundredth as many rows, so it\ntakes about a hundredth as much work for LIMIT 5 rather than LIMIT\n500, whereas the cost of the first plan hasn't changed much.\n\nThe exact break-even point between the two plans will vary depending\non what percentage of the rows in the table satisfy the bitmap\ncondition. 
In your particular case, the break-even point is less than\none row, so the first plan is always better, but the planner doesn't\nknow that. I don't think the planner has any statistics that can tell\nit how many of the rows in the table satisfy (_article.bitfield &&\ngetbit(0)), and it's probably estimating high because the actual\nselectivity based on the numbers you provided is quite low. That\nmakes the second plan look appealing for small numbers of rows. If\nthe rows that it needs are clustered at the \"wrong end\" of the\narticle-ID index, which the planner certainly has no way of knowing,\nthen things get really ugly.\n\nI've sometimes thought that the planner should outright discard plans\nwhose total cost (ignoring the effect of LIMIT) is too many orders of\nmagnitude more than some other available plan. But it's hard to know\nwhere to put the cutoff, and there are cases where the planner makes\nthis kind of trade-off and gets it right which we don't want to break,\nso it's not a simple problem. The best solution I've found is to\navoid using expressions that depend on operators other than equality\nwhenever possible. The planner is very smart about equality. It's\nsomewhat smart about >, <, >=, <=, <>, and pretty stupid about most\nother things.\n\n...Robert\n", "msg_date": "Wed, 2 Dec 2009 11:47:44 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cost of sort/order by not estimated by the query\n\tplanner" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> The exact break-even point between the two plans will vary depending\n> on what percentage of the rows in the table satisfy the bitmap\n> condition.\n\nIt's worse than that. The planner is not too bad about understanding\nthe percentage-of-rows problem --- at least, assuming you are using\na condition it has statistics for, which it doesn't for bitvector &&.\nBut whether the indexscan plan is fast will also depend on where the\nmatching rows are in the index ordering. If they're all towards the\nend you can lose big, and the planner hasn't got stats to let it\npredict that. It just assumes the filter condition is uncorrelated\nto the ordering condition.\n\nMy own advice would be to forget the bitmap field and see if you can't\nuse a collection of plain boolean columns instead. 
You might still\nlose if there's a correlation problem, but \"bitfield && B'1'\" is\nabsolutely positively guaranteed to produce stupid row estimates and\nhence bad plan choices.\n\nOr you could work on introducing a non-stupid selectivity estimator\nfor &&, but it's not a trivial project.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 Dec 2009 12:01:24 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cost of sort/order by not estimated by the query planner " }, { "msg_contents": "2009/12/1 Laurent Laborde <[email protected]>:\n> The problem is in the order by, of course.\n> If i remove the \"order by\" the LIMIT 5 is faster (0.044 ms) and do an\n> index scan.\n> At limit 500 (without order) it still use an index scan and it is\n> slightly slower.\n> At limit 5000 (without order) it switch to a Bitmap Index Scan +\n> Bitmap Heap Scan and it's slower but acceptable (5.275 ms)\n>\n> Why, with the \"QUERY 2\", postgresql doesn't estimate the cost of the\n> Sort/ORDER BY ?\n> Of course, by ignoring the order, both query plan are right and the\n> choice for thoses differents plans totally make sense.\n\nIt's because the result of IndexScan is already sorted by demanded\nkey, whereas the one of BitmapIndexScan isn't. But I'm not sure why\nthe query lasts more than 30 minutes...\n\n\nRegards,\n\n-- \nHitoshi Harada\n", "msg_date": "Thu, 3 Dec 2009 15:58:39 +0900", "msg_from": "Hitoshi Harada <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cost of sort/order by not estimated by the query\n\tplanner" }, { "msg_contents": "'morning !\n\nAnd here is the query plan for :\n---------------------------------------\nexplain analyze SELECT *\nFROM _article\nWHERE (_article.bitfield && getbit(0))\nORDER BY _article.id ASC\nLIMIT 5;\n\n Limit (cost=0.00..2238.33 rows=5 width=1099) (actual\ntime=17548636.326..17548837.082 rows=5 loops=1)\n -> Index Scan using _article_pkey on _article\n(cost=0.00..7762964.53 rows=17341 width=1099) (actual\ntime=17548636.324..17548837.075 rows=5 loops=1)\n Filter: (bitfield && B'1'::bit varying)\n Total runtime: 17548837.154 ms\n\n\nVersus the \"limit 500\" query plan :\n-------------------------------------------\nexplain analyze SELECT *\nFROM _article\nWHERE (_article.bitfield && getbit(0))\nORDER BY _article.id ASC\nLIMIT 500;\n\n Limit (cost=66229.90..66231.15 rows=500 width=1099) (actual\ntime=1491.905..1492.146 rows=500 loops=1)\n -> Sort (cost=66229.90..66273.25 rows=17341 width=1099) (actual\ntime=1491.904..1492.008 rows=500 loops=1)\n Sort Key: id\n Sort Method: top-N heapsort Memory: 603kB\n -> Bitmap Heap Scan on _article (cost=138.67..65365.82\nrows=17341 width=1099) (actual time=777.489..1487.120 rows=6729\nloops=1)\n Recheck Cond: (bitfield && B'1'::bit varying)\n -> Bitmap Index Scan on idx_article_bitfield\n(cost=0.00..134.33 rows=17341 width=0) (actual time=769.799..769.799\nrows=6729 loops=1)\n Index Cond: (bitfield && B'1'::bit varying)\n Total runtime: 1630.690 ms\n\n\nI will read (and try to understand) all you said yesterday and reply\nas soon as i can :)\nThank you !\n\n-- \nLaurent \"ker2x\" Laborde\nSysadmin & DBA at http://www.over-blog.com/\n", "msg_date": "Thu, 3 Dec 2009 09:51:25 +0100", "msg_from": "Laurent Laborde <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cost of sort/order by not estimated by the query\n\tplanner" }, { "msg_contents": "The table is clustered by by blog_id.\nSo, for testing purpose, i tried an ORDER BY blog_id.\n\nlimit 500 
:\n-------------\nexplain analyze SELECT *\nFROM _article\nWHERE (_article.bitfield && getbit(0))\nORDER BY _article.blog_id ASC\nLIMIT 500;\n\n Limit (cost=66229.90..66231.15 rows=500 width=1099) (actual\ntime=9.368..9.580 rows=500 loops=1)\n -> Sort (cost=66229.90..66273.25 rows=17341 width=1099) (actual\ntime=9.367..9.443 rows=500 loops=1)\n Sort Key: blog_id\n Sort Method: top-N heapsort Memory: 660kB\n -> Bitmap Heap Scan on _article (cost=138.67..65365.82\nrows=17341 width=1099) (actual time=0.905..4.042 rows=6729 loops=1)\n Recheck Cond: (bitfield && B'1'::bit varying)\n -> Bitmap Index Scan on idx_article_bitfield\n(cost=0.00..134.33 rows=17341 width=0) (actual time=0.772..0.772\nrows=6729 loops=1)\n Index Cond: (bitfield && B'1'::bit varying)\n Total runtime: 9.824 ms\n\nLimit 5 :\n----------\nexplain analyze SELECT *\nFROM _article\nWHERE (_article.bitfield && getbit(0))\nORDER BY _article.blog_id ASC\nLIMIT 5;\n\n Limit (cost=0.00..1419.22 rows=5 width=1099) (actual\ntime=125076.420..280419.143 rows=5 loops=1)\n -> Index Scan using idx_article_blog_id on _article\n(cost=0.00..4922126.37 rows=17341 width=1099) (actual\ntime=125076.419..280419.137 rows=5 loops=1)\n Filter: (bitfield && B'1'::bit varying)\n Total runtime: 280419.241 ms\n\n\n-- \nLaurent \"ker2x\" Laborde\nSysadmin & DBA at http://www.over-blog.com/\n", "msg_date": "Thu, 3 Dec 2009 10:08:06 +0100", "msg_from": "Laurent Laborde <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cost of sort/order by not estimated by the query\n\tplanner" } ]
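To make Tom Lane's earlier suggestion in this thread (plain boolean columns instead of the bitfield test) concrete, here is a minimal sketch. The flag name is an assumption, standing in for whatever bit 0 of "bitfield" actually encodes, and keeping the flag in sync on future writes (trigger or application change) is left out.

-- hypothetical flag standing in for "bitfield && getbit(0)"
ALTER TABLE _article ADD COLUMN is_flagged boolean NOT NULL DEFAULT false;
UPDATE _article SET is_flagged = true WHERE bitfield && getbit(0);

-- partial index: small, and scanning it yields rows already sorted by id
CREATE INDEX idx_article_flagged_id ON _article (id) WHERE is_flagged;
ANALYZE _article;

-- the problem query then becomes an ordered scan of the partial index
SELECT *
FROM _article
WHERE is_flagged
ORDER BY id ASC
LIMIT 5;

With this shape the planner gets real statistics for the filter instead of a guess for &&, and both the LIMIT 5 and LIMIT 500 cases should come back from the same cheap plan, since every entry in the partial index already satisfies the predicate.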
[ { "msg_contents": "Perhaps making your select be explicitely part of a read-only\ntransaction rather than letting java make use of an implicit\ntransaction (which may be in auto commit mode)\n\nOn 11/30/09, Waldomiro <[email protected]> wrote:\n> Hi everybody,\n>\n> I have an java application like this:\n>\n> while ( true ) {\n> Thread.sleep( 1000 ) // sleeps 1 second\n>\n> SELECT field1\n> FROM TABLE1\n> WHERE field2 = '10'\n>\n> if ( field1 != null ) {\n> BEGIN;\n>\n> processSomething( field1 );\n>\n> UPDATE TABLE1\n> SET field2 = '20'\n> WHERE field1 = '10';\n>\n> COMMIT;\n> }\n> }\n>\n> This is a simple program which is waiting for a record inserted by\n> another workstation, after I process that record I update to an\n> processed status.\n>\n> That table receives about 3000 inserts and 60000 updates each day, but\n> at night I do a TRUNCATE TABLE1 (Every Night), so the table is very\n> small. There is an index by field1 too.\n>\n> Some days It works very good all day, but somedays I have 7 seconds\n> freeze, I mean, my serves delays 7 seconds on this statement:\n> SELECT field1\n> FROM TABLE1\n> WHERE field2 = '10'\n>\n> Last Friday, It happens about 4 times, one at 9:50 am, another on 13:14\n> pm, another on 17:27 pm and another on 17:57 pm.\n>\n> I looked up to the statistics for that table, but the statistics says\n> that postgres is reading memory, not disk, becouse the table is very\n> small and I do a select every second, so the postgres keeps the table in\n> shared buffers.\n>\n> Why this 7 seconds delay? How could I figure out what is happening?\n>\n> I know:\n>\n> It is not disk, becouse statistics shows its reading memory.\n> It is not internet delay, becouse it is a local network\n> It is not workstations, becouse there are 2 workstations, and both\n> freeze at the same time\n> It is not processors, becouse my server has 8 processors\n> It is not memory, becouse my server has 32 GB, and about 200 MB free\n> It is not another big process or maybe not, becouse I think postgres\n> would not stops my simples process for 7 seconds to do a big process,\n> and I cant see any big process at that time.\n> Its not lock, becouse the simple select freezes, It doesnot have an \"FOR\n> UPDATE\"\n> Its not a vaccum needed, becouse I do a TRUNCATE every night.\n>\n> Is It possible the checkpoint is doing that? Or the archiving? How can I\n> see?\n>\n> Someone have any idea?\n>\n> Thank you\n>\n> Waldomiro Caraiani\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Mon, 30 Nov 2009 12:12:21 -0500", "msg_from": "Denis Lussier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Server Freezing" }, { "msg_contents": " \n\n> -----Mensaje original-----\n> De: [email protected] \n> [mailto:[email protected]] En nombre de Waldomiro\n> Enviado el: Lunes, 30 de Noviembre de 2009 22:03\n> Para: [email protected]\n> Asunto: [PERFORM] Server Freezing\n> \n> Hi everybody,\n> \n> ...\n> \n> That table receives about 3000 inserts and 60000 updates each \n> day, but at night I do a TRUNCATE TABLE1 (Every Night), so \n> the table is very small. 
There is an index by field1 too.\n> \n> Some days It works very good all day, but somedays I have 7 \n> seconds freeze, I mean, my serves delays 7 seconds on this statement:\n> SELECT field1\n> FROM TABLE1\n> WHERE field2 = '10'\n\nHi.\nYou should probably consider creating a partial index on field2 = '10'.\n\n> I looked up to the statistics for that table, but the \n> statistics says that postgres is reading memory, not disk, \n> becouse the table is very small and I do a select every \n> second, so the postgres keeps the table in shared buffers.\n\n\nYou say you dont vacuum this table, but considering 60000 updates on 3000\nrecords, assuming you are updating each record 20 times, your table could\neat up the space of 60M records. ¿Have you considered this?\n\nThough, I am not sure how this impacts when the whole table is held in\nshared buffers.\n\n> \n> Why this 7 seconds delay? How could I figure out what is happening?\n> \n\nTurn log_checkpoints = on to see in the logs if these occur during the\nfreeze.\nAlso log_lock_waits = on will help diagnose the situation.\n\nWhat version of postgres are you running and how are your checkpoints\nconfigured?\n\nRegards,\nFernando.\n\n", "msg_date": "Mon, 30 Nov 2009 17:12:18 -0300", "msg_from": "\"Fernando Hevia\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Server Freezing" }, { "msg_contents": "On Mon, Nov 30, 2009 at 8:02 PM, Waldomiro <[email protected]> wrote:\n> Its not a vaccum needed, becouse I do a TRUNCATE every night.\n\nBut you're updating each row 20 times a day - you could very well need a vacuum.\n\n> Is It possible the checkpoint is doing that? Or the archiving? How can I\n> see?\n\nIt seems likely to be caused by checkpoint I/O or vacuuming activity,\nbut I'm not sure how to figure out which.\n\n...Robert\n", "msg_date": "Mon, 30 Nov 2009 17:08:15 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Server Freezing" }, { "msg_contents": "Waldomiro wrote:\n> Is It possible the checkpoint is doing that? Or the archiving? How can \n> I see?\nIf you're using PostgreSQL 8.3 or later, you can turn on log_checkpoints \nand you'll get a note when each checkpoint finishes. The parts that are \nmore likely to slow the server down are right at the end, so if you see \na bunch of slow queries around the same time as the checkpoint message \nappears in the logs, that's the likely cause. 
Bad checkpoint behavior \ncan certainly cause several seconds of freezing on a system with 32GB of \nRAM, because with that much data you can have quite a bit in the OS \nwrite cache that all gets forced out at the end of the checkpoint.\n\nFinding when the checkpoints happen on 8.2 or earlier is much harder; I \ncan tell you what to look for on Linux for example, but it's kind of \npainful to track them down.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Mon, 30 Nov 2009 19:08:09 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Server Freezing" }, { "msg_contents": "Hi everybody,\n\nI have an java application like this:\n\nwhile ( true ) {\n Thread.sleep( 1000 ) // sleeps 1 second\n\n SELECT field1\n FROM TABLE1\n WHERE field2 = '10'\n\n if ( field1 != null ) {\n BEGIN;\n\n processSomething( field1 );\n\n UPDATE TABLE1\n SET field2 = '20'\n WHERE field1 = '10';\n\n COMMIT;\n }\n}\n\nThis is a simple program which is waiting for a record inserted by \nanother workstation, after I process that record I update to an \nprocessed status.\n\nThat table receives about 3000 inserts and 60000 updates each day, but \nat night I do a TRUNCATE TABLE1 (Every Night), so the table is very \nsmall. There is an index by field1 too.\n\nSome days It works very good all day, but somedays I have 7 seconds \nfreeze, I mean, my serves delays 7 seconds on this statement:\n SELECT field1\n FROM TABLE1\n WHERE field2 = '10'\n\nLast Friday, It happens about 4 times, one at 9:50 am, another on 13:14 \npm, another on 17:27 pm and another on 17:57 pm.\n\nI looked up to the statistics for that table, but the statistics says \nthat postgres is reading memory, not disk, becouse the table is very \nsmall and I do a select every second, so the postgres keeps the table in \nshared buffers.\n\nWhy this 7 seconds delay? How could I figure out what is happening?\n\nI know:\n\nIt is not disk, becouse statistics shows its reading memory.\nIt is not internet delay, becouse it is a local network\nIt is not workstations, becouse there are 2 workstations, and both \nfreeze at the same time\nIt is not processors, becouse my server has 8 processors\nIt is not memory, becouse my server has 32 GB, and about 200 MB free\nIt is not another big process or maybe not, becouse I think postgres \nwould not stops my simples process for 7 seconds to do a big process, \nand I cant see any big process at that time.\nIts not lock, becouse the simple select freezes, It doesnot have an \"FOR \nUPDATE\"\nIts not a vaccum needed, becouse I do a TRUNCATE every night.\n\nIs It possible the checkpoint is doing that? Or the archiving? How can I \nsee?\n\nSomeone have any idea?\n\nThank you\n\nWaldomiro Caraiani\n", "msg_date": "Mon, 30 Nov 2009 23:02:33 -0200", "msg_from": "Waldomiro <[email protected]>", "msg_from_op": false, "msg_subject": "Server Freezing" }, { "msg_contents": "I�m using PostgreSQL 8.1. There is a way to see that?\n\nWaldomiro\n\nGreg Smith escreveu:\n> Waldomiro wrote:\n>> Is It possible the checkpoint is doing that? Or the archiving? How \n>> can I see?\n> If you're using PostgreSQL 8.3 or later, you can turn on \n> log_checkpoints and you'll get a note when each checkpoint finishes. 
\n> The parts that are more likely to slow the server down are right at \n> the end, so if you see a bunch of slow queries around the same time as \n> the checkpoint message appears in the logs, that's the likely cause. \n> Bad checkpoint behavior can certainly cause several seconds of \n> freezing on a system with 32GB of RAM, because with that much data you \n> can have quite a bit in the OS write cache that all gets forced out at \n> the end of the checkpoint.\n>\n> Finding when the checkpoints happen on 8.2 or earlier is much harder; \n> I can tell you what to look for on Linux for example, but it's kind of \n> painful to track them down.\n>\n\n\n", "msg_date": "Tue, 01 Dec 2009 09:02:47 -0200", "msg_from": "Waldomiro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Server Freezing" }, { "msg_contents": "\n\n\n\n\nI�m using PostgreSQL 8.1, and my settings are:\n\ncheckpoint_segments=50\ncheckpoint_timeout=300\ncheckpoint_warning=30\ncommit_delay=0\ncommit_siblings=5\narchive_command= cp -i %p/BACKUP/LOGS/%f\nautovacuum=off\nbgwriter_all_maxpages=5\nbgwriter_all_percent=0.333\nbgwriter_delay=200\nbgwriter_lru_maxpages=5\nbgwriter_lru_percent=1\nfsync=on\nfull_page_writes=on\nstats_block_level=on\nstats_command_string=on\nstats_reset_on_server_start=off\nstats_row_level=on\nstats_start_collector=on\n\nWaldomiro\n\nFernando Hevia escreveu:\n\n \n\n \n\n-----Mensaje original-----\nDe: [email protected] \n[mailto:[email protected]] En nombre de Waldomiro\nEnviado el: Lunes, 30 de Noviembre de 2009 22:03\nPara: [email protected]\nAsunto: [PERFORM] Server Freezing\n\nHi everybody,\n\n...\n\nThat table receives about 3000 inserts and 60000 updates each \nday, but at night I do a TRUNCATE TABLE1 (Every Night), so \nthe table is very small. There is an index by field1 too.\n\nSome days It works very good all day, but somedays I have 7 \nseconds freeze, I mean, my serves delays 7 seconds on this statement:\n SELECT field1\n FROM TABLE1\n WHERE field2 = '10'\n \n\n\nHi.\nYou should probably consider creating a partial index on field2 = '10'.\n\n \n\nI looked up to the statistics for that table, but the \nstatistics says that postgres is reading memory, not disk, \nbecouse the table is very small and I do a select every \nsecond, so the postgres keeps the table in shared buffers.\n \n\n\n\nYou say you dont vacuum this table, but considering 60000 updates on 3000\nrecords, assuming you are updating each record 20 times, your table could\neat up the space of 60M records. �Have you considered this?\n\nThough, I am not sure how this impacts when the whole table is held in\nshared buffers.\n\n \n\nWhy this 7 seconds delay? 
How could I figure out what is happening?\n\n \n\n\nTurn log_checkpoints = on to see in the logs if these occur during the\nfreeze.\nAlso log_lock_waits = on will help diagnose the situation.\n\nWhat version of postgres are you running and how are your checkpoints\nconfigured?\n\nRegards,\nFernando.\n\n\n \n\n\n\n-- \n Waldomiro Caraiani\nNeto\n|||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||\nGRUPO SHX \n\n Desenvolvimento \n+ 55 (16) 3331.3268\[email protected] \n www.shx.com.br \n\n\n\n", "msg_date": "Tue, 01 Dec 2009 09:18:54 -0200", "msg_from": "Waldomiro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Server Freezing" }, { "msg_contents": " \n\n> -----Mensaje original-----\n> De: Waldomiro\n> \n> I´m using PostgreSQL 8.1, \n\nSorry, log_checkpoints isn't supported till 8.3\n\n> and my settings are:\n> \n> checkpoint_segments=50\n> checkpoint_timeout=300\n> checkpoint_warning=30\n> commit_delay=0\n> commit_siblings=5\n> archive_command= cp -i %p/BACKUP/LOGS/%f autovacuum=off\n> bgwriter_all_maxpages=5\n> bgwriter_all_percent=0.333\n> bgwriter_delay=200\n> bgwriter_lru_maxpages=5\n> bgwriter_lru_percent=1\n> fsync=on\n> full_page_writes=on\n> stats_block_level=on\n> stats_command_string=on\n> stats_reset_on_server_start=off\n> stats_row_level=on\n> stats_start_collector=on\n> \n\nAs tempting as it is to decrease checkpoint_segments, better confirm it is a\ncheckpoint related problem before fiddling with these settings.\n\nI recommend reading Greg Smith's post on checkpoints & bg writer. It's about\n8.3 improvements but it includes good advice on how to diagnose checkpoint\nissues on prior versions:\nhttp://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm\n\nIn fact, one of his recomendations should be very helpful here: set\ncheckpoint_warning=3600 and log_min_duration_statement=1000, that way you\nshould see in the log if statements over 1 sec occur simultaneously with\ncheckpoints being reached.\n\nPay attention to the chapter on the bg_writer too.\n\nRegards,\nFernando.\n\n\n", "msg_date": "Tue, 1 Dec 2009 12:50:34 -0300", "msg_from": "\"Fernando Hevia\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Server Freezing" } ]
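Since log_checkpoints is unavailable before 8.3, a sketch of how one might follow Fernando's suggestion on this 8.1 box. The SELECT only confirms what the server is currently running with; the commented parameter values are the ones Fernando proposes and belong in postgresql.conf, followed by a reload.

-- confirm the live settings (pg_settings exists on 8.1)
SELECT name, setting
FROM pg_settings
WHERE name IN ('checkpoint_segments', 'checkpoint_timeout',
               'checkpoint_warning', 'log_min_duration_statement',
               'bgwriter_delay', 'bgwriter_all_maxpages',
               'bgwriter_lru_maxpages');

-- then in postgresql.conf (not SQL), per Fernando's advice:
--   checkpoint_warning = 3600           # WAL-driven checkpoints get logged
--   log_min_duration_statement = 1000   # statements over 1 second get logged
-- If the slow SELECTs and the checkpoint warnings land at the same
-- timestamps in the log, checkpoints are the likely cause of the freezes.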
[ { "msg_contents": "I have a million-row table (two text columns of ~25 characters each plus two integers, one of which is PK) that is replaced every week. Since I'm doing it on a live system, it's run inside a transaction. This is the only time the table is modified; all other access is read-only.\n\nI wanted to use \"truncate table\" for efficiency, to avoid vacuum and index bloat, etc. But when I do \"truncate\" inside a transaction, all clients are blocked from read until the entire transaction is complete. If I switch to \"delete from ...\", it's slower, but other clients can continue to use the old data until the transaction commits.\n\nThe only work-around I've thought of is to create a brand new table, populate it and index it, then start a transaction that drops the old table and renames the new one.\n\nAny thoughts?\n\nThanks,\nCraig\n\n", "msg_date": "Mon, 30 Nov 2009 10:50:17 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": true, "msg_subject": "truncate in transaction blocks read access" }, { "msg_contents": "On Mon, 2009-11-30 at 10:50 -0800, Craig James wrote:\n> I have a million-row table (two text columns of ~25 characters each plus two integers, one of which is PK) that is replaced every week. Since I'm doing it on a live system, it's run inside a transaction. This is the only time the table is modified; all other access is read-only.\n> \n> I wanted to use \"truncate table\" for efficiency, to avoid vacuum and index bloat, etc. But when I do \"truncate\" inside a transaction, all clients are blocked from read until the entire transaction is complete. If I switch to \"delete from ...\", it's slower, but other clients can continue to use the old data until the transaction commits.\n> \n> The only work-around I've thought of is to create a brand new table, populate it and index it, then start a transaction that drops the old table and renames the new one.\n> \n> Any thoughts?\n\nUse partitioning so you can roll off data.\n\nhttp://www.postgresql.org/docs/8.3/interactive/ddl-partitioning.html\n\nJoshua D. Drake\n\n\n> \n> Thanks,\n> Craig\n> \n> \n\n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564\nConsulting, Training, Support, Custom Development, Engineering\nIf the world pushes look it in the eye and GRR. Then push back harder. - Salamander\n\n", "msg_date": "Mon, 30 Nov 2009 11:31:43 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: truncate in transaction blocks read access" } ]
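A rough sketch of the build-and-rename swap Craig describes, which keeps readers on the old data until one very short exclusive lock at the end; the table name, key column, and file path below are hypothetical stand-ins.

-- build and index the replacement outside any long transaction
CREATE TABLE lookup_new (LIKE lookup INCLUDING DEFAULTS);
COPY lookup_new FROM '/path/to/weekly_load.csv' WITH CSV;
ALTER TABLE lookup_new ADD PRIMARY KEY (id);
ANALYZE lookup_new;

-- the swap itself is near-instant; readers only block for this moment
BEGIN;
ALTER TABLE lookup RENAME TO lookup_old;
ALTER TABLE lookup_new RENAME TO lookup;
COMMIT;
DROP TABLE lookup_old;

One caveat: views and foreign keys that reference the table follow it through the rename (they track the OID, not the name), so this works best when the table is only referenced by name, as in Craig's read-only lookup case.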
[ { "msg_contents": "Dear All,\n\nI don't know if this is a stupid question, or not, but I can't \nunderstand the following.\n\nI have a pretty simple query, which runs in about 7ms\n\n SELECT * FROM h.inventory WHERE demand_id =289276563;\n\n\nThe result of this is a 15 row x 17 column table. However, I want this \nto be sorted by id, so I changed the query to:\n\n\n SELECT * FROM h.inventory WHERE demand_id =289276563 ORDER BY id;\n\nwhich makes it take 32 seconds!\n\n\nThat surprises me - I'd expect the ORDER BY to be the last thing that \nruns, and for a sort of such a small dataset to be almost \ninstantaneous. Indeed, if I do ORDER BY random(), then it's fast.\n\nThe system is running 8.4.1, and is otherwise lightly loaded, I can do \nthis repeatedly with similar results.\n\nIs this normal? Have I hit a bug?\n\nI attach the view definition, the result set, and the output from \nexplain analyze (both ways).\n\nThanks,\n\nRichard\n\n\n\n\n View \"h.inventory\"\n Column | Type | Modifiers\n---------------+--------------------------+-----------\n id | bigint |\n material_id | bigint |\n material_tag | text |\n material_name | text |\n location_id | bigint |\n location_tag | text |\n location_name | text |\n qty | integer |\n divergence | integer |\n ctime | timestamp with time zone |\n actor_id | bigint |\n actor_tag | text |\n actor_name | text |\n demand_id | bigint |\n target_id | bigint |\n target_tag | text |\n target_name | text |\nView definition:\n SELECT inventory.id, inventory.material_id, h_material.tag AS \nmaterial_tag, h_material.name AS material_name, inventory.location_id, \nh_location.tag AS location_tag, h_location.name AS location_name, \ninventory.qty, inventory.divergence, inventory.ctime, \ninventory.actor_id, h_actor.tag AS actor_tag, h_actor.name AS \nactor_name, inventory.demand_id, h_demand.target_id, \nh_demand.target_tag, h_demand.target_name\n FROM core.inventory\n LEFT JOIN h.material h_material ON inventory.material_id = h_material.id\n LEFT JOIN h.location h_location ON inventory.location_id = h_location.id\n LEFT JOIN h.actor h_actor ON inventory.actor_id = h_actor.id\n LEFT JOIN h.demand h_demand ON inventory.demand_id = h_demand.id;\n\n\n\n\n\n\n\n\n\n\n\n\n id | material_id | material_tag | material_name | location_id \n| location_tag | location_name | qty | divergence | \n ctime | actor_id | actor_tag | actor_name \n | demand_id | target_id | target_tag | target_name\n-----------+-------------+---------------+---------------+-------------+--------------+------------------------+-----+------------+-------------------------------+----------+-----------+------------------------------+-----------+-----------+----------------+----------------------------------------\n 292904293 | 289238938 | 1000001113980 | | 280410977 \n| 1030576 | Container 1030576 | 0 | 0 | 2009-12-01 \n14:53:35.010023+00 | 5543 | 139664 | Joanna Mikolajczak \n | 289276563 | 3153 | 300244 EMBARGO | 300244 308/09 HAYWARDS \nHEATH / EMBARGO\n 292904294 | 289238938 | 1000001113980 | | 280410977 \n| 1030576 | Container 1030576 | 1 | 0 | 2009-12-01 \n14:53:35.060378+00 | | | \n | 289276563 | 3153 | 300244 EMBARGO | 300244 308/09 HAYWARDS \nHEATH / EMBARGO\n 292904292 | 289238938 | 1000001113980 | | 4008 \n| 308 | Chute 308 | 0 | 0 | 2009-12-01 \n14:53:34.925801+00 | 5543 | 139664 | Joanna Mikolajczak \n | 289276563 | 3153 | 300244 EMBARGO | 300244 308/09 HAYWARDS \nHEATH / EMBARGO\n 292817907 | 289238938 | 1000001113980 | | 5137 \n| 991 | Chute 991 (not needed) | 0 | 0 | 2009-12-01 
\n14:38:00.819189+00 | 6282 | CW 991 | Chute 991 worker \n | 289276563 | 3153 | 300244 EMBARGO | 300244 308/09 HAYWARDS \nHEATH / EMBARGO\n 291755251 | 289238938 | 1000001113980 | | 5137 \n| 991 | Chute 991 (not needed) | 0 | 0 | 2009-12-01 \n12:03:05.957424+00 | 6282 | CW 991 | Chute 991 worker \n | 289276563 | 3153 | 300244 EMBARGO | 300244 308/09 HAYWARDS \nHEATH / EMBARGO\n 291543235 | 289238938 | 1000001113980 | | 5137 \n| 991 | Chute 991 (not needed) | 0 | 0 | 2009-12-01 \n11:35:19.28846+00 | 6282 | CW 991 | Chute 991 worker \n | 289276563 | 3153 | 300244 EMBARGO | 300244 308/09 HAYWARDS \nHEATH / EMBARGO\n 291524046 | 289238938 | 1000001113980 | | 4008 \n| 308 | Chute 308 | 0 | 0 | 2009-12-01 \n11:31:49.40378+00 | | | \n | 289276563 | 3153 | 300244 EMBARGO | 300244 308/09 HAYWARDS \nHEATH / EMBARGO\n 291524045 | 289238938 | 1000001113980 | | 4008 \n| 308 | Chute 308 | 0 | 0 | 2009-12-01 \n11:31:49.402217+00 | 6300 | FSC | Flow System Controller (FSC) \n| 289276563 | 3153 | 300244 EMBARGO | 300244 308/09 HAYWARDS HEATH \n/ EMBARGO\n 291522067 | 289238938 | 1000001113980 | | 5143 \n| CAM2 | North Camera | 0 | 0 | 2009-12-01 \n11:31:22.931085+00 | 6300 | FSC | Flow System Controller (FSC) \n| 289276563 | 3153 | 300244 EMBARGO | 300244 308/09 HAYWARDS HEATH \n/ EMBARGO\n 291518675 | 289238938 | 1000001113980 | | 5137 \n| 991 | Chute 991 (not needed) | 0 | 0 | 2009-12-01 \n11:30:32.10016+00 | 6282 | CW 991 | Chute 991 worker \n | 289276563 | 3153 | 300244 EMBARGO | 300244 308/09 HAYWARDS \nHEATH / EMBARGO\n 291506067 | 289238938 | 1000001113980 | | 5137 \n| 991 | Chute 991 (not needed) | 0 | 0 | 2009-12-01 \n11:26:38.065565+00 | 6282 | CW 991 | Chute 991 worker \n | 289276563 | 3153 | 300244 EMBARGO | 300244 308/09 HAYWARDS \nHEATH / EMBARGO\n 291491123 | 289238938 | 1000001113980 | | 5137 \n| 991 | Chute 991 (not needed) | 0 | 0 | 2009-12-01 \n11:21:33.009506+00 | 6282 | CW 991 | Chute 991 worker \n | 289276563 | 3153 | 300244 EMBARGO | 300244 308/09 HAYWARDS \nHEATH / EMBARGO\n 291322415 | 289238938 | 1000001113980 | | 4008 \n| 308 | Chute 308 | 0 | 0 | 2009-12-01 \n10:45:08.281846+00 | | | \n | 289276563 | 3153 | 300244 EMBARGO | 300244 308/09 HAYWARDS \nHEATH / EMBARGO\n 291322414 | 289238938 | 1000001113980 | | 4008 \n| 308 | Chute 308 | 0 | 0 | 2009-12-01 \n10:45:08.280018+00 | 6300 | FSC | Flow System Controller (FSC) \n| 289276563 | 3153 | 300244 EMBARGO | 300244 308/09 HAYWARDS HEATH \n/ EMBARGO\n 291319302 | 289238938 | 1000001113980 | | 5143 \n| CAM2 | North Camera | 0 | 0 | 2009-12-01 \n10:44:41.807128+00 | 6300 | FSC | Flow System Controller (FSC) \n| 289276563 | 3153 | 300244 EMBARGO | 300244 308/09 HAYWARDS HEATH \n/ EMBARGO\n(15 rows)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nexplain analyze select * from h.inventory where demand_id =289276563;\nTime: 7.251 ms\n\n\n \n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=0.00..726857452.94 rows=806903677108 \nwidth=195) (actual time=0.108..0.704 rows=15 loops=1)\n Join Filter: (core.inventory.material_id = core.material.id)\n -> Nested Loop Left Join (cost=0.00..183236.84 rows=203176856 \nwidth=166) (actual time=0.103..0.588 rows=15 loops=1)\n Join Filter: (demand.material_id = core.material.id)\n -> Nested Loop Left Join (cost=0.00..260.03 rows=51160 \nwidth=174) (actual time=0.090..0.462 rows=15 loops=1)\n Join Filter: (core.inventory.location_id = core.location.id)\n -> Nested Loop 
Left Join (cost=0.00..146.71 rows=28 \nwidth=128) (actual time=0.068..0.286 rows=15 loops=1)\n -> Nested Loop Left Join (cost=0.00..125.36 \nrows=28 width=103) (actual time=0.058..0.225 rows=15 loops=1)\n Join Filter: (core.inventory.demand_id = \ndemand.id)\n -> Index Scan using inventory_demand_id on \ninventory (cost=0.00..22.36 rows=28 width=56) (actual time=0.025..0.053 \nrows=15 loops=1)\n Index Cond: (demand_id = 289276563)\n -> Nested Loop Left Join (cost=0.00..3.67 \nrows=1 width=55) (actual time=0.009..0.010 rows=1 loops=15)\n -> Index Scan using demand_pkey on \ndemand (cost=0.00..1.89 rows=1 width=24) (actual time=0.005..0.005 \nrows=1 loops=15)\n Index Cond: (id = 289276563)\n -> Index Scan using waypoint_pkey on \nwaypoint (cost=0.00..1.77 rows=1 width=39) (actual time=0.003..0.003 \nrows=1 loops=15)\n Index Cond: (demand.target_id = \nwaypoint.id)\n -> Index Scan using actor_pkey on actor \n(cost=0.00..0.75 rows=1 width=33) (actual time=0.003..0.003 rows=1 loops=15)\n Index Cond: (core.inventory.actor_id = actor.id)\n -> Append (cost=0.00..4.00 rows=4 width=50) (actual \ntime=0.005..0.010 rows=1 loops=15)\n -> Index Scan using location_pkey on location \n(cost=0.00..0.56 rows=1 width=72) (actual time=0.001..0.001 rows=0 loops=15)\n Index Cond: (core.inventory.location_id = \ncore.location.id)\n -> Index Scan using waypoint_pkey on waypoint \nlocation (cost=0.00..1.31 rows=1 width=39) (actual time=0.003..0.003 \nrows=1 loops=15)\n Index Cond: (core.inventory.location_id = \ncore.location.id)\n -> Index Scan using container_pkey on container \nlocation (cost=0.00..1.78 rows=1 width=54) (actual time=0.004..0.004 \nrows=0 loops=15)\n Index Cond: (core.inventory.location_id = \ncore.location.id)\n -> Index Scan using supply_pkey on supply \nlocation (cost=0.00..0.35 rows=1 width=36) (actual time=0.001..0.001 \nrows=0 loops=15)\n Index Cond: (core.inventory.location_id = \ncore.location.id)\n -> Append (cost=0.00..3.55 rows=2 width=8) (actual \ntime=0.004..0.007 rows=1 loops=15)\n -> Index Scan using material_pkey on material \n(cost=0.00..1.78 rows=1 width=8) (actual time=0.004..0.005 rows=1 loops=15)\n Index Cond: (demand.material_id = core.material.id)\n -> Index Scan using container_pkey on container \nmaterial (cost=0.00..1.78 rows=1 width=8) (actual time=0.002..0.002 \nrows=0 loops=15)\n Index Cond: (demand.material_id = core.material.id)\n -> Append (cost=0.00..3.55 rows=2 width=38) (actual \ntime=0.003..0.006 rows=1 loops=15)\n -> Index Scan using material_pkey on material \n(cost=0.00..1.78 rows=1 width=22) (actual time=0.003..0.003 rows=1 loops=15)\n Index Cond: (core.inventory.material_id = core.material.id)\n -> Index Scan using container_pkey on container material \n(cost=0.00..1.78 rows=1 width=54) (actual time=0.003..0.003 rows=0 loops=15)\n Index Cond: (core.inventory.material_id = core.material.id)\n Total runtime: 0.858 ms\n(38 rows)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nexplain analyze select * from h.inventory where demand_id =289276563 \norder by id;\nTime: 32868.784 ms\n \n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=0.00..727737158.77 rows=806903677108 \nwidth=195) (actual time=31739.052..32862.322 rows=15 loops=1)\n Join Filter: (core.inventory.material_id = core.material.id)\n -> Nested Loop Left Join (cost=0.00..1062942.67 rows=203176856 \nwidth=166) (actual time=31739.044..32862.084 
rows=15 loops=1)\n Join Filter: (demand.material_id = core.material.id)\n -> Nested Loop Left Join (cost=0.00..879965.86 rows=51160 \nwidth=174) (actual time=31739.025..32861.812 rows=15 loops=1)\n Join Filter: (core.inventory.location_id = core.location.id)\n -> Nested Loop Left Join (cost=0.00..879852.54 rows=28 \nwidth=128) (actual time=31739.006..32861.428 rows=15 loops=1)\n -> Nested Loop Left Join (cost=0.00..879831.20 \nrows=28 width=103) (actual time=31738.994..32861.276 rows=15 loops=1)\n Join Filter: (core.inventory.demand_id = \ndemand.id)\n -> Index Scan using inventory_pkey on \ninventory (cost=0.00..879728.20 rows=28 width=56) (actual \ntime=31738.956..32860.738 rows=15 loops=1)\n Filter: (demand_id = 289276563)\n -> Nested Loop Left Join (cost=0.00..3.67 \nrows=1 width=55) (actual time=0.030..0.031 rows=1 loops=15)\n -> Index Scan using demand_pkey on \ndemand (cost=0.00..1.89 rows=1 width=24) (actual time=0.019..0.019 \nrows=1 loops=15)\n Index Cond: (id = 289276563)\n -> Index Scan using waypoint_pkey on \nwaypoint (cost=0.00..1.77 rows=1 width=39) (actual time=0.008..0.008 \nrows=1 loops=15)\n Index Cond: (demand.target_id = \nwaypoint.id)\n -> Index Scan using actor_pkey on actor \n(cost=0.00..0.75 rows=1 width=33) (actual time=0.007..0.007 rows=1 loops=15)\n Index Cond: (core.inventory.actor_id = actor.id)\n -> Append (cost=0.00..4.00 rows=4 width=50) (actual \ntime=0.010..0.019 rows=1 loops=15)\n -> Index Scan using location_pkey on location \n(cost=0.00..0.56 rows=1 width=72) (actual time=0.003..0.003 rows=0 loops=15)\n Index Cond: (core.inventory.location_id = \ncore.location.id)\n -> Index Scan using waypoint_pkey on waypoint \nlocation (cost=0.00..1.31 rows=1 width=39) (actual time=0.006..0.006 \nrows=1 loops=15)\n Index Cond: (core.inventory.location_id = \ncore.location.id)\n -> Index Scan using container_pkey on container \nlocation (cost=0.00..1.78 rows=1 width=54) (actual time=0.006..0.006 \nrows=0 loops=15)\n Index Cond: (core.inventory.location_id = \ncore.location.id)\n -> Index Scan using supply_pkey on supply \nlocation (cost=0.00..0.35 rows=1 width=36) (actual time=0.003..0.003 \nrows=0 loops=15)\n Index Cond: (core.inventory.location_id = \ncore.location.id)\n -> Append (cost=0.00..3.55 rows=2 width=8) (actual \ntime=0.011..0.014 rows=1 loops=15)\n -> Index Scan using material_pkey on material \n(cost=0.00..1.78 rows=1 width=8) (actual time=0.010..0.011 rows=1 loops=15)\n Index Cond: (demand.material_id = core.material.id)\n -> Index Scan using container_pkey on container \nmaterial (cost=0.00..1.78 rows=1 width=8) (actual time=0.002..0.002 \nrows=0 loops=15)\n Index Cond: (demand.material_id = core.material.id)\n -> Append (cost=0.00..3.55 rows=2 width=38) (actual \ntime=0.004..0.012 rows=1 loops=15)\n -> Index Scan using material_pkey on material \n(cost=0.00..1.78 rows=1 width=22) (actual time=0.003..0.004 rows=1 loops=15)\n Index Cond: (core.inventory.material_id = core.material.id)\n -> Index Scan using container_pkey on container material \n(cost=0.00..1.78 rows=1 width=54) (actual time=0.008..0.008 rows=0 loops=15)\n Index Cond: (core.inventory.material_id = core.material.id)\n Total runtime: 32862.509 ms\n(38 rows)\n\n\n\n", "msg_date": "Tue, 01 Dec 2009 18:52:13 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Order by (for 15 rows) adds 30 seconds to query time" }, { "msg_contents": "Le mardi 01 décembre 2009 à 18:52 +0000, Richard Neill a écrit :\n> Is this normal? 
Have I hit a bug?\n\nPostgreSQL query analyzer needs to run a couple of times before it can\nrewrite and optimize the query. Make sure demand_id, id and join IDs\ncarry indexes.\n\nRun EXPLAIN ANALYSE your_query to understand how the parser works and\npost it back here.\n\nKind regards,\nJean-Michel", "msg_date": "Tue, 01 Dec 2009 21:06:05 +0100", "msg_from": "Jean-Michel =?ISO-8859-1?Q?Pour=E9?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Order by (for 15 rows) adds 30 seconds to query time" }, { "msg_contents": "Richard Neill <[email protected]> wrote:\n \n> I'd expect the ORDER BY to be the last thing that runs\n \n> Nested Loop Left Join (cost=0.00..727737158.77\n> rows=806903677108 width=195) (actual time=31739.052..32862.322\n> rows=15 loops=1)\n \nIt probably would if it knew there were going to be 15 rows to sort.\nIt is estimating that there will be 806,903,677,108 rows, in which\ncase it thinks that using the index will be faster. The question is\nwhy it's 10 or 11 orders of magnitude off on the estimate of result\nrows. Could you show us the table definitions underlying that view?\n \n-Kevin\n", "msg_date": "Tue, 01 Dec 2009 14:06:06 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Order by (for 15 rows) adds 30 seconds to query\n\t time" }, { "msg_contents": "\n\nJean-Michel Pouré wrote:\n> Le mardi 01 décembre 2009 à 18:52 +0000, Richard Neill a écrit :\n>> Is this normal? Have I hit a bug?\n> \n> PostgreSQL query analyzer needs to run a couple of times before it can\n> rewrite and optimize the query. Make sure demand_id, id and join IDs\n> carry indexes.\n> \n\nI did, and they do. This table has been in place for ages, with \nautovacuum on, and a manual vacuum analyze every night. I checked by \nrunning analyze explicitly on all the relevant tables just before \nposting this.\n\n> Run EXPLAIN ANALYSE your_query to understand how the parser works and\n> post it back here.\n> \n\nAlready in previous email :-)\n\n> Kind regards,\n> Jean-Michel\n\n\n\nKevin Grittner wrote:\n > Richard Neill <[email protected]> wrote:\n >\n >> I'd expect the ORDER BY to be the last thing that runs\n >\n >> Nested Loop Left Join (cost=0.00..727737158.77\n >> rows=806903677108 width=195) (actual time=31739.052..32862.322\n >> rows=15 loops=1)\n >\n > It probably would if it knew there were going to be 15 rows to sort.\n > It is estimating that there will be 806,903,677,108 rows, in which\n > case it thinks that using the index will be faster. The question is\n > why it's 10 or 11 orders of magnitude off on the estimate of result\n > rows. Could you show us the table definitions underlying that view?\n >\n > -Kevin\n >\n\n\nAm I wrong in thinking that ORDER BY is always applied after the main \nquery is run?\n\nEven if I run it this way:\n\nselect * from (select * from h.inventory where demand_id =289276563) as \nsqry order by id;\n\nwhich should(?) surely force it to run the first select, then sort, it's \nstill very slow. 
On the other hand, it's quick if I do order by id+1\n\nThe table definitions are as follows (sorry there are so many).\n\n\nRichard\n\n\n\n\n\n\n\nfswcs=# \\d h.demand\n View \"h.demand\"\n Column | Type | Modifiers\n---------------+---------+-----------\n id | bigint |\n target_id | bigint |\n target_tag | text |\n target_name | text |\n material_id | bigint |\n material_tag | text |\n material_name | text |\n qty | integer |\n benefit | integer |\nView definition:\n SELECT demand.id, demand.target_id, h_target_waypoint.tag AS \ntarget_tag, h_target_waypoint.name AS target_name, demand.material_id, \nh_material.tag AS material_tag, h_material.name AS material_name, \ndemand.qty, demand.benefit\n FROM core.demand\n LEFT JOIN h.waypoint h_target_waypoint ON demand.target_id = \nh_target_waypoint.id\n LEFT JOIN h.material h_material ON demand.material_id = h_material.id;\n\nfswcs=# \\d core.demand\n Table \"core.demand\"\n Column | Type | Modifiers\n-------------+---------+--------------------------------\n id | bigint | not null default core.new_id()\n target_id | bigint | not null\n material_id | bigint | not null\n qty | integer | not null\n benefit | integer | not null default 0\nIndexes:\n \"demand_pkey\" PRIMARY KEY, btree (id) CLUSTER\n \"demand_target_id_key\" UNIQUE, btree (target_id, material_id)\n \"demand_material_id\" btree (material_id)\n \"demand_target_id\" btree (target_id)\nForeign-key constraints:\n \"demand_material_id_fkey\" FOREIGN KEY (material_id) REFERENCES \ncore.__material_id(id)\n \"demand_target_id_fkey\" FOREIGN KEY (target_id) REFERENCES \ncore.waypoint(id)\nReferenced by:\n TABLE \"viwcs.du_report_contents\" CONSTRAINT \n\"du_report_contents_demand_id_fkey\" FOREIGN KEY (demand_id) REFERENCES \ncore.demand(id)\n TABLE \"core.inventory\" CONSTRAINT \"inventory_demand_id_fkey\" \nFOREIGN KEY (demand_id) REFERENCES core.demand(id)\n TABLE \"viwcs.wave_demand\" CONSTRAINT \"wave_demand_demand_id_fkey\" \nFOREIGN KEY (demand_id) REFERENCES core.demand(id)\n\nfswcs=# \\d h.waypoint\n View \"h.waypoint\"\n Column | Type | Modifiers\n-----------+---------+-----------\n id | bigint |\n tag | text |\n name | text |\n is_router | boolean |\n is_target | boolean |\n is_packer | boolean |\nView definition:\n SELECT waypoint.id, waypoint.tag, waypoint.name, waypoint.is_router, \nwaypoint.is_target, waypoint.is_packer\n FROM core.waypoint;\n\nfswcs=# \\d h.material\n View \"h.material\"\n Column | Type | Modifiers\n--------+---------+-----------\n id | bigint |\n tag | text |\n name | text |\n mass | integer |\n volume | integer |\nView definition:\n SELECT material.id, material.tag, material.name, material.mass, \nmaterial.volume\n FROM core.material;\n\nfswcs=# \\d core.wa\ncore.waypoint core.waypoint_name_key core.waypoint_pkey \ncore.waypoint_tag_key\nfswcs=# \\d core.waypoint\n Table \"core.waypoint\"\n Column | Type | Modifiers\n-----------+---------+--------------------------------\n id | bigint | not null default core.new_id()\n tag | text | not null\n name | text | not null\n is_router | boolean | not null\n is_target | boolean | not null\n is_packer | boolean | not null\nIndexes:\n \"waypoint_pkey\" PRIMARY KEY, btree (id) CLUSTER\n \"waypoint_tag_key\" UNIQUE, btree (tag)\n \"waypoint_name_key\" btree (name)\nReferenced by:\n TABLE \"core.demand\" CONSTRAINT \"demand_target_id_fkey\" FOREIGN KEY \n(target_id) REFERENCES core.waypoint(id)\n TABLE \"viwcs.du_report\" CONSTRAINT \"du_report_target_id_fkey\" \nFOREIGN KEY (target_id) REFERENCES core.waypoint(id)\n 
TABLE \"viwcs.mss_actor_state\" CONSTRAINT \n\"mss_actor_state_last_demand_tag_fkey\" FOREIGN KEY (last_demand_tag) \nREFERENCES core.waypoint(tag)\n TABLE \"viwcs.mss_actor_state\" CONSTRAINT \n\"mss_actor_state_last_racking_tag_fkey\" FOREIGN KEY (last_racking_tag) \nREFERENCES core.waypoint(tag)\n TABLE \"viwcs.mss_rack_action_queue\" CONSTRAINT \n\"mss_rack_action_queue_racking_tag_fkey\" FOREIGN KEY (racking_tag) \nREFERENCES core.waypoint(tag)\n TABLE \"core.route_cache\" CONSTRAINT \"route_cache_next_hop_id_fkey\" \nFOREIGN KEY (next_hop_id) REFERENCES core.waypoint(id) ON DELETE CASCADE\n TABLE \"core.route_cache\" CONSTRAINT \"route_cache_router_id_fkey\" \nFOREIGN KEY (router_id) REFERENCES core.waypoint(id) ON DELETE CASCADE\n TABLE \"core.route_cache\" CONSTRAINT \"route_cache_target_id_fkey\" \nFOREIGN KEY (target_id) REFERENCES core.waypoint(id) ON DELETE CASCADE\n TABLE \"core.route\" CONSTRAINT \"route_dst_id_fkey\" FOREIGN KEY \n(dst_id) REFERENCES core.waypoint(id)\n TABLE \"core.route\" CONSTRAINT \"route_src_id_fkey\" FOREIGN KEY \n(src_id) REFERENCES core.waypoint(id)\n TABLE \"viwcs.wave_genreorders_map\" CONSTRAINT \n\"wave_genreorders_map_ERR_GENREID_UNKNOWN\" FOREIGN KEY (target_id) \nREFERENCES core.waypoint(id)\nTriggers:\n __waypoint__location_id_delete BEFORE DELETE ON core.waypoint FOR \nEACH ROW EXECUTE PROCEDURE core.__location_id_delete()\n __waypoint__location_id_insert BEFORE INSERT ON core.waypoint FOR \nEACH ROW EXECUTE PROCEDURE core.__location_id_insert()\n __waypoint__location_id_update BEFORE UPDATE ON core.waypoint FOR \nEACH ROW EXECUTE PROCEDURE core.__location_id_update()\n __waypoint__tag_id_delete BEFORE DELETE ON core.waypoint FOR EACH \nROW EXECUTE PROCEDURE core.__tag_id_delete()\n __waypoint__tag_id_insert BEFORE INSERT ON core.waypoint FOR EACH \nROW EXECUTE PROCEDURE core.__tag_id_insert()\n __waypoint__tag_id_update BEFORE UPDATE ON core.waypoint FOR EACH \nROW EXECUTE PROCEDURE core.__tag_id_update()\n __waypoint__tag_tag_delete BEFORE DELETE ON core.waypoint FOR EACH \nROW EXECUTE PROCEDURE core.__tag_tag_delete()\n __waypoint__tag_tag_insert BEFORE INSERT ON core.waypoint FOR EACH \nROW EXECUTE PROCEDURE core.__tag_tag_insert()\n __waypoint__tag_tag_update BEFORE UPDATE ON core.waypoint FOR EACH \nROW EXECUTE PROCEDURE core.__tag_tag_update()\nInherits: core.location\n\nfswcs=# \\d core.ma\ncore.material core.material_name_key core.material_pkey \ncore.material_tag_key\nfswcs=# \\d core.material\n Table \"core.material\"\n Column | Type | Modifiers\n--------+---------+--------------------------------\n id | bigint | not null default core.new_id()\n tag | text | not null\n name | text | not null\n mass | integer | not null\n volume | integer | not null\nIndexes:\n \"material_pkey\" PRIMARY KEY, btree (id)\n \"material_tag_key\" UNIQUE, btree (tag)\n \"material_name_key\" btree (name)\nCheck constraints:\n \"material_mass_check\" CHECK (mass >= 0)\n \"material_volume_check\" CHECK (volume >= 0)\nTriggers:\n __material__material_id_delete BEFORE DELETE ON core.material FOR \nEACH ROW EXECUTE PROCEDURE core.__material_id_delete()\n __material__material_id_insert BEFORE INSERT ON core.material FOR \nEACH ROW EXECUTE PROCEDURE core.__material_id_insert()\n __material__material_id_update BEFORE UPDATE ON core.material FOR \nEACH ROW EXECUTE PROCEDURE core.__material_id_update()\n __material__tag_id_delete BEFORE DELETE ON core.material FOR EACH \nROW EXECUTE PROCEDURE core.__tag_id_delete()\n __material__tag_id_insert BEFORE INSERT ON 
core.material FOR EACH \nROW EXECUTE PROCEDURE core.__tag_id_insert()\n __material__tag_id_update BEFORE UPDATE ON core.material FOR EACH \nROW EXECUTE PROCEDURE core.__tag_id_update()\n __material__tag_tag_delete BEFORE DELETE ON core.material FOR EACH \nROW EXECUTE PROCEDURE core.__tag_tag_delete()\n __material__tag_tag_insert BEFORE INSERT ON core.material FOR EACH \nROW EXECUTE PROCEDURE core.__tag_tag_insert()\n __material__tag_tag_update BEFORE UPDATE ON core.material FOR EACH \nROW EXECUTE PROCEDURE core.__tag_tag_update()\nInherits: core.tag\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Tue, 01 Dec 2009 22:46:29 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Order by (for 15 rows) adds 30 seconds to query time" }, { "msg_contents": "Richard Neill <[email protected]> wrote:\n \n> Am I wrong in thinking that ORDER BY is always applied after the\n> main query is run?\n \nYes, you are wrong to think that. It compares the costs of various\nplans, and when it has an index with the high order portion matching\nyour ORDER BY clause, it may think that it can scan that index to\ngenerate the correct sequence. If the sort is estimated to be\nexpensive enough compared to the index scan, it will use the index\nscan and skip the sort. Sorting hundreds of billions of rows can be\nexpensive.\n \n> Even if I run it this way:\n> \n> select * from (select * from h.inventory where demand_id\n> =289276563) as sqry order by id;\n> \n> which should(?) surely force it to run the first select, then\n> sort,\n \nI wouldn't necessarily assume that. You can EXPLAIN that form of\nthe query and find out easily enough. Does it say:\n \n -> Index Scan using inventory_demand_id on\ninventory (cost=0.00..22.36 rows=28 width=56) (actual time=0.025..0.053\nrows=15 loops=1)\n Index Cond: (demand_id = 289276563)\n \nor:\n \n -> Index Scan using inventory_pkey on\ninventory (cost=0.00..879728.20 rows=28 width=56) (actual\ntime=31738.956..32860.738 rows=15 loops=1)\n Filter: (demand_id = 289276563)\n \n> it's quick if I do order by id+1\n \nYou don't have an index on id+1.\n \n> The table definitions are as follows (sorry there are so many).\n \nI'll poke around to try to get a clue why the estimated result rows\nare so far off, but I may be in over my head there, so hopefully\nothers will look, too. For one thing, I haven't used inheritance,\nand I don't know how that might be playing into the bad estimates. \n(At first glance, it does seem to get into trouble about the time it\nestimates the rows for the outer joins to those.)\n \nThe real problem to solve here is that it's estimating the rows\ncount for the result so badly. If you need a short-term\nwork-around, you've already discovered that you can keep it from\nusing the index on id for ordering by creating an expression using\nid which causes it not to consider the index a match. That's kind\nof ugly to keep long term, though.\n \n-Kevin\n", "msg_date": "Tue, 01 Dec 2009 17:36:39 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Order by (for 15 rows) adds 30 seconds to query\n\t time" }, { "msg_contents": "Dear Kevin,\n\nThanks for a very helpful reply.\n\nKevin Grittner wrote:\n> Richard Neill <[email protected]> wrote:\n> \n>> Am I wrong in thinking that ORDER BY is always applied after the\n>> main query is run?\n> \n> Yes, you are wrong to think that. 
It compares the costs of various\n> plans, and when it has an index with the high order portion matching\n> your ORDER BY clause, it may think that it can scan that index to\n> generate the correct sequence. If the sort is estimated to be\n> expensive enough compared to the index scan, it will use the index\n> scan and skip the sort. Sorting hundreds of billions of rows can be\n> expensive.\n> \n\nThat makes sense now.\n\n\n\n>> Even if I run it this way:\n>>\n>> select * from (select * from h.inventory where demand_id\n>> =289276563) as sqry order by id;\n>>\n>> which should(?) surely force it to run the first select, then\n>> sort,\n> \n> I wouldn't necessarily assume that. You can EXPLAIN that form of\n> the query and find out easily enough. Does it say:\n> \n> -> Index Scan using inventory_demand_id on\n> inventory (cost=0.00..22.36 rows=28 width=56) (actual time=0.025..0.053\n> rows=15 loops=1)\n> Index Cond: (demand_id = 289276563)\n> \n> or:\n> \n> -> Index Scan using inventory_pkey on\n> inventory (cost=0.00..879728.20 rows=28 width=56) (actual\n> time=31738.956..32860.738 rows=15 loops=1)\n> Filter: (demand_id = 289276563)\n> \n>> it's quick if I do order by id+1\n> \n> You don't have an index on id+1.\n> \n\nYour explanation is validated by the explain - it only does the sort \nlast iff I use \"order by id+1\", where there is no index for that.\n\n[Aside: using \"id+0\" also forces a sort.]\n\n\n> \n> The real problem to solve here is that it's estimating the rows\n> count for the result so badly. If you need a short-term\n> work-around, you've already discovered that you can keep it from\n> using the index on id for ordering by creating an expression using\n> id which causes it not to consider the index a match. That's kind\n> of ugly to keep long term, though.\n> \n\nWe seem to have a general case of very bad query plans, where in other \ncases, explain analyze shows that the query-planner's guesses are miles \nadrift.\n\nOthers have said that this is symptomatic of a lack of doing analyze, \nhowever we are doing quite a lot of analyzing (both through autovacuum, \nand a manual \"vacuum verbose analyze\" every night). Our underlying \nstatistical distribution isn't that changeable.\n\nThanks,\n\nRichard\n\n\n\n\n\n\n\n\n", "msg_date": "Wed, 02 Dec 2009 05:42:59 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Order by (for 15 rows) adds 30 seconds to query\t time" }, { "msg_contents": "On Tue, 1 Dec 2009, Jean-Michel Pour� wrote:\n> PostgreSQL query analyzer needs to run a couple of times before it can\n> rewrite and optimize the query. Make sure demand_id, id and join IDs\n> carry indexes.\n\nHuh? At what point does the planner carry over previous plans and use them \nto further optimise the query?\n\nBut perhaps the biggest factor here is calling a five table join a \"pretty \nsimple query\".\n\nMatthew\n\n-- \n Prolog doesn't have enough parentheses. -- Computer Science Lecturer", "msg_date": "Wed, 2 Dec 2009 11:08:17 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Order by (for 15 rows) adds 30 seconds to query time" }, { "msg_contents": "On 2/12/2009 7:08 PM, Matthew Wakeling wrote:\n> On Tue, 1 Dec 2009, Jean-Michel Pourᅵ wrote:\n>> PostgreSQL query analyzer needs to run a couple of times before it can\n>> rewrite and optimize the query. Make sure demand_id, id and join IDs\n>> carry indexes.\n>\n> Huh? 
At what point does the planner carry over previous plans and use\n> them to further optimise the query?\n>\n> But perhaps the biggest factor here is calling a five table join a\n> \"pretty simple query\".\n\nSome of those tables are views composed of multiple unions, too, by the \nlooks of things.\n\nDoesn't the planner have some ... issues ... with estimation of row \ncounts on joins over unions? Or is my memory just more faulty than usual?\n\n--\nCraig Ringer\n", "msg_date": "Wed, 02 Dec 2009 21:46:50 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Order by (for 15 rows) adds 30 seconds to query time" }, { "msg_contents": "Craig Ringer <[email protected]> wrote:\n \n> Some of those tables are views composed of multiple unions, too,\n> by the looks of things.\n> \n> Doesn't the planner have some ... issues ... with estimation of\n> row counts on joins over unions? Or is my memory just more faulty\n> than usual?\n \nSo far I can't tell if it's views with unions or (as I suspect)\ninheritance. The views and tables shown so far reference other\nobjects not yet shown:\n \ncore.inventory\nh.location\nh.actor\n \nHowever, I'm pretty sure that the problem is that the estimated row\ncount explodes for no reason that I can see when the \"Nested Loop\nLeft Join\" has an \"Append\" node from a parent table on the right.\n \n28 rows joined to a 4 row append yields 51160 rows?\n51160 rows joined to a 2 row append yields 203176856 rows?\n203176856 rows joined to a 2 row append yields 806903677108 rows?\n \nSomething seems funny with the math. I would have expected 28 times\n4 times 2 times 2, equaling 448. Still higher than 15, but only by\none order of magnitude -- where it might still make relatively sane\nplan choices.\n \n-Kevin\n", "msg_date": "Wed, 02 Dec 2009 12:47:27 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Order by (for 15 rows) adds 30 seconds to query\n\t time" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Craig Ringer <[email protected]> wrote:\n>> Doesn't the planner have some ... issues ... with estimation of\n>> row counts on joins over unions? Or is my memory just more faulty\n>> than usual?\n \n> So far I can't tell if it's views with unions or (as I suspect)\n> inheritance.\n\nAs of recent versions there shouldn't be a lot of difference between\ninheritance and UNION ALL subqueries --- they're both \"appendrels\"\nto the planner. And yeah, I think the statistical support is pretty\ncrummy.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 Dec 2009 15:25:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Order by (for 15 rows) adds 30 seconds to query time " }, { "msg_contents": "Tom Lane <[email protected]> wrote:\n \n> And yeah, I think the statistical support is pretty crummy.\n \nDo you know, off-hand, why the estimated row count for a \"Nested\nLoop Left Join\" is not the product of the estimates for the two\nsides? 
(I fear I'm missing something important which lead to the\ncurrent estimates.)\n \nEstimates extracted from the problem plan:\n \n Nested Loop Left Join (rows=806903677108)\n -> Nested Loop Left Join (rows=203176856)\n -> Nested Loop Left Join (rows=51160)\n -> Nested Loop Left Join (rows=28)\n -> Append (rows=4)\n -> Append (rows=2)\n -> Append (rows=2)\n \n-Kevin\n", "msg_date": "Wed, 02 Dec 2009 15:33:04 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Order by (for 15 rows) adds 30 seconds to query\n\t time" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Estimates extracted from the problem plan:\n \n> Nested Loop Left Join (rows=806903677108)\n> -> Nested Loop Left Join (rows=203176856)\n> -> Nested Loop Left Join (rows=51160)\n> -> Nested Loop Left Join (rows=28)\n> -> Append (rows=4)\n> -> Append (rows=2)\n> -> Append (rows=2)\n\nThat does look weird. Do we have a self-contained test case?\n\nI wouldn't necessarily expect the join rowcount to be exactly the\nproduct of the input rowcounts, but it shouldn't be that far off,\nI should think.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 Dec 2009 16:48:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Order by (for 15 rows) adds 30 seconds to query time " }, { "msg_contents": "Tom Lane <[email protected]> wrote:\n \n> That does look weird. Do we have a self-contained test case?\n \nRichard, could you capture the schema for the affected tables and\nviews with pg_dump -s and also the related rows from pg_statistic?\n(The actual table contents aren't needed to see this issue.)\n \n-Kevin\n", "msg_date": "Wed, 02 Dec 2009 16:02:15 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Order by (for 15 rows) adds 30 seconds to query\n\t time" }, { "msg_contents": "Kevin Grittner wrote:\n> Tom Lane <[email protected]> wrote:\n> \n>> That does look weird. Do we have a self-contained test case?\n\nNot at the moment. It seems to only occur with relatively complex joins.\n\n> \n> Richard, could you capture the schema for the affected tables and\n> views with pg_dump -s and also the related rows from pg_statistic?\n> (The actual table contents aren't needed to see this issue.)\n> \n\nHere are the relevant parts of the schema - I've cut this out of the \nsource-tree rather than pg_dump, since it seems more readable.\n\nRegarding pg_statistic, I don't understand how to find the relevant \nrows - what am I looking for? (the pg_statistic table is 247M in size).\n\nThanks for your help,\n\nRichard", "msg_date": "Wed, 02 Dec 2009 23:04:31 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Order by (for 15 rows) adds 30 seconds to query\t time" }, { "msg_contents": "Richard Neill <[email protected]> wrote:\n \n> Regarding pg_statistic, I don't understand how to find the\n> relevant rows - what am I looking for? (the pg_statistic table is\n> 247M in size).\n \nI think the only relevant rows would be the ones with starelid =\npg_class.oid for a table used in the query, and I think you could\nfurther limit it to rows where staattnum = pg_attribute.attnum for a\ncolumn referenced in the WHERE clause or a JOIN's ON clause\n(including in the views). 
To help match them up, and to cover all\nthe bases, listing the related pg_class and pg_attribute rows would\nhelp.\n \nHopefully that will allow us to generate the same plan in an\nEXPLAIN, and then see how it gets such an overblown estimate of the\nresult rows.\n \n-Kevin\n", "msg_date": "Wed, 02 Dec 2009 17:31:26 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Order by (for 15 rows) adds 30 seconds to query\t\n\t time" }, { "msg_contents": "\n\nKevin Grittner wrote:\n> Richard Neill <[email protected]> wrote:\n> \n>> Regarding pg_statistic, I don't understand how to find the\n>> relevant rows - what am I looking for? (the pg_statistic table is\n>> 247M in size).\n> \n> I think the only relevant rows would be the ones with starelid =\n> pg_class.oid for a table used in the query, and I think you could\n> further limit it to rows where staattnum = pg_attribute.attnum for a\n> column referenced in the WHERE clause or a JOIN's ON clause\n> (including in the views). To help match them up, and to cover all\n> the bases, listing the related pg_class and pg_attribute rows would\n> help.\n> \n> Hopefully that will allow us to generate the same plan in an\n> EXPLAIN, and then see how it gets such an overblown estimate of the\n> result rows.\n\n\nThanks for your explanation. I ran the query:\n\nSELECT * from pg_statistic WHERE starelid IN\n (SELECT oid FROM pg_class where relname IN\n ('demand','waypoint','actor','location','material','inventory')\n );\n\nand it's 228kB compressed, so rather than attaching it, I'm placing it \nhere: http://www.richardneill.org/tmp/pg_statistic.bz2\n\n\nLikewise, the much smaller (16kB) output from:\n\nSELECT * from pg_class where relname IN\n ('demand','waypoint','actor','location','material','inventory');\n\nSELECT * from pg_attribute ;\n\nis at: http://www.richardneill.org/tmp/pg_attribute_pg_class.bz2\n\n\n\nP.S. Would it be easier for you if I set up SSH access to a spare \nmachine, with a copy of the database?\n\n\nThanks very much for your help,\n\nRichard\n", "msg_date": "Thu, 03 Dec 2009 01:31:01 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Order by (for 15 rows) adds 30 seconds to query\t\t time" } ]
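A minimal way to see the plan switch discussed in this thread, using the table and filter value from Richard's query (h.inventory, demand_id = 289276563). Only the ORDER BY expression differs between the two statements; the second applies the workaround of ordering by an expression, which keeps the planner from walking the primary-key index and forces the selective index scan followed by a small explicit sort:

    -- Plan that may walk inventory_pkey and filter row by row:
    EXPLAIN ANALYZE
    SELECT * FROM h.inventory
    WHERE demand_id = 289276563
    ORDER BY id;

    -- Workaround: an expression in ORDER BY cannot match the index,
    -- so the demand_id index is used first and the 15 rows are sorted afterwards:
    EXPLAIN ANALYZE
    SELECT * FROM h.inventory
    WHERE demand_id = 289276563
    ORDER BY id + 0;

This is only a stopgap; as Kevin points out, the underlying problem is the row-count estimate.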
[ { "msg_contents": "Hello Everyone,\n\n \n\nI have a very bit big database around 15 million in size, and the dump\nfile is around 12 GB.\n\nWhile importing this dump in to database I have noticed that initially\nquery response time is very slow but it does improves with time.\n\nAny suggestions to improve performance after dump in imported in to\ndatabase will be highly appreciated!\n\n \n\n \n\n \n\nRegards,\n\nAshish\n\n\n\n\n\n\n\n\n\n\n\nHello Everyone,\n \nI have a very bit big database around 15 million in size,\nand the dump file is around 12 GB.\nWhile importing this dump in to database I have noticed that\ninitially query response time is very slow but it does improves with time.\nAny suggestions to improve performance after dump in\nimported in to database will be highly appreciated!\n \n \n \nRegards,\nAshish", "msg_date": "Thu, 3 Dec 2009 05:01:46 +0530", "msg_from": "\"Ashish Kumar Singh\" <[email protected]>", "msg_from_op": true, "msg_subject": "performance while importing a very large data set in to database" }, { "msg_contents": "Ashish Kumar Singh escribi�:\n>\n> Hello Everyone,\n>\n> \n>\n> I have a very bit big database around 15 million in size, and the dump \n> file is around 12 GB.\n>\n> While importing this dump in to database I have noticed that initially \n> query response time is very slow but it does improves with time.\n>\n> Any suggestions to improve performance after dump in imported in to \n> database will be highly appreciated!\n>\n> \n>\n> \n>\n> \n>\n> Regards,\n>\n> Ashish\n>\nMy suggestion is:\n1- Afterward of the db restore, you can do a vacuum analyze manually on \nyour big tables to erase all dead rows\n2- Then you can reindex your big tables on any case that you use it.\n3- Then apply A CLUSTER command on the right tables that have these indexes.\n\nRegards\n\n\n-- \n-------------------------------------\n\"TIP 4: No hagas 'kill -9' a postmaster\"\nIng. 
Marcos Lu�s Ort�z Valmaseda\nPostgreSQL System DBA \nCentro de Tecnolog�as de Almacenamiento y An�lis de Datos (CENTALAD)\nUniversidad de las Ciencias Inform�ticas\n\nLinux User # 418229\nhttp://www.postgresql-es.org\nhttp://www.postgresql.org\nhttp://www.planetpostgresql.org\n\n\n\n", "msg_date": "Sat, 05 Dec 2009 15:16:38 +0100", "msg_from": "=?ISO-8859-1?Q?=22Ing_=2E_Marcos_Lu=EDs_Ort=EDz_Valmaseda?=\n\t=?ISO-8859-1?Q?=22?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance while importing a very large data set in to database" }, { "msg_contents": "On 12/02/2009 11:31 PM, Ashish Kumar Singh wrote:\n> While importing this dump in to database I have noticed that initially\n> query response time is very slow but it does improves with time.\n>\n> Any suggestions to improve performance after dump in imported in to\n> database will be highly appreciated!\n\nAnalyse your tables?\n-J\n", "msg_date": "Sat, 05 Dec 2009 19:25:12 +0000", "msg_from": "Jeremy Harris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance while importing a very large data set in to database" }, { "msg_contents": "On Wed, Dec 2, 2009 at 4:31 PM, Ashish Kumar Singh\n<[email protected]> wrote:\n> Hello Everyone,\n>\n> I have a very bit big database around 15 million in size, and the dump file\n> is around 12 GB.\n>\n> While importing this dump in to database I have noticed that initially query\n> response time is very slow but it does improves with time.\n>\n> Any suggestions to improve performance after dump in imported in to database\n> will be highly appreciated!\n\nThis is pretty normal. When the db first starts up or right after a\nload it has nothing in its buffers or the kernel cache. As you access\nmore and more data the db and OS learned what is most commonly\naccessed and start holding onto those data and throw the less used\nstuff away to make room for it. Our production dbs run at a load\nfactor of about 4 to 6, but when first started and put in the loop\nthey'll hit 25 or 30 and have slow queries for a minute or so.\n\nHaving a fast IO subsystem will help offset some of this, and\nsometimes \"select * from bigtable\" might too.\n", "msg_date": "Sat, 5 Dec 2009 13:42:17 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance while importing a very large data set in to database" }, { "msg_contents": "On Sat, Dec 5, 2009 at 7:16 AM, \"Ing . Marcos Luís Ortíz Valmaseda\"\n<[email protected]> wrote:\n> Ashish Kumar Singh escribió:\n>>\n>> Hello Everyone,\n>>\n>>\n>> I have a very bit big database around 15 million in size, and the dump\n>> file is around 12 GB.\n>>\n>> While importing this dump in to database I have noticed that initially\n>> query response time is very slow but it does improves with time.\n>>\n>> Any suggestions to improve performance after dump in imported in to\n>> database will be highly appreciated!\n>>\n>>\n>>\n>>\n>> Regards,\n>>\n>> Ashish\n>>\n> My suggestion is:\n> 1- Afterward of the db restore, you can do a vacuum analyze manually on your\n> big tables to erase all dead rows\n\nWell, there should be no dead rows, it's a fresh restore, so just\nplain analyze would be enough. Note that autovacuum will kick in\neventually and do this for you.\n\n> 2- Then you can reindex your big tables on any case that you use it.\n\nAgain, a freshly loaded db does not need to be reindexed. 
The indexes\nare fresh and new and clean.\n\n> 3- Then apply A CLUSTER command on the right tables that have these indexes.\n\nNow that's useful, but if you're gonna cluster, do it FIRST, then\nanalyze the tables.\n", "msg_date": "Sat, 5 Dec 2009 16:09:54 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance while importing a very large data set in to database" }, { "msg_contents": "\n>> I have a very bit big database around 15 million in size, and the dump \n>> file\n>> is around 12 GB.\n>>\n>> While importing this dump in to database I have noticed that initially \n>> query\n>> response time is very slow but it does improves with time.\n>>\n>> Any suggestions to improve performance after dump in imported in to \n>> database\n>> will be highly appreciated!\n>\n> This is pretty normal. When the db first starts up or right after a\n> load it has nothing in its buffers or the kernel cache. As you access\n> more and more data the db and OS learned what is most commonly\n> accessed and start holding onto those data and throw the less used\n> stuff away to make room for it. Our production dbs run at a load\n> factor of about 4 to 6, but when first started and put in the loop\n> they'll hit 25 or 30 and have slow queries for a minute or so.\n>\n> Having a fast IO subsystem will help offset some of this, and\n> sometimes \"select * from bigtable\" might too.\n\n\nMaybe it's the updating of the the hint bits ?...\n\n", "msg_date": "Sun, 06 Dec 2009 15:14:04 +0100", "msg_from": "=?utf-8?Q?Pierre_Fr=C3=A9d=C3=A9ric_Caillau?= =?utf-8?Q?d?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance while importing a very large data set in to database" }, { "msg_contents": "Does postgres have the concept of \"pinning\" procs, functions, etc to \ncache.\n\nAs you mention, typically frequently used statements are cached \nimproving performance.\n\nIf waiting for the DBMS to do this is not an option then pinning \ncritical ones should improve performance immediately following start up.\n\nThis is an approach I have used with oracle to address this situation.\n\nKris\n\nOn 5-Dec-09, at 15:42, Scott Marlowe <[email protected]> wrote:\n\n> On Wed, Dec 2, 2009 at 4:31 PM, Ashish Kumar Singh\n> <[email protected]> wrote:\n>> Hello Everyone,\n>>\n>> I have a very bit big database around 15 million in size, and the \n>> dump file\n>> is around 12 GB.\n>>\n>> While importing this dump in to database I have noticed that \n>> initially query\n>> response time is very slow but it does improves with time.\n>>\n>> Any suggestions to improve performance after dump in imported in to \n>> database\n>> will be highly appreciated!\n>\n> This is pretty normal. When the db first starts up or right after a\n> load it has nothing in its buffers or the kernel cache. As you access\n> more and more data the db and OS learned what is most commonly\n> accessed and start holding onto those data and throw the less used\n> stuff away to make room for it. 
Our production dbs run at a load\n> factor of about 4 to 6, but when first started and put in the loop\n> they'll hit 25 or 30 and have slow queries for a minute or so.\n>\n> Having a fast IO subsystem will help offset some of this, and\n> sometimes \"select * from bigtable\" might too.\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 6 Dec 2009 09:15:59 -0500", "msg_from": "Kris Kewley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance while importing a very large data set in to database" }, { "msg_contents": "Kris Kewley wrote:\n> Does postgres have the concept of \"pinning\" procs, functions, etc to \n> cache.\n>\nNo. Everything that's in PostgreSQL's cache gets a usage count attached \nto is. When the buffer is used by something else, that count gets \nincremented. And when new buffers need to be allocated, the process \nthat searches for them decrements usage counts until it find one with a \ncount of 0 that's then evicted. There's no way to pin things using this \nscheme, the best you can do is try to access the data in advance and/or \nregularly enough that its usage count never drops too far.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Sun, 06 Dec 2009 10:15:46 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance while importing a very large data set in to database" } ]
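A sketch of the post-restore sequence Scott describes; "bigtable" and "bigtable_pkey" are placeholders for whichever table and index are hot in your workload. CLUSTER runs before ANALYZE because it rewrites the table in index order, and the final scan just pre-warms the caches:

    -- Rewrite the table in index order (CLUSTER ... USING is 8.3+ syntax):
    CLUSTER bigtable USING bigtable_pkey;

    -- Refresh planner statistics for the freshly loaded data:
    ANALYZE bigtable;

    -- Optionally warm the OS and shared-buffer caches for the hot table:
    SELECT count(*) FROM bigtable;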
[ { "msg_contents": "\nThe following bug has been logged online:\n\nBug reference: 5228\nLogged by: aftab\nEmail address: [email protected]\nPostgreSQL version: 8.3.8\nOperating system: Centos 5\nDescription: Execution of prepared query is slow when timestamp\nparameter is used\nDetails: \n\ne.g. \nprepare testplan (int, int) as \nSELECT *\nFROM position WHERE \nposition.POSITION_STATE_ID=$1 AND \nposition.TARGET_ID=$2\nAND position.TIME>='2009-10-30 13:43:32'\nORDER BY position.ID DESC ;\n\nEXPLAIN ANALYZE EXECUTE testplan(2,63)\n\n QUERY PLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------\n Sort (cost=166238.58..166370.97 rows=52956 width=297) (actual\ntime=28.618..28.619 rows=1 loops=1)\n Sort Key: id\n Sort Method: quicksort Memory: 25kB\n -> Bitmap Heap Scan on \"position\" (cost=6182.89..147236.51 rows=52956\nwidth=297) (actual time=28.518..28.521 rows=1 loops=1)\n Recheck Cond: (target_id = $2)\n Filter: ((\"time\" >= '2009-10-30 13:43:32'::timestamp without time\nzone) AND (position_state_id = $1))\n -> Bitmap Index Scan on position_target_fk (cost=0.00..6169.65\nrows=210652 width=0) (actual time=0.624..0.624 rows=1006 loops=1)\n Index Cond: (target_id = $2)\n Total runtime: 28.763 ms\n(9 rows)\n\nWhen I replace \"time\" filter with a parameter then the same query takes\nlonger \n\nprepare testplan (int, int, timestamp) as \nSELECT *\nFROM position WHERE \nposition.POSITION_STATE_ID=$1 AND \nposition.TARGET_ID=$2\nAND position.TIME>=$3\nORDER BY position.ID DESC ;\n\nEXPLAIN ANALYZE EXECUTE testplan(2,63,'2009-10-30 13:43:32');\n\n QUERY\nPLAN\n----------------------------------------------------------------------------\n------------------------------------------------------------------\n Sort (cost=154260.75..154348.53 rows=35111 width=297) (actual\ntime=2852.357..2852.358 rows=1 loops=1)\n Sort Key: id\n Sort Method: quicksort Memory: 25kB\n -> Index Scan using position_time on \"position\" (cost=0.00..146688.94\nrows=35111 width=297) (actual time=0.113..2852.338 rows=1 loops=1)\n Index Cond: (\"time\" >= $3)\n Filter: ((position_state_id = $1) AND (target_id = $2))\n Total runtime: 2852.439 ms\n(7 rows)\n", "msg_date": "Thu, 3 Dec 2009 08:25:32 GMT", "msg_from": "\"aftab\" <[email protected]>", "msg_from_op": true, "msg_subject": "BUG #5228: Execution of prepared query is slow when timestamp\n\tparameter is used" }, { "msg_contents": "aftab wrote:\n> The following bug has been logged online:\n> \n> Bug reference: 5228\n> Logged by: aftab\n> Email address: [email protected]\n> PostgreSQL version: 8.3.8\n> Operating system: Centos 5\n> Description: Execution of prepared query is slow when timestamp\n> parameter is used\n\nIt's far from clear that this is a bug. I've replied to the\npgsql-performance list to direct further discussion there.\n\nWhy this is happening: PostgreSQL has more information to use to plan a\nquery when it knows the actual values of parameters at planning time. It\ncan use statistics about the distribution of data in the table to make\nbetter choices about query plans.\n\nWhen you prepare a parameterized query, you're telling PostgreSQL to\nplan a query that'll work well for _any_ value in those parameters. 
It\ncan't make as much use of statistics about the column(s) involved.\n\nThere has been periodic discussion on the mailing list about having an\n'PREPARE NOCACHE' or 'EXECUTE REPLAN' command or something like that,\nwhere you can use a parameterized query, but query planning is done each\ntime the query is executed based on the actual values of the parameters.\nI don't know if this has come to anything or if anybody thinks it's even\na good idea.\n\n( If it's thought to be, perhaps a TODO entry would be warranted? It\ncertainly needs a FAQ entry or an article in [[Category:Performance]] ).\n\nAt present, the only way I'm aware of to force re-planning while still\nusing query parameters is to wrap your parameterized query up in a\nPL/PgSQL function that uses EXECUTE ... USING :\n\nhttp://www.postgresql.org/docs/8.4/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN\n\n... but PL/PgSQL has its own performance costs.\n\n\nOne thing that might help your query perform better without making any\nchanges to it is to give it more work_mem, which might let it use a\ndifferent sort or sort more efficiently. You can set work_mem per-user,\nper-connection, per-database or globally - see the PostgreSQL documentation.\n\n--\nCraig Ringer\n", "msg_date": "Thu, 03 Dec 2009 16:52:16 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] BUG #5228: Execution of prepared query is slow when\n\ttimestamp parameter is used" } ]
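A sketch of the PL/pgSQL workaround Craig points to, using the table and column names from the bug report; the function name and parameter names are made up for the example, and RETURN QUERY EXECUTE ... USING requires 8.4. Because the statement is executed as dynamic SQL, it is planned with the actual parameter values on each call instead of once with a generic plan:

    CREATE OR REPLACE FUNCTION positions_since(p_state int, p_target int, p_time timestamp)
    RETURNS SETOF "position" AS $$
    BEGIN
        -- Re-planned on every call with the supplied values:
        RETURN QUERY EXECUTE
            'SELECT * FROM "position"
              WHERE position_state_id = $1
                AND target_id = $2
                AND "time" >= $3
              ORDER BY id DESC'
        USING p_state, p_target, p_time;
    END;
    $$ LANGUAGE plpgsql;

As Craig notes, the PL/pgSQL wrapper has its own overhead, so this only pays off when the plan difference is as large as the one reported here.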
[ { "msg_contents": "----------------------------------------------\nTABLE STRUCTURE\n----------------------------------------------\n\nCREATE TABLE gbobjects\n(\n ssid bigint NOT NULL,\n nid character varying NOT NULL,\n inid bigint NOT NULL,\n uid bigint NOT NULL,\n status character varying,\n noofchanges integer NOT NULL,\n fieldschanged character varying[] NOT NULL,\n changetype bigint[] NOT NULL,\n noofcommits integer NOT NULL,\n noofchangesaftercommit integer NOT NULL,\n history bigint[] NOT NULL,\n gbtimestamp timestamp with time zone DEFAULT now(),\n rendered_nbh text,\n nbh text,\n CONSTRAINT gbobjects_pkey PRIMARY KEY (ssid)\n)\nWITH (OIDS=FALSE);\nALTER TABLE gbobjects OWNER TO postgres;\n\n\n-- Index: nid_object\n\nCREATE INDEX nid_object\n ON gbobjects\n USING btree\n (nid);\n\n\n-------------------------------------------------------\nusing EXPLAIN\n-------------------------------------------------------\n\nWe populated the table with data and used EXPLAIN\n\n\ndbpedia=# EXPLAIN SELECT nid,max(ssid) FROM gbobjects where ssid<=\n100000 group by nid ;\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------\n GroupAggregate (cost=20966.03..22944.49 rows=98923 width=27)\n -> Sort (cost=20966.03..21213.34 rows=98923 width=27)\n Sort Key: nid\n -> Index Scan using ssid_object on gbobjects (cost=0.00..10388.88\nrows=98923 width=27)\n Index Cond: (ssid <= 100000)\n\n\nTotal rows : *875459 *\n\n\n*The cost is very high. Is there a way to reduce the cost ?. We have kept\nthe\npostgresql configuration files as it is i.e. they are the default\nconfiguration\nfiles.* Can the cost be reduced by changing some parameters in\npostgresql.conf file. If yes which are those parameters ?\n\n*Operating system used : ubuntu-9.04\npostgresql version : 8.3\nRam : 2 GB\n*\n\nThank you in advance\nRajiv nair\n\n----------------------------------------------TABLE STRUCTURE----------------------------------------------CREATE TABLE gbobjects(  ssid bigint NOT NULL,  nid character varying NOT NULL,\n  inid bigint NOT NULL,  uid bigint NOT NULL,  status character varying,  noofchanges integer NOT NULL,  fieldschanged character varying[] NOT NULL,\n  changetype bigint[] NOT NULL,  noofcommits integer NOT NULL,  noofchangesaftercommit integer NOT NULL,  history bigint[] NOT NULL,  gbtimestamp timestamp with time zone DEFAULT now(),  rendered_nbh text,\n\n  nbh text,  CONSTRAINT gbobjects_pkey PRIMARY KEY (ssid))WITH (OIDS=FALSE);ALTER TABLE gbobjects OWNER TO postgres;-- Index: nid_objectCREATE INDEX nid_object\n  ON gbobjects  USING btree  (nid);-------------------------------------------------------using EXPLAIN -------------------------------------------------------We populated the table with data and used EXPLAIN\ndbpedia=# EXPLAIN   SELECT   nid,max(ssid) FROM gbobjects  where ssid<= 100000  group by nid  ;                                             QUERY PLAN                                            \n--------------------------------------------------------------------------------------------------\n GroupAggregate  (cost=20966.03..22944.49 rows=98923 width=27)   ->  Sort  (cost=20966.03..21213.34 rows=98923 width=27)         Sort Key: nid         ->  Index Scan using ssid_object on gbobjects  (cost=0.00..10388.88 rows=98923 width=27)\n\n               Index Cond: (ssid <= 100000)Total rows : 875459 The cost is very high. Is there a way to reduce the cost ?. We have kept thepostgresql configuration files as it is i.e. 
they are the default configuration\nfiles. Can the cost be reduced by changing some parameters inpostgresql.conf file. If yes which are those parameters ? Operating system used : ubuntu-9.04postgresql version : 8.3Ram : 2 GB \nThank you in advanceRajiv nair", "msg_date": "Fri, 4 Dec 2009 15:45:25 +0530", "msg_from": "nair rajiv <[email protected]>", "msg_from_op": true, "msg_subject": "query cost too high, anyway to reduce it" }, { "msg_contents": "On Fri, Dec 4, 2009 at 3:15 AM, nair rajiv <[email protected]> wrote:\n\n> We populated the table with data and used EXPLAIN\n>\n>\n> dbpedia=# EXPLAIN   SELECT   nid,max(ssid) FROM gbobjects  where ssid<=\n> 100000  group by nid  ;\n>\n>               QUERY PLAN\n> --------------------------------------------------------------------------------------------------\n>  GroupAggregate  (cost=20966.03..22944.49 rows=98923 width=27)\n>    ->  Sort  (cost=20966.03..21213.34 rows=98923 width=27)\n>          Sort Key: nid\n>          ->  Index Scan using ssid_object on gbobjects  (cost=0.00..10388.88\n> rows=98923 width=27)\n>                Index Cond: (ssid <= 100000)\n>\n>\n> Total rows : 875459\n>\n>\n> The cost is very high.\n\nCompared to what?\n\n> Is there a way to reduce the cost ?. We have kept the\n> postgresql configuration files as it is i.e. they are the default\n> configuration\n> files.\n> Can the cost be reduced by changing some parameters in\n> postgresql.conf file. If yes which are those parameters ?\n\nSure you can change the numbers for random_page_cost and\nsequential_page_cost, but the query isn't gonna run faster. You're\nretrieving 875k rows, that's never gonna be cheap.\n\nBetter is to run explain analyze and look at the times you're getting\nfor each step in the query plan.\n", "msg_date": "Sat, 5 Dec 2009 13:45:07 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query cost too high, anyway to reduce it" } ]
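Following Scott's suggestion: the cost figures are in arbitrary planner units, so the useful comparison is between the estimated and actual row counts and timings that EXPLAIN ANALYZE reports. The query is the one from the original post; the work_mem value is only a guess at what might help if the sort or aggregate step spills to disk:

    -- More per-session sort memory may help if the plan shows an external sort;
    -- re-run the EXPLAIN ANALYZE afterwards to compare:
    SET work_mem = '64MB';

    -- Compare the planner's "rows=" estimates with the actual counts and timings:
    EXPLAIN ANALYZE
    SELECT nid, max(ssid)
    FROM gbobjects
    WHERE ssid <= 100000
    GROUP BY nid;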
[ { "msg_contents": "Hi All,\n\n\nMaybe some questions are quite newbie ones, and I did try hard to scan \nall the articles and documentation, but I did not find a satisfying \nanswer.\n\nI'm running PostgreSQL 8.3.6 on a 32-Bit Centos 4 machine (which I \nprobably should update to 64 Bit soon)\n\n\nI have some tables which tend to get huge (and will for sure hit the \nwall of my storage system soon, total DB ~700 GB now):\n\nSELECT relfilenode, relpages,reltuples,relname FROM pg_class WHERE \nrelpages > 10000 ORDER BY relpages DESC;\n relfilenode | relpages | reltuples | relname\n-------------+----------+-------------+---------------------------------\n-\n 72693 | 51308246 | 4.46436e+09 | result_orig\n 72711 | 17871658 | 6.15227e+06 | test\n 73113 | 12240806 | 4.46436e+09 | result_orig_test_id\n 73112 | 12240806 | 4.46436e+09 | result_orig_prt_id\n 72717 | 118408 | 6.15241e+06 | test_orig\n 72775 | 26489 | 6.15241e+06 | test_orig_lt_id\n 72755 | 19865 | 6.15241e+06 | test_orig_test_id_key\n 73147 | 16872 | 6.15227e+06 | test_test_id\n 73146 | 16872 | 6.15227e+06 | test_lt_id\n\n\nI'm going to work on the table size of the largest table (result_orig) \nitself by eliminating columns, stuffing n Booleans into bit(n)'s, \nreplacing double precision by reals, etc.. By this I should be able to \nreduce the storage per row to ~1/3 of the bytes currently used.\n\nI have the same information stored in an Oracle 10g DB which consumes \nonly 70G data and 2G for indexes. The schema may be better optimized, \nbut for sure there is a table with 4 billion rows inside as well. So \nit's about 10x smaller in disk space than PgSQL. I wonder why.\n\nBut still:\n\n### My Issue No. 1: Index Size\nWhat really worries me is the size of the two largest indexes \n(result_orig_test_id, result_orig_prt_id) I'm using. Both are roughly \n1/3 of the result_orig table size and each index only b-tree indexes a \nsingle bigint column (prt_id, test_id) of result_orig. Roughly every \ngroup of 100 rows of result_orig have the same prt_id, roughly every \ngroup of 1000-10000 rows have the same test_id. Each of these two cols \nis a Foreign Key (ON DELETE CASCADE).\n\nSo my fear is now, even if I can reduce the amount of data per row in \nresult_orig, my indexes will remain as large as before and then dominate \ndisk usage.\n\nIs such disk usage for indexes expected? What can I do to optimize? I \ncould not run yet a VACUUM on result_orig, as I hit into max_fsm_pages \nlimit (still trying to adjust that one). I tried REINDEX, it didn't \nchange anything.\n\n\n### My Issue No. 2: relpages and VACUUM\nI have another table \"test\" which is - as starting point - created by \nINSERTs and then UPDATE'd. It has the same columns and roughly the same \nnumber of rows as table test_orig, but consumes 160 times the number of \npages. I tried VACUUM on this table but it did not change anything on \nits relpages count. Maybe this is just because VACUUM without FULL does \nnot re-claim disk space, i.e. relpages stays as it is? I did observe \nthat after VACUUM, a REINDEX on this table did considerably shrink down \nthe size of its indexes (test_test_id, test_lt_id). \n\n\n### My Issue No 3: VACCUM FULL out of memory\nI tried to do a VACCUM FULL on the two tables (test, result_orig) \nmentioned above. 
In both cases it fails with a very low number on out of \nmemory like this:\n\nERROR: out of memory\nDETAIL: Failed on request of size 224.\n\nI use these kernel settings:\nkernel.shmmni = 4096\nkernel.shmall = 2097152\nkernel.shmmax = 2147483648\nvm.overcommit_memory = 2\n\nAnd these postgresql.conf settings:\nshared_buffers = 512MB # min 128kB or \nmax_connections*16kB\ntemp_buffers = 128MB # min 800kB\nmax_prepared_transactions = 1024 # can be 0 or more\nwork_mem = 16MB # min 64kB\nmaintenance_work_mem = 256MB # min 1MB\nmax_stack_depth = 8MB # min 100kB\nmax_fsm_pages = 70000000 # min max_fsm_relations*16, 6 \nbytes each\nmax_fsm_relations = 4194304 # min 100, ~70 bytes each\n#max_files_per_process = 1000 # min 25\n#shared_preload_libraries = '' # (change requires restart)\n\nWhat's going wrong here? I know, one should not use VACUUM FULL, but I \nwas curious to see if this would have any impact on relpages count \nmentioned in Issue 2. \n\n\n###My Issue No. 4: Autovacuum\nI have the feeling that Autovacuum is not really running, else why are \ntables and indexes growing that much, especially \"test\" table?\n \n#-----------------------------------------------------------------------\n-------\n# AUTOVACUUM PARAMETERS\n#-----------------------------------------------------------------------\n-------\n\nautovacuum = on # Enable autovacuum subprocess? \n'on'\nlog_autovacuum_min_duration = 1000 # -1 disables, 0 logs all \nactions and\nautovacuum_max_workers = 3 # max number of autovacuum \nsubprocesses\nautovacuum_naptime = 1min # time between autovacuum runs\nautovacuum_vacuum_threshold = 50 # min number of row updates \nbefore\nautovacuum_analyze_threshold = 50 # min number of row updates \nbefore\nautovacuum_vacuum_scale_factor = 0.2 # fraction of table size before \nvacuum\nautovacuum_analyze_scale_factor = 0.1 # fraction of table size before \nanalyze\nautovacuum_freeze_max_age = 200000000 # maximum XID age before forced \nvacuum\nautovacuum_vacuum_cost_delay = 20 # default vacuum cost delay for\nautovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for\n\nHow would I check it is running correctly? I don't see any error \nmessages in syslog from autovacuum.\n\n\n\nAny help, also on tuning postgresql.conf to this application, is greatly \nappreciated!\n\nThanks\n\nAndy\n\n\n\n", "msg_date": "Sat, 5 Dec 2009 00:03:12 +0100", "msg_from": "Andreas Thiel <[email protected]>", "msg_from_op": true, "msg_subject": "Large DB, several tuning questions: Index sizes, VACUUM, REINDEX,\n\tAutovacuum" }, { "msg_contents": "Hi,\n\nOn Saturday 05 December 2009 00:03:12 Andreas Thiel wrote:\n> I'm running PostgreSQL 8.3.6 on a 32-Bit Centos 4 machine (which I\n> probably should update to 64 Bit soon)\nHow much memory?\n\n\n> I'm going to work on the table size of the largest table (result_orig)\n> itself by eliminating columns, stuffing n Booleans into bit(n)'s,\n> replacing double precision by reals, etc.. By this I should be able to\n> reduce the storage per row to ~1/3 of the bytes currently used.\nThat sounds rather ambitous - did you factor in the per row overhead?\n\n> I have the same information stored in an Oracle 10g DB which consumes\n> only 70G data and 2G for indexes. The schema may be better optimized,\n> but for sure there is a table with 4 billion rows inside as well. So\n> it's about 10x smaller in disk space than PgSQL. I wonder why.\nThats hard to say without seeing the table definition for both. 
Could you post \nit?\n\n2GB for indexes sounds rather small - those are btrees?\n\nIt might also be interesting to look into the freespacemap to see how much \nempty space there is - there is a contrib module pg_freespacemap for that.\n\nYou can also check how much dead tuples a 'ANALYZE VERBOSE tablename' sees.\n\n> Is such disk usage for indexes expected? What can I do to optimize? I\n> could not run yet a VACUUM on result_orig, as I hit into max_fsm_pages\n> limit (still trying to adjust that one). I tried REINDEX, it didn't\n> change anything.\nSo its quite possible that your relations are heavily bloated - altough if you \nreindex that shouldnt matter that much.\n\nBtw, have you possibly left over some old prepared transactions or an idle in \ntransaction connection? Both can lead to sever bloat.\nFor the former you can check the system table pg_prepared_xact for the latter \npg_stat_activity.\n\n> ### My Issue No. 2: relpages and VACUUM\n> I have another table \"test\" which is - as starting point - created by\n> INSERTs and then UPDATE'd. It has the same columns and roughly the same\n> number of rows as table test_orig, but consumes 160 times the number of\n> pages. I tried VACUUM on this table but it did not change anything on\n> its relpages count. Maybe this is just because VACUUM without FULL does\n> not re-claim disk space, i.e. relpages stays as it is? I did observe\n> that after VACUUM, a REINDEX on this table did considerably shrink down\n> the size of its indexes (test_test_id, test_lt_id).\nA normal VACUUM does not move tuples around - it only marks space as free so \nit can later be filled. \n\n(If the free space is trailing it tries to free it if there are no locks \npreventing it).\n\n> ### My Issue No 3: VACCUM FULL out of memory\n> I tried to do a VACCUM FULL on the two tables (test, result_orig)\n> mentioned above. In both cases it fails with a very low number on out of\n> memory like this:\n> \n> ERROR: out of memory\n> DETAIL: Failed on request of size 224.\nWell, thats the number of memory its trying to allocate, not the amount it has \nallocated. Normally the postmaster should output some sort of memory map when \nthat happens. Did you get anything like that?\n\n> I use these kernel settings:\n> kernel.shmmni = 4096\n> kernel.shmall = 2097152\n> kernel.shmmax = 2147483648\n> vm.overcommit_memory = 2\n\n> max_stack_depth = 8MB # min 100kB\nThat sounds a bit too high if you count in that libc and consorts may use some \nstack space as well - although that should be unrelated to the current issue.\n\n> max_fsm_pages = 70000000 # min max_fsm_relations*16, 6\n> bytes each\nAs a very rough guide you can start with the sum of relpages in pg_class for \nthat one.\n\n> max_fsm_relations = 4194304 # min 100, ~70 bytes each\nThat seems kinda high. Do you have multiple millions of relations? It might be \nrelated to the oom situation during vacuum full, although it seems rather \nunlikely.\n\n> ###My Issue No. 4: Autovacuum\n> I have the feeling that Autovacuum is not really running, else why are\n> tables and indexes growing that much, especially \"test\" table?\nYou should see notes about autovacuum in the locks. With an \nautovacuum_vacuum_scale_factor of 0.2 you need \n0.002 times the size of a table in changed tuples before autovacuum starts. \nFor a billion thats quite a bit. I found that this setting often is too high.\n\n> How would I check it is running correctly? 
I don't see any error\n> messages in syslog from autovacuum.\nYou should see messages about it starting in the syslog.\n\n\nAndres\n", "msg_date": "Sat, 5 Dec 2009 21:00:00 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large DB, several tuning questions: Index sizes, VACUUM, REINDEX,\n\tAutovacuum" }, { "msg_contents": "On 5/12/2009 7:03 AM, Andreas Thiel wrote:\n> Hi All,\n>\n>\n> Maybe some questions are quite newbie ones, and I did try hard to scan\n> all the articles and documentation, but I did not find a satisfying\n> answer.\n\n> ### My Issue No. 1: Index Size\n> Is such disk usage for indexes expected? What can I do to optimize? I\n> could not run yet a VACUUM on result_orig, as I hit into max_fsm_pages\n> limit\n\nYou'll like 8.4 then, as you no longer have to play with max_fsm_pages.\n\nThe fact that you're hitting max_fsm_pages suggests that you are \nprobably going to be encountering table bloat.\n\nOf course, to get to 8.4 you're going to have to go through a dump and \nreload of doom...\n\n> ### My Issue No. 2: relpages and VACUUM\n> I have another table \"test\" which is - as starting point - created by\n> INSERTs and then UPDATE'd. It has the same columns and roughly the same\n> number of rows as table test_orig, but consumes 160 times the number of\n> pages. I tried VACUUM on this table but it did not change anything on\n> its relpages count. Maybe this is just because VACUUM without FULL does\n> not re-claim disk space, i.e. relpages stays as it is? I did observe\n> that after VACUUM, a REINDEX on this table did considerably shrink down\n> the size of its indexes (test_test_id, test_lt_id).\n\nCLUSTER is often convenient for re-writing a highly bloated table. \nYou'll need enough free disk space to hold the real rows from the table \ntwice, plus the dead space once, while CLUSTER runs.\n\n--\nCraig Ringer\n", "msg_date": "Sun, 06 Dec 2009 09:15:39 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large DB, several tuning questions: Index sizes, VACUUM,\n\tREINDEX, Autovacuum" }, { "msg_contents": "Craig Ringer wrote:\n>> ### My Issue No. 1: Index Size\n>> Is such disk usage for indexes expected? What can I do to optimize? I\n>> could not run yet a VACUUM on result_orig, as I hit into max_fsm_pages\n>> limit\n>\n> You'll like 8.4 then, as you no longer have to play with max_fsm_pages.\n> The fact that you're hitting max_fsm_pages suggests that you are \n> probably going to be encountering table bloat.\n> Of course, to get to 8.4 you're going to have to go through a dump and \n> reload of doom...\nYeah, increasing max_fsm_pages and seeing what VACUUM VERBOSE tells you \nafterwards is job #1, as all of the information you're getting now is \nuseless if VACUUM is stalled out on a giant task. It should be possible \nto migrate from 8.3 to 8.4 using pg_migrator rather than doing a dump \nand reload. I would recommend considering that as soon as \npossible--your options are either to learn a lot about better VACUUM \npractice and being diligent to make sure you never exceed it in the \nfuture, or to switch to 8.4 and it will take care of itself.\n\nYou also need to be careful not to let the system run completely out of \ndisk space before doing something about this, because CLUSTER (the only \nuseful way to clean up after a VACUUM mistake of the magnitude you're \nfacing now) requires making a second copy of the live data in the table \nas its method to clean things up. 
That option goes away once you're \nreally low on disk space, and if you get backed into that corner by that \nyou'll really be stuck.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Sat, 05 Dec 2009 20:33:47 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large DB, several tuning questions: Index sizes, VACUUM,\n\tREINDEX, Autovacuum" } ]
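The checks suggested in the replies above, spelled out. The system views are standard; the table and index names are taken from Andreas's listing, and the pg_stat_activity column names are the 8.3 ones (later releases renamed procpid and current_query to pid and query):

    -- Leftover prepared transactions that pin old row versions:
    SELECT * FROM pg_prepared_xacts;

    -- Sessions sitting "<IDLE> in transaction", which also block cleanup:
    SELECT procpid, usename, xact_start, current_query
    FROM pg_stat_activity
    WHERE current_query = '<IDLE> in transaction';

    -- Per-table dead-row report once max_fsm_pages is large enough:
    VACUUM VERBOSE test;

    -- Rewrite a badly bloated table in index order (needs free disk space
    -- roughly equal to twice the live data while it runs):
    CLUSTER test USING test_test_id;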
[ { "msg_contents": "Hi All,\n\nI want to identify the bottleneck queries inside the procedure. I want to know which of the queries are taking the time. How can I measure time taken to execute the individual Query inside a Procedure ?\n\nThank You\nNiraj Patel\n\n\n________________________________\nFrom: \"[email protected]\" <[email protected]>\nTo: Niraj Patel <[email protected]>\nSent: Sat, 5 December, 2009 6:33:10 AM\nSubject: Welcome to the pgsql-performance list!\n\nWelcome to the pgsql-performance mailing list!\nYour password at postgresql.org is\n\nniIGwr\n\nTo leave this mailing list, send the following command in the body\nof a message to [email protected]:\n\napprove niIGwr unsubscribe pgsql-performance [email protected]\n\nThis command will work even if your address changes. For that reason,\namong others, it is important that you keep a copy of this message.\n\nTo post a message to the mailing list, send it to\n [email protected]\n\nIf you need help or have questions about the mailing list, please\ncontact the people who manage the list by sending a message to\n [email protected]\n\nYou can manage your subscription by visiting the following WWW location:\n <http://mail.postgresql.org/mj/mj_wwwusr/domain=postgresql.org/npatel%40gridsolv.com>\nHi All,I want to identify the bottleneck queries inside the procedure. I want to know which of the queries are taking the time. How can I measure time taken to execute the individual Query inside a Procedure ?Thank YouNiraj PatelFrom:\n \"[email protected]\" <[email protected]>To: Niraj Patel <[email protected]>Sent: Sat, 5 December, 2009 6:33:10 AMSubject: Welcome to the pgsql-performance list!Welcome to the pgsql-performance mailing list!Your password at postgresql.org isniIGwrTo leave this mailing list, send the following command in the bodyof a message to [email protected]:approve niIGwr unsubscribe pgsql-performance [email protected] command will work even if your address changes.  For that reason,among others, it is important that you\n keep a copy of this message.To post a message to the mailing list, send it to  [email protected] you need help or have questions about the mailing list, pleasecontact the people who manage the list by sending a message to  [email protected] can manage your subscription by visiting the following WWW location:  <http://mail.postgresql.org/mj/mj_wwwusr/domain=postgresql.org/npatel%40gridsolv.com>", "msg_date": "Fri, 4 Dec 2009 17:18:28 -0800 (PST)", "msg_from": "niraj patel <[email protected]>", "msg_from_op": true, "msg_subject": "Time Profiling inside the procedure" } ]
[ { "msg_contents": "On Fri, Dec 4, 2009 at 4:03 PM, Andreas Thiel <[email protected]> wrote:\n> Hi All,\n>\n> Maybe some questions are quite newbie ones, and I did try hard to scan\n> all the articles and documentation, but I did not find a satisfying\n> answer.\n>\n> I'm running PostgreSQL 8.3.6 on a 32-Bit Centos 4 machine (which I\n> probably should update to 64 Bit soon)\n\nYeah, 64 bit is worth the migration.\n\n> I have the same information stored in an Oracle 10g DB which consumes\n> only 70G data and 2G for indexes. The schema may be better optimized,\n> but for sure there is a table with 4 billion rows inside as well. So\n> it's about 10x smaller in disk space than PgSQL. I wonder why.\n\nYou've probably got a bloated data store.\n\n> Is such disk usage for indexes expected? What can I do to optimize? I\n> could not run yet a VACUUM on result_orig, as I hit into max_fsm_pages\n> limit (still trying to adjust that one). I tried REINDEX, it didn't\n> change anything.\n\nOK, you've got a problem with max_fsm_pages not being big enough, so\npretty much anything you do vacuum wise is a wasted effort until you\nfix that.\n\n> ### My Issue No. 2: relpages and VACUUM\n> I have another table \"test\" which is - as starting point - created by\n> INSERTs and then UPDATE'd. It has the same columns and roughly the same\n> number of rows as table test_orig,  but consumes 160 times the number of\n> pages. I tried VACUUM on this table but it did not change anything on\n> its relpages count. Maybe this is just because VACUUM without FULL does\n> not re-claim disk space, i.e. relpages stays as it is? I did observe\n> that after VACUUM, a REINDEX on this table did considerably shrink down\n> the size of its indexes (test_test_id, test_lt_id).\n\nVacuum does NOT shrink tables. It reclaims free space to be reused.\nIf you have a large table that's 95% dead space regular vacuum can't\ndo anything for you. Note that not having a large enough free space\nmap is likely the cause of your problems.\n\n> ### My Issue No 3: VACCUM FULL out of memory\n> I tried to do a VACCUM FULL on the two tables (test, result_orig)\n> mentioned above. In both cases it fails with a very low number on out of\n> memory like this:\n>\n> ERROR:  out of memory\n> DETAIL:  Failed on request of size 224.\n\nIt's likely doing a LOT of other memory allocations before this one\nfails. 64 bit and more memory might help.\n\n> I use these kernel settings:\n> kernel.shmmni = 4096\n> kernel.shmall = 2097152\n> kernel.shmmax = 2147483648\n\nNone of those have anything to do with how much vacuum full can allocate really.\n\n> vm.overcommit_memory = 2\n\nThis will keep vacuum from allocating memory that may be available but\nis already \"spoken for\" so to speak. How much memory dos your machine\nhave?\n\n> And these postgresql.conf settings:\n> shared_buffers = 512MB                  # min 128kB or\n> max_connections*16kB\n> temp_buffers = 128MB                    # min 800kB\n> max_prepared_transactions = 1024        # can be 0 or more\n> work_mem = 16MB                         # min 64kB\n> maintenance_work_mem = 256MB            # min 1MB\n> max_stack_depth = 8MB                   # min 100kB\n> max_fsm_pages = 70000000                # min max_fsm_relations*16, 6\n\nWow, and you're still running out? Do you have autovacuum turned off\nor something?\n\n> What's going wrong here? 
I know, one should not use VACUUM FULL, but I\n> was curious to see if this would have any impact on relpages count\n> mentioned in Issue 2.\n\nVacuum full is perfectly cromulent, assuming you know the downsides\nand avoid using it for regular periodic maintenance. For instance, my\nmain production database had a drive go out a few days ago, and it\nheld a query up for several days, during which vacuum couldn't reclaim\nspace freed up during that time. I set a maintenance window last\nnight and ran vacuum full plus reindex on a couple of tables that had\ngotten particularly bloated.\n\n> ###My Issue No. 4: Autovacuum\n> I have the feeling that Autovacuum is not really running, else why are\n> tables and indexes growing that much, especially \"test\" table?\n\nCould it be a single long running query or transaction is keeping\nvacuum from reclaiming free space? check pg_stat_activity for long\nrunning queries or transactions.\n", "msg_date": "Sat, 5 Dec 2009 13:39:11 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large DB, several tuning questions: Index sizes,\n\tVACUUM, REINDEX, Autovacuum" } ]
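A sketch of the maintenance-window cleanup Scott describes; the table name is a placeholder. VACUUM FULL holds an exclusive lock for the duration, and on 8.x it tends to leave the indexes more bloated than before, which is why the REINDEX follows:

    -- One-off recovery of a badly bloated table during a maintenance window:
    VACUUM FULL VERBOSE some_bloated_table;

    -- Pre-9.0 VACUUM FULL can bloat the indexes, so rebuild them afterwards:
    REINDEX TABLE some_bloated_table;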
[ { "msg_contents": "Hi Andreas,\n\nCould you please properly quote the email? The way you did it is quite \nunreadable because you always have to guess who wrote what.\n\nOn Sunday 06 December 2009 17:06:39 Andreas Thiel wrote:\n> > I'm going to work on the table size of the largest table (result_orig)\n> > itself by eliminating columns, stuffing n Booleans into bit(n)'s,\n> > replacing double precision by reals, etc.. By this I should be able to\n> > reduce the storage per row to ~1/3 of the bytes currently used.\n> That sounds rather ambitous - did you factor in the per row overhead?\n> I did now create the new table, I have now 63 instead of 94 bytes/row on\n> average. So yes you're right I'm about to hit the bottom of the per row\n> overhead.\nHow did you calculate that? Did you factor in the alignment requirements? The \nddl would be helpfull...\n\n> Btw, have you possibly left over some old prepared transactions or an\n> idle in\n> transaction connection? Both can lead to sever bloat.\n> For the former you can check the system table pg_prepared_xact for the\n> latter\n> pg_stat_activity.\n> Seems no the case, pg_prepared_xact doesn't even exist.\nIts pg_prepared_xacts (note the s), sorry my mind played me.\n\n> Where would I find that postmaster output? In syslog? There's nothing\n> visible...\nDepends on your setup. I have not the slightest clue about centos. If \nnecessary start postmaster directly.\n\n> > max_fsm_relations = 4194304 # min 100, ~70 bytes each\nHave you corrected that value?\n\n\nAndres\n", "msg_date": "Sun, 6 Dec 2009 17:51:19 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large DB, several tuning questions: Index sizes, VACUUM, REINDEX,\n\tAutovacuum" } ]
[ { "msg_contents": "On Sunday 06 December 2009 19:20:17 Andreas Thiel wrote:\n> Hi Andres,\n> \n> Thanks a lot for your answers. As bottom line I think the answer is I\n> have to rethink my DB structure.\nCan't answer that one without knowing much more ;)\n\n> > Could you please properly quote the email? The way you did it is quite\n> > unreadable because you always have to guess who wrote what.\n> I try to, is it now getting better? My apologies, still trying to adopt\n> to using Office 07:-)\nBetter, yes.\n\n\n> Well, I know the data types of my columns sum up to 32 bytes right now\n> (was about 100 before). As I only see a reduction of relpages/reltuples\n> by 30% not by a factor 3, I assume that the row overhead kicks in. The\n> data definition of the new table looks like this:\n> bigint REFERENCES test_orig(test_id) ON DELETE CASCADE\n> bigint REFERENCES part_orig(prt_id) ON DELETE CASCADE\n> smallint\n> bit(16)\n> real\n> text (usually empty in most rows)\n> smallint\n> I did calculate 32 Bytes per row (if text is empty), but actually\n> relpages/reltuples is about ~63 bytes. This would result in a per row\n> overhead of 31 bytes. Would it change anything if I remove the 2 FOREIGN\n> KEY constraints?\nIf you remove those columns entirely, sure. If you remove only the constraint, \nno.\n\nThe row overhead in 8.3/8.4 is 28bytes afaik. You miss two points in your \ncalculation - one is alignment (i.e. a integer will only start at a 4byte \nboundary) and the other is that for text you need to store the length of the \ncolumn as well.\n\n> > Its pg_prepared_xacts (note the s), sorry my mind played me.\n> Nothing inside this table as well. (I did also - while trying to improve\n> postgresql.conf a few days ago - restart the server a couple of times, I\n> think that would have removed any hanging transactions or prepares,\n> shouldn't it?)\nNo, prepared transactions do not get removed by restarting. But thats fine \nthen.\n\n> > > > > max_fsm_relations = 4194304 # min 100, ~70 bytes\nfsm_relations is the max number of relations you want to store in the fsm - \ncurrently that means you could have 4 mio tables+indexes.\n\n> No, but it seems at least VACUUM is now running fine and no longer\n> complaining about too small number for max_fsm_pages. Do you think if I\n> reduce those two numbers, I'll have a better chance to run VACUUM FULL?\n> Currently max_fsm_pages is slightly larger than relpages of my largest\n> table. 
I read somewhere, max_fsm_pages should be about 1/2 of the total\n> number of relpages in a DB, maybe another way to say it should be larger\n> than the largest table...\nThe largest table does not really have any special influence on the fsm, so I \nwouldnt count that rule as very good.\nIts not that easy to calculate the size of the fsm correctly - thats why its \ngone in 8.4...\n\nI know of several instances running with a larger fsm_pages - you could try to \nreduce the fsm_relations setting - I dont know if there are problems lurking \nwith such a oversized value.\n\nI actually doubt that thats related to the oom youre seeing though - whats \nyour \"maintenance_work_mem\" setting and whats your \n/proc/sys/vm/overcommit_ratio and how much swap do you have?\n\nAndres\n", "msg_date": "Sun, 6 Dec 2009 20:09:59 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large DB, several tuning questions: Index sizes, VACUUM, REINDEX,\n\tAutovacuum" }, { "msg_contents": "On Sun, Dec 6, 2009 at 12:09 PM, Andres Freund <[email protected]> wrote:\n> I know of several instances running with a larger fsm_pages - you could try to\n> reduce the fsm_relations setting - I dont know if there are problems lurking\n> with such a oversized value.\n\nI run a db with 10M max_fsm_pages and 500k max_fam_relations. We use\nabout 4.5M pages and only 1200 or so relations. But we HAVE many more\nrelations than that, in the 40k range, so the higher number for max\nrelations is to make sure that if all those start getting updated we\ncan track them too.\n", "msg_date": "Sun, 6 Dec 2009 13:24:06 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large DB, several tuning questions: Index sizes,\n\tVACUUM, REINDEX, Autovacuum" } ]
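A rough way to apply the "sum of relpages" rule of thumb mentioned above when sizing max_fsm_pages on 8.3; the tail end of a database-wide VACUUM VERBOSE also reports the number of page slots and relations actually needed, which is the more reliable figure:

    -- Rough upper bound for max_fsm_pages (8.3 and earlier; the FSM is automatic in 8.4+):
    SELECT sum(relpages) AS total_relpages,
           count(*)      AS total_relations
    FROM pg_class;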
[ { "msg_contents": "Hello All,\n\nI'm in the process of loading a massive amount of data (500 GB). After \nsome initial timings, I'm looking at 260 hours to load the entire 500GB. \n10 days seems like an awfully long time so I'm searching for ways to \nspeed this up. The load is happening in the Amazon cloud (EC2), on a \nm1.large instance:\n-7.5 GB memory\n-4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each)\n-64-bit platform\n\n\nSo far I have modified my postgresql.conf file (PostgreSQL 8.1.3). The \nmodifications I have made are as follows:\n\nshared_buffers = 786432\nwork_mem = 10240\nmaintenance_work_mem = 6291456\nmax_fsm_pages = 3000000\nwal_buffers = 2048\ncheckpoint_segments = 200\ncheckpoint_timeout = 300\ncheckpoint_warning = 30\nautovacuum = off\n\n\nThere are a variety of instance types available in the Amazon cloud \n(http://aws.amazon.com/ec2/instance-types/), including high memory and \nhigh CPU. High memory instance types come with 34GB or 68GB of memory. \nHigh CPU instance types have a lot less memory (7GB max) but up to 8 \nvirtual cores. I am more than willing to change to any of the other \ninstance types.\n\nAlso, there is nothing else happening on the loading server. It is \ncompletely dedicated to the load.\n\nAny advice would be greatly appreciated.\n\nThanks,\n\nBen\n", "msg_date": "Mon, 07 Dec 2009 10:12:28 -0800", "msg_from": "Ben Brehmer <[email protected]>", "msg_from_op": true, "msg_subject": "Load experimentation" }, { "msg_contents": "Ben Brehmer <[email protected]> wrote:\n \n> -7.5 GB memory\n> -4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units\n> each)\n> -64-bit platform\n \nWhat OS?\n \n> (PostgreSQL 8.1.3)\n \nWhy use such an antiquated, buggy version? Newer versions are\nfaster.\n \n-Kevin\n", "msg_date": "Mon, 07 Dec 2009 12:33:13 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Load experimentation" }, { "msg_contents": "On Mon, Dec 7, 2009 at 1:12 PM, Ben Brehmer <[email protected]> wrote:\n\n> Hello All,\n>\n> I'm in the process of loading a massive amount of data (500 GB). After some\n> initial timings, I'm looking at 260 hours to load the entire 500GB. 10 days\n> seems like an awfully long time so I'm searching for ways to speed this up.\n> The load is happening in the Amazon cloud (EC2), on a m1.large instance:\n> -7.5 GB memory\n> -4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each)\n> -64-bit platform\n>\n>\n> So far I have modified my postgresql.conf file (PostgreSQL 8.1.3). The\n> modifications I have made are as follows:\n>\n\n Can you go with PG 8.4? That's a start :-)\n\n>\n> shared_buffers = 786432\n> work_mem = 10240\n> maintenance_work_mem = 6291456\n> max_fsm_pages = 3000000\n> wal_buffers = 2048\n> checkpoint_segments = 200\n> checkpoint_timeout = 300\n> checkpoint_warning = 30\n> autovacuum = off\n>\n\n I'd set fsync=off for the load, I'd also make sure that you're using the\nCOPY command (on the server side) to do the load.\n\nOn Mon, Dec 7, 2009 at 1:12 PM, Ben Brehmer <[email protected]> wrote:\nHello All,\n\nI'm in the process of loading a massive amount of data (500 GB). After some initial timings, I'm looking at 260 hours to load the entire 500GB. 10 days seems like an awfully long time so I'm searching for ways to speed this up. 
The load is happening in the Amazon cloud (EC2), on a m1.large instance:\n\n-7.5 GB memory\n-4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each)\n-64-bit platform\n\n\nSo far I have modified my postgresql.conf  file (PostgreSQL 8.1.3). The modifications I have made are as follows:    Can you go with PG 8.4?  That's a start :-) \n\nshared_buffers = 786432\nwork_mem = 10240\nmaintenance_work_mem = 6291456\nmax_fsm_pages = 3000000\nwal_buffers = 2048\ncheckpoint_segments = 200\ncheckpoint_timeout = 300\ncheckpoint_warning = 30\nautovacuum = off   I'd set fsync=off for the load, I'd also make sure that you're using the COPY command (on the server side) to do the load.", "msg_date": "Mon, 7 Dec 2009 13:33:44 -0500", "msg_from": "Scott Mead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Load experimentation" }, { "msg_contents": "2009/12/7 Kevin Grittner <[email protected]>\n\n> Ben Brehmer <[email protected]> wrote:\n>\n> > -7.5 GB memory\n> > -4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units\n> > each)\n> > -64-bit platform\n>\n> What OS?\n>\n> > (PostgreSQL 8.1.3)\n>\n> Why use such an antiquated, buggy version? Newer versions are\n> faster.\n>\n> -Kevin\n>\n\n\nI'd agree with trying to use the latest version you can.\n\nHow are you loading this data? I'd make sure you haven't got any indices,\nprimary keys, triggers or constraints on your tables before you begin the\ninitial load, just add them after. Also use either the COPY command for\nloading, or prepared transactions. Individual insert commands will just\ntake way too long.\n\nRegards\n\nThom\n\n2009/12/7 Kevin Grittner <[email protected]>\nBen Brehmer <[email protected]> wrote:\n\n> -7.5 GB memory\n> -4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units\n>    each)\n> -64-bit platform\n\nWhat OS?\n\n> (PostgreSQL 8.1.3)\n\nWhy use such an antiquated, buggy version?  Newer versions are\nfaster.\n\n-Kevin\nI'd agree with trying to use the latest version you can.How are you loading this data?  I'd make sure you haven't got any indices, primary keys, triggers or constraints on your tables before you begin the initial load, just add them after.  Also use either the COPY command for loading, or prepared transactions.  Individual insert commands will just take way too long.\nRegardsThom", "msg_date": "Mon, 7 Dec 2009 18:39:29 +0000", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Load experimentation" }, { "msg_contents": "Kevin,\n\nThis is running on on x86_64-unknown-linux-gnu, compiled by GCC gcc \n(GCC) 4.1.2 20080704 (Red Hat 4.1.2-44)\n\nBen\n\nOn 07/12/2009 10:33 AM, Kevin Grittner wrote:\n> Ben Brehmer<[email protected]> wrote:\n>\n> \n>> -7.5 GB memory\n>> -4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units\n>> each)\n>> -64-bit platform\n>> \n>\n> What OS?\n>\n> \n>> (PostgreSQL 8.1.3)\n>> \n>\n> Why use such an antiquated, buggy version? Newer versions are\n> faster.\n>\n> -Kevin\n>\n> \n", "msg_date": "Mon, 07 Dec 2009 10:45:16 -0800", "msg_from": "Ben Brehmer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Load experimentation" }, { "msg_contents": "Ben Brehmer wrote:\n> Hello All,\n> \n> I'm in the process of loading a massive amount of data (500 GB). After \n> some initial timings, I'm looking at 260 hours to load the entire 500GB.\n\nYou don't say how you are loading the data, so there's not much to go on. But generally, there are two primary ways to speed things up:\n\n1. 
Group MANY inserts into a single transaction. If you're doing a row-at-a-time, it will be very slow. The \"sweet spot\" seems to be somewhere between 100 and 1000 inserts in a single transaction. Below 100, you're still slowing things down, above 1000, it probably won't make much difference.\n\n2. Use the COPY command. This requires you to format your data into the form that COPY uses. But it's VERY fast.\n\nCraig\n\n> 10 days seems like an awfully long time so I'm searching for ways to \n> speed this up. The load is happening in the Amazon cloud (EC2), on a \n> m1.large instance:\n> -7.5 GB memory\n> -4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each)\n> -64-bit platform\n> \n> \n> So far I have modified my postgresql.conf file (PostgreSQL 8.1.3). The \n> modifications I have made are as follows:\n> \n> shared_buffers = 786432\n> work_mem = 10240\n> maintenance_work_mem = 6291456\n> max_fsm_pages = 3000000\n> wal_buffers = 2048\n> checkpoint_segments = 200\n> checkpoint_timeout = 300\n> checkpoint_warning = 30\n> autovacuum = off\n> \n> \n> There are a variety of instance types available in the Amazon cloud \n> (http://aws.amazon.com/ec2/instance-types/), including high memory and \n> high CPU. High memory instance types come with 34GB or 68GB of memory. \n> High CPU instance types have a lot less memory (7GB max) but up to 8 \n> virtual cores. I am more than willing to change to any of the other \n> instance types.\n> \n> Also, there is nothing else happening on the loading server. It is \n> completely dedicated to the load.\n> \n> Any advice would be greatly appreciated.\n> \n> Thanks,\n> \n> Ben\n> \n\n", "msg_date": "Mon, 07 Dec 2009 10:50:17 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Load experimentation" }, { "msg_contents": "Thanks for the quick responses. I will respond to all questions in one \nemail:\n\nBy \"Loading data\" I am implying: \"psql -U postgres -d somedatabase -f \nsql_file.sql\". The sql_file.sql contains table creates and insert \nstatements. There are no indexes present nor created during the load.\n\nOS: x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20080704 \n(Red Hat 4.1.2-44)\n\nPostgreSQL: I will try upgrading to latest version.\n\nCOPY command: Unfortunately I'm stuck with INSERTS due to the nature \nthis data was generated (Hadoop/MapReduce).\n\nTransactions: Have started a second load process with chunks of 1000 \ninserts wrapped in a transaction. Its dropped the load time for 1000 \ninserts from 1 Hour to 7 minutes :)\n\nDisk Setup: Using a single disk Amazon image for the destination \n(database). Source is coming from an EBS volume. I didn't think there \nwere any disk options in Amazon?\n\n\nThanks!\n\nBen\n\n\n\n\n\nOn 07/12/2009 10:39 AM, Thom Brown wrote:\n> 2009/12/7 Kevin Grittner <[email protected] \n> <mailto:[email protected]>>\n>\n> Ben Brehmer <[email protected] <mailto:[email protected]>>\n> wrote:\n>\n> > -7.5 GB memory\n> > -4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units\n> > each)\n> > -64-bit platform\n>\n> What OS?\n>\n> > (PostgreSQL 8.1.3)\n>\n> Why use such an antiquated, buggy version? Newer versions are\n> faster.\n>\n> -Kevin\n>\n>\n>\n> I'd agree with trying to use the latest version you can.\n>\n> How are you loading this data? I'd make sure you haven't got any \n> indices, primary keys, triggers or constraints on your tables before \n> you begin the initial load, just add them after. 
Also use either the \n> COPY command for loading, or prepared transactions. Individual insert \n> commands will just take way too long.\n>\n> Regards\n>\n> Thom\n\n\n\n\n\n\nThanks for the quick responses. I will respond to all questions in one\nemail:\n\nBy \"Loading data\" I am implying: \"psql -U postgres -d somedatabase -f\nsql_file.sql\".  The sql_file.sql contains table creates and insert\nstatements. There are no indexes present nor created during the load. \n\nOS: x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20080704\n(Red Hat 4.1.2-44)\n\n\nPostgreSQL: I will try upgrading to latest version.\n\nCOPY command: Unfortunately I'm stuck with INSERTS due to the nature\nthis data was generated (Hadoop/MapReduce). \n\nTransactions: Have started a second load process with chunks of 1000\ninserts wrapped in a transaction. Its dropped the load time for 1000\ninserts from 1 Hour to 7 minutes :)\n\nDisk Setup: Using a single disk Amazon image for the destination\n(database). Source is coming from an EBS volume. I didn't think there\nwere any disk options in Amazon?\n\n\nThanks!\n\nBen\n\n\n\n\n\nOn 07/12/2009 10:39 AM, Thom Brown wrote:\n\n2009/12/7 Kevin Grittner <[email protected]>\n\nBen Brehmer <[email protected]> wrote:\n\n> -7.5 GB memory\n> -4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units\n>    each)\n> -64-bit platform\n\n\nWhat OS?\n\n> (PostgreSQL 8.1.3)\n\nWhy use such an antiquated, buggy version?  Newer versions are\nfaster.\n\n-Kevin\n\n\n\n\n\nI'd agree with trying to use the latest version you can.\n\n\nHow are you loading this data?  I'd make sure you haven't got\nany indices, primary keys, triggers or constraints on your tables\nbefore you begin the initial load, just add them after.  Also use\neither the COPY command for loading, or prepared transactions.\n Individual insert commands will just take way too long.\n\n\nRegards\n\n\nThom", "msg_date": "Mon, 07 Dec 2009 11:12:12 -0800", "msg_from": "Ben Brehmer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Load experimentation" }, { "msg_contents": "Ben Brehmer wrote:\n> Thanks for the quick responses. I will respond to all questions in one \n> email:\n> \n> By \"Loading data\" I am implying: \"psql -U postgres -d somedatabase -f \n> sql_file.sql\". The sql_file.sql contains table creates and insert \n> statements. There are no indexes present nor created during the load.\n\nAlthough transactions of over 1000 INSERT statements don't speed things up much, they don't hurt either, especially on a new system that nobody is using yet. Since you're loading from big SQL files using psql, just put a \"begin;\" at the top of the file and a \"commit;\" at the bottom. Unlike Oracle, Postgres even allows CREATE and such to be done inside a transaction.\n\nAnd BTW, don't forget to ANALYZE when you're all done.\n\nCraig\n\n> \n> OS: x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20080704 \n> (Red Hat 4.1.2-44)\n> \n> PostgreSQL: I will try upgrading to latest version.\n> \n> COPY command: Unfortunately I'm stuck with INSERTS due to the nature \n> this data was generated (Hadoop/MapReduce).\n> \n> Transactions: Have started a second load process with chunks of 1000 \n> inserts wrapped in a transaction. Its dropped the load time for 1000 \n> inserts from 1 Hour to 7 minutes :)\n> \n> Disk Setup: Using a single disk Amazon image for the destination \n> (database). Source is coming from an EBS volume. 
I didn't think there \n> were any disk options in Amazon?\n> \n> \n> Thanks!\n> \n> Ben\n> \n> \n> \n> \n> \n> On 07/12/2009 10:39 AM, Thom Brown wrote:\n>> 2009/12/7 Kevin Grittner <[email protected] \n>> <mailto:[email protected]>>\n>>\n>> Ben Brehmer <[email protected] <mailto:[email protected]>>\n>> wrote:\n>>\n>> > -7.5 GB memory\n>> > -4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units\n>> > each)\n>> > -64-bit platform\n>>\n>> What OS?\n>>\n>> > (PostgreSQL 8.1.3)\n>>\n>> Why use such an antiquated, buggy version? Newer versions are\n>> faster.\n>>\n>> -Kevin\n>>\n>>\n>>\n>> I'd agree with trying to use the latest version you can.\n>>\n>> How are you loading this data? I'd make sure you haven't got any \n>> indices, primary keys, triggers or constraints on your tables before \n>> you begin the initial load, just add them after. Also use either the \n>> COPY command for loading, or prepared transactions. Individual insert \n>> commands will just take way too long.\n>>\n>> Regards\n>>\n>> Thom\n\n", "msg_date": "Mon, 07 Dec 2009 11:21:22 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Load experimentation" }, { "msg_contents": "On Monday 07 December 2009, Ben Brehmer <[email protected]> wrote:\n> Disk Setup: Using a single disk Amazon image for the destination\n> (database). Source is coming from an EBS volume. I didn't think there\n> were any disk options in Amazon?\n\nI don't think any Amazon cloud service is particularly well suited to a \ndatabase. Combined virtualized hosts with terrible I/O, and it's actually \nhard to envision a worse place to run a database.\n\n-- \n\"No animals were harmed in the recording of this episode. We tried but that \ndamn monkey was just too fast.\"\n", "msg_date": "Mon, 7 Dec 2009 11:48:40 -0800", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Load experimentation" }, { "msg_contents": "Ben Brehmer wrote:\n> By \"Loading data\" I am implying: \"psql -U postgres -d somedatabase -f \n> sql_file.sql\". The sql_file.sql contains table creates and insert \n> statements. There are no indexes present nor created during the load.\n> COPY command: Unfortunately I'm stuck with INSERTS due to the nature \n> this data was generated (Hadoop/MapReduce).\nYour basic options here are to batch the INSERTs into bigger chunks, \nand/or to split your data file up so that it can be loaded by more than \none process at a time. There's some comments and links to more guidance \nhere at http://wiki.postgresql.org/wiki/Bulk_Loading_and_Restores\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n\n\n\n\n\n\nBen Brehmer wrote:\n\n\nBy \"Loading data\" I am implying: \"psql -U postgres -d somedatabase -f\nsql_file.sql\".  The sql_file.sql contains table creates and insert\nstatements. There are no indexes present nor created during the load. \nCOPY command: Unfortunately I'm stuck with INSERTS due to the nature\nthis data was generated (Hadoop/MapReduce). \n\nYour basic options here are to batch the INSERTs into bigger chunks,\nand/or to split your data file up so that it can be loaded by more than\none process at a time.  
There's some comments and links to more\nguidance here at\nhttp://wiki.postgresql.org/wiki/Bulk_Loading_and_Restores\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com", "msg_date": "Mon, 07 Dec 2009 15:59:29 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Load experimentation" }, { "msg_contents": "Thanks for all the responses. I have one more thought;\n\nSince my input data is split into about 200 files (3GB each), I could \npotentially spawn one load command for each file. What would be the \nmaximum number of input connections Postgres can handle without bogging \ndown? When I say 'input connection' I mean \"psql -U postgres -d dbname \n-f one_of_many_sql_files\".\n\nThanks,\nBen\n\n\n\nOn 07/12/2009 12:59 PM, Greg Smith wrote:\n> Ben Brehmer wrote:\n>> By \"Loading data\" I am implying: \"psql -U postgres -d somedatabase -f \n>> sql_file.sql\". The sql_file.sql contains table creates and insert \n>> statements. There are no indexes present nor created during the load.\n>> COPY command: Unfortunately I'm stuck with INSERTS due to the nature \n>> this data was generated (Hadoop/MapReduce).\n> Your basic options here are to batch the INSERTs into bigger chunks, \n> and/or to split your data file up so that it can be loaded by more \n> than one process at a time. There's some comments and links to more \n> guidance here at http://wiki.postgresql.org/wiki/Bulk_Loading_and_Restores\n>\n> -- \n> Greg Smith 2ndQuadrant Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.com\n> \n\n\n\n\n\n\nThanks for all the responses. I have one more thought; \n\nSince my input data is split into about 200 files (3GB each), I could\npotentially spawn one load command for each file. What would be the\nmaximum number of input connections Postgres can handle without bogging\ndown? When I say 'input connection' I mean \"psql -U postgres -d dbname\n-f one_of_many_sql_files\".\n\nThanks,\nBen\n\n\n\nOn 07/12/2009 12:59 PM, Greg Smith wrote:\n\n\nBen Brehmer wrote:\n \n\nBy \"Loading data\" I am implying: \"psql -U postgres -d somedatabase -f\nsql_file.sql\".  The sql_file.sql contains table creates and insert\nstatements. There are no indexes present nor created during the load. \nCOPY command: Unfortunately I'm stuck with INSERTS due to the nature\nthis data was generated (Hadoop/MapReduce). \n\nYour basic options here are to batch the INSERTs into bigger chunks,\nand/or to split your data file up so that it can be loaded by more than\none process at a time.  There's some comments and links to more\nguidance here at\n http://wiki.postgresql.org/wiki/Bulk_Loading_and_Restores\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com", "msg_date": "Mon, 07 Dec 2009 23:22:10 -0800", "msg_from": "Ben Brehmer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Load experimentation" }, { "msg_contents": "Ben Brehmer wrote:\n> Since my input data is split into about 200 files (3GB each), I could \n> potentially spawn one load command for each file. What would be the \n> maximum number of input connections Postgres can handle without \n> bogging down? \nYou can expect to easily get one loader process per real CPU going. \nBeyond that, it depends on how CPU intensive they all are and what the \nresulting I/O rate out of the combination is. 
You're probably going to \nrun out of CPU on a loading job long before you hit any of the other \nlimits in this area, and potentially you could run out of disk \nthroughput on a cloud system before that. PostgreSQL isn't going to bog \ndown on a connection basis until you've reached several hundred of them, \nyour loader will be lucky to hit 10 active processes before it grinds to \na halt on some physical resources unrelated to general database scaling.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n\n\n\n\n\n\nBen Brehmer wrote:\n\n\nSince my input data is split into about 200 files (3GB each), I could\npotentially spawn one load command for each file. What would be the\nmaximum number of input connections Postgres can handle without bogging\ndown? \nYou can expect to easily get one loader process per real CPU going. \nBeyond that, it depends on how CPU intensive they all are and what the\nresulting I/O rate out of the combination is.  You're probably going to\nrun out of CPU on a loading job long before you hit any of the other\nlimits in this area, and potentially you could run out of disk\nthroughput on a cloud system before that.  PostgreSQL isn't going to\nbog down on a connection basis until you've reached several hundred of\nthem, your loader will be lucky to hit 10 active processes before it\ngrinds to a halt on some physical resources unrelated to general\ndatabase scaling.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com", "msg_date": "Tue, 08 Dec 2009 02:35:12 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Load experimentation" }, { "msg_contents": "On Tue, Dec 8, 2009 at 12:22 AM, Ben Brehmer <[email protected]> wrote:\n> Thanks for all the responses. I have one more thought;\n>\n> Since my input data is split into about 200 files (3GB each), I could\n> potentially spawn one load command for each file. What would be the maximum\n> number of input connections Postgres can handle without bogging down? When I\n> say 'input connection' I mean \"psql -U postgres -d dbname -f\n> one_of_many_sql_files\".\n\nThis is VERY dependent on your IO capacity and number of cores. My\nexperience is that unless you're running on a decent number of disks,\nyou'll run out of IO capacity first in most machines. n pairs of\nmirrors in a RAID-10 can handle x input threads where x has some near\nlinear relation to n. Have 100 disks in a RAID-10 array? You can\nsurely handle dozens of load threads with no IO wait. Have 4 disks in\na RAID-10? Maybe two to four load threads will max you out. Once\nyou're IO bound, adding more threads and more CPUs won't help, it'll\nhurt. The only way to really know is to benchmark it, but i'd guess\nthat about half as many import threads as mirror pairs in a RAID-10\n(or just drives if you're using RAID-0) would be a good place to start\nand work from there.\n", "msg_date": "Tue, 8 Dec 2009 00:58:50 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Load experimentation" }, { "msg_contents": "On Tue, Dec 8, 2009 at 12:58 AM, Scott Marlowe <[email protected]> wrote:\n> On Tue, Dec 8, 2009 at 12:22 AM, Ben Brehmer <[email protected]> wrote:\n>> Thanks for all the responses. 
I have one more thought;\n>>\n>> Since my input data is split into about 200 files (3GB each), I could\n>> potentially spawn one load command for each file. What would be the maximum\n>> number of input connections Postgres can handle without bogging down? When I\n>> say 'input connection' I mean \"psql -U postgres -d dbname -f\n>> one_of_many_sql_files\".\n>\n> This is VERY dependent on your IO capacity and number of cores.  My\n> experience is that unless you're running on a decent number of disks,\n> you'll run out of IO capacity first in most machines.  n pairs of\n> mirrors in a RAID-10 can handle x input threads where x has some near\n> linear relation to n.  Have 100 disks in a RAID-10 array?  You can\n> surely handle dozens of load threads with no IO wait.  Have 4 disks in\n> a RAID-10?  Maybe two to four load threads will max you out.  Once\n> you're IO bound, adding more threads and more CPUs won't help, it'll\n> hurt.  The only way to really know is to benchmark it, but i'd guess\n> that about half as many import threads as mirror pairs in a RAID-10\n> (or just drives if you're using RAID-0) would be a good place to start\n> and work from there.\n\nNote that if you start running out of CPU horsepower first the\ndegradation will be less harsh as you go past the knee in the\nperformance curve. IO has a sharper knee than CPU.\n", "msg_date": "Tue, 8 Dec 2009 00:59:40 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Load experimentation" }, { "msg_contents": "Hi,\n\nBen Brehmer <[email protected]> writes:\n> By \"Loading data\" I am implying: \"psql -U postgres -d somedatabase -f sql_file.sql\". The sql_file.sql contains table creates and insert statements. There are no\n> indexes present nor created during the load.\n>\n> OS: x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-44)\n>\n> PostgreSQL: I will try upgrading to latest version.\n>\n> COPY command: Unfortunately I'm stuck with INSERTS due to the nature\n> this data was generated (Hadoop/MapReduce).\n\nWhat I think you could do is the followings:\n\n - switch to using 8.4\n - load your files in a *local* database\n - pg_dump -Fc\n - now pg_restore -j X on the amazon setup\n\nThat way you will be using COPY rather than INSERTs and parallel loading\nbuilt-in pg_restore (and optimisations of when to add the indexes and\nconstraints). The X is to choose depending on the IO power and the\nnumber of CPU...\n\nRegards,\n-- \ndim\n", "msg_date": "Tue, 08 Dec 2009 10:08:45 +0100", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Load experimentation" }, { "msg_contents": "On Tue, Dec 8, 2009 at 2:08 AM, Dimitri Fontaine <[email protected]> wrote:\n> Hi,\n>\n> Ben Brehmer <[email protected]> writes:\n>> By \"Loading data\" I am implying: \"psql -U postgres -d somedatabase -f sql_file.sql\".  The sql_file.sql contains table creates and insert statements. 
There are no\n>> indexes present nor created during the load.\n>>\n>> OS: x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-44)\n>>\n>> PostgreSQL: I will try upgrading to latest version.\n>>\n>> COPY command: Unfortunately I'm stuck with INSERTS due to the nature\n>> this data was generated (Hadoop/MapReduce).\n>\n> What I think you could do is the followings:\n>\n>  - switch to using 8.4\n>  - load your files in a *local* database\n>  - pg_dump -Fc\n>  - now pg_restore -j X on the amazon setup\n>\n> That way you will be using COPY rather than INSERTs and parallel loading\n> built-in pg_restore (and optimisations of when to add the indexes and\n> constraints). The X is to choose depending on the IO power and the\n> number of CPU...\n\nThat's a lot of work to get to COPY. It might be enough to drop all\nFK relations and indexes on the destination db in the cloud, load the\ndata in a few (or one) transaction(s), then recreate indexes and FK\nrelationships.\n", "msg_date": "Tue, 8 Dec 2009 02:28:28 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Load experimentation" }, { "msg_contents": "Scott Marlowe <[email protected]> writes:\n> That's a lot of work to get to COPY. \n\nWell, yes. I though about it this way only after having read that OP is\nuneasy with producing another format from his source data, and\nconsidering it's a one-shot operation.\n\nAh, tradeoffs, how to find the right one!\n-- \ndim\n", "msg_date": "Tue, 08 Dec 2009 10:37:15 +0100", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Load experimentation" }, { "msg_contents": "On 12/07/2009 12:12 PM, Ben Brehmer wrote:\n> Hello All,\n>\n> I'm in the process of loading a massive amount of data (500 GB). After\n> some initial timings, I'm looking at 260 hours to load the entire 500GB.\n> 10 days seems like an awfully long time so I'm searching for ways to\n> speed this up. The load is happening in the Amazon cloud (EC2), on a\n> m1.large instance:\n> -7.5 GB memory\n> -4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each)\n> -64-bit platform\n>\n>\n> So far I have modified my postgresql.conf file (PostgreSQL 8.1.3). The\n> modifications I have made are as follows:\n>\n> shared_buffers = 786432\n> work_mem = 10240\n> maintenance_work_mem = 6291456\n> max_fsm_pages = 3000000\n> wal_buffers = 2048\n> checkpoint_segments = 200\n> checkpoint_timeout = 300\n> checkpoint_warning = 30\n> autovacuum = off\n>\n>\n> There are a variety of instance types available in the Amazon cloud\n> (http://aws.amazon.com/ec2/instance-types/), including high memory and\n> high CPU. High memory instance types come with 34GB or 68GB of memory.\n> High CPU instance types have a lot less memory (7GB max) but up to 8\n> virtual cores. I am more than willing to change to any of the other\n> instance types.\n>\n> Also, there is nothing else happening on the loading server. It is\n> completely dedicated to the load.\n>\n> Any advice would be greatly appreciated.\n>\n> Thanks,\n>\n> Ben\n>\n\nI'm kind of curious, how goes the load? Is it done yet? Still looking at days'n'days to finish?\n\nI was thinking... If the .sql files are really nicely formatted, it would not be too hard to whip up a perl script to run as a filter to change the statements into copy's.\n\nEach file would have to only fill one table, and only contain inserts, and all the insert statements would have to set the same fields. 
(And I'm sure there could be other problems).\n\nAlso, just for the load, did you disable fsync?\n\n-Andy\n", "msg_date": "Wed, 09 Dec 2009 07:31:19 -0600", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Load experimentation" }, { "msg_contents": "Hi Andy,\n\nLoad is chugging along. We've optimized our postgres conf as much as \npossible but are seeing the inevitable I/O bottleneck. I had the same \nthought as you (converting inserts into copy's) a while back but \nunfortunately each file has many inserts into many different tables. \nPotentially I could rip through this with a little MapReduce job on \n50-100 nodes, which is still something I might do.\n\nOne thought we are playing with was taking advantage of 4 x 414GB EBS \ndevices in a RAID0 configuration. This would spread disk writes across 4 \nblock devices.\n\nRight now I'm wrapping about 1500 inserts in a transaction block. Since \nits an I/O bottlenecks, COPY statements might not give me much advantage.\n\nIts definitely a work in progress :)\n\nBen\n\n\nOn 09/12/2009 5:31 AM, Andy Colson wrote:\n> On 12/07/2009 12:12 PM, Ben Brehmer wrote:\n>> Hello All,\n>>\n>> I'm in the process of loading a massive amount of data (500 GB). After\n>> some initial timings, I'm looking at 260 hours to load the entire 500GB.\n>> 10 days seems like an awfully long time so I'm searching for ways to\n>> speed this up. The load is happening in the Amazon cloud (EC2), on a\n>> m1.large instance:\n>> -7.5 GB memory\n>> -4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each)\n>> -64-bit platform\n>>\n>>\n>> So far I have modified my postgresql.conf file (PostgreSQL 8.1.3). The\n>> modifications I have made are as follows:\n>>\n>> shared_buffers = 786432\n>> work_mem = 10240\n>> maintenance_work_mem = 6291456\n>> max_fsm_pages = 3000000\n>> wal_buffers = 2048\n>> checkpoint_segments = 200\n>> checkpoint_timeout = 300\n>> checkpoint_warning = 30\n>> autovacuum = off\n>>\n>>\n>> There are a variety of instance types available in the Amazon cloud\n>> (http://aws.amazon.com/ec2/instance-types/), including high memory and\n>> high CPU. High memory instance types come with 34GB or 68GB of memory.\n>> High CPU instance types have a lot less memory (7GB max) but up to 8\n>> virtual cores. I am more than willing to change to any of the other\n>> instance types.\n>>\n>> Also, there is nothing else happening on the loading server. It is\n>> completely dedicated to the load.\n>>\n>> Any advice would be greatly appreciated.\n>>\n>> Thanks,\n>>\n>> Ben\n>>\n>\n> I'm kind of curious, how goes the load? Is it done yet? Still \n> looking at days'n'days to finish?\n>\n> I was thinking... If the .sql files are really nicely formatted, it \n> would not be too hard to whip up a perl script to run as a filter to \n> change the statements into copy's.\n>\n> Each file would have to only fill one table, and only contain inserts, \n> and all the insert statements would have to set the same fields. (And \n> I'm sure there could be other problems).\n>\n> Also, just for the load, did you disable fsync?\n>\n> -Andy\n>\n", "msg_date": "Thu, 10 Dec 2009 12:24:06 -0800", "msg_from": "Ben Brehmer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Load experimentation" }, { "msg_contents": "On 12/7/09 11:12 AM, \"Ben Brehmer\" <[email protected]> wrote:\n\n> Thanks for the quick responses. 
I will respond to all questions in one email:\n> \n> COPY command: Unfortunately I'm stuck with INSERTS due to the nature this data\n> was generated (Hadoop/MapReduce).\n\nIf you have control over the MapReduce output, you can have that output\nresult files in a format that COPY likes.\n\nIf you don't have any control over that its more complicated. I use a final\npass Hadoop Map only job to go over the output and insert into postgres\ndirectly from the job, using the :\n\nINSERT INTO <table> VALUES (val1, val2, ... ), (val1, val2, ...), ...\nInsert style from Java with about 80 rows per insert statement and a single\ntransaction for about a thousand of these. This was faster than batch\ninserts .\n\n\n> \n> On 07/12/2009 10:39 AM, Thom Brown wrote:\n>> \n>> 2009/12/7 Kevin Grittner <[email protected]>\n>> \n>>> \n>>> Ben Brehmer <[email protected]> wrote:\n>>> \n>>>> -7.5 GB memory\n>>>> -4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units\n>>>> each)\n>>>> -64-bit platform\n>>> \n>>> \n>>> What OS?\n>>> \n>>>> (PostgreSQL 8.1.3)\n>>> \n>>> Why use such an antiquated, buggy version? Newer versions are\n>>> faster.\n>>> \n>>> -Kevin\n>>> \n>> \n>> \n>> \n>> \n>> \n>> \n>> I'd agree with trying to use the latest version you can.\n>> \n>> \n>> \n>> \n>> How are you loading this data? I'd make sure you haven't got any indices,\n>> primary keys, triggers or constraints on your tables before you begin the\n>> initial load, just add them after. Also use either the COPY command for\n>> loading, or prepared transactions. Individual insert commands will just take\n>> way too long.\n>> \n>> \n>> \n>> \n>> Regards\n>> \n>> \n>> \n>> \n>> Thom\n> \n\n", "msg_date": "Thu, 10 Dec 2009 15:29:59 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Load experimentation" }, { "msg_contents": "\n\n\nOn 12/10/09 3:29 PM, \"Scott Carey\" <[email protected]> wrote:\n\n> On 12/7/09 11:12 AM, \"Ben Brehmer\" <[email protected]> wrote:\n> \n>> Thanks for the quick responses. I will respond to all questions in one email:\n>> \n>> COPY command: Unfortunately I'm stuck with INSERTS due to the nature this\n>> data\n>> was generated (Hadoop/MapReduce).\n> \n> If you have control over the MapReduce output, you can have that output\n> result files in a format that COPY likes.\n> \n> If you don't have any control over that its more complicated. I use a final\n> pass Hadoop Map only job to go over the output and insert into postgres\n> directly from the job, using the :\n> \n> INSERT INTO <table> VALUES (val1, val2, ... ), (val1, val2, ...), ...\n> Insert style from Java with about 80 rows per insert statement and a single\n> transaction for about a thousand of these. This was faster than batch\n> inserts .\n> \n\nI should mention that the above is a bit off. There is an important caveat\nthat each of these individual tasks might run twice in Hadoop (only one will\nfinish -- speculative execution and retry on error). To deal with this you\ncan run each job inside a single transaction so that a failure will\nrollback, and likely want to turn off speculative execution.\n\nAnother option is to run only one map job, with no reduce for this sort of\nwork in order to ensure duplicate data is not inserted. 
We are inserting\ninto a temp table named uniquely per chunk first (sometimes in parallel).\nThen while holding a posstgres advisory lock we do a SELECT * FROM <temp>\nINTO <destination> type operation, which is fast.\n\n> \n>> \n>> On 07/12/2009 10:39 AM, Thom Brown wrote:\n>>> \n>>> 2009/12/7 Kevin Grittner <[email protected]>\n>>> \n>>>> \n>>>> Ben Brehmer <[email protected]> wrote:\n>>>> \n>>>>> -7.5 GB memory\n>>>>> -4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units\n>>>>> each)\n>>>>> -64-bit platform\n>>>> \n>>>> \n>>>> What OS?\n>>>> \n>>>>> (PostgreSQL 8.1.3)\n>>>> \n>>>> Why use such an antiquated, buggy version? Newer versions are\n>>>> faster.\n>>>> \n>>>> -Kevin\n>>>> \n>>> \n>>> \n>>> \n>>> \n>>> \n>>> \n>>> I'd agree with trying to use the latest version you can.\n>>> \n>>> \n>>> \n>>> \n>>> How are you loading this data? I'd make sure you haven't got any indices,\n>>> primary keys, triggers or constraints on your tables before you begin the\n>>> initial load, just add them after. Also use either the COPY command for\n>>> loading, or prepared transactions. Individual insert commands will just\n>>> take\n>>> way too long.\n>>> \n>>> \n>>> \n>>> \n>>> Regards\n>>> \n>>> \n>>> \n>>> \n>>> Thom\n>> \n> \n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Thu, 10 Dec 2009 18:37:16 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Load experimentation" } ]
[ { "msg_contents": "Hello everybody,\n\nwe have severe performance penalty between Postgresql 8.3.8 and 8.4.1\n\nConsider the following tables:\n\nCREATE TABLE xdf.xdf_admin_hierarchy\n(\n admin_place_id integer NOT NULL,\n admin_order smallint NOT NULL,\n iso_country_code character(3) NOT NULL,\n country_id integer NOT NULL,\n order1_id integer,\n order2_id integer,\n order8_id integer,\n builtup_id integer,\n num_links integer,\n CONSTRAINT pk_xdf_admin_hierarchy PRIMARY KEY (admin_place_id)\n)\nWITH (\n OIDS=FALSE\n);\nALTER TABLE xdf.xdf_admin_hierarchy OWNER TO frog;\n\nCREATE TABLE xdf.xdf_link_admin\n(\n admin_place_id integer NOT NULL,\n link_id integer NOT NULL,\n side character(1) NOT NULL,\n CONSTRAINT pk_xdf_link_admin PRIMARY KEY (link_id, side)\n)\nWITH (\n OIDS=FALSE\n);\nALTER TABLE xdf.xdf_link_admin OWNER TO frog;\n\nCREATE INDEX nx_xdflinkadmin_adminplaceid\n ON xdf.xdf_link_admin\n USING btree\n (admin_place_id);\n\nCREATE INDEX nx_xdflinkadmin_linkid\n ON xdf.xdf_link_admin\n USING btree\n (link_id);\n\nCREATE TABLE xdf.xdf_road_link\n(\n road_link_id integer NOT NULL,\n road_name_id integer,\n left_address_range_id integer NOT NULL,\n right_address_range_id integer NOT NULL,\n address_type smallint NOT NULL,\n is_exit_name character(1) NOT NULL,\n explicatable character(1) NOT NULL,\n is_junction_name character(1) NOT NULL,\n is_name_on_roadsign character(1) NOT NULL,\n is_postal_name character(1) NOT NULL,\n is_stale_name character(1) NOT NULL,\n is_vanity_name character(1) NOT NULL,\n is_scenic_name character(1) NOT NULL,\n link_id integer NOT NULL,\n CONSTRAINT pk_xdf_road_link PRIMARY KEY (road_link_id)\n)\nWITH (\n OIDS=FALSE\n);\nALTER TABLE xdf.xdf_road_link OWNER TO frog;\n\nCREATE INDEX nx_xdfroadlink_leftaddressrangeid\n ON xdf.xdf_road_link\n USING btree\n (left_address_range_id);\n\nCREATE INDEX nx_xdfroadlink_linkid\n ON xdf.xdf_road_link\n USING btree\n (link_id);\n\nCREATE INDEX nx_xdfroadlink_rightaddressrangeid\n ON xdf.xdf_road_link\n USING btree\n (right_address_range_id);\n\nCREATE INDEX nx_xdfroadlink_roadnameid\n ON xdf.xdf_road_link\n USING btree\n (road_name_id);\n\nCREATE TABLE xdf.xdf_road_name\n(\n road_name_id integer NOT NULL,\n route_type smallint NOT NULL,\n attached_to_base character(1) NOT NULL,\n precedes_base character(1) NOT NULL,\n prefix character varying(10),\n street_type character varying(30),\n suffix character varying(2),\n base_name character varying(60) NOT NULL,\n language_code character(3) NOT NULL,\n is_exonym character(1) NOT NULL,\n name_type character(1) NOT NULL,\n direction_on_sign character(1) NOT NULL,\n street_name character varying(60) NOT NULL,\n CONSTRAINT pk_xdf_road_name PRIMARY KEY (road_name_id)\n)\nWITH (\n OIDS=FALSE\n);\nALTER TABLE xdf.xdf_road_name OWNER TO frog;\n\nCREATE INDEX nx_xdfroadname_languagecode\n ON xdf.xdf_road_name\n USING btree\n (language_code);\n\nIf one executes a query of the following structure:\n\nSELECT AH.ORDER8_ID, AH.BUILTUP_ID, RL.LINK_ID, LA.SIDE,\nRL.ROAD_NAME_ID, RL.LEFT_ADDRESS_RANGE_ID, RL.RIGHT_ADDRESS_RANGE_ID,\nRL.IS_EXIT_NAME, RL.EXPLICATABLE, RL.IS_JUNCTION_NAME,\nRL.IS_NAME_ON_ROADSIGN, RL.IS_POSTAL_NAME, RL.IS_STALE_NAME,\nRL.IS_VANITY_NAME, RL.ROAD_LINK_ID, RN.STREET_NAME,\nRN.ROUTE_TYPE\nFROM xdf.xdf_ADMIN_HIERARCHY AH, xdf.xdf_LINK_ADMIN LA,\nxdf.xdf_ROAD_LINK RL, xdf.xdf_ROAD_NAME RN\nWHERE AH.ADMIN_PLACE_ID = LA.ADMIN_PLACE_ID\nAND LA.LINK_ID = RL.LINK_ID\nAND RL.ROAD_NAME_ID = RN.ROAD_NAME_ID\nAND RL.IS_EXIT_NAME = 'N'\nAND RL.IS_JUNCTION_NAME = 'N'\nAND 
RN.ROAD_NAME_ID BETWEEN 158348561 AND 158348660\nORDER BY RL.ROAD_NAME_ID, AH.ORDER8_ID, AH.BUILTUP_ID, RL.LINK_ID;\n \nIt is carried out with poor performance on postgresql 8.4.1 However postgresql 8.3.8 performs just fine.\nIf you take a closer look at the query with EXPLAIN, it becomes obvious, that postgresql 8.4 does not\nconsider the primary key at level 3 and instead generates a hash join:\n\nPostgresql 8.4.1:\n\nSort (cost=129346.71..129498.64 rows=60772 width=61)\n Sort Key: rl.road_name_id, ah.order8_id, ah.builtup_id, rl.link_id\n -> Hash Join (cost=2603.57..124518.03 rows=60772 width=61)\n Hash Cond: (la.admin_place_id = ah.admin_place_id)\n -> Nested Loop (cost=6.82..120781.81 rows=60772 width=57)\n -> Nested Loop (cost=6.82..72383.98 rows=21451 width=51)\n -> Index Scan using pk_rdf_road_name on rdf_road_name rn (cost=0.00..11.24 rows=97 width=21)\n Index Cond: ((road_name_id >= 158348561) AND (road_name_id <= 158348660))\n -> Bitmap Heap Scan on rdf_road_link rl (cost=6.82..743.34 rows=222 width=34)\n Recheck Cond: (rl.road_name_id = rn.road_name_id)\n Filter: ((rl.is_exit_name = 'N'::bpchar) AND (rl.is_junction_name = 'N'::bpchar))\n -> Bitmap Index Scan on nx_rdfroadlink_roadnameid (cost=0.00..6.76 rows=222 width=0)\n Index Cond: (rl.road_name_id = rn.road_name_id)\n -> Index Scan using nx_rdflinkadmin_linkid on rdf_link_admin la (cost=0.00..2.22 rows=3 width=10)\n Index Cond: (la.link_id = rl.link_id)\n -> Hash (cost=1544.11..1544.11 rows=84211 width=12)\n -> Seq Scan on rdf_admin_hierarchy ah (cost=0.00..1544.11 rows=84211 width=12)\n\nPostgresql 8.3.8:\n\nSort (cost=3792.75..3792.95 rows=81 width=61)\n Sort Key: rl.road_name_id, ah.order8_id, ah.builtup_id, rl.link_id\n -> Nested Loop (cost=21.00..3790.18 rows=81 width=61)\n -> Nested Loop (cost=21.00..3766.73 rows=81 width=57)\n -> Nested Loop (cost=21.00..3733.04 rows=14 width=51)\n -> Index Scan using pk_rdf_road_name on rdf_road_name rn (cost=0.00..8.32 rows=1 width=21)\n Index Cond: ((road_name_id >= 158348561) AND (road_name_id <= 158348660))\n -> Bitmap Heap Scan on rdf_road_link rl (cost=21.00..3711.97 rows=1020 width=34)\n Recheck Cond: (rl.road_name_id = rn.road_name_id)\n Filter: ((rl.is_exit_name = 'N'::bpchar) AND (rl.is_junction_name = 'N'::bpchar))\n -> Bitmap Index Scan on nx_rdfroadlink_roadnameid (cost=0.00..20.75 rows=1020 width=0)\n Index Cond: (rl.road_name_id = rn.road_name_id)\n -> Index Scan using nx_rdflinkadmin_linkid on rdf_link_admin la (cost=0.00..2.31 rows=8 width=10)\n Index Cond: (la.link_id = rl.link_id)\n -> Index Scan using pk_rdf_admin_hierarchy on rdf_admin_hierarchy ah (cost=0.00..0.28 rows=1 width=12)\n Index Cond: (ah.admin_place_id = la.admin_place_id)\n\nWith our data it is a performance difference from 1h16min (8.3.8) to 2h43min (8.4.1)\n\nI hope someone can help me out with my problem. 
If you need further information please let me know.\n\nMit freundlichem Gruß / Best regards\n\nDavid Schmitz\nDipl.-Ing.(FH)\nSoftware Developer New Map Compiler\n\nHARMAN/BECKER AUTOMOTIVE SYSTEMS\ninnovative systems GmbH\nHugh-Greene-Weg 2-4 - 22529 Hamburg - Germany\nPhone: +49 (0)40-30067-990\nFax: +49 (0)40-30067-969\nMailto:[email protected] \n \n*******************************************\ninnovative systems GmbH Navigation-Multimedia\nGeschaeftsfuehrung: Edwin Summers - Michael Juergen Mauser\nSitz der Gesellschaft: Hamburg - Registergericht: Hamburg HRB 59980 \n \n*******************************************\nDiese E-Mail enthaelt vertrauliche und/oder rechtlich geschuetzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese E-Mail irrtuemlich erhalten haben, informieren Sie bitte sofort den Absender und loeschen Sie diese Mail. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Mail ist nicht gestattet.\nThis e-mail may contain confidential and/or privileged information. If you are not the intended recipient (or have received this e-mail in error) please notify the sender immediately and delete this e-mail. Any unauthorized copying, disclosure or distribution of the contents in this e-mail is strictly forbidden.\n*******************************************\n", "msg_date": "Mon, 7 Dec 2009 23:05:14 +0100", "msg_from": "\"Schmitz, David\" <[email protected]>", "msg_from_op": true, "msg_subject": "performance penalty between Postgresql 8.3.8 and 8.4.1" }, { "msg_contents": "\"Schmitz, David\" <[email protected]> wrote:\n \n> It is carried out with poor performance on postgresql 8.4.1\n> However postgresql 8.3.8 performs just fine.\n> If you take a closer look at the query with EXPLAIN, it becomes\n> obvious, that postgresql 8.4 does not consider the primary key at\n> level 3 and instead generates a hash join:\n \n> Postgresql 8.4.1:\n> \n> Sort (cost=129346.71..129498.64 rows=60772 width=61)\n \n> Postgresql 8.3.8:\n> \n> Sort (cost=3792.75..3792.95 rows=81 width=61)\n \nIt determines the plan based on available statistics, which in this\ncase seem to indicate rather different data. Do the two databases\nhave identical data? Have they both been recently analyzed? What\nis the default_statistics_target on each? 
Do any columns in these\ntables have overrides?\n \n-Kevin\n", "msg_date": "Mon, 07 Dec 2009 16:19:51 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance penalty between Postgresql 8.3.8 and\n\t 8.4.1" }, { "msg_contents": "-----Ursprüngliche Nachricht-----\nVon:\tSchmitz, David\nGesendet:\tDi 08.12.2009 00:14\nAn:\tKevin Grittner\nCc:\t\nBetreff:\tAW: [PERFORM] performance penalty between Postgresql 8.3.8 and 8.4.1\n\n\n\n\n-----Ursprüngliche Nachricht-----\nVon:\tKevin Grittner [mailto:[email protected]]\nGesendet:\tMo 07.12.2009 23:19\nAn:\tSchmitz, David; [email protected]\nCc:\t\nBetreff:\tRe: [PERFORM] performance penalty between Postgresql 8.3.8 and 8.4.1\n\n\"Schmitz, David\" <[email protected]> wrote:\n \n> It is carried out with poor performance on postgresql 8.4.1\n> However postgresql 8.3.8 performs just fine.\n> If you take a closer look at the query with EXPLAIN, it becomes\n> obvious, that postgresql 8.4 does not consider the primary key at\n> level 3 and instead generates a hash join:\n \n> Postgresql 8.4.1:\n> \n> Sort (cost=129346.71..129498.64 rows=60772 width=61)\n \n> Postgresql 8.3.8:\n> \n> Sort (cost=3792.75..3792.95 rows=81 width=61)\n \nIt determines the plan based on available statistics, which in this\ncase seem to indicate rather different data. Do the two databases\nhave identical data? Have they both been recently analyzed? What\nis the default_statistics_target on each? Do any columns in these\ntables have overrides?\n \n-Kevin\n\n\nHello Kevin,\n\nboth databases have identical / same data and hardware. On postgresql 8.3.8 default statistics target is 10 and at postgresql 8.4.1 it is 100. But i have been experimenting in both directions with postgres 8.4.1 10, 100, 1000 or 10000 does not matter perfomance remains bad. Analyze has been run recently on both databases (even an explicit analayze before query makes no difference). Autovaccuum and analyze are set quite aggressive at 0.01 (v) and 0.02 (a) and postgres 8.3.8 still outperforms 8.4.1.\n\nRegards\n\ndave \n \n*******************************************\ninnovative systems GmbH Navigation-Multimedia\nGeschaeftsfuehrung: Edwin Summers - Michael Juergen Mauser\nSitz der Gesellschaft: Hamburg - Registergericht: Hamburg HRB 59980 \n \n*******************************************\nDiese E-Mail enthaelt vertrauliche und/oder rechtlich geschuetzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese E-Mail irrtuemlich erhalten haben, informieren Sie bitte sofort den Absender und loeschen Sie diese Mail. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Mail ist nicht gestattet.\nThis e-mail may contain confidential and/or privileged information. If you are not the intended recipient (or have received this e-mail in error) please notify the sender immediately and delete this e-mail. 
Any unauthorized copying, disclosure or distribution of the contents in this e-mail is strictly forbidden.\n*******************************************\n", "msg_date": "Tue, 8 Dec 2009 00:17:10 +0100", "msg_from": "\"Schmitz, David\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance penalty between Postgresql 8.3.8 and 8.4.1" }, { "msg_contents": "Hi David,\n\nOn Monday 07 December 2009 23:05:14 Schmitz, David wrote:\n> With our data it is a performance difference from 1h16min (8.3.8) to\n> 2h43min (8.4.1)\nCan you afford a explain analyze run overnight or so for both?\n\nAndres\n", "msg_date": "Tue, 8 Dec 2009 00:25:21 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance penalty between Postgresql 8.3.8 and 8.4.1" }, { "msg_contents": "On Mon, Dec 7, 2009 at 5:19 PM, Kevin Grittner\n<[email protected]> wrote:\n> \"Schmitz, David\" <[email protected]> wrote:\n>\n>> It is carried out with poor performance on postgresql 8.4.1\n>> However postgresql 8.3.8 performs just fine.\n>> If you take a closer look at the query with EXPLAIN, it becomes\n>> obvious, that postgresql 8.4 does not consider the primary key at\n>> level 3 and instead generates a hash join:\n>\n>> Postgresql 8.4.1:\n>>\n>> Sort  (cost=129346.71..129498.64 rows=60772 width=61)\n>\n>> Postgresql 8.3.8:\n>>\n>> Sort  (cost=3792.75..3792.95 rows=81 width=61)\n>\n> It determines the plan based on available statistics, which in this\n> case seem to indicate rather different data.  Do the two databases\n> have identical data?  Have they both been recently analyzed?  What\n> is the default_statistics_target on each?  Do any columns in these\n> tables have overrides?\n\nI think Tom made some changes to the join selectivity code which might\nbe relevant here, though I'm not sure exactly what's going on. Can we\nsee, on the 8.4.1 database:\n\nSELECT SUM(1) FROM rdf_admin_hierarchy;\nSELECT s.stadistinct, s.stanullfrac, s.stawidth,\narray_upper(s.stanumbers1, 1) FROM pg_statistic s WHERE s.starelid =\n'rdf_admin_hierarchy'::regclass AND s.staattnum = (SELECT a.attnum\nFROM pg_attribute a WHERE a.attname = 'admin_place_id' AND a.attrelid\n= 'rdf_admin_hierarchy'::regclass);\n\n...Robert\n", "msg_date": "Mon, 7 Dec 2009 23:04:31 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance penalty between Postgresql 8.3.8 and 8.4.1" }, { "msg_contents": "Hi Andres,\n\nThis query returns for 8.4.1 and for 8.3.8 the same result:\n\nstadistinct = -1\nstanullfrac = 0\nstawidth = 4\narray_upper nothing \n\nRegards \n\nDavid\n\n>-----Ursprüngliche Nachricht-----\n>Von: Robert Haas [mailto:[email protected]] \n>Gesendet: Dienstag, 8. 
Dezember 2009 05:05\n>An: Kevin Grittner\n>Cc: Schmitz, David; [email protected]\n>Betreff: Re: [PERFORM] performance penalty between Postgresql \n>8.3.8 and 8.4.1\n>\n>On Mon, Dec 7, 2009 at 5:19 PM, Kevin Grittner \n><[email protected]> wrote:\n>> \"Schmitz, David\" <[email protected]> wrote:\n>>\n>>> It is carried out with poor performance on postgresql 8.4.1 However \n>>> postgresql 8.3.8 performs just fine.\n>>> If you take a closer look at the query with EXPLAIN, it becomes \n>>> obvious, that postgresql 8.4 does not consider the primary key at \n>>> level 3 and instead generates a hash join:\n>>\n>>> Postgresql 8.4.1:\n>>>\n>>> Sort (cost=129346.71..129498.64 rows=60772 width=61)\n>>\n>>> Postgresql 8.3.8:\n>>>\n>>> Sort (cost=3792.75..3792.95 rows=81 width=61)\n>>\n>> It determines the plan based on available statistics, which in this \n>> case seem to indicate rather different data. Do the two databases \n>> have identical data? Have they both been recently analyzed? \n> What is \n>> the default_statistics_target on each? Do any columns in \n>these tables \n>> have overrides?\n>\n>I think Tom made some changes to the join selectivity code \n>which might be relevant here, though I'm not sure exactly \n>what's going on. Can we see, on the 8.4.1 database:\n>\n>SELECT SUM(1) FROM rdf_admin_hierarchy;\n>SELECT s.stadistinct, s.stanullfrac, s.stawidth, \n>array_upper(s.stanumbers1, 1) FROM pg_statistic s WHERE \n>s.starelid = 'rdf_admin_hierarchy'::regclass AND s.staattnum = \n>(SELECT a.attnum FROM pg_attribute a WHERE a.attname = \n>'admin_place_id' AND a.attrelid = 'rdf_admin_hierarchy'::regclass);\n>\n>...Robert\n> \n \n*******************************************\ninnovative systems GmbH Navigation-Multimedia\nGeschaeftsfuehrung: Edwin Summers - Michael Juergen Mauser\nSitz der Gesellschaft: Hamburg - Registergericht: Hamburg HRB 59980 \n \n*******************************************\nDiese E-Mail enthaelt vertrauliche und/oder rechtlich geschuetzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese E-Mail irrtuemlich erhalten haben, informieren Sie bitte sofort den Absender und loeschen Sie diese Mail. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Mail ist nicht gestattet.\nThis e-mail may contain confidential and/or privileged information. If you are not the intended recipient (or have received this e-mail in error) please notify the sender immediately and delete this e-mail. 
Any unauthorized copying, disclosure or distribution of the contents in this e-mail is strictly forbidden.\n*******************************************\n", "msg_date": "Tue, 8 Dec 2009 10:41:51 +0100", "msg_from": "\"Schmitz, David\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance penalty between Postgresql 8.3.8 and 8.4.1" }, { "msg_contents": "Hi Andres,\n\nEXPLAIN ANALYZE \nselect ah.ORDER8_ID, ah.BUILTUP_ID, rl.LINK_ID, la.SIDE, \n rl.ROAD_NAME_ID, rl.LEFT_ADDRESS_RANGE_ID, rl.RIGHT_ADDRESS_RANGE_ID, \n rl.IS_EXIT_NAME, rl.EXPLICATABLE, rl.IS_JUNCTION_NAME, \n rl.IS_NAME_ON_ROADSIGN, rl.IS_POSTAL_NAME, rl.IS_STALE_NAME, \n rl.IS_VANITY_NAME, rl.ROAD_LINK_ID, rn.STREET_NAME, \n rn.ROUTE_TYPE \n from rdf.xdf_ADMIN_HIERARCHY ah \n join xdf.xdf_LINK_ADMIN la \n on ah.ADMIN_PLACE_ID = la.ADMIN_PLACE_ID \n join xdf.xdf_ROAD_LINK rl \n on la.LINK_ID = rl.LINK_ID \n join xdf.xdf_ROAD_NAME rn \n on rl.ROAD_NAME_ID = rn.ROAD_NAME_ID \n where rl.IS_EXIT_NAME = 'N' \n and rl.IS_JUNCTION_NAME = 'N' \n and rn.ROAD_NAME_ID between 158348561 and 158348660 \n order by rl.ROAD_NAME_ID, ah.ORDER8_ID, ah.BUILTUP_ID, rl.LINK_ID;\n\nOn Postgresql 8.4.1\n\nSort (cost=129346.71..129498.64 rows=60772 width=61) (actual time=100.358..100.496 rows=1444 loops=1)\n Sort Key: rl.road_name_id, ah.order8_id, ah.builtup_id, rl.link_id\n Sort Method: quicksort Memory: 252kB\n -> Hash Join (cost=2603.57..124518.03 rows=60772 width=61) (actual time=62.359..97.268 rows=1444 loops=1)\n Hash Cond: (la.admin_place_id = ah.admin_place_id)\n -> Nested Loop (cost=6.82..120781.81 rows=60772 width=57) (actual time=0.318..33.600 rows=1444 loops=1)\n -> Nested Loop (cost=6.82..72383.98 rows=21451 width=51) (actual time=0.232..12.359 rows=722 loops=1)\n -> Index Scan using pk_xdf_road_name on xdf_road_name rn (cost=0.00..11.24 rows=97 width=21) (actual time=0.117..0.185 rows=100 loops=1)\n Index Cond: ((road_name_id >= 158348561) AND (road_name_id <= 158348660))\n -> Bitmap Heap Scan on xdf_road_link rl (cost=6.82..743.34 rows=222 width=34) (actual time=0.025..0.115 rows=7 loops=100)\n Recheck Cond: (rl.road_name_id = rn.road_name_id)\n Filter: ((rl.is_exit_name = 'N'::bpchar) AND (rl.is_junction_name = 'N'::bpchar))\n -> Bitmap Index Scan on nx_xdfroadlink_roadnameid (cost=0.00..6.76 rows=222 width=0) (actual time=0.008..0.008 rows=7 loops=100)\n Index Cond: (rl.road_name_id = rn.road_name_id)\n -> Index Scan using nx_xdflinkadmin_linkid on xdf_link_admin la (cost=0.00..2.22 rows=3 width=10) (actual time=0.023..0.028 rows=2 loops=722)\n Index Cond: (la.link_id = rl.link_id)\n -> Hash (cost=1544.11..1544.11 rows=84211 width=12) (actual time=61.924..61.924 rows=84211 loops=1)\n -> Seq Scan on xdf_admin_hierarchy ah (cost=0.00..1544.11 rows=84211 width=12) (actual time=0.017..33.442 rows=84211 loops=1)\nTotal runtime: 101.446 ms\n\n\nand on Postgresql 8.3.8:\n\nSort (cost=3792.75..3792.95 rows=81 width=61) (actual time=28.928..29.074 rows=1444 loops=1)\n Sort Key: rl.road_name_id, ah.order8_id, ah.builtup_id, rl.link_id\n Sort Method: quicksort Memory: 252kB\n -> Nested Loop (cost=21.00..3790.18 rows=81 width=61) (actual time=0.210..26.098 rows=1444 loops=1)\n -> Nested Loop (cost=21.00..3766.73 rows=81 width=57) (actual time=0.172..19.148 rows=1444 loops=1)\n -> Nested Loop (cost=21.00..3733.04 rows=14 width=51) (actual time=0.129..6.126 rows=722 loops=1)\n -> Index Scan using pk_xdf_road_name on xdf_road_name rn (cost=0.00..8.32 rows=1 width=21) (actual time=0.059..0.117 rows=100 loops=1)\n Index Cond: 
((road_name_id >= 158348561) AND (road_name_id <= 158348660))\n -> Bitmap Heap Scan on xdf_road_link rl (cost=21.00..3711.97 rows=1020 width=34) (actual time=0.015..0.055 rows=7 loops=100)\n Recheck Cond: (rl.road_name_id = rn.road_name_id)\n Filter: ((rl.is_exit_name = 'N'::bpchar) AND (rl.is_junction_name = 'N'::bpchar))\n -> Bitmap Index Scan on nx_xdfroadlink_roadnameid (cost=0.00..20.75 rows=1020 width=0) (actual time=0.007..0.007 rows=7 loops=100)\n Index Cond: (rl.road_name_id = rn.road_name_id)\n -> Index Scan using nx_xdflinkadmin_linkid on xdf_link_admin la (cost=0.00..2.31 rows=8 width=10) (actual time=0.014..0.017 rows=2 loops=722)\n Index Cond: (la.link_id = rl.link_id)\n -> Index Scan using pk_xdf_admin_hierarchy on xdf_admin_hierarchy ah (cost=0.00..0.28 rows=1 width=12) (actual time=0.003..0.004 rows=1 loops=1444)\n Index Cond: (ah.admin_place_id = la.admin_place_id)\nTotal runtime: 29.366 ms\n\nHope this gives any clue. Or did I missunderstand you?\n\nRegards\n\nDavid\n\n\n>-----Ursprüngliche Nachricht-----\n>Von: Andres Freund [mailto:[email protected]] \n>Gesendet: Dienstag, 8. Dezember 2009 00:25\n>An: [email protected]\n>Cc: Schmitz, David\n>Betreff: Re: [PERFORM] performance penalty between Postgresql \n>8.3.8 and 8.4.1\n>\n>Hi David,\n>\n>On Monday 07 December 2009 23:05:14 Schmitz, David wrote:\n>> With our data it is a performance difference from 1h16min \n>(8.3.8) to \n>> 2h43min (8.4.1)\n>Can you afford a explain analyze run overnight or so for both?\n>\n>Andres\n> \n \n*******************************************\ninnovative systems GmbH Navigation-Multimedia\nGeschaeftsfuehrung: Edwin Summers - Michael Juergen Mauser\nSitz der Gesellschaft: Hamburg - Registergericht: Hamburg HRB 59980 \n \n*******************************************\nDiese E-Mail enthaelt vertrauliche und/oder rechtlich geschuetzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese E-Mail irrtuemlich erhalten haben, informieren Sie bitte sofort den Absender und loeschen Sie diese Mail. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Mail ist nicht gestattet.\nThis e-mail may contain confidential and/or privileged information. If you are not the intended recipient (or have received this e-mail in error) please notify the sender immediately and delete this e-mail. 
Any unauthorized copying, disclosure or distribution of the contents in this e-mail is strictly forbidden.\n*******************************************\n", "msg_date": "Tue, 8 Dec 2009 10:59:51 +0100", "msg_from": "\"Schmitz, David\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance penalty between Postgresql 8.3.8 and 8.4.1" }, { "msg_contents": "2009/12/8 Schmitz, David <[email protected]>\n\n> Hi Andres,\n>\n> EXPLAIN ANALYZE\n> select ah.ORDER8_ID, ah.BUILTUP_ID, rl.LINK_ID, la.SIDE,\n> rl.ROAD_NAME_ID, rl.LEFT_ADDRESS_RANGE_ID,\n> rl.RIGHT_ADDRESS_RANGE_ID,\n> rl.IS_EXIT_NAME, rl.EXPLICATABLE, rl.IS_JUNCTION_NAME,\n> rl.IS_NAME_ON_ROADSIGN, rl.IS_POSTAL_NAME,\n> rl.IS_STALE_NAME,\n> rl.IS_VANITY_NAME, rl.ROAD_LINK_ID, rn.STREET_NAME,\n> rn.ROUTE_TYPE\n> from rdf.xdf_ADMIN_HIERARCHY ah\n> join xdf.xdf_LINK_ADMIN la\n> on ah.ADMIN_PLACE_ID = la.ADMIN_PLACE_ID\n> join xdf.xdf_ROAD_LINK rl\n> on la.LINK_ID = rl.LINK_ID\n> join xdf.xdf_ROAD_NAME rn\n> on rl.ROAD_NAME_ID = rn.ROAD_NAME_ID\n> where rl.IS_EXIT_NAME = 'N'\n> and rl.IS_JUNCTION_NAME = 'N'\n> and rn.ROAD_NAME_ID between 158348561 and 158348660\n> order by rl.ROAD_NAME_ID, ah.ORDER8_ID, ah.BUILTUP_ID,\n> rl.LINK_ID;\n>\n> On Postgresql 8.4.1\n>\n> Sort (cost=129346.71..129498.64 rows=60772 width=61) (actual\n> time=100.358..100.496 rows=1444 loops=1)\n> Sort Key: rl.road_name_id, ah.order8_id, ah.builtup_id, rl.link_id\n> Sort Method: quicksort Memory: 252kB\n> -> Hash Join (cost=2603.57..124518.03 rows=60772 width=61) (actual\n> time=62.359..97.268 rows=1444 loops=1)\n> Hash Cond: (la.admin_place_id = ah.admin_place_id)\n> -> Nested Loop (cost=6.82..120781.81 rows=60772 width=57) (actual\n> time=0.318..33.600 rows=1444 loops=1)\n> -> Nested Loop (cost=6.82..72383.98 rows=21451 width=51)\n> (actual time=0.232..12.359 rows=722 loops=1)\n> -> Index Scan using pk_xdf_road_name on xdf_road_name\n> rn (cost=0.00..11.24 rows=97 width=21) (actual time=0.117..0.185 rows=100\n> loops=1)\n> Index Cond: ((road_name_id >= 158348561) AND\n> (road_name_id <= 158348660))\n> -> Bitmap Heap Scan on xdf_road_link rl\n> (cost=6.82..743.34 rows=222 width=34) (actual time=0.025..0.115 rows=7\n> loops=100)\n> Recheck Cond: (rl.road_name_id = rn.road_name_id)\n> Filter: ((rl.is_exit_name = 'N'::bpchar) AND\n> (rl.is_junction_name = 'N'::bpchar))\n> -> Bitmap Index Scan on\n> nx_xdfroadlink_roadnameid (cost=0.00..6.76 rows=222 width=0) (actual\n> time=0.008..0.008 rows=7 loops=100)\n> Index Cond: (rl.road_name_id =\n> rn.road_name_id)\n> -> Index Scan using nx_xdflinkadmin_linkid on xdf_link_admin\n> la (cost=0.00..2.22 rows=3 width=10) (actual time=0.023..0.028 rows=2\n> loops=722)\n> Index Cond: (la.link_id = rl.link_id)\n> -> Hash (cost=1544.11..1544.11 rows=84211 width=12) (actual\n> time=61.924..61.924 rows=84211 loops=1)\n> -> Seq Scan on xdf_admin_hierarchy ah (cost=0.00..1544.11\n> rows=84211 width=12) (actual time=0.017..33.442 rows=84211 loops=1)\n> Total runtime: 101.446 ms\n>\n>\n> and on Postgresql 8.3.8:\n>\n> Sort (cost=3792.75..3792.95 rows=81 width=61) (actual time=28.928..29.074\n> rows=1444 loops=1)\n> Sort Key: rl.road_name_id, ah.order8_id, ah.builtup_id, rl.link_id\n> Sort Method: quicksort Memory: 252kB\n> -> Nested Loop (cost=21.00..3790.18 rows=81 width=61) (actual\n> time=0.210..26.098 rows=1444 loops=1)\n> -> Nested Loop (cost=21.00..3766.73 rows=81 width=57) (actual\n> time=0.172..19.148 rows=1444 loops=1)\n> -> Nested Loop (cost=21.00..3733.04 rows=14 width=51)\n> (actual time=0.129..6.126 rows=722 
loops=1)\n> -> Index Scan using pk_xdf_road_name on xdf_road_name\n> rn (cost=0.00..8.32 rows=1 width=21) (actual time=0.059..0.117 rows=100\n> loops=1)\n> Index Cond: ((road_name_id >= 158348561) AND\n> (road_name_id <= 158348660))\n> -> Bitmap Heap Scan on xdf_road_link rl\n> (cost=21.00..3711.97 rows=1020 width=34) (actual time=0.015..0.055 rows=7\n> loops=100)\n> Recheck Cond: (rl.road_name_id = rn.road_name_id)\n> Filter: ((rl.is_exit_name = 'N'::bpchar) AND\n> (rl.is_junction_name = 'N'::bpchar))\n> -> Bitmap Index Scan on\n> nx_xdfroadlink_roadnameid (cost=0.00..20.75 rows=1020 width=0) (actual\n> time=0.007..0.007 rows=7 loops=100)\n> Index Cond: (rl.road_name_id =\n> rn.road_name_id)\n> -> Index Scan using nx_xdflinkadmin_linkid on xdf_link_admin\n> la (cost=0.00..2.31 rows=8 width=10) (actual time=0.014..0.017 rows=2\n> loops=722)\n> Index Cond: (la.link_id = rl.link_id)\n> -> Index Scan using pk_xdf_admin_hierarchy on xdf_admin_hierarchy\n> ah (cost=0.00..0.28 rows=1 width=12) (actual time=0.003..0.004 rows=1\n> loops=1444)\n> Index Cond: (ah.admin_place_id = la.admin_place_id)\n> Total runtime: 29.366 ms\n>\n> Hope this gives any clue. Or did I missunderstand you?\n>\n> Regards\n>\n> David\n>\n>\n> >-----Ursprüngliche Nachricht-----\n> >Von: Andres Freund [mailto:[email protected]]\n> >Gesendet: Dienstag, 8. Dezember 2009 00:25\n> >An: [email protected]\n> >Cc: Schmitz, David\n> >Betreff: Re: [PERFORM] performance penalty between Postgresql\n> >8.3.8 and 8.4.1\n> >\n> >Hi David,\n> >\n> >On Monday 07 December 2009 23:05:14 Schmitz, David wrote:\n> >> With our data it is a performance difference from 1h16min\n> >(8.3.8) to\n> >> 2h43min (8.4.1)\n> >Can you afford a explain analyze run overnight or so for both?\n> >\n> >Andres\n> >\n>\n>\n>\n>\nYour output shows that the xdf_admin_hierarchy tables between versions are\ndrastically different. 
8.3.8 only contains 1 row, whereas 8.4.1 contains\n84211 rows.\n\nThom\n\n2009/12/8 Schmitz, David <[email protected]>\n\nHi Andres,\n\nEXPLAIN ANALYZE\nselect ah.ORDER8_ID, ah.BUILTUP_ID, rl.LINK_ID, la.SIDE,\n                    rl.ROAD_NAME_ID, rl.LEFT_ADDRESS_RANGE_ID, rl.RIGHT_ADDRESS_RANGE_ID,\n                    rl.IS_EXIT_NAME, rl.EXPLICATABLE, rl.IS_JUNCTION_NAME,\n                    rl.IS_NAME_ON_ROADSIGN, rl.IS_POSTAL_NAME, rl.IS_STALE_NAME,\n                    rl.IS_VANITY_NAME, rl.ROAD_LINK_ID, rn.STREET_NAME,\n                    rn.ROUTE_TYPE\n                from rdf.xdf_ADMIN_HIERARCHY ah\n                join xdf.xdf_LINK_ADMIN la\n                on ah.ADMIN_PLACE_ID = la.ADMIN_PLACE_ID\n                join xdf.xdf_ROAD_LINK rl\n                on la.LINK_ID = rl.LINK_ID\n                join xdf.xdf_ROAD_NAME rn\n                on rl.ROAD_NAME_ID = rn.ROAD_NAME_ID\n                where rl.IS_EXIT_NAME = 'N'\n                    and rl.IS_JUNCTION_NAME = 'N'\n                    and rn.ROAD_NAME_ID between 158348561  and 158348660\n                order by rl.ROAD_NAME_ID, ah.ORDER8_ID, ah.BUILTUP_ID, rl.LINK_ID;\n\nOn Postgresql 8.4.1\n\nSort  (cost=129346.71..129498.64 rows=60772 width=61) (actual time=100.358..100.496 rows=1444 loops=1)\n  Sort Key: rl.road_name_id, ah.order8_id, ah.builtup_id, rl.link_id\n  Sort Method:  quicksort  Memory: 252kB\n  ->  Hash Join  (cost=2603.57..124518.03 rows=60772 width=61) (actual time=62.359..97.268 rows=1444 loops=1)\n        Hash Cond: (la.admin_place_id = ah.admin_place_id)\n        ->  Nested Loop  (cost=6.82..120781.81 rows=60772 width=57) (actual time=0.318..33.600 rows=1444 loops=1)\n              ->  Nested Loop  (cost=6.82..72383.98 rows=21451 width=51) (actual time=0.232..12.359 rows=722 loops=1)\n                    ->  Index Scan using pk_xdf_road_name on xdf_road_name rn  (cost=0.00..11.24 rows=97 width=21) (actual time=0.117..0.185 rows=100 loops=1)\n                          Index Cond: ((road_name_id >= 158348561) AND (road_name_id <= 158348660))\n                    ->  Bitmap Heap Scan on xdf_road_link rl  (cost=6.82..743.34 rows=222 width=34) (actual time=0.025..0.115 rows=7 loops=100)\n                          Recheck Cond: (rl.road_name_id = rn.road_name_id)\n                          Filter: ((rl.is_exit_name = 'N'::bpchar) AND (rl.is_junction_name = 'N'::bpchar))\n                          ->  Bitmap Index Scan on nx_xdfroadlink_roadnameid  (cost=0.00..6.76 rows=222 width=0) (actual time=0.008..0.008 rows=7 loops=100)\n                                Index Cond: (rl.road_name_id = rn.road_name_id)\n              ->  Index Scan using nx_xdflinkadmin_linkid on xdf_link_admin la  (cost=0.00..2.22 rows=3 width=10) (actual time=0.023..0.028 rows=2 loops=722)\n                    Index Cond: (la.link_id = rl.link_id)\n        ->  Hash  (cost=1544.11..1544.11 rows=84211 width=12) (actual time=61.924..61.924 rows=84211 loops=1)\n              ->  Seq Scan on xdf_admin_hierarchy ah  (cost=0.00..1544.11 rows=84211 width=12) (actual time=0.017..33.442 rows=84211 loops=1)\nTotal runtime: 101.446 ms\n\n\nand on Postgresql  8.3.8:\n\nSort  (cost=3792.75..3792.95 rows=81 width=61) (actual time=28.928..29.074 rows=1444 loops=1)\n  Sort Key: rl.road_name_id, ah.order8_id, ah.builtup_id, rl.link_id\n  Sort Method:  quicksort  Memory: 252kB\n  ->  Nested Loop  (cost=21.00..3790.18 rows=81 width=61) (actual time=0.210..26.098 rows=1444 loops=1)\n        ->  Nested Loop  (cost=21.00..3766.73 rows=81 width=57) (actual 
time=0.172..19.148 rows=1444 loops=1)\n              ->  Nested Loop  (cost=21.00..3733.04 rows=14 width=51) (actual time=0.129..6.126 rows=722 loops=1)\n                    ->  Index Scan using pk_xdf_road_name on xdf_road_name rn  (cost=0.00..8.32 rows=1 width=21) (actual time=0.059..0.117 rows=100 loops=1)\n                          Index Cond: ((road_name_id >= 158348561) AND (road_name_id <= 158348660))\n                    ->  Bitmap Heap Scan on xdf_road_link rl  (cost=21.00..3711.97 rows=1020 width=34) (actual time=0.015..0.055 rows=7 loops=100)\n                          Recheck Cond: (rl.road_name_id = rn.road_name_id)\n                          Filter: ((rl.is_exit_name = 'N'::bpchar) AND (rl.is_junction_name = 'N'::bpchar))\n                          ->  Bitmap Index Scan on nx_xdfroadlink_roadnameid  (cost=0.00..20.75 rows=1020 width=0) (actual time=0.007..0.007 rows=7 loops=100)\n                                Index Cond: (rl.road_name_id = rn.road_name_id)\n              ->  Index Scan using nx_xdflinkadmin_linkid on xdf_link_admin la  (cost=0.00..2.31 rows=8 width=10) (actual time=0.014..0.017 rows=2 loops=722)\n                    Index Cond: (la.link_id = rl.link_id)\n        ->  Index Scan using pk_xdf_admin_hierarchy on xdf_admin_hierarchy ah  (cost=0.00..0.28 rows=1 width=12) (actual time=0.003..0.004 rows=1 loops=1444)\n              Index Cond: (ah.admin_place_id = la.admin_place_id)\nTotal runtime: 29.366 ms\n\nHope this gives any clue. Or did I missunderstand you?\n\nRegards\n\nDavid\n\n\n>-----Ursprüngliche Nachricht-----\n>Von: Andres Freund [mailto:[email protected]]\n>Gesendet: Dienstag, 8. Dezember 2009 00:25\n>An: [email protected]\n>Cc: Schmitz, David\n>Betreff: Re: [PERFORM] performance penalty between Postgresql\n>8.3.8 and 8.4.1\n>\n>Hi David,\n>\n>On Monday 07 December 2009 23:05:14 Schmitz, David wrote:\n>> With our data it is a performance difference from 1h16min\n>(8.3.8) to\n>> 2h43min (8.4.1)\n>Can you afford a explain analyze run overnight or so for both?\n>\n>Andres\n>\n\nYour output shows that the xdf_admin_hierarchy tables between versions are drastically different.  8.3.8 only contains 1 row, whereas 8.4.1 contains 84211 rows.\nThom", "msg_date": "Tue, 8 Dec 2009 10:11:32 +0000", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance penalty between Postgresql 8.3.8 and 8.4.1" }, { "msg_contents": "Hi Thom,\n \nI did a select count(*) from xdf.xdf_admin_hierarchy and it returns 84211 on both databases postgres 8.3.8 and 8.4.1.\nThe amount of data is exactly the same in both databases as they are restored from the same dump.\n \nRegards\n \nDavid\n\n\n _____ \n\n\tVon: Thom Brown [mailto:[email protected]] \n\tGesendet: Dienstag, 8. 
Dezember 2009 11:12\n\tAn: Schmitz, David\n\tCc: Andres Freund; [email protected]\n\tBetreff: Re: [PERFORM] performance penalty between Postgresql 8.3.8 and 8.4.1\n\t\n\t\n\t2009/12/8 Schmitz, David <[email protected]>\n\t\n\n\t\tHi Andres,\n\t\t\n\t\tEXPLAIN ANALYZE\n\t\tselect ah.ORDER8_ID, ah.BUILTUP_ID, rl.LINK_ID, la.SIDE,\n\t\t rl.ROAD_NAME_ID, rl.LEFT_ADDRESS_RANGE_ID, rl.RIGHT_ADDRESS_RANGE_ID,\n\t\t rl.IS_EXIT_NAME, rl.EXPLICATABLE, rl.IS_JUNCTION_NAME,\n\t\t rl.IS_NAME_ON_ROADSIGN, rl.IS_POSTAL_NAME, rl.IS_STALE_NAME,\n\t\t rl.IS_VANITY_NAME, rl.ROAD_LINK_ID, rn.STREET_NAME,\n\t\t rn.ROUTE_TYPE\n\t\t from rdf.xdf_ADMIN_HIERARCHY ah\n\t\t join xdf.xdf_LINK_ADMIN la\n\t\t on ah.ADMIN_PLACE_ID = la.ADMIN_PLACE_ID\n\t\t join xdf.xdf_ROAD_LINK rl\n\t\t on la.LINK_ID = rl.LINK_ID\n\t\t join xdf.xdf_ROAD_NAME rn\n\t\t on rl.ROAD_NAME_ID = rn.ROAD_NAME_ID\n\t\t where rl.IS_EXIT_NAME = 'N'\n\t\t and rl.IS_JUNCTION_NAME = 'N'\n\t\t and rn.ROAD_NAME_ID between 158348561 and 158348660\n\t\t order by rl.ROAD_NAME_ID, ah.ORDER8_ID, ah.BUILTUP_ID, rl.LINK_ID;\n\t\t\n\t\tOn Postgresql 8.4.1\n\t\t\n\t\tSort (cost=129346.71..129498.64 rows=60772 width=61) (actual time=100.358..100.496 rows=1444 loops=1)\n\t\t\n\t\t Sort Key: rl.road_name_id, ah.order8_id, ah.builtup_id, rl.link_id\n\t\t\n\t\t Sort Method: quicksort Memory: 252kB\n\t\t -> Hash Join (cost=2603.57..124518.03 rows=60772 width=61) (actual time=62.359..97.268 rows=1444 loops=1)\n\t\t\n\t\t Hash Cond: (la.admin_place_id = ah.admin_place_id)\n\t\t\n\t\t -> Nested Loop (cost=6.82..120781.81 rows=60772 width=57) (actual time=0.318..33.600 rows=1444 loops=1)\n\t\t -> Nested Loop (cost=6.82..72383.98 rows=21451 width=51) (actual time=0.232..12.359 rows=722 loops=1)\n\t\t -> Index Scan using pk_xdf_road_name on xdf_road_name rn (cost=0.00..11.24 rows=97 width=21) (actual time=0.117..0.185 rows=100 loops=1)\n\t\t\n\t\t Index Cond: ((road_name_id >= 158348561) AND (road_name_id <= 158348660))\n\t\t\n\t\t -> Bitmap Heap Scan on xdf_road_link rl (cost=6.82..743.34 rows=222 width=34) (actual time=0.025..0.115 rows=7 loops=100)\n\t\t\n\t\t Recheck Cond: (rl.road_name_id = rn.road_name_id)\n\t\t Filter: ((rl.is_exit_name = 'N'::bpchar) AND (rl.is_junction_name = 'N'::bpchar))\n\t\t\n\t\t -> Bitmap Index Scan on nx_xdfroadlink_roadnameid (cost=0.00..6.76 rows=222 width=0) (actual time=0.008..0.008 rows=7 loops=100)\n\t\t\n\t\t Index Cond: (rl.road_name_id = rn.road_name_id)\n\t\t\n\t\t -> Index Scan using nx_xdflinkadmin_linkid on xdf_link_admin la (cost=0.00..2.22 rows=3 width=10) (actual time=0.023..0.028 rows=2 loops=722)\n\t\t\n\t\t Index Cond: (la.link_id = rl.link_id)\n\t\t\n\t\t -> Hash (cost=1544.11..1544.11 rows=84211 width=12) (actual time=61.924..61.924 rows=84211 loops=1)\n\t\t -> Seq Scan on xdf_admin_hierarchy ah (cost=0.00..1544.11 rows=84211 width=12) (actual time=0.017..33.442 rows=84211 loops=1)\n\t\tTotal runtime: 101.446 ms\n\t\t\n\t\t\n\t\tand on Postgresql 8.3.8:\n\t\t\n\t\tSort (cost=3792.75..3792.95 rows=81 width=61) (actual time=28.928..29.074 rows=1444 loops=1)\n\t\t\n\t\t Sort Key: rl.road_name_id, ah.order8_id, ah.builtup_id, rl.link_id\n\t\t\n\t\t Sort Method: quicksort Memory: 252kB\n\t\t -> Nested Loop (cost=21.00..3790.18 rows=81 width=61) (actual time=0.210..26.098 rows=1444 loops=1)\n\t\t -> Nested Loop (cost=21.00..3766.73 rows=81 width=57) (actual time=0.172..19.148 rows=1444 loops=1)\n\t\t -> Nested Loop (cost=21.00..3733.04 rows=14 width=51) (actual time=0.129..6.126 rows=722 loops=1)\n\t\t -> Index Scan using 
pk_xdf_road_name on xdf_road_name rn (cost=0.00..8.32 rows=1 width=21) (actual time=0.059..0.117 rows=100 loops=1)\n\t\t\n\t\t Index Cond: ((road_name_id >= 158348561) AND (road_name_id <= 158348660))\n\t\t\n\t\t -> Bitmap Heap Scan on xdf_road_link rl (cost=21.00..3711.97 rows=1020 width=34) (actual time=0.015..0.055 rows=7 loops=100)\n\t\t\n\t\t Recheck Cond: (rl.road_name_id = rn.road_name_id)\n\t\t Filter: ((rl.is_exit_name = 'N'::bpchar) AND (rl.is_junction_name = 'N'::bpchar))\n\t\t\n\t\t -> Bitmap Index Scan on nx_xdfroadlink_roadnameid (cost=0.00..20.75 rows=1020 width=0) (actual time=0.007..0.007 rows=7 loops=100)\n\t\t\n\t\t Index Cond: (rl.road_name_id = rn.road_name_id)\n\t\t\n\t\t -> Index Scan using nx_xdflinkadmin_linkid on xdf_link_admin la (cost=0.00..2.31 rows=8 width=10) (actual time=0.014..0.017 rows=2 loops=722)\n\t\t\n\t\t Index Cond: (la.link_id = rl.link_id)\n\t\t\n\t\t -> Index Scan using pk_xdf_admin_hierarchy on xdf_admin_hierarchy ah (cost=0.00..0.28 rows=1 width=12) (actual time=0.003..0.004 rows=1 loops=1444)\n\t\t\n\t\t Index Cond: (ah.admin_place_id = la.admin_place_id)\n\t\t\n\t\tTotal runtime: 29.366 ms\n\t\t\n\t\tHope this gives any clue. Or did I missunderstand you?\n\t\t\n\t\tRegards\n\t\t\n\t\tDavid\n\t\t\n\t\t\n\t\t>-----Ursprüngliche Nachricht-----\n\t\t>Von: Andres Freund [mailto:[email protected]]\n\t\t>Gesendet: Dienstag, 8. Dezember 2009 00:25\n\t\t>An: [email protected]\n\t\t>Cc: Schmitz, David\n\t\t>Betreff: Re: [PERFORM] performance penalty between Postgresql\n\t\t\n\t\t>8.3.8 and 8.4.1\n\t\t>\n\t\t\n\t\t>Hi David,\n\t\t>\n\t\t>On Monday 07 December 2009 23:05:14 Schmitz, David wrote:\n\t\t>> With our data it is a performance difference from 1h16min\n\t\t>(8.3.8) to\n\t\t>> 2h43min (8.4.1)\n\t\t>Can you afford a explain analyze run overnight or so for both?\n\t\t>\n\t\t>Andres\n\t\t>\n\t\t\n\t\t\n\n\n\n\n\tYour output shows that the xdf_admin_hierarchy tables between versions are drastically different. 8.3.8 only contains 1 row, whereas 8.4.1 contains 84211 rows.\n\t\n\tThom \n \n*******************************************\ninnovative systems GmbH Navigation-Multimedia\nGeschaeftsfuehrung: Edwin Summers - Michael Juergen Mauser\nSitz der Gesellschaft: Hamburg - Registergericht: Hamburg HRB 59980 \n \n*******************************************\nDiese E-Mail enthaelt vertrauliche und/oder rechtlich geschuetzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese E-Mail irrtuemlich erhalten haben, informieren Sie bitte sofort den Absender und loeschen Sie diese Mail. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Mail ist nicht gestattet.\nThis e-mail may contain confidential and/or privileged information. If you are not the intended recipient (or have received this e-mail in error) please notify the sender immediately and delete this e-mail. Any unauthorized copying, disclosure or distribution of the contents in this e-mail is strictly forbidden.\n*******************************************\n\n\n\n\n\n\n\nHi Thom,\n \nI did a select count(*) from xdf.xdf_admin_hierarchy and it \nreturns 84211 on both databases postgres 8.3.8 and 8.4.1.\nThe amount of data is exactly the same in both databases as \nthey are restored from the same dump.\n \nRegards\n \nDavid\n\n\n\n\nVon: Thom Brown [mailto:[email protected]] \n Gesendet: Dienstag, 8. 
Dezember 2009 11:12An: Schmitz, \n DavidCc: Andres Freund; \n [email protected]: Re: [PERFORM] performance \n penalty between Postgresql 8.3.8 and 8.4.1\n\n2009/12/8 Schmitz, David <[email protected]>\nHi \n Andres,EXPLAIN ANALYZEselect ah.ORDER8_ID, ah.BUILTUP_ID, \n rl.LINK_ID, la.SIDE,              \n      rl.ROAD_NAME_ID, rl.LEFT_ADDRESS_RANGE_ID, \n rl.RIGHT_ADDRESS_RANGE_ID,            \n        rl.IS_EXIT_NAME, rl.EXPLICATABLE, \n rl.IS_JUNCTION_NAME,              \n      rl.IS_NAME_ON_ROADSIGN, rl.IS_POSTAL_NAME, \n rl.IS_STALE_NAME,                \n    rl.IS_VANITY_NAME, rl.ROAD_LINK_ID, rn.STREET_NAME,  \n                 \n  rn.ROUTE_TYPE              \n  from rdf.xdf_ADMIN_HIERARCHY ah          \n      join xdf.xdf_LINK_ADMIN la      \n          on ah.ADMIN_PLACE_ID = \n la.ADMIN_PLACE_ID              \n  join xdf.xdf_ROAD_LINK rl            \n    on la.LINK_ID = rl.LINK_ID        \n        join xdf.xdf_ROAD_NAME rn      \n          on rl.ROAD_NAME_ID = \n rn.ROAD_NAME_ID              \n  where rl.IS_EXIT_NAME = 'N'          \n          and rl.IS_JUNCTION_NAME = 'N'  \n                  and \n rn.ROAD_NAME_ID between 158348561  and 158348660    \n            order by rl.ROAD_NAME_ID, \n ah.ORDER8_ID, ah.BUILTUP_ID, rl.LINK_ID;On Postgresql \n 8.4.1Sort  (cost=129346.71..129498.64 rows=60772 width=61) \n (actual time=100.358..100.496 rows=1444 loops=1)\n Sort Key: rl.road_name_id, ah.order8_id, ah.builtup_id, \n rl.link_id Sort Method:  quicksort  Memory: \n 252kB ->  Hash Join  (cost=2603.57..124518.03 \n rows=60772 width=61) (actual time=62.359..97.268 rows=1444 loops=1)\n       Hash Cond: (la.admin_place_id = \n ah.admin_place_id)       ->  Nested \n Loop  (cost=6.82..120781.81 rows=60772 width=57) (actual \n time=0.318..33.600 rows=1444 loops=1)          \n    ->  Nested Loop  (cost=6.82..72383.98 rows=21451 \n width=51) (actual time=0.232..12.359 rows=722 loops=1)    \n                ->  Index \n Scan using pk_xdf_road_name on xdf_road_name rn  (cost=0.00..11.24 \n rows=97 width=21) (actual time=0.117..0.185 rows=100 loops=1)\n                  \n        Index Cond: ((road_name_id >= 158348561) AND \n (road_name_id <= 158348660))          \n          ->  Bitmap Heap Scan on \n xdf_road_link rl  (cost=6.82..743.34 rows=222 width=34) (actual \n time=0.025..0.115 rows=7 loops=100)\n                  \n        Recheck Cond: (rl.road_name_id = \n rn.road_name_id)                \n          Filter: ((rl.is_exit_name = 'N'::bpchar) \n AND (rl.is_junction_name = 'N'::bpchar))      \n                    -> \n  Bitmap Index Scan on nx_xdfroadlink_roadnameid  (cost=0.00..6.76 \n rows=222 width=0) (actual time=0.008..0.008 rows=7 loops=100)\n                  \n              Index Cond: (rl.road_name_id \n = rn.road_name_id)            \n  ->  Index Scan using nx_xdflinkadmin_linkid on xdf_link_admin \n la  (cost=0.00..2.22 rows=3 width=10) (actual time=0.023..0.028 rows=2 \n loops=722)\n                  \n  Index Cond: (la.link_id = rl.link_id)      \n  ->  Hash  (cost=1544.11..1544.11 rows=84211 width=12) \n (actual time=61.924..61.924 rows=84211 loops=1)      \n        ->  Seq Scan on xdf_admin_hierarchy ah \n  (cost=0.00..1544.11 rows=84211 width=12) (actual time=0.017..33.442 \n rows=84211 loops=1)Total runtime: 101.446 msand on \n Postgresql  8.3.8:Sort  (cost=3792.75..3792.95 rows=81 \n width=61) (actual time=28.928..29.074 rows=1444 loops=1)\n Sort Key: rl.road_name_id, ah.order8_id, ah.builtup_id, \n rl.link_id Sort Method:  quicksort  Memory: \n 252kB ->  Nested Loop  
(cost=21.00..3790.18 rows=81 \n width=61) (actual time=0.210..26.098 rows=1444 loops=1)    \n    ->  Nested Loop  (cost=21.00..3766.73 rows=81 \n width=57) (actual time=0.172..19.148 rows=1444 loops=1)    \n          ->  Nested Loop \n  (cost=21.00..3733.04 rows=14 width=51) (actual time=0.129..6.126 \n rows=722 loops=1)                \n    ->  Index Scan using pk_xdf_road_name on xdf_road_name \n rn  (cost=0.00..8.32 rows=1 width=21) (actual time=0.059..0.117 \n rows=100 loops=1)\n                  \n        Index Cond: ((road_name_id >= 158348561) AND \n (road_name_id <= 158348660))          \n          ->  Bitmap Heap Scan on \n xdf_road_link rl  (cost=21.00..3711.97 rows=1020 width=34) (actual \n time=0.015..0.055 rows=7 loops=100)\n                  \n        Recheck Cond: (rl.road_name_id = \n rn.road_name_id)                \n          Filter: ((rl.is_exit_name = 'N'::bpchar) \n AND (rl.is_junction_name = 'N'::bpchar))      \n                    -> \n  Bitmap Index Scan on nx_xdfroadlink_roadnameid  (cost=0.00..20.75 \n rows=1020 width=0) (actual time=0.007..0.007 rows=7 loops=100)\n                  \n              Index Cond: (rl.road_name_id \n = rn.road_name_id)            \n  ->  Index Scan using nx_xdflinkadmin_linkid on xdf_link_admin \n la  (cost=0.00..2.31 rows=8 width=10) (actual time=0.014..0.017 rows=2 \n loops=722)\n                  \n  Index Cond: (la.link_id = rl.link_id)      \n  ->  Index Scan using pk_xdf_admin_hierarchy on \n xdf_admin_hierarchy ah  (cost=0.00..0.28 rows=1 width=12) (actual \n time=0.003..0.004 rows=1 loops=1444)\n             Index Cond: \n (ah.admin_place_id = la.admin_place_id)Total runtime: 29.366 \n msHope this gives any clue. Or did I missunderstand \n you?RegardsDavid>-----Ursprüngliche \n Nachricht----->Von: Andres Freund [mailto:[email protected]]>Gesendet: \n Dienstag, 8. Dezember 2009 00:25>An: [email protected]>Cc: \n Schmitz, David>Betreff: Re: [PERFORM] performance penalty between \n Postgresql\n>8.3.8 and 8.4.1>\n>Hi David,>>On Monday 07 December 2009 \n 23:05:14 Schmitz, David wrote:>> With our data it is a performance \n difference from 1h16min>(8.3.8) to>> 2h43min \n (8.4.1)>Can you afford a explain analyze run overnight or so for \n both?>>Andres>\n\nYour output shows that the xdf_admin_hierarchy tables between \n versions are drastically different.  8.3.8 only contains 1 row, whereas \n 8.4.1 contains 84211 \nrows.Thom \n \n\n*******************************************innovative systems GmbH Navigation-MultimediaGeschaeftsfuehrung: Edwin Summers - Michael Juergen MauserSitz der Gesellschaft: Hamburg - Registergericht: Hamburg HRB 59980  *******************************************Diese E-Mail enthaelt vertrauliche und/oder rechtlich geschuetzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese E-Mail irrtuemlich erhalten haben, informieren Sie bitte sofort den Absender und loeschen Sie diese Mail. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Mail ist nicht gestattet.\n\n\nThis e-mail may contain confidential and/or privileged information. If you are not the intended recipient (or have received this e-mail in error) please notify the sender immediately and delete this e-mail. 
Any unauthorized copying, disclosure or distribution of the contents in this e-mail is strictly forbidden.*******************************************", "msg_date": "Tue, 8 Dec 2009 11:18:45 +0100", "msg_from": "\"Schmitz, David\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance penalty between Postgresql 8.3.8 and 8.4.1" }, { "msg_contents": "Hi David,\n\nOn Tuesday 08 December 2009 10:59:51 Schmitz, David wrote:\n> >> With our data it is a performance difference from 1h16min\n> >> (8.3.8) to 2h43min (8.4.1)\n> On Postgresql 8.4.1\n> Total runtime: 101.446 ms\n> and on Postgresql 8.3.8:\n> Total runtime: 29.366 ms\nHm. There obviously is more going on than these queries?\n\n> Hash Join (cost=2603.57..124518.03 rows=60772 width=61) (actual \ntime=62.359..97.268 rows=1444 loops=1)\n> Nested Loop (cost=21.00..3790.18 rows=81 width=61) (actual \ntime=0.210..26.098 rows=1444 loops=1)\nBoth misestimate the resultset quite a bit. It looks like happenstance that \nthe one on 8.3 turns out to be better...\n\nAndres\n", "msg_date": "Tue, 8 Dec 2009 11:28:55 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance penalty between Postgresql 8.3.8 and 8.4.1" }, { "msg_contents": "Hi Andres,\n\nthis is just one of many of these queries. There are a lot of jobs calculating \nstuff for different ranges which are defined via between in the where clause.\n\n\nWhen I leave out the between in the where clause it returns:\n\nOn Postgresql 8.4.1:\n\nSort (cost=5390066.42..5435347.78 rows=18112546 width=61) (actual time=84382.275..91367.983 rows=12742796 loops=1)\nSort Key: rl.road_name_id, ah.order8_id, ah.builtup_id, rl.link_id\nSort Method: external merge Disk: 924536kB\n-> Hash Join (cost=1082249.40..2525563.48 rows=18112546 width=61) (actual time=23367.205..52256.209 rows=12742796 loops=1)\nHash Cond: (la.admin_place_id = ah.admin_place_id)\n-> Merge Join (cost=1079652.65..2183356.50 rows=18112546 width=57) (actual time=23306.643..45541.157 rows=12742796 loops=1)\n Merge Cond: (la.link_id = rl.link_id)\n -> Index Scan using nx_xdflinkadmin_linkid on xdf_link_admin la (cost=0.00..798398.53 rows=16822372 width=10) (actual time=0.098..12622.576 rows=16822399 loops=1)\n -> Sort (cost=1071304.95..1087287.81 rows=6393147 width=51) (actual time=23302.596..25640.559 rows=12742795 loops=1)\n\t Sort Key: rl.link_id\n\t Sort Method: external sort Disk: 405896kB\n\t -> Hash Join (cost=15735.91..348620.58 rows=6393147 width=51) (actual time=327.064..9189.938 rows=6371398 loops=1)\n\t\t Hash Cond: (rl.road_name_id = rn.road_name_id)\n\t\t -> Seq Scan on xdf_road_link rl (cost=0.00..182236.41 rows=7708159 width=34) (actual time=0.028..2689.085 rows=7709085 loops=1)\n\t\t\tFilter: ((is_exit_name = 'N'::bpchar) AND (is_junction_name = 'N'::bpchar))\n\t\t -> Hash (cost=9885.96..9885.96 rows=467996 width=21) (actual time=326.740..326.740 rows=467996 loops=1)\n\t\t\t-> Seq Scan on xdf_road_name rn (cost=0.00..9885.96 rows=467996 width=21) (actual time=0.019..191.473 rows=467996 loops=1)\n-> Hash (cost=1544.11..1544.11 rows=84211 width=12) (actual time=60.453..60.453 rows=84211 loops=1)\n -> Seq Scan on xdf_admin_hierarchy ah (cost=0.00..1544.11 rows=84211 width=12) (actual time=0.019..31.723 rows=84211 loops=1)\nTotal runtime: 92199.676 ms\n\nOn Postgresql 8.3.8:\n\nSort (cost=9419546.57..9514635.57 rows=38035597 width=61) (actual time=82790.473..88847.963 rows=12742796 loops=1)\n Sort Key: rl.road_name_id, ah.order8_id, ah.builtup_id, 
rl.link_id\n Sort Method: external merge Disk: 999272kB\n -> Hash Join (cost=1079404.97..3200652.85 rows=38035597 width=61) (actual time=22583.059..51197.249 rows=12742796 loops=1)\n Hash Cond: (la.admin_place_id = ah.admin_place_id)\n -> Merge Join (cost=1076808.22..2484888.66 rows=38035597 width=57) (actual time=22524.015..44539.246 rows=12742796 loops=1)\n Merge Cond: (la.link_id = rl.link_id)\n -> Index Scan using nx_xdflinkadmin_linkid on xdf_link_admin la (cost=0.00..795583.17 rows=16822420 width=10) (actual time=0.086..11725.990 rows=16822399 loops=1)\n -> Sort (cost=1076734.49..1092821.79 rows=6434920 width=51) (actual time=22514.553..25083.253 rows=12742795 loops=1)\n Sort Key: rl.link_id\n Sort Method: external sort Disk: 443264kB\n -> Hash Join (cost=15743.47..349025.77 rows=6434920 width=51) (actual time=330.211..9014.353 rows=6371398 loops=1)\n Hash Cond: (rl.road_name_id = rn.road_name_id)\n -> Seq Scan on xdf_road_link rl (cost=0.00..182235.08 rows=7706491 width=34) (actual time=0.018..2565.983 rows=7709085 loops=1)\n Filter: ((is_exit_name = 'N'::bpchar) AND (is_junction_name = 'N'::bpchar))\n -> Hash (cost=9890.43..9890.43 rows=468243 width=21) (actual time=329.906..329.906 rows=467996 loops=1)\n -> Seq Scan on xdf_road_name rn (cost=0.00..9890.43 rows=468243 width=21) (actual time=0.018..190.764 rows=467996 loops=1)\n -> Hash (cost=1544.11..1544.11 rows=84211 width=12) (actual time=58.910..58.910 rows=84211 loops=1)\n -> Seq Scan on xdf_admin_hierarchy ah (cost=0.00..1544.11 rows=84211 width=12) (actual time=0.009..28.725 rows=84211 loops=1)\nTotal runtime: 89612.801 ms\n\nRegards \n\nDavid \n\n>-----Ursprüngliche Nachricht-----\n>Von: Andres Freund [mailto:[email protected]] \n>Gesendet: Dienstag, 8. Dezember 2009 11:29\n>An: [email protected]\n>Cc: Schmitz, David\n>Betreff: Re: [PERFORM] performance penalty between Postgresql \n>8.3.8 and 8.4.1\n>\n>Hi David,\n>\n>On Tuesday 08 December 2009 10:59:51 Schmitz, David wrote:\n>> >> With our data it is a performance difference from 1h16min\n>> >> (8.3.8) to 2h43min (8.4.1)\n>> On Postgresql 8.4.1\n>> Total runtime: 101.446 ms\n>> and on Postgresql 8.3.8:\n>> Total runtime: 29.366 ms\n>Hm. There obviously is more going on than these queries?\n>\n>> Hash Join (cost=2603.57..124518.03 rows=60772 width=61) (actual\n>time=62.359..97.268 rows=1444 loops=1)\n>> Nested Loop (cost=21.00..3790.18 rows=81 width=61) (actual\n>time=0.210..26.098 rows=1444 loops=1)\n>Both misestimate the resultset quite a bit. It looks like \n>happenstance that the one on 8.3 turns out to be better...\n>\n>Andres\n> \n \n*******************************************\ninnovative systems GmbH Navigation-Multimedia\nGeschaeftsfuehrung: Edwin Summers - Michael Juergen Mauser\nSitz der Gesellschaft: Hamburg - Registergericht: Hamburg HRB 59980 \n \n*******************************************\nDiese E-Mail enthaelt vertrauliche und/oder rechtlich geschuetzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese E-Mail irrtuemlich erhalten haben, informieren Sie bitte sofort den Absender und loeschen Sie diese Mail. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Mail ist nicht gestattet.\nThis e-mail may contain confidential and/or privileged information. If you are not the intended recipient (or have received this e-mail in error) please notify the sender immediately and delete this e-mail. 
Any unauthorized copying, disclosure or distribution of the contents in this e-mail is strictly forbidden.\n*******************************************\n", "msg_date": "Tue, 8 Dec 2009 11:42:01 +0100", "msg_from": "\"Schmitz, David\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance penalty between Postgresql 8.3.8 and 8.4.1" }, { "msg_contents": "On 8/12/2009 6:11 PM, Thom Brown wrote:\n\n> Your output shows that the xdf_admin_hierarchy tables between versions\n> are drastically different. 8.3.8 only contains 1 row, whereas 8.4.1\n> contains 84211 rows.\n\nThat's just because one of them is doing a nested loop where it looks up \na single row from xdf_admin_hierarchy via its primary key on each \niteration. The other plan is doing a hash join on a sequential scan over \nxdf_admin_hierarchy so it reports all the rows at once.\n\n--\nCraig Ringer\n", "msg_date": "Tue, 08 Dec 2009 20:12:04 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance penalty between Postgresql 8.3.8 and 8.4.1" }, { "msg_contents": "Hi Craig,\n\nthat is exactly the problem postgresql 8.4.1 does not consider the primary key but instead calculates \na hash join. This can only result in poorer performance. I think this is a bug.\n\nRegards\n\nDavid \n\n>-----Ursprüngliche Nachricht-----\n>Von: Craig Ringer [mailto:[email protected]] \n>Gesendet: Dienstag, 8. Dezember 2009 13:12\n>An: Thom Brown\n>Cc: Schmitz, David; Andres Freund; [email protected]\n>Betreff: Re: [PERFORM] performance penalty between Postgresql \n>8.3.8 and 8.4.1\n>\n>On 8/12/2009 6:11 PM, Thom Brown wrote:\n>\n>> Your output shows that the xdf_admin_hierarchy tables \n>between versions \n>> are drastically different. 8.3.8 only contains 1 row, whereas 8.4.1 \n>> contains 84211 rows.\n>\n>That's just because one of them is doing a nested loop where \n>it looks up a single row from xdf_admin_hierarchy via its \n>primary key on each iteration. The other plan is doing a hash \n>join on a sequential scan over xdf_admin_hierarchy so it \n>reports all the rows at once.\n>\n>--\n>Craig Ringer\n> \n \n*******************************************\ninnovative systems GmbH Navigation-Multimedia\nGeschaeftsfuehrung: Edwin Summers - Michael Juergen Mauser\nSitz der Gesellschaft: Hamburg - Registergericht: Hamburg HRB 59980 \n \n*******************************************\nDiese E-Mail enthaelt vertrauliche und/oder rechtlich geschuetzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese E-Mail irrtuemlich erhalten haben, informieren Sie bitte sofort den Absender und loeschen Sie diese Mail. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Mail ist nicht gestattet.\nThis e-mail may contain confidential and/or privileged information. If you are not the intended recipient (or have received this e-mail in error) please notify the sender immediately and delete this e-mail. Any unauthorized copying, disclosure or distribution of the contents in this e-mail is strictly forbidden.\n*******************************************\n", "msg_date": "Tue, 8 Dec 2009 14:27:14 +0100", "msg_from": "\"Schmitz, David\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance penalty between Postgresql 8.3.8 and 8.4.1" }, { "msg_contents": "On Tue, Dec 8, 2009 at 7:12 AM, Craig Ringer\n<[email protected]> wrote:\n> On 8/12/2009 6:11 PM, Thom Brown wrote:\n>\n>> Your output shows that the xdf_admin_hierarchy tables between versions\n>> are drastically different.  
8.3.8 only contains 1 row, whereas 8.4.1\n>> contains 84211 rows.\n>\n> That's just because one of them is doing a nested loop where it looks up a\n> single row from xdf_admin_hierarchy via its primary key on each iteration.\n> The other plan is doing a hash join on a sequential scan over\n> xdf_admin_hierarchy so it reports all the rows at once.\n\nI've been meaning to write a patch to show the places after the\ndecimal point in that case. Rounding off to an integer is horribly\nmisleading and obscures what is really going on. Although in this\ncase maybe it would come out 1.000 anyway.\n\n...Robert\n", "msg_date": "Tue, 8 Dec 2009 10:02:05 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance penalty between Postgresql 8.3.8 and 8.4.1" }, { "msg_contents": "On Tue, Dec 8, 2009 at 8:27 AM, Schmitz, David <[email protected]> wrot\n> that is exactly the problem postgresql 8.4.1 does not consider the primary key but instead calculates\n> a hash join. This can only result in poorer performance. I think this is a bug.\n\nYour statement that \"this can only result in poorer performance\" is\nflat wrong. Just because there's a primary key doesn't mean that an\ninner-indexscan plan is fastest. Frequently a hash join is faster. I\ncan think of a couple of possible explanations for the behavior you're\nseeing:\n\n- Something could be blocking PostgreSQL from using that index at all.\n If you do EXPLAIN SELECT * FROM xdf_admin_hierarchy WHERE\nadmin_place_id = <some particular value>, does it use the index or\nseq-scan the table?\n\n- The index on your 8.4.1 system might be bloated. You could perhaps\nSELECT reltuples FROM pg_class WHERE oid =\n'pk_xdf_admin_hierarchy'::regclass on both systems to see if one index\nis larger than the other.\n\n- You might have changed the value of the work_mem parameter on one\nsystem vs. the other. Try \"show work_mem;\" on each system and see\nwhat you get.\n\nIf it's none of those things, it's could be the result of a code\nchange, but I'm at a loss to think of which one would apply in this\ncase. I suppose we could do a bisection search but that's a lot of\nwork for you. If you could extract a reproducible test case (complete\nwith data) that would allow someone else to try to track it down.\n\n...Robert\n", "msg_date": "Tue, 8 Dec 2009 10:13:51 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance penalty between Postgresql 8.3.8 and 8.4.1" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> I can think of a couple of possible explanations for the behavior you're\n> seeing:\n\nThe reason it's switching from a nestloop to something else is pretty\nobvious: the estimate of the number of rows coming out of the lower\njoin has gone from 81 to 60772. Neither of which is real accurate :-(,\nbut the larger value pretty strongly discourages using a nestloop.\n\nThe estimates for the individual scans mostly seem to be better than\nbefore, in the case of xdf_road_name far better: 97 vs 1, against a true\nvalue of 100. So that's good; I suspect though that it just comes from\nthe increase in default stats target and doesn't reflect any logic\nchange. The bottom line though is that it's gone from a considerable\nunderestimate of the join size to a considerable overestimate, and that\npushes it to use a different plan that turns out to be inferior.\n\nI don't see any fixable bug here. 
This is just a corner case where\nthe inherent inaccuracies in join size estimation went wrong for us;\nbut for every one of those there's another one where we'd get the\nright answer for the wrong reason.\n\nOne thing that might be worth considering is to try to improve the\naccuracy of this rowcount estimate:\n\n -> Bitmap Heap Scan on xdf_road_link rl (cost=6.82..743.34 rows=222 width=34) (actual time=0.025..0.115 rows=7 loops=100)\n Recheck Cond: (rl.road_name_id = rn.road_name_id)\n Filter: ((rl.is_exit_name = 'N'::bpchar) AND (rl.is_junction_name = 'N'::bpchar))\n -> Bitmap Index Scan on nx_xdfroadlink_roadnameid (cost=0.00..6.76 rows=222 width=0) (actual time=0.008..0.008 rows=7 loops=100)\n Index Cond: (rl.road_name_id = rn.road_name_id)\n\nI think a large part of the inaccuracy here has to do with not having\ngood stats for the joint effect of the is_exit_name and is_junction_name\nconditions. But to be frank that looks like bad schema design.\nConsider merging those and any related flags into one \"entry type\"\ncolumn.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Dec 2009 11:03:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance penalty between Postgresql 8.3.8 and 8.4.1 " }, { "msg_contents": "Hi Robert,\n\nunfortunatley its non of the things :-( see below:\n\n- EXPLAIN SELECT * FROM xdf.xdf_admin_hierarchy \n WHERE admin_place_id = 150738434\n \n On Postgresql 8.4.1 and 8.3.8\n Index Scan using pk_rdf_admin_hierarchy on rdf_admin_hierarchy (cost=0.00..8.28 rows=1 width=34)\n Index Cond: (admin_place_id = 150738434)\n\n- SELECT reltuples FROM pg_class WHERE oid = 'pk_xdf_admin_hierarchy'::regclass\n returns 84211 on postgresql 8.4.1 and 8.3.8\n\n- work_mem is 512MB on both systems\n\n- unfortunately I can not hand out any data because of legal issues so we will have to \n do further debugging if necessary\n\nSo how should we proceed with this issue?\n\nRegards\n\nDavid\n\n\n>-----Ursprüngliche Nachricht-----\n>Von: Robert Haas [mailto:[email protected]] \n>Gesendet: Dienstag, 8. Dezember 2009 16:14\n>An: Schmitz, David\n>Cc: Craig Ringer; Thom Brown; Andres Freund; \n>[email protected]\n>Betreff: Re: [PERFORM] performance penalty between Postgresql \n>8.3.8 and 8.4.1\n>\n>On Tue, Dec 8, 2009 at 8:27 AM, Schmitz, David \n><[email protected]> wrot\n>> that is exactly the problem postgresql 8.4.1 does not consider the \n>> primary key but instead calculates a hash join. This can \n>only result in poorer performance. I think this is a bug.\n>\n>Your statement that \"this can only result in poorer \n>performance\" is flat wrong. Just because there's a primary \n>key doesn't mean that an inner-indexscan plan is fastest. \n>Frequently a hash join is faster. I can think of a couple of \n>possible explanations for the behavior you're\n>seeing:\n>\n>- Something could be blocking PostgreSQL from using that index at all.\n> If you do EXPLAIN SELECT * FROM xdf_admin_hierarchy WHERE \n>admin_place_id = <some particular value>, does it use the \n>index or seq-scan the table?\n>\n>- The index on your 8.4.1 system might be bloated. You could \n>perhaps SELECT reltuples FROM pg_class WHERE oid = \n>'pk_xdf_admin_hierarchy'::regclass on both systems to see if \n>one index is larger than the other.\n>\n>- You might have changed the value of the work_mem parameter \n>on one system vs. the other. 
Try \"show work_mem;\" on each \n>system and see what you get.\n>\n>If it's none of those things, it's could be the result of a \n>code change, but I'm at a loss to think of which one would \n>apply in this case. I suppose we could do a bisection search \n>but that's a lot of work for you. If you could extract a \n>reproducible test case (complete with data) that would allow \n>someone else to try to track it down.\n>\n>...Robert\n> \n \n*******************************************\ninnovative systems GmbH Navigation-Multimedia\nGeschaeftsfuehrung: Edwin Summers - Michael Juergen Mauser\nSitz der Gesellschaft: Hamburg - Registergericht: Hamburg HRB 59980 \n \n*******************************************\nDiese E-Mail enthaelt vertrauliche und/oder rechtlich geschuetzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese E-Mail irrtuemlich erhalten haben, informieren Sie bitte sofort den Absender und loeschen Sie diese Mail. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Mail ist nicht gestattet.\nThis e-mail may contain confidential and/or privileged information. If you are not the intended recipient (or have received this e-mail in error) please notify the sender immediately and delete this e-mail. Any unauthorized copying, disclosure or distribution of the contents in this e-mail is strictly forbidden.\n*******************************************\n", "msg_date": "Tue, 8 Dec 2009 17:07:47 +0100", "msg_from": "\"Schmitz, David\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance penalty between Postgresql 8.3.8 and 8.4.1" }, { "msg_contents": "On Tue, Dec 8, 2009 at 11:07 AM, Schmitz, David\n<[email protected]> wrote:\n> So how should we proceed with this issue?\n\nI think Tom nailed it.\n\n...Robert\n", "msg_date": "Tue, 8 Dec 2009 12:38:56 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance penalty between Postgresql 8.3.8 and 8.4.1" } ]
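To pull the checks Robert lists in the thread above into something runnable, here is a minimal sketch. It reuses the table, index and admin_place_id value already quoted in the thread, and adds a pg_size_pretty(pg_relation_size()) call (present in both 8.3 and 8.4) on top of the reltuples check he mentions.

```sql
-- 1. Confirm the planner will still use the primary key for a plain lookup.
EXPLAIN SELECT * FROM xdf.xdf_admin_hierarchy WHERE admin_place_id = 150738434;

-- 2. Compare index bloat between the two clusters (run on both and diff the
--    output); reltuples is Robert's check, the size column is an extra.
SELECT reltuples,
       pg_size_pretty(pg_relation_size(oid)) AS index_size
FROM pg_class
WHERE oid = 'pk_xdf_admin_hierarchy'::regclass;

-- 3. Make sure both servers really run with the same memory settings.
SHOW work_mem;
```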
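Tom's closing suggestion, folding the is_exit_name / is_junction_name flags into one "entry type" column so the planner only has to estimate a single predicate instead of two correlated ones, could look roughly like this. The column name and code values are invented for illustration, and the CASE ordering assumes a row never carries both flags at once; that assumption would need checking against the real data.

```sql
-- Hypothetical refactoring along the lines Tom suggests (names made up here).
ALTER TABLE xdf.xdf_road_link ADD COLUMN link_name_type char(1);

UPDATE xdf.xdf_road_link
SET link_name_type = CASE
        WHEN is_exit_name = 'Y'     THEN 'E'   -- exit name
        WHEN is_junction_name = 'Y' THEN 'J'   -- junction name
        ELSE 'N'                               -- ordinary road name
    END;

ANALYZE xdf.xdf_road_link;

-- The query's two-column filter then collapses to one predicate that the
-- planner can estimate from a single column's statistics:
--   WHERE rl.link_name_type = 'N'
```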
[ { "msg_contents": "Hi,\n\nI am looking for a way to let the user know what the estimated time for the\ncurrent transaction he requested and while the transaction is in progress,\nhow much time is elapsed for the transaction as a fraction of the total\nestimated time at a particular instance, by dynamically estimating the time\nfor the transaction at that instance.\n\nI got to know how Prostgre estimates the cost for a particular operation.\nDoes estimated cost means the estimated time to evaluate an operation in the\ncontext of Postgre? And also may I know if there is any way to achieve the\nrequirement I mentioned above, with the Postgre SQL?\n\nI would be very much thankful for your suggestions as I am doing a research\nproject to implement a mechanism to achieve the above mentioned task.\n\nThank you.\n\nregards,\nHasini.\n\nHi,I am looking for a way to let the user know what the estimated time for the current transaction he requested and while the transaction is in progress, how much time is elapsed for the transaction  as a fraction of the total estimated time at a particular instance, by dynamically estimating the time for the transaction at that instance.\nI got to know how Prostgre estimates the cost for a particular operation. Does estimated cost means the estimated time to evaluate an operation in the context of Postgre? And also may I know if there is any way to achieve the requirement I mentioned above, with the Postgre SQL?\nI would be very much thankful for your suggestions as I am doing a research project to implement a mechanism to achieve the above mentioned task.Thank you.\nregards,Hasini.", "msg_date": "Tue, 8 Dec 2009 11:07:55 +0800", "msg_from": "Hasini Gunasinghe <[email protected]>", "msg_from_op": true, "msg_subject": "Dynamlically updating the estimated cost of a transaction" }, { "msg_contents": "Hasini Gunasinghe wrote:\n>\n> I am looking for a way to let the user know what the estimated time \n> for the current transaction he requested and while the transaction is \n> in progress, how much time is elapsed for the transaction as a \n> fraction of the total estimated time at a particular instance, by \n> dynamically estimating the time for the transaction at that instance.\nI think this one needs to get added to the FAQ. To re-use from when I \nanswered this last month: this just isn't exposed in PostgreSQL yet. \nClients ask for queries to be run, eventually they get rows of results \nback, but there's no notion of how many they're going to get in advance \nor how far along they are in executing the query's execution plan. \nThere's a couple of academic projects that have started exposing more of \nthe query internals, but I'm not aware of anyone who's even started \nmoving in the direction of what you'd need to produce a progress bar or \nestimate a total run-time. It's a hard problem--you could easily spend \nseveral years of your life on this alone and still not have even a \nmediocre way to predict how much time is left to execute a generic query.\n\nIn practice, people tend to save query log files showing historical \ninformation about how long queries took to run, and then use that to \npredict future response times. That's a much easier way to get \nsomething useful for a lot of applications than expecting you can ever \nestimate just based on an EXPLAIN plan.\n\n> I got to know how Prostgre estimates the cost for a particular \n> operation. Does estimated cost means the estimated time to evaluate an \n> operation in the context of Postgre? 
And also may I know if there is \n> any way to achieve the requirement I mentioned above, with the Postgre \n> SQL?\nEstimated costs are not ever used to predict an estimated time. An \ninteresting research project would be trying to tie the two together \nmore tightly, by collecting a bunch of data measuring real EXPLAIN \nANALYZE execution times with their respective cost estimates.\n\nEven after you collected it, actually using the data from such research \nis quite tricky. For example, people have tried to tie some of the \nindividual cost components to the real world--for example, measuring the \ntrue amount of time it takes to do a sequential read vs. a seek and \nadjusting random_page_cost accordingly. But if you then set \nrandom_page_cost to its real-world value based on that estimate, you get \na value well outside what seems to work for people in practice. This \nsuggests the underlying cost estimate doesn't reflect the real-world \nvalue it intends to that closely. But improving on that situation \nwithout going backwards in the quality of the plans the query optimizer \nproduces is a tricky problem.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Tue, 08 Dec 2009 00:43:47 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Dynamlically updating the estimated cost of a transaction" } ]
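As a concrete illustration of the approach Greg describes (save how long real queries take, then line the planner's cost estimate up against measured runtimes on your own hardware), the sketch below runs a representative query under EXPLAIN ANALYZE. The table and columns are placeholders, not anything from the original question.

```sql
-- Record the top line's "cost=" estimate next to its "actual time=" figure;
-- repeating this over the real workload builds a cost-to-seconds mapping for
-- this particular machine.  "orders" is a placeholder table name.
EXPLAIN ANALYZE
SELECT customer_id, count(*)
FROM orders
WHERE order_date >= date '2009-01-01'
GROUP BY customer_id;

-- For the historical per-query timings used to predict future response times,
-- the standard setting is (in postgresql.conf):
--   log_min_duration_statement = 0    -- logs every statement with its duration
```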
[ { "msg_contents": "*$ dbt2-run-workload -a pgsql -d 10 -w 1 -c 1 -o\n/home/store/tmp/testdbt2/out-1.o*\npostmaster starting\nDBT-2 test for pgsql started...\n\nDATABASE SYSTEM: localhost\nDATABASE NAME: dbt2\nDATABASE CONNECTIONS: 1\nTERMINAL THREADS: 10\nTERMINALS PER WAREHOUSE: 10\nWAREHOUSES PER THREAD/CLIENT PAIR: 500\nSCALE FACTOR (WAREHOUSES): 1\nDURATION OF TEST (in sec): 10\n1 client stared every 1000 millisecond(s)\n\nStage 1. Starting up client...\nSleeping 501 seconds\ncollecting database statistics...\n\nStage 2. Starting up driver...\n1000 threads started per millisecond\nestimated rampup time: Sleeping 5010 seconds\n\nestimated rampup time has elapsed\nestimated steady state time: Sleeping 10 seconds\n\nStage 3. Processing of results...\nKilling client...\n/usr/local/bin/dbt2-run-workload: line 518: 24055 Terminated\ndbt2-client ${CLIENT_COMMAND_ARGS} -p ${PORT} -o ${CDIR} >\n${CLIENT_OUTPUT_DIR}/`hostname`/client-${SEG}.out 2>&1\nwaiting for postmaster to shut down.... done\npostmaster stopped\n*Can't use an undefined value as an ARRAY reference at\n/usr/lib/perl5/site_perl/5.8.8/Test/Parser/Dbt2.pm line 521.*\nThe authenticity of host 'localhost (127.0.0.1)' can't be established.\nRSA key fingerprint is b6:da:1d:d0:28:d7:ed:06:08:72:44:de:02:f1:b9:52.\nAre you sure you want to continue connecting (yes/no)?\nHost key verification failed.\nTest completed.\nResults are in: /home/store/tmp/testdbt2/out-1.o\n\n Response Time (s)\n Transaction % Average : 90th % Total\nRollbacks %\n------------ ----- --------------------- ----------- ---------------\n-----\n\n\n-- \nBest regards,\nNiu Yan\n\n$ dbt2-run-workload -a pgsql -d 10 -w 1 -c 1 -o /home/store/tmp/testdbt2/out-1.opostmaster startingDBT-2 test for pgsql started...\nDATABASE SYSTEM: localhostDATABASE NAME: dbt2DATABASE CONNECTIONS: 1TERMINAL THREADS: 10TERMINALS PER WAREHOUSE: 10WAREHOUSES PER THREAD/CLIENT PAIR: 500SCALE FACTOR (WAREHOUSES): 1DURATION OF TEST (in sec): 10\n1 client stared every 1000 millisecond(s)\nStage 1. Starting up client...Sleeping 501 secondscollecting database statistics...\nStage 2. Starting up driver...1000 threads started per millisecondestimated rampup time: Sleeping 5010 seconds\nestimated rampup time has elapsedestimated steady state time: Sleeping 10 seconds\nStage 3. Processing of results...Killing client.../usr/local/bin/dbt2-run-workload: line 518: 24055 Terminated              dbt2-client ${CLIENT_COMMAND_ARGS} -p ${PORT} -o ${CDIR} > ${CLIENT_OUTPUT_DIR}/`hostname`/client-${SEG}.out 2>&1\nwaiting for postmaster to shut down.... 
donepostmaster stoppedCan't use an undefined value as an ARRAY reference at /usr/lib/perl5/site_perl/5.8.8/Test/Parser/Dbt2.pm line 521.The authenticity of host 'localhost (127.0.0.1)' can't be established.\nRSA key fingerprint is b6:da:1d:d0:28:d7:ed:06:08:72:44:de:02:f1:b9:52.Are you sure you want to continue connecting (yes/no)?Host key verification failed.Test completed.Results are in: /home/store/tmp/testdbt2/out-1.o\n                         Response Time (s) Transaction      %    Average :    90th %        Total        Rollbacks      %------------  -----  ---------------------  -----------  ---------------  -----\n-- Best regards,Niu Yan", "msg_date": "Tue, 8 Dec 2009 13:37:06 +0800", "msg_from": "Niu Yan <[email protected]>", "msg_from_op": true, "msg_subject": "error occured in dbt2 against with postgresql" }, { "msg_contents": "On Tue, Dec 8, 2009 at 12:37 AM, Niu Yan <[email protected]> wrote:\n> Can't use an undefined value as an ARRAY reference at\n> /usr/lib/perl5/site_perl/5.8.8/Test/Parser/Dbt2.pm line 521.\n\nI'm guessing this is intended as a bug report, but this is a\nPostgreSQL mailing list, and that's a Perl error message.\n\n...Robert\n", "msg_date": "Tue, 8 Dec 2009 10:52:22 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: error occured in dbt2 against with postgresql" } ]
[ { "msg_contents": "Hi All,\n\nI have to optimize following query :\n\nSELECT r.TopFamilyID AS FamilyID, FROM CMRules r \n WHERE r.WorkspaceID =18512 \n GROUP BY r.TopFamilyID ;\n\nThe explain plan is as follows :\n\n Group (cost=509989.19..511518.30 rows=9 width=10) (actual time=1783.102..2362.587 rows=261 loops=1)\n -> Sort (cost=509989.19..510753.74 rows=305821 width=10) (actual time=1783.097..2121.378 rows=272211 loops=1)\n Sort Key: topfamilyid\n -> Bitmap Heap Scan on cmrules r (cost=14501.36..476896.34 rows=305821 width=10) (actual time=51.507..351.487 rows=272211 loops=1)\n Recheck Cond: (workspaceid = 18512::numeric)\n -> Bitmap Index Scan on pk_ws_fea_fam_cmrules (cost=0.00..14424.90 rows=305821 width=0) (actual time=48.097..48.097 rows=272211 loops=1)\n Index Cond: (workspaceid = 18512::numeric)\n Total runtime: 2373.008 ms\n(8 rows)\n-----------------------------------------------------------------------------------------------------------------\n\\d CMRules gives follows indexes\n\nIndexes:\n \"pk_ws_fea_fam_cmrules\" PRIMARY KEY, btree (workspaceid, featureid, topfamilyid, ruleenddate, gid)\n \"idx_cmrules\" btree (topfamilyid)\n \"idx_gid_ws_cmrules\" btree (gid, workspaceid)\n-----------------------------------------------------------------------------------------------------------------\nSELECT count(distinct r.TopFamilyID) FROM CMRules r WHERE r.WorkspaceID =18512\n\nGives me 261 Rows \n\nSELECT count(r.TopFamilyID) FROM CMRules r WHERE r.WorkspaceID =18512 ;\n\nGives me 272 211 Rows\n\nselect count(*) from cmrules;\n\nGives me 17 643 532 Rows\n\n\nPlease suggest me something to optimize this query\n\nThanks \nNiraj Patel\n\nHi All,I have to optimize following query :SELECT r.TopFamilyID AS FamilyID,  FROM CMRules r            WHERE r.WorkspaceID =18512              GROUP BY r.TopFamilyID ;\nThe explain plan is as follows : Group  (cost=509989.19..511518.30 rows=9 width=10) (actual time=1783.102..2362.587 rows=261 loops=1)   ->  Sort  (cost=509989.19..510753.74 rows=305821 width=10) (actual time=1783.097..2121.378 rows=272211 loops=1)         Sort Key: topfamilyid         ->  Bitmap Heap Scan on cmrules r  (cost=14501.36..476896.34 rows=305821 width=10) (actual time=51.507..351.487 rows=272211 loops=1)               Recheck Cond: (workspaceid = 18512::numeric)               ->  Bitmap Index Scan on\n pk_ws_fea_fam_cmrules  (cost=0.00..14424.90 rows=305821 width=0) (actual time=48.097..48.097 rows=272211 loops=1)                     Index Cond: (workspaceid = 18512::numeric) Total runtime: 2373.008 ms(8 rows)-----------------------------------------------------------------------------------------------------------------\\d CMRules gives follows indexesIndexes:    \"pk_ws_fea_fam_cmrules\" PRIMARY KEY, btree (workspaceid, featureid, topfamilyid, ruleenddate, gid)    \"idx_cmrules\" btree (topfamilyid)    \"idx_gid_ws_cmrules\" btree (gid, workspaceid)-----------------------------------------------------------------------------------------------------------------SELECT count(distinct r.TopFamilyID) FROM CMRules r \n WHERE r.WorkspaceID =18512Gives me 261 Rows SELECT count(r.TopFamilyID) FROM CMRules r  WHERE r.WorkspaceID =18512  ;Gives me 272 211 Rowsselect count(*) from  cmrules;Gives me 17 643 532 RowsPlease suggest me something to optimize this queryThanks Niraj Patel", "msg_date": "Tue, 8 Dec 2009 05:38:57 -0800 (PST)", "msg_from": "niraj patel <[email protected]>", "msg_from_op": true, "msg_subject": "Optimizing Bitmap Heap Scan." 
}, { "msg_contents": "it looks like it might choose wrong plan, cos it gets the stats wrong.\nTry increasing number of stats to 100.\nBtw, what version it is ?", "msg_date": "Tue, 8 Dec 2009 13:42:49 +0000", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing Bitmap Heap Scan." }, { "msg_contents": "Hi gryzman,\n\nI have run vacuum full analyze on the cmrules tables. The version of pstgres is 8.2.13. How should I change stats to 100 ?\n\nThanks\n\n\n\n________________________________\nFrom: Grzegorz Jaśkiewicz <[email protected]>\nTo: niraj patel <[email protected]>\nCc: [email protected]\nSent: Tue, 8 December, 2009 7:12:49 PM\nSubject: Re: [PERFORM] Optimizing Bitmap Heap Scan.\n\nit looks like it might choose wrong plan, cos it gets the stats wrong. \nTry increasing number of stats to 100. \nBtw, what version it is ?", "msg_date": "Tue, 8 Dec 2009 05:50:52 -0800 (PST)", "msg_from": "niraj patel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimizing Bitmap Heap Scan." }, { "msg_contents": "On Tue, 8 Dec 2009, niraj patel wrote:\n> Group  (cost=509989.19..511518.30 rows=9 width=10) (actual time=1783.102..2362.587\n> rows=261 loops=1)\n>    ->  Sort  (cost=509989.19..510753.74 rows=305821 width=10) (actual\n> time=1783.097..2121.378 rows=272211 loops=1)\n>          Sort Key: topfamilyid\n>          ->  Bitmap Heap Scan on cmrules r  (cost=14501.36..476896.34 rows=305821\n> width=10) (actual time=51.507..351.487 rows=272211 loops=1)\n>                Recheck Cond: (workspaceid = 18512::numeric)\n>                ->  Bitmap Index Scan on pk_ws_fea_fam_cmrules  (cost=0.00..14424.90\n> rows=305821 width=0) (actual time=48.097..48.097 rows=272211 loops=1)\n>                      Index Cond: (workspaceid = 18512::numeric)\n>  Total runtime: 2373.008 ms\n> (8 rows)\n\n> select count(*) from  cmrules;\n> \n> Gives me 17 643 532 Rows\n\nLooks good from here. Think about what you're asking the database to do. \nIt has to select 272211 rows out of a large table with 17643532 rows. That \nin itself could take a very long time. It is clear that in your EXPLAIN \nthis data is already cached, otherwise it would have to perform nigh on \n270000 seeks over the discs, which would take (depending on the disc \nsystem) something on the order of twenty minutes. Those 272211 rows then \nhave to be sorted, which takes a couple of seconds, which again is pretty \ngood. 
The rows are then uniqued, which is really quick, before returning \nthe results.\n\nIt's hard to think how you would expect the database to do this any \nfaster, really.\n\n> Indexes:\n> ��� \"pk_ws_fea_fam_cmrules\" PRIMARY KEY, btree (workspaceid, featureid, topfamilyid,\n> ruleenddate, gid)\n> ��� \"idx_cmrules\" btree (topfamilyid)\n> ��� \"idx_gid_ws_cmrules\" btree (gid, workspaceid)\n\nYou may perhaps benefit from an index on just the workspaceid column, but \nthe benefit may be minor.\n\nYou may think of clustering the table on the index, but that will only be \nof benefit if the data is not in the cache.\n\nThe statistics seem to be pretty accurate, predicting 305821 instead of \n272211 rows. The database is not going to easily predict the number of \nunique results (9 instead of 261), but that doesn't affect the query plan \nmuch, so I wouldn't worry about it.\n\nI would consider upgrading to Postgres 8.4 if possible, as it does have \nsome considerable performance improvements, especially for bitmap index \nscans if you are using a RAID array. I'd also try using \"SELECT DISTINCT\" \nrather than \"GROUP BY\" and seeing if that helps.\n\nMatthew\n\n-- \n Now the reason people powdered their faces back then was to change the values\n \"s\" and \"n\" in this equation here. - Computer science lecturer", "msg_date": "Tue, 8 Dec 2009 14:03:38 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing Bitmap Heap Scan." }, { "msg_contents": "Hi Matthew ,\n\nThanks very much for the analysis. It does takes 17 sec to execute when data is not in cache. I cannot use \"distinct\" as I have aggregate operators in select clause in original query. What I would like to ask can partitioning around workspaceid would help ? Or any sort of selective index would help me. \n\nThanks.\n\n\n\n\n________________________________\nFrom: Matthew Wakeling <[email protected]>\nTo: niraj patel <[email protected]>\nCc: [email protected]\nSent: Tue, 8 December, 2009 7:33:38 PM\nSubject: Re: [PERFORM] Optimizing Bitmap Heap Scan.\n\nOn Tue, 8 Dec 2009, niraj patel wrote:\n> Group (cost=509989.19..511518.30 rows=9 width=10) (actual time=1783.102..2362.587\n> rows=261 loops=1)\n> -> Sort (cost=509989.19..510753.74 rows=305821 width=10) (actual\n> time=1783.097..2121.378 rows=272211 loops=1)\n> Sort Key: topfamilyid\n> -> Bitmap Heap Scan on cmrules r (cost=14501.36..476896.34 rows=305821\n> width=10) (actual time=51.507..351.487 rows=272211 loops=1)\n> Recheck Cond: (workspaceid = 18512::numeric)\n> -> Bitmap Index Scan on pk_ws_fea_fam_cmrules (cost=0.00..14424.90\n> rows=305821 width=0) (actual time=48.097..48.097 rows=272211 loops=1)\n> Index Cond: (workspaceid = 18512::numeric)\n> Total runtime: 2373.008 ms\n> (8 rows)\n\n> select count(*) from cmrules;\n> \n> Gives me 17 643 532 Rows\n\nLooks good from here. Think about what you're asking the database to do. It has to select 272211 rows out of a large table with 17643532 rows. That in itself could take a very long time. It is clear that in your EXPLAIN this data is already cached, otherwise it would have to perform nigh on 270000 seeks over the discs, which would take (depending on the disc system) something on the order of twenty minutes. Those 272211 rows then have to be sorted, which takes a couple of seconds, which again is pretty good. 
The rows are then uniqued, which is really quick, before returning the results.\n\nIt's hard to think how you would expect the database to do this any faster, really.\n\n> Indexes:\n> \"pk_ws_fea_fam_cmrules\" PRIMARY KEY, btree (workspaceid, featureid, topfamilyid,\n> ruleenddate, gid)\n> \"idx_cmrules\" btree (topfamilyid)\n> \"idx_gid_ws_cmrules\" btree (gid, workspaceid)\n\nYou may perhaps benefit from an index on just the workspaceid column, but the benefit may be minor.\n\nYou may think of clustering the table on the index, but that will only be of benefit if the data is not in the cache.\n\nThe statistics seem to be pretty accurate, predicting 305821 instead of 272211 rows. The database is not going to easily predict the number of unique results (9 instead of 261), but that doesn't affect the query plan much, so I wouldn't worry about it.\n\nI would consider upgrading to Postgres 8.4 if possible, as it does have some considerable performance improvements, especially for bitmap index scans if you are using a RAID array. I'd also try using \"SELECT DISTINCT\" rather than \"GROUP BY\" and seeing if that helps.\n\nMatthew\n\n-- Now the reason people powdered their faces back then was to change the values\n\"s\" and \"n\" in this equation here. - Computer science lecturer\n-- Sent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\nHi Matthew ,Thanks very much for the analysis. It does takes 17 sec to execute when data is not in cache. I cannot use \"distinct\" as I have aggregate operators in select clause in original query. What I would like to ask can partitioning around workspaceid would help ? Or any sort of selective index would help me. Thanks.From: Matthew Wakeling <[email protected]>To: niraj patel <[email protected]>Cc:\n [email protected]: Tue, 8 December, 2009 7:33:38 PMSubject: Re: [PERFORM] Optimizing Bitmap Heap Scan.On Tue, 8 Dec 2009, niraj patel wrote:>  Group  (cost=509989.19..511518.30 rows=9 width=10) (actual time=1783.102..2362.587> rows=261 loops=1)>    ->  Sort  (cost=509989.19..510753.74 rows=305821 width=10) (actual> time=1783.097..2121.378 rows=272211 loops=1)>          Sort Key: topfamilyid>          ->  Bitmap Heap Scan on cmrules r  (cost=14501.36..476896.34 rows=305821> width=10) (actual time=51.507..351.487 rows=272211 loops=1)>                Recheck Cond:\n (workspaceid = 18512::numeric)>                ->  Bitmap Index Scan on pk_ws_fea_fam_cmrules  (cost=0.00..14424.90> rows=305821 width=0) (actual time=48.097..48.097 rows=272211 loops=1)>                      Index Cond: (workspaceid = 18512::numeric)>  Total runtime: 2373.008 ms> (8 rows)> select count(*) from  cmrules;> > Gives me 17 643 532 RowsLooks good from here. Think about what you're asking the database to do. It has to select 272211 rows out of a large table with 17643532 rows. That in itself could take a very long time. It is clear that in your EXPLAIN this data is already cached, otherwise it would have to perform nigh on 270000 seeks over the discs, which would take (depending on the\n disc system) something on the order of twenty minutes. Those 272211 rows then have to be sorted, which takes a couple of seconds, which again is pretty good. 
The rows are then uniqued, which is really quick, before returning the results.It's hard to think how you would expect the database to do this any faster, really.> Indexes:>     \"pk_ws_fea_fam_cmrules\" PRIMARY KEY, btree (workspaceid, featureid, topfamilyid,> ruleenddate, gid)>     \"idx_cmrules\" btree (topfamilyid)>     \"idx_gid_ws_cmrules\" btree (gid, workspaceid)You may perhaps benefit from an index on just the workspaceid column, but the benefit may be minor.You may think of clustering the table on the index, but that will only be of benefit if the data is not in the cache.The statistics seem to be pretty accurate, predicting 305821 instead of 272211 rows. The database is not going\n to easily predict the number of unique results (9 instead of 261), but that doesn't affect the query plan much, so I wouldn't worry about it.I would consider upgrading to Postgres 8.4 if possible, as it does have some considerable performance improvements, especially for bitmap index scans if you are using a RAID array. I'd also try using \"SELECT DISTINCT\" rather than \"GROUP BY\" and seeing if that helps.Matthew-- Now the reason people powdered their faces back then was to change the values\"s\" and \"n\" in this equation here.                - Computer science lecturer-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 8 Dec 2009 06:27:35 -0800 (PST)", "msg_from": "niraj patel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimizing Bitmap Heap Scan." }, { "msg_contents": "On Tue, 8 Dec 2009, niraj patel wrote:\n> Thanks very much for the analysis. It does takes 17 sec to execute when \n> data is not in cache.\n\nIt sounds like the table is already very much ordered by the workspaceid, \notherwise this would have taken much longer.\n\n> What I would like to ask can partitioning around workspaceid would help? \n> Or any sort of selective index would help me.\n\nDepends on how many distinct values of workspaceid there are. I would \nsuggest that given how well ordered your table is, and if you aren't doing \ntoo many writes, then there would be little benefit, and much hassle.\n\nMatthew\n\n-- \n Now, you would have thought these coefficients would be integers, given that\n we're working out integer results. Using a fraction would seem really\n stupid. Well, I'm quite willing to be stupid here - in fact, I'm going to\n use complex numbers. -- Computer Science Lecturer\n", "msg_date": "Tue, 8 Dec 2009 14:48:33 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing Bitmap Heap Scan." }, { "msg_contents": "From: niraj patel <[email protected]>\nSubject: Re: [PERFORM] Optimizing Bitmap Heap Scan.\nTo: \"Grzegorz Jaśkiewicz\" <[email protected]>\nCc: [email protected]\nDate: Tuesday, December 8, 2009, 1:50 PM\n\nHi gryzman,\n\nI have run vacuum full analyze on the cmrules tables. The version of pstgres is 8.2.13. How should I change stats to 100 ?\n\nThanks\nFrom: Grzegorz Jaśkiewicz <[email protected]>\nTo: niraj patel <[email protected]>\nCc: [email protected]\nSent: Tue, 8 December, 2009 7:12:49 PM\nSubject:\n Re: [PERFORM] Optimizing Bitmap Heap Scan.\n\n it looks like it might choose wrong plan, cos it gets the stats wrong. \nTry increasing number of stats to 100. 
\nBtw, what version it is ?\n\n\nin psql \nmydb=# set default_statistics_target = 100;\n\n \n\n\n\n\n \nFrom: niraj patel <[email protected]>Subject: Re: [PERFORM] Optimizing Bitmap Heap Scan.To: \"Grzegorz Jaśkiewicz\" <[email protected]>Cc: [email protected]: Tuesday, December 8, 2009, 1:50 PMHi gryzman,I have run vacuum full analyze on the cmrules tables. The version of pstgres is 8.2.13. How should I change stats to 100 ?ThanksFrom: Grzegorz Jaśkiewicz <[email protected]>To: niraj patel <[email protected]>Cc: [email protected]: Tue, 8 December, 2009 7:12:49 PMSubject:\n Re: [PERFORM] Optimizing Bitmap Heap Scan. it looks like it might choose wrong plan, cos it gets the stats wrong. Try increasing number of stats to 100. Btw, what version it is ?in psql mydb=# set default_statistics_target = 100;", "msg_date": "Tue, 8 Dec 2009 07:51:13 -0800 (PST)", "msg_from": "Lennin Caro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing Bitmap Heap Scan." }, { "msg_contents": "Lennin Caro <[email protected]> wrote:\n \n> I have run vacuum full\n \nThat's not usually a good idea. For one thing, it will tend to\nbloat your indexes.\n \n-Kevin\n", "msg_date": "Tue, 08 Dec 2009 10:48:02 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing Bitmap Heap Scan." }, { "msg_contents": "2009/12/8 Lennin Caro <[email protected]>\n>\n> From: niraj patel <[email protected]>\n>\n> Subject: Re: [PERFORM] Optimizing Bitmap Heap Scan.\n> To: \"Grzegorz Jaśkiewicz\" <[email protected]>\n> Cc: [email protected]\n> Date: Tuesday, December 8, 2009, 1:50 PM\n>\n> Hi gryzman,\n>\n> I have run vacuum full analyze on the cmrules tables. The version of pstgres is 8.2.13. How should I change stats to 100 ?\n>\n> Thanks\n> ________________________________\n> From: Grzegorz Jaśkiewicz <[email protected]>\n> To: niraj patel <[email protected]>\n> Cc: [email protected]\n> Sent: Tue, 8 December, 2009 7:12:49 PM\n> Subject: Re: [PERFORM] Optimizing Bitmap Heap Scan.\n>\n> it looks like it might choose wrong plan, cos it gets the stats wrong.\n> Try increasing number of stats to 100.\n> Btw, what version it is ?\n>\n>\n> in psql\n> mydb=# set default_statistics_target = 100;\n\nThat's only going to affect the current session. To change it\npermanently, edit postgresql.conf and do pg_ctl reload.\n\n...Robert\n", "msg_date": "Tue, 8 Dec 2009 13:30:32 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing Bitmap Heap Scan." } ]
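A minimal sketch of the advice in the thread above, assuming the table and column names quoted there (cmrules, workspaceid) and PostgreSQL 8.2; the target of 100 and the index name idx_ws_cmrules are illustrative only:

    -- Raise the statistics target for just the filtered column, then
    -- re-analyze so the planner sees the richer histogram.
    ALTER TABLE cmrules ALTER COLUMN workspaceid SET STATISTICS 100;
    ANALYZE cmrules;

    -- The narrow index Matthew mentions; the benefit may be minor because
    -- the existing primary key already leads with workspaceid.
    CREATE INDEX idx_ws_cmrules ON cmrules (workspaceid);

Setting default_statistics_target = 100 in postgresql.conf followed by pg_ctl reload changes the default cluster-wide, as Robert notes, while the per-column form above confines the change to the one column this query filters on.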
[ { "msg_contents": "Hello\n\nMy vacuums have suddenly started to fail, seemingly at random. I am\nconfused.\n\nI'm running 8.1.3, with close to a dozen servers, up to 150 databases each.\nI have 8GB of RAM. Vacuums have started to fail on all servers (though only\nthe occasional vacuum) with the following error:\n\nVACUUM,ERROR: out of memory\nVACUUM,DETAIL: Failed on request of size 268435452\n\nI have some terrible tables that I inherited for which I recently created\ntons of indexes in order to make them useful. I had a post a couple of\nweeks ago detailing my problem with trying to get a function working to\nsimplify the data...I fell back on indexes where the column values were not\nnull/empty. Since they are almost always null/empty, I was able to\ndramatically speed up access without eating up much disk space, but I did\nthrow an extra 200 indexes into each database. Shortly after I started\ngetting occasional vacuum failures with the above error.\n\nI'm not sure if it's a coincidence or not, but my maintenance_work_mem is\nset to 262144 KB, which matches the failed request size above.\n\nI initially assumed that with 200*150 additional relations, I was messing up\nmy max_fsm_relations setting, which is 60,000. However, as a test a ran a\nverbose vacuum analyze on a small table to get the statistics at the end,\nfrom which I got the following:\n\nINFO: free space map contains 2239943 pages in 28445 relations\nDETAIL: A total of 2623552 page slots are in use (including overhead).\n2623552 page slots are required to track all free space.\nCurrent limits are: 8000000 page slots, 60000 relations, using 50650 KB.\n\nwhich seems to indicate I'm well within my limits.\n\n(for curiosity's sake, which relations count towards that limit? From what\nI can tell it's only tables and indexes...functions, views, triggers, etc\nshouldn't contribute, should they?)\n\nAm I interpreting this wrong? Anyone have any insight as to what is going\nwrong? I can provide more information if needed...\n\nThanks,\n\nHelloMy vacuums have suddenly started to fail, seemingly at random.  I am confused.I'm running 8.1.3, with close to a dozen servers, up to 150 databases each.  I have 8GB of RAM.  Vacuums have started to fail on all servers (though only the occasional vacuum) with the following error:\nVACUUM,ERROR:  out of memoryVACUUM,DETAIL:  Failed on request of size 268435452I have some terrible tables that I inherited for which I recently created tons of indexes in order to make them useful.  I had a post a couple of weeks ago detailing my problem with trying to get a function working to simplify the data...I fell back on indexes where the column values were not null/empty.  Since they are almost always null/empty, I was able to dramatically speed up access without eating up much disk space, but I did throw an extra 200 indexes into each database.  Shortly after I started getting occasional vacuum failures with the above error.\nI'm not sure if it's a coincidence or not, but my maintenance_work_mem is set to 262144 KB, which matches the failed request size above.I initially assumed that with 200*150 additional relations, I was messing up my max_fsm_relations setting, which is 60,000.  
However, as a test a ran a verbose vacuum analyze on a small table to get the statistics at the end, from which I got the following:\nINFO:  free space map contains 2239943 pages in 28445 relationsDETAIL:  A total of 2623552 page slots are in use (including overhead).2623552 page slots are required to track all free space.Current limits are:  8000000 page slots, 60000 relations, using 50650 KB.\nwhich seems to indicate I'm well within my limits.(for curiosity's sake, which relations count towards that limit?  From what I can tell it's only tables and indexes...functions, views, triggers, etc shouldn't contribute, should they?)\nAm I interpreting this wrong?  Anyone have any insight as to what is going wrong?  I can provide more information if needed...Thanks,", "msg_date": "Tue, 8 Dec 2009 10:51:14 -0500", "msg_from": "Jonathan Foy <[email protected]>", "msg_from_op": true, "msg_subject": "Vacuum running out of memory" }, { "msg_contents": "Jonathan Foy <[email protected]> writes:\n> My vacuums have suddenly started to fail, seemingly at random. I am\n> confused.\n\n> I'm running 8.1.3, with close to a dozen servers, up to 150 databases each.\n> I have 8GB of RAM. Vacuums have started to fail on all servers (though only\n> the occasional vacuum) with the following error:\n\n> VACUUM,ERROR: out of memory\n> VACUUM,DETAIL: Failed on request of size 268435452\n\nI'd back off maintenance_work_mem if I were you. I think you don't have\nenough RAM to be running a lot of concurrent VACUUMs all with the same\nlarge memory consumption.\n\nAlso, if it's really 8.1.3, consider an update to 8.1.something-recent.\nNot only are you exposed to a number of very serious known bugs, but\nthis patch in particular would likely help you:\nhttp://archives.postgresql.org/pgsql-committers/2007-09/msg00377.php\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Dec 2009 11:22:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum running out of memory " }, { "msg_contents": "I was wondering if that was the problem. So I'm correct in thinking that\nthe failure occurred when the vacuum tried to pull its 256 MB as defined in\nthe maintenance_work_mem value, and the system just did not have enough\navailable...any idea why that would suddenly start happening? The indexes I\ncreated shouldn't have affected that, should they?\n\nAnd point taken with the update. I'm pushing to get us to 8.4,\nunsuccessfully so far, but management might be more amenable to minor\nversion upgrades, since as I understand it there shouldn't be any risk of\napplication problems with minor version changes...\n\nOn Tue, Dec 8, 2009 at 11:22 AM, Tom Lane <[email protected]> wrote:\n\n> Jonathan Foy <[email protected]> writes:\n> > My vacuums have suddenly started to fail, seemingly at random. I am\n> > confused.\n>\n> > I'm running 8.1.3, with close to a dozen servers, up to 150 databases\n> each.\n> > I have 8GB of RAM. Vacuums have started to fail on all servers (though\n> only\n> > the occasional vacuum) with the following error:\n>\n> > VACUUM,ERROR: out of memory\n> > VACUUM,DETAIL: Failed on request of size 268435452\n>\n> I'd back off maintenance_work_mem if I were you. 
I think you don't have\n> enough RAM to be running a lot of concurrent VACUUMs all with the same\n> large memory consumption.\n>\n> Also, if it's really 8.1.3, consider an update to 8.1.something-recent.\n> Not only are you exposed to a number of very serious known bugs, but\n> this patch in particular would likely help you:\n> http://archives.postgresql.org/pgsql-committers/2007-09/msg00377.php\n>\n> regards, tom lane\n>\n\nI was wondering if that was the problem.  So I'm correct in thinking that the failure occurred when the vacuum tried to pull its 256 MB as defined in the maintenance_work_mem value, and the system just did not have enough available...any idea why that would suddenly start happening?  The indexes I created shouldn't have affected that, should they?\nAnd point taken with the update.  I'm pushing to get us to 8.4, unsuccessfully so far, but management might be more amenable to minor version upgrades, since as I understand it there shouldn't be any risk of application problems with minor version changes...\nOn Tue, Dec 8, 2009 at 11:22 AM, Tom Lane <[email protected]> wrote:\nJonathan Foy <[email protected]> writes:\n> My vacuums have suddenly started to fail, seemingly at random.  I am\n> confused.\n\n> I'm running 8.1.3, with close to a dozen servers, up to 150 databases each.\n> I have 8GB of RAM.  Vacuums have started to fail on all servers (though only\n> the occasional vacuum) with the following error:\n\n> VACUUM,ERROR:  out of memory\n> VACUUM,DETAIL:  Failed on request of size 268435452\n\nI'd back off maintenance_work_mem if I were you.  I think you don't have\nenough RAM to be running a lot of concurrent VACUUMs all with the same\nlarge memory consumption.\n\nAlso, if it's really 8.1.3, consider an update to 8.1.something-recent.\nNot only are you exposed to a number of very serious known bugs, but\nthis patch in particular would likely help you:\nhttp://archives.postgresql.org/pgsql-committers/2007-09/msg00377.php\n\n                        regards, tom lane", "msg_date": "Tue, 8 Dec 2009 11:31:04 -0500", "msg_from": "Jonathan Foy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum running out of memory" }, { "msg_contents": "On Tue, Dec 8, 2009 at 4:31 PM, Jonathan Foy <[email protected]> wrote:\n> I was wondering if that was the problem.  So I'm correct in thinking that\n> the failure occurred when the vacuum tried to pull its 256 MB as defined in\n> the maintenance_work_mem value, and the system just did not have enough\n> available...\n\nCorrect\n\n> any idea why that would suddenly start happening?  The indexes I\n> created shouldn't have affected that, should they?\n\nWell the 8.1 vacuum was pretty inefficient in how it scanned indexes\nso adding lots of indexes will make it take a lot longer. That might\nmean you're running more vacuums at the same time now. The 8.2 vacuum\nis much improved on that front, though adding lots of indexes will\nstill make vacuum take longer (along with updates and inserts).\n\n> And point taken with the update.  I'm pushing to get us to 8.4,\n> unsuccessfully so far, but management might be more amenable to minor\n> version upgrades, since as I understand it there shouldn't be any risk of\n> application problems with minor version changes...\n\nYou're always better off running the most recent minor release. 
Minor\nreleases fix security holes, data corruption bugs, crashing bugs, etc.\nOccasionally those bugs do fix behavioural bugs, especially early in\nthe release cycle before the next major release is out but mostly\nthey're real bugs that if you had run into you would know. You should\nstill read all the release notes for them though.\n\n\n-- \ngreg\n", "msg_date": "Tue, 8 Dec 2009 16:41:13 +0000", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum running out of memory" }, { "msg_contents": "Jonathan Foy <[email protected]> writes:\n> I was wondering if that was the problem. So I'm correct in thinking that\n> the failure occurred when the vacuum tried to pull its 256 MB as defined in\n> the maintenance_work_mem value, and the system just did not have enough\n> available...any idea why that would suddenly start happening? The indexes I\n> created shouldn't have affected that, should they?\n\nNot directly, AFAICS, but they could stretch out the time required to\nvacuum their tables, thus possibly leading to vacuums overlapping that\ndidn't overlap before. Just a guess though. Another likely bet is\nthat this is just an effect of the overall system load increasing\nover time (more backends == more memory needed).\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Dec 2009 11:42:43 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum running out of memory " } ]
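A hedged illustration of backing off maintenance_work_mem as Tom suggests; the numbers are examples for the 8 GB machine described, not tuned values, and on 8.1 the setting is an integer in kilobytes:

    # postgresql.conf (8.1) -- shrink the per-VACUUM memory request so several
    # concurrent vacuums cannot exhaust RAM; 65536 kB is roughly 64 MB
    maintenance_work_mem = 65536

    -- an occasional big manual vacuum can still claim more for one session
    -- (big_table is a placeholder name):
    SET maintenance_work_mem = 262144;
    VACUUM ANALYZE big_table;

The postgresql.conf change takes effect after a pg_ctl reload; no server restart is needed for this parameter.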
[ { "msg_contents": "I just installed a shiny new database server with pg 8.4.1 running on \nCentOS 5.4. After using slony to replicate over my database I decided to \n do some basic performance tests to see how spiffy my shiny new server \nis. This machine has 32G ram, over 31 of which is used for the system \nfile cache.\n\nSo I run \"select count(*) from large_table\" and I see in xosview a solid \nblock of write activity. Runtime is 28125.644 ms for the first run. The \nsecond run does not show a block of write activity and takes 3327.441 ms\n\ntop shows that this writing is being done by kjournald. What is going on \nhere? There is not a lot of write activity on this server so there \nshould not be a significant number of dirty cache pages that kjournald \nwould need to write out before it could read in my table. Certainly in \nthe 31G being used for file cache there should be enough non-dirty pages \nthat could be dropped to read in my table w/o having to flush anything \nto disk. My table size is 2,870,927,360 bytes.\n\n# cat /proc/sys/vm/dirty_expire_centisecs\n2999\n\nI restarted postgres and ran a count(*) on an even larger table.\n\n[local]=> explain analyze select count(*) from et;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=6837051.82..6837051.83 rows=1 width=0) (actual \ntime=447240.157..447240.157 rows=1 loops=1)\n -> Seq Scan on et (cost=0.00..6290689.25 rows=218545025 width=0) \n(actual time=5.971..400326.911 rows=218494524 loops=1)\n Total runtime: 447240.402 ms\n(3 rows)\n\nTime: 447258.525 ms\n[local]=> explain analyze select count(*) from et;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=6837113.44..6837113.45 rows=1 width=0) (actual \ntime=103011.724..103011.724 rows=1 loops=1)\n -> Seq Scan on et (cost=0.00..6290745.95 rows=218546995 width=0) \n(actual time=9.844..71629.497 rows=218496012 loops=1)\n Total runtime: 103011.832 ms\n(3 rows)\n\nTime: 103012.523 ms\n\n[local]=> select pg_relation_size('et');\n pg_relation_size\n------------------\n 33631543296\n(1 row)\n\n\nI posted xosview snapshots from the two runs at: \nhttp://www.tupari.net/2009-12-9/ This time the first run showed a mix of \nread/write activity instead of the solid write I saw before.\n", "msg_date": "Wed, 09 Dec 2009 13:29:00 -0500", "msg_from": "Joseph S <[email protected]>", "msg_from_op": true, "msg_subject": "big select is resulting in a large amount of disk writing by\n kjournald" }, { "msg_contents": "Hint bit I/O?\n\nKen\n\nOn Wed, Dec 09, 2009 at 01:29:00PM -0500, Joseph S wrote:\n> I just installed a shiny new database server with pg 8.4.1 running on \n> CentOS 5.4. After using slony to replicate over my database I decided to \n> do some basic performance tests to see how spiffy my shiny new server is. \n> This machine has 32G ram, over 31 of which is used for the system file \n> cache.\n>\n> So I run \"select count(*) from large_table\" and I see in xosview a solid \n> block of write activity. Runtime is 28125.644 ms for the first run. The \n> second run does not show a block of write activity and takes 3327.441 ms\n>\n> top shows that this writing is being done by kjournald. What is going on \n> here? 
There is not a lot of write activity on this server so there should \n> not be a significant number of dirty cache pages that kjournald would need \n> to write out before it could read in my table. Certainly in the 31G being \n> used for file cache there should be enough non-dirty pages that could be \n> dropped to read in my table w/o having to flush anything to disk. My table \n> size is 2,870,927,360 bytes.\n>\n> # cat /proc/sys/vm/dirty_expire_centisecs\n> 2999\n>\n> I restarted postgres and ran a count(*) on an even larger table.\n>\n> [local]=> explain analyze select count(*) from et;\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=6837051.82..6837051.83 rows=1 width=0) (actual \n> time=447240.157..447240.157 rows=1 loops=1)\n> -> Seq Scan on et (cost=0.00..6290689.25 rows=218545025 width=0) \n> (actual time=5.971..400326.911 rows=218494524 loops=1)\n> Total runtime: 447240.402 ms\n> (3 rows)\n>\n> Time: 447258.525 ms\n> [local]=> explain analyze select count(*) from et;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=6837113.44..6837113.45 rows=1 width=0) (actual \n> time=103011.724..103011.724 rows=1 loops=1)\n> -> Seq Scan on et (cost=0.00..6290745.95 rows=218546995 width=0) \n> (actual time=9.844..71629.497 rows=218496012 loops=1)\n> Total runtime: 103011.832 ms\n> (3 rows)\n>\n> Time: 103012.523 ms\n>\n> [local]=> select pg_relation_size('et');\n> pg_relation_size\n> ------------------\n> 33631543296\n> (1 row)\n>\n>\n> I posted xosview snapshots from the two runs at: \n> http://www.tupari.net/2009-12-9/ This time the first run showed a mix of \n> read/write activity instead of the solid write I saw before.\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Wed, 9 Dec 2009 13:45:57 -0600", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: big select is resulting in a large amount of disk\n\twriting by kjournald" }, { "msg_contents": "Joseph S wrote:\n> So I run \"select count(*) from large_table\" and I see in xosview a \n> solid block of write activity. Runtime is 28125.644 ms for the first \n> run. The second run does not show a block of write activity and takes \n> 3327.441 ms\nhttp://wiki.postgresql.org/wiki/Hint_Bits\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Wed, 09 Dec 2009 14:53:56 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: big select is resulting in a large amount of disk writing\n\tby kjournald" }, { "msg_contents": "Greg Smith wrote:\n> Joseph S wrote:\n>> So I run \"select count(*) from large_table\" and I see in xosview a \n>> solid block of write activity. Runtime is 28125.644 ms for the first \n>> run. The second run does not show a block of write activity and takes \n>> 3327.441 ms\n> http://wiki.postgresql.org/wiki/Hint_Bits\n> \n\nHmm. A large select results in a lot of writes? This seems broken. And \nif we are setting these hint bits then what do we need VACUUM for? Is \nthere any way to tune this behavior? 
Is there any way to get stats on \nhow many rows/pages would need hint bits set?\n", "msg_date": "Wed, 09 Dec 2009 15:50:57 -0500", "msg_from": "Joseph S <[email protected]>", "msg_from_op": true, "msg_subject": "Re: big select is resulting in a large amount of disk writing\n\tby kjournald" }, { "msg_contents": "Joseph S wrote:\n> Greg Smith wrote:\n>> Joseph S wrote:\n>>> So I run \"select count(*) from large_table\" and I see in xosview a \n>>> solid block of write activity. Runtime is 28125.644 ms for the first \n>>> run. The second run does not show a block of write activity and \n>>> takes 3327.441 ms\n>> http://wiki.postgresql.org/wiki/Hint_Bits\n>>\n>\n> Hmm. A large select results in a lot of writes? This seems broken. \n> And if we are setting these hint bits then what do we need VACUUM \n> for? Is there any way to tune this behavior? Is there any way to get \n> stats on how many rows/pages would need hint bits set?\nBasically, the idea is that if you're pulling a page in for something \nelse that requires you to compute the hint bits, just do it now so \nVACUUM doesn't have to later, while you're in there anyway. Why make \nVACUUM do the work later if you're doing part of it now anyway? If you \nreorganize your test to VACUUM first *before* running the \"select (*) \nfrom...\", you'll discover the writes during SELECT go away. You're \nrunning into the worst-case behavior. For example, if you inserted a \nbunch of things more slowly, you might discover that autovacuum would do \nthis cleanup before you even got to looking at the data.\n\nThere's no tuning for the behavior beyond making autovacuum more \naggressive (to improve odds it will get there first), and no visibility \ninto what's happening either. And cranking up autovacuum has its own \ndownsides. This situation shows up a lot when you're benchmarking \nthings, but not as much in the real world, so it's hard to justify \nimproving.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Wed, 09 Dec 2009 17:04:47 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: big select is resulting in a large amount of disk writing\n\tby kjournald" }, { "msg_contents": "Greg Smith wrote:\n> Joseph S wrote:\n>> Greg Smith wrote:\n>>> Joseph S wrote:\n>>>> So I run \"select count(*) from large_table\" and I see in xosview a \n>>>> solid block of write activity. Runtime is 28125.644 ms for the first \n>>>> run. The second run does not show a block of write activity and \n>>>> takes 3327.441 ms\n>>> http://wiki.postgresql.org/wiki/Hint_Bits\n>>>\n>>\n>> Hmm. A large select results in a lot of writes? This seems broken. \n>> And if we are setting these hint bits then what do we need VACUUM \n>> for? Is there any way to tune this behavior? Is there any way to get \n>> stats on how many rows/pages would need hint bits set?\n> Basically, the idea is that if you're pulling a page in for something \n> else that requires you to compute the hint bits, just do it now so \n> VACUUM doesn't have to later, while you're in there anyway. Why make \n> VACUUM do the work later if you're doing part of it now anyway? If you \n\nThen why not do all the work the VACUUM does? What does VACUUM do anyway?\n\n> reorganize your test to VACUUM first *before* running the \"select (*) \n> from...\", you'll discover the writes during SELECT go away. You're \n> running into the worst-case behavior. 
For example, if you inserted a \n> bunch of things more slowly, you might discover that autovacuum would do \n> this cleanup before you even got to looking at the data.\n\nI think autovacuum did hit these tables after slony copied them (I \nremember seeing them running). Would the hint bits be set during an \nreindex? For example the indexing slony does after the initial copy? \nI'm not sure if slony commits the transaction before it does the \nreindex. It probably doesn't.\n> \n\n> downsides. This situation shows up a lot when you're benchmarking \n> things, but not as much in the real world, so it's hard to justify \n> improving.\n> \n\nActually I think I have been running into this situation. There were \nmany reports that ran much faster the second time around than the first \nand I assumed it was just because the data was in memory cache. Now I'm \nthinking I was running into this.\n", "msg_date": "Wed, 09 Dec 2009 17:24:50 -0500", "msg_from": "Joseph S <[email protected]>", "msg_from_op": true, "msg_subject": "Re: big select is resulting in a large amount of disk writing\n\tby kjournald" }, { "msg_contents": "Joseph S <[email protected]> wrote:\n> I just installed a shiny new database server with pg 8.4.1 running\n> on CentOS 5.4. After using slony to replicate over my database I\n> decided to do some basic performance tests to see how spiffy my\n> shiny new server is. This machine has 32G ram, over 31 of which\n> is used for the system file cache.\n> \n> So I run \"select count(*) from large_table\" and I see in xosview a\n> solid block of write activity. Runtime is 28125.644 ms for the\n> first run. The second run does not show a block of write activity\n> and takes 3327.441 ms\n \nAs others have mentioned, this is due to hint bit updates, and doing\nan explicit VACUUM after the load and before you start using the\ndatabase will avoid run-time issues. You also need statistics, so\nbe sure to do VACUUM ANALYZE.\n \nThere is one other sneaky surprise awaiting you, however. Since\nthis stuff was all loaded with a narrow range of transaction IDs,\nthey will all need to be frozen at about the same time; so somewhere\ndown the road, either during a routine database vacuum or possibly\nin the middle of normal operations, all of these rows will need to\nbe rewritten *again* to change the transaction IDs used for managing\nMVCC to the special \"frozen\" value. We routinely follow a load with\nVACUUM FREEZE ANALYZE of the database to combine the update to\nfreeze the tuples with the update to set the hint bits and avoid\nthis problem.\n \nThere has been some talk about possibly writing tuples in a frozen\nstate with the hint bits already set if they are loaded in the same\ndatabase transaction which creates the table, but I'm not aware of\nanyone currently working on this.\n \n-Kevin\n", "msg_date": "Thu, 10 Dec 2009 10:41:53 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: big select is resulting in a large amount of\n\tdisk writing by kjournald" }, { "msg_contents": "\nOn 12/10/09 8:41 AM, \"Kevin Grittner\" <[email protected]> wrote:\n\n> \n> There has been some talk about possibly writing tuples in a frozen\n> state with the hint bits already set if they are loaded in the same\n> database transaction which creates the table, but I'm not aware of\n> anyone currently working on this.\n> \n \nWow, that would be nice. 
That would cut in half the time it takes to\nrestore a several TB db (3 days to 1.5 here).\n\nI assume this would help a lot of \"CREATE TABLE AS SELECT ...\" use cases\ntoo. That is often the fastest way to do a large update on a table, but it\ncan still be annoyingly write intensive.\n\n\n> -Kevin\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Tue, 15 Dec 2009 17:28:21 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: big select is resulting in a large amount of disk\n\twriting by kjournald" } ]
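A short sketch of the load-then-freeze routine Kevin describes, using the table name from this thread (et) purely as an example:

    -- Run once after the bulk load or slony copy completes: this writes the
    -- hint bits, freezes the tuples' transaction IDs, and gathers planner
    -- statistics in one pass instead of leaving that work to later SELECTs
    -- and vacuums.
    VACUUM FREEZE ANALYZE;

    -- or restrict it to the big freshly loaded table
    VACUUM FREEZE ANALYZE et;

Doing this immediately after the copy trades one predictable burst of writes for the otherwise surprising writes during the first big SELECTs and the eventual anti-wraparound freeze.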
[ { "msg_contents": "\n\nHello,\n\nPostgreSQL has served us very well powering a busy national pet\nadoption website. Now I'd like to tune our setup further get more out\nof hardware. \n\nWhat I'm noticing is that the while the FreeBSD server has 4 Gigs of\nmemory, there are rarely every more than 2 in use-- the memory use\ngraphs as being rather constant. My goal is to make good use of those 2\nGigs of memory to improve performance and reduce the CPU usage. \n\nThe server has 4 2.33 Ghz processors in it, and RAIDed 15k RPM SCSI\ndisks.\n\nHere are some current memory-related settings from our postgresql.conf\nfile. (We currently run 8.2, but are planning an upgrade to 8.4\n\"soon\"). Do you see an obvious suggestions for improvement? \n\nI find the file a bit hard to read because of the lack of units in \nthe examples, but perhaps that's already been addressed in future\nversions.\n\n max_connections = 400 # Seems to be enough us\n shared_buffers = 8192\n effective_cache_size = 1000\n work_mem = 4096\n maintenance_work_mem = 160MB \n\nThanks for your suggestions!\n\n Mark\n\n[I tried to post this yesterday but didn't see it come through. This\nmessage is a second attempt.)\n\n-- \n . . . . . . . . . . . . . . . . . . . . . . . . . . . \n Mark Stosberg Principal Developer \n [email protected] Summersault, LLC \n 765-939-9301 ext 202 database driven websites\n . . . . . http://www.summersault.com/ . . . . . . . .\n\n\n", "msg_date": "Thu, 10 Dec 2009 10:50:10 -0500", "msg_from": "Mark Stosberg <[email protected]>", "msg_from_op": true, "msg_subject": "Fw: Help me put 2 Gigs of RAM to use" }, { "msg_contents": "On Thu, 10 Dec 2009, Mark Stosberg wrote:\n> What I'm noticing is that the while the FreeBSD server has 4 Gigs of\n> memory, there are rarely every more than 2 in use-- the memory use\n> graphs as being rather constant. My goal is to make good use of those 2\n> Gigs of memory to improve performance and reduce the CPU usage.\n\nI think you'll find that the RAM is already being used quite effectively \nas disc cache by the OS. It sounds like the server is actually set up \npretty well. You may get slightly better performance by tweaking a thing \nhere or there, but the server needs some OS disc cache to perform well.\n\n> (We currently run 8.2, but are planning an upgrade to 8.4 \"soon\").\n\nHighly recommended.\n\n> [I tried to post this yesterday but didn't see it come through. This\n> message is a second attempt.)\n\nThe mailing list server will silently chuck any message whose subject \nstarts with the word \"help\", just in case you're asking for help about \nmanaging the mailing list. The default behaviour is not to inform you that \nit has done so. It is highly annoying - could a list admin please consider \nchanging this?\n\nMatthew\n\n-- \n I would like to think that in this day and age people would know better than\n to open executables in an e-mail. I'd also like to be able to flap my arms\n and fly to the moon. -- Tim Mullen\n", "msg_date": "Thu, 10 Dec 2009 16:03:11 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fw: Help me put 2 Gigs of RAM to use" }, { "msg_contents": "\nThanks for the response, Matthew.\n\n> On Thu, 10 Dec 2009, Mark Stosberg wrote:\n> > What I'm noticing is that the while the FreeBSD server has 4 Gigs of\n> > memory, there are rarely every more than 2 in use-- the memory use\n> > graphs as being rather constant. 
My goal is to make good use of those 2\n> > Gigs of memory to improve performance and reduce the CPU usage.\n> \n> I think you'll find that the RAM is already being used quite effectively \n> as disc cache by the OS. It sounds like the server is actually set up \n> pretty well. You may get slightly better performance by tweaking a thing \n> here or there, but the server needs some OS disc cache to perform well.\n\nAs part of reviewing this status, I it appears that the OS is only\naddresses 3 of the 4 Gigs of memory. We'll work on our FreeBSD setup to\ncure that.\n\nHere's how \"top\" reports the memory breakdown:\n\nMem: 513M Active, 2246M Inact, 249M Wired, 163M Cache, 112M Buf, 7176K\nFree Swap: 9216M Total, 1052K Used, 9215M Free\n\nSo perhaps the OS disc cache is represented in the \"Inactive\" memory\nstatistic? I suppose once we have the 4th Gig of memory actually\navailable, that would all be doing to the disk cache. \n\n> > (We currently run 8.2, but are planning an upgrade to 8.4 \"soon\").\n> \n> Highly recommended.\n\nFor performance improvements in particular?\n\n Mark\n\n-- \n . . . . . . . . . . . . . . . . . . . . . . . . . . . \n Mark Stosberg Principal Developer \n [email protected] Summersault, LLC \n 765-939-9301 ext 202 database driven websites\n . . . . . http://www.summersault.com/ . . . . . . . .\n\n\n", "msg_date": "Thu, 10 Dec 2009 11:44:32 -0500", "msg_from": "Mark Stosberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help me put 2 Gigs of RAM to use" }, { "msg_contents": "Mark Stosberg wrote:\n> I find the file a bit hard to read because of the lack of units in \n> the examples, but perhaps that's already been addressed in future\n> versions.\n>\n> max_connections = 400 # Seems to be enough us\n> shared_buffers = 8192\n> effective_cache_size = 1000\n> work_mem = 4096\n> maintenance_work_mem = 160MB\n> \nIt's already addressed in 8.2, as you can note by the fact that \n\"maintenance_work_mem\" is in there with an easy to read format. \nGuessing that someone either pulled in settings from an older version, \nor used some outdated web guide to get starter settings.\n\nTo convert the rest of them, you need to know what the units for each \nparameter is. You can find that out like this:\n\ngsmith=# select name,setting,unit from pg_settings where name in \n('shared_buffers','effective_cache_size','work_mem');\n\n name | setting | unit\n----------------------+---------+------\n effective_cache_size | 16384 | 8kB\n shared_buffers | 4096 | 8kB\n work_mem | 1024 | kB\n\nSo your shared buffers setting is 8192 * 8K = 64MB\neffective_cache_size is 8MB\nwork_mem is 4MB.\n\nThe first and last of those are reasonable but on the small side, the \nlast is...not. 
Increasing it won't actually use more memory on your \nserver though, it will just change query plans--so you want to be \ncareful about increasing it too much in one shot.\n\nThe next set of stuff you need to know about general guidelines for \nserver sizing is at \nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n\nYou'd probably want to put shared_buffers at a higher level based on the \namount of RAM on your server, but I'd suggest you tune the checkpoint \nparameters along with that--just increasing the buffer space along can \ncause problems rather than solve them if you're having checkpoints all \nthe time.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Thu, 10 Dec 2009 11:45:30 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fw: Help me put 2 Gigs of RAM to use" }, { "msg_contents": "On Thu, Dec 10, 2009 at 11:45 AM, Greg Smith <[email protected]> wrote:\n> So your shared buffers setting is 8192 * 8K = 64MB\n> effective_cache_size is 8MB\n> work_mem is 4MB.\n>\n> The first and last of those are reasonable but on the small side, the last\n> is...not.\n\nI believe that the second instance of the word \"last\" in that sentence\nshould have been \"middle\", referring to effective_cache_size. Small\nvalues discourage the planner from using indices in certain\nsituations.\n\n...Robert\n", "msg_date": "Thu, 10 Dec 2009 12:19:44 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fw: Help me put 2 Gigs of RAM to use" } ]
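A hedged restatement of those settings with explicit units, which the configuration file accepts from 8.2 onward; the sizes are plausible starting points for the 4 GB FreeBSD box in this thread, not measured recommendations:

    # postgresql.conf -- example values only
    shared_buffers = 512MB           # was 8192 (x 8kB = 64MB); changing it needs a
                                     #   restart and possibly larger kern.ipc.shm* sysctls
    effective_cache_size = 2048MB    # was 1000 (x 8kB = 8MB); a planner hint sized
                                     #   roughly to the OS file cache, not an allocation
    work_mem = 4MB                   # per-sort memory, so it multiplies with concurrency
    maintenance_work_mem = 160MB     # unchanged

As Greg notes, raising effective_cache_size only changes plans, while a larger shared_buffers is best paired with a look at the checkpoint settings.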
[ { "msg_contents": "Hey,\nI've got a computer which runs but 8.3 and 8.4. To create a db it takes 4s\nfor 8.3 and 9s for 8.4. I have many unit tests which create databases all\nof the time and now run much slower than 8.3 but it seems to be much longer\nas I remember at one point creating databases I considered an instantaneous\nthing. Does any on the list know why this is true and if I can get it back\nto normal.\n-Michael\n\nHey,I've got a computer which runs but 8.3 and 8.4.  To create a db it takes 4s for 8.3 and 9s for 8.4.  I have many unit tests which create databases all of the time and now run much slower than 8.3 but it seems to be much longer as I remember at one point creating databases I considered an instantaneous thing.  Does any on the list know why this is true and if I can get it back to normal.\n\n-Michael", "msg_date": "Thu, 10 Dec 2009 15:41:08 -0500", "msg_from": "Michael Clemmons <[email protected]>", "msg_from_op": true, "msg_subject": "8.4.1 ubuntu karmic slow createdb" }, { "msg_contents": "On Thursday 10 December 2009 21:41:08 Michael Clemmons wrote:\n> Hey,\n> I've got a computer which runs but 8.3 and 8.4. To create a db it takes 4s\n> for 8.3 and 9s for 8.4. I have many unit tests which create databases all\n> of the time and now run much slower than 8.3 but it seems to be much longer\n> as I remember at one point creating databases I considered an instantaneous\n> thing. Does any on the list know why this is true and if I can get it back\n> to normal.\nPossibly you had fsync=off at the time?\n\nAndres\n", "msg_date": "Thu, 10 Dec 2009 22:56:59 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4.1 ubuntu karmic slow createdb" }, { "msg_contents": "Im not sure what that means ppl in my office with slower hd speeds using 8.4\ncan create a db in 2s vs my 8-12s. Could using md5 instead of ident do it?\n\nOn Thu, Dec 10, 2009 at 4:56 PM, Andres Freund <[email protected]> wrote:\n\n> On Thursday 10 December 2009 21:41:08 Michael Clemmons wrote:\n> > Hey,\n> > I've got a computer which runs but 8.3 and 8.4. To create a db it takes\n> 4s\n> > for 8.3 and 9s for 8.4. I have many unit tests which create databases\n> all\n> > of the time and now run much slower than 8.3 but it seems to be much\n> longer\n> > as I remember at one point creating databases I considered an\n> instantaneous\n> > thing. Does any on the list know why this is true and if I can get it\n> back\n> > to normal.\n> Possibly you had fsync=off at the time?\n>\n> Andres\n>\n\nIm not sure what that means ppl in my office with slower hd speeds using 8.4 can create a db in 2s vs my 8-12s.  Could using md5 instead of ident do it?On Thu, Dec 10, 2009 at 4:56 PM, Andres Freund <[email protected]> wrote:\nOn Thursday 10 December 2009 21:41:08 Michael Clemmons wrote:\n\n> Hey,\n> I've got a computer which runs but 8.3 and 8.4.  To create a db it takes 4s\n> for 8.3 and 9s for 8.4.  I have many unit tests which create databases all\n> of the time and now run much slower than 8.3 but it seems to be much longer\n> as I remember at one point creating databases I considered an instantaneous\n> thing.  
Does any on the list know why this is true and if I can get it back\n> to normal.\nPossibly you had fsync=off at the time?\n\nAndres", "msg_date": "Thu, 10 Dec 2009 17:01:08 -0500", "msg_from": "Michael Clemmons <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 8.4.1 ubuntu karmic slow createdb" }, { "msg_contents": "Hi,\n\nOn Thursday 10 December 2009 23:01:08 Michael Clemmons wrote:\n> Im not sure what that means ppl in my office with slower hd speeds using\n> 8.4 can create a db in 2s vs my 8-12s.\n- Possibly their config is different - they could have disabled the \"fsync\" \nparameter which turns the database to be not crashsafe anymore but much faster \nin some circumstances.\n\n- Possibly you have much data in your template1 database?\nYou could check whether\n\nCREATE DATABASE speedtest TEMPLATE template1; takes more time than\nCREATE DATABASE speedtest TEMPLATE template0;.\n\nYou should issue both multiple times to ensure caching on the template \ndatabase doesnt play a role.\n\n> Could using md5 instead of ident do it?\nSeems unlikely.\nIs starting psql near-instantaneus? Are you using \"createdb\" or are you \nissuing \"CREATE DATABASE ...\"?\n\nAndres\n", "msg_date": "Thu, 10 Dec 2009 23:09:03 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4.1 ubuntu karmic slow createdb" }, { "msg_contents": "In my limited experience ext4 as presented by Karmic is not db friendly. I\nhad to carve my swap partition into a swap partition and an xfs partition to\nget better db performance. Try fsync=off first, but if that doesn't work\nthen try a mini xfs.\n\n\nOn Thu, Dec 10, 2009 at 5:09 PM, Andres Freund <[email protected]> wrote:\n\n> Hi,\n>\n> On Thursday 10 December 2009 23:01:08 Michael Clemmons wrote:\n> > Im not sure what that means ppl in my office with slower hd speeds using\n> > 8.4 can create a db in 2s vs my 8-12s.\n> - Possibly their config is different - they could have disabled the \"fsync\"\n> parameter which turns the database to be not crashsafe anymore but much\n> faster\n> in some circumstances.\n>\n> - Possibly you have much data in your template1 database?\n> You could check whether\n>\n> CREATE DATABASE speedtest TEMPLATE template1; takes more time than\n> CREATE DATABASE speedtest TEMPLATE template0;.\n>\n> You should issue both multiple times to ensure caching on the template\n> database doesnt play a role.\n>\n> > Could using md5 instead of ident do it?\n> Seems unlikely.\n> Is starting psql near-instantaneus? Are you using \"createdb\" or are you\n> issuing \"CREATE DATABASE ...\"?\n>\n> Andres\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nIn my limited experience ext4 as presented by Karmic is not db friendly.  I had to carve my swap partition into a swap partition and an xfs partition to get better db performance.  
Try fsync=off first, but if that doesn't work then try a mini xfs.\nOn Thu, Dec 10, 2009 at 5:09 PM, Andres Freund <[email protected]> wrote:\n\nHi,\n\nOn Thursday 10 December 2009 23:01:08 Michael Clemmons wrote:\n> Im not sure what that means ppl in my office with slower hd speeds using\n>  8.4 can create a db in 2s vs my 8-12s.\n- Possibly their config is different - they could have disabled the \"fsync\"\nparameter which turns the database to be not crashsafe anymore but much faster\nin some circumstances.\n\n- Possibly you have much data in your template1 database?\nYou could check whether\n\nCREATE DATABASE speedtest TEMPLATE template1; takes more time than\nCREATE DATABASE speedtest TEMPLATE template0;.\n\nYou should issue both multiple times to ensure caching on the template\ndatabase doesnt play a role.\n\n>  Could using md5 instead of ident do it?\nSeems unlikely.\nIs starting psql near-instantaneus? Are you using \"createdb\" or are you\nissuing \"CREATE DATABASE ...\"?\n\nAndres\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 10 Dec 2009 20:38:25 -0500", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4.1 ubuntu karmic slow createdb" }, { "msg_contents": "On Thu, 2009-12-10 at 20:38 -0500, Nikolas Everett wrote:\n> In my limited experience ext4 as presented by Karmic is not db\n> friendly. I had to carve my swap partition into a swap partition and\n> an xfs partition to get better db performance. Try fsync=off first,\n> but if that doesn't work then try a mini xfs.\n\nDo not turn fsync off. That is bad advice. I would not suggest ext4 at\nthis point for database operations. Use ext3. It is backward compatible.\n\nJoshua D. Drake\n\n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564\nConsulting, Training, Support, Custom Development, Engineering\nIf the world pushes look it in the eye and GRR. Then push back harder. - Salamander\n\n", "msg_date": "Fri, 11 Dec 2009 09:58:39 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4.1 ubuntu karmic slow createdb" }, { "msg_contents": "Turning fsync off on a dev database is a bad idea? Sure you might kill it\nand have to start over, but thats kind of the point in a dev database.\n\nOn Fri, Dec 11, 2009 at 12:58 PM, Joshua D. Drake <[email protected]>wrote:\n\n> On Thu, 2009-12-10 at 20:38 -0500, Nikolas Everett wrote:\n> > In my limited experience ext4 as presented by Karmic is not db\n> > friendly. I had to carve my swap partition into a swap partition and\n> > an xfs partition to get better db performance. Try fsync=off first,\n> > but if that doesn't work then try a mini xfs.\n>\n> Do not turn fsync off. That is bad advice. I would not suggest ext4 at\n> this point for database operations. Use ext3. It is backward compatible.\n>\n> Joshua D. Drake\n>\n>\n> --\n> PostgreSQL.org Major Contributor\n> Command Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564\n> Consulting, Training, Support, Custom Development, Engineering\n> If the world pushes look it in the eye and GRR. Then push back harder. -\n> Salamander\n>\n>\n\nTurning fsync off on a dev database is a bad idea?  Sure you might kill it and have to start over, but thats kind of the point in a dev database.On Fri, Dec 11, 2009 at 12:58 PM, Joshua D. 
Drake <[email protected]> wrote:\nOn Thu, 2009-12-10 at 20:38 -0500, Nikolas Everett wrote:\n> In my limited experience ext4 as presented by Karmic is not db\n> friendly.  I had to carve my swap partition into a swap partition and\n> an xfs partition to get better db performance.  Try fsync=off first,\n> but if that doesn't work then try a mini xfs.\n\nDo not turn fsync off. That is bad advice. I would not suggest ext4 at\nthis point for database operations. Use ext3. It is backward compatible.\n\nJoshua D. Drake\n\n\n--\nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564\nConsulting, Training, Support, Custom Development, Engineering\nIf the world pushes look it in the eye and GRR. Then push back harder. - Salamander", "msg_date": "Fri, 11 Dec 2009 15:43:59 -0500", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4.1 ubuntu karmic slow createdb" }, { "msg_contents": "On Fri, 2009-12-11 at 15:43 -0500, Nikolas Everett wrote:\n> Turning fsync off on a dev database is a bad idea? Sure you might\n> kill it and have to start over, but thats kind of the point in a dev\n> database.\n\nMy experience is that bad dev practices turn into bad production\npractices, whether intentionally or not.\n\nJoshua D. Drake\n\n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564\nConsulting, Training, Support, Custom Development, Engineering\nIf the world pushes look it in the eye and GRR. Then push back harder. - Salamander\n\n", "msg_date": "Fri, 11 Dec 2009 12:50:10 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4.1 ubuntu karmic slow createdb" }, { "msg_contents": "On Fri, Dec 11, 2009 at 3:50 PM, Joshua D. Drake <[email protected]>wrote:\n\n> On Fri, 2009-12-11 at 15:43 -0500, Nikolas Everett wrote:\n> > Turning fsync off on a dev database is a bad idea? Sure you might\n> > kill it and have to start over, but thats kind of the point in a dev\n> > database.\n>\n> My experience is that bad dev practices turn into bad production\n> practices, whether intentionally or not.\n>\n\nFair enough. I'm of the opinion that developers need to have their unit\ntests run fast. If they aren't fast then your just not going to test as\nmuch as you should. If your unit tests *have* to createdb then you have to\ndo whatever you have to do to get it fast. It'd probably be better if unit\ntests don't create databases or alter tables at all though.\n\nRegardless of what is going on on your dev box you really should leave fsync\non on your continuous integration, integration test, and QA machines.\nThey're what your really modeling your production on anyway.\n\nOn Fri, Dec 11, 2009 at 3:50 PM, Joshua D. Drake <[email protected]> wrote:\nOn Fri, 2009-12-11 at 15:43 -0500, Nikolas Everett wrote:\n> Turning fsync off on a dev database is a bad idea?  Sure you might\n> kill it and have to start over, but thats kind of the point in a dev\n> database.\n\nMy experience is that bad dev practices turn into bad production\npractices, whether intentionally or not.Fair enough.  I'm of the opinion that developers need to have their unit tests run fast.  If they aren't fast then your just not going to test as much as you should.  If your unit tests *have* to createdb then you have to do whatever you have to do to get it fast.  
It'd probably be better if unit tests don't create databases or alter tables at all though.\nRegardless of what is going on on your dev box you really should leave fsync on on your continuous integration, integration test, and QA machines.  They're what your really modeling your production on anyway.", "msg_date": "Fri, 11 Dec 2009 16:39:34 -0500", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4.1 ubuntu karmic slow createdb" }, { "msg_contents": "On Fri, Dec 11, 2009 at 2:39 PM, Nikolas Everett <[email protected]> wrote:\n>\n>\n> On Fri, Dec 11, 2009 at 3:50 PM, Joshua D. Drake <[email protected]>\n> wrote:\n>>\n>> On Fri, 2009-12-11 at 15:43 -0500, Nikolas Everett wrote:\n>> > Turning fsync off on a dev database is a bad idea?  Sure you might\n>> > kill it and have to start over, but thats kind of the point in a dev\n>> > database.\n>>\n>> My experience is that bad dev practices turn into bad production\n>> practices, whether intentionally or not.\n>\n> Fair enough.  I'm of the opinion that developers need to have their unit\n> tests run fast.  If they aren't fast then your just not going to test as\n> much as you should.  If your unit tests *have* to createdb then you have to\n> do whatever you have to do to get it fast.  It'd probably be better if unit\n> tests don't create databases or alter tables at all though.\n\nThis is my big issue. dropping / creating databases for unit tests is\noverkill. Running any DDL at all for a unit test seems wrong to me\ntoo. Insert a row if you need it, MAYBE. Unit tests should work with\na test database that HAS the structure and database already in place.\n\nWhat happens if your unit tests get lose in production and drop a\ndatabase, or a table. Not good.\n", "msg_date": "Fri, 11 Dec 2009 14:57:56 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4.1 ubuntu karmic slow createdb" }, { "msg_contents": "On Fri, Dec 11, 2009 at 4:39 PM, Nikolas Everett <[email protected]> wrote:\n\n>\n>\n>\n> Fair enough. I'm of the opinion that developers need to have their unit\n> tests run fast. If they aren't fast then your just not going to test as\n> much as you should. If your unit tests *have* to createdb then you have to\n> do whatever you have to do to get it fast. It'd probably be better if unit\n> tests don't create databases or alter tables at all though.\n>\n> Regardless of what is going on on your dev box you really should leave\n> fsync on on your continuous integration, integration test, and QA machines.\n> They're what your really modeling your production on anyway.\n>\n\n\n The other common issue is that developers running with something like\n'fsync=off' means that they have completely unrealistic expectations of the\nperformance surrounding something. If your developers see that when fsync\nis on, createdb takes x seconds vs. when it's off, then they'll know that\nbasing their entire process on that probably isn't a good idea. When\ndevelopers think something is lightning, they tend to base lots of stuff on\nit, whether it's production ready or not.\n\n\n--Scott\n\nOn Fri, Dec 11, 2009 at 4:39 PM, Nikolas Everett <[email protected]> wrote:\nFair enough.  I'm of the opinion that developers need to have their unit tests run fast.  If they aren't fast then your just not going to test as much as you should.  If your unit tests *have* to createdb then you have to do whatever you have to do to get it fast.  
It'd probably be better if unit tests don't create databases or alter tables at all though.\nRegardless of what is going on on your dev box you really should leave fsync on on your continuous integration, integration test, and QA machines.  They're what your really modeling your production on anyway.\n  The other common issue is that developers running with something like 'fsync=off' means that they have completely unrealistic expectations of the performance surrounding something.  If your developers see that when fsync is on, createdb takes x seconds vs. when it's off, then they'll know that basing their entire process on that probably isn't a good idea.  When developers think something is lightning, they tend to base lots of stuff on it, whether it's production ready or not.   \n  --Scott", "msg_date": "Fri, 11 Dec 2009 16:59:43 -0500", "msg_from": "Scott Mead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4.1 ubuntu karmic slow createdb" }, { "msg_contents": "\nOn 12/11/09 1:57 PM, \"Scott Marlowe\" <[email protected]> wrote:\n\n> \n> This is my big issue. dropping / creating databases for unit tests is\n> overkill. Running any DDL at all for a unit test seems wrong to me\n> too. Insert a row if you need it, MAYBE. Unit tests should work with\n> a test database that HAS the structure and database already in place.\n> \n> What happens if your unit tests get lose in production and drop a\n> database, or a table. Not good.\n> \n\nProduction should not have a db with the same username/pw combination as dev\nboxes and unit tests . . .\n\nUnfortunately, unit-like (often more than a 'unit') tests can't always rely\non a test db being already set up. If one leaves any cruft around, it might\nbreak later tests later on non-deterministically. Automated tests that\ninsert data are absolutely required somewhere if the application inserts\ndata.\n\nThe best way to do this in postgres is to create a template database from\nscratch with whatever DDL is needed at the start of the run, and then create\nand drop db's as copies of that template per test or test suite.\n\nSo no, its not overkill at all IMO. I do wish to avoid it, and ideally all\ntests clean up after themselves, but in practice this does not happen and\nresults in hard to track down issues where test X fails because of something\nthat any one of tests A to W did (all of which pass), often wasting time of\nthe most valuable developers -- those who know the majority of the system\nwell enough to track down such issues across the whole system.\n\nOne thing to consider, is putting this temp database in a RAMFS, or ramdisk\nsince postgres does a lot of file creates and fsyncs when cloning a db from\na template. For almost all such test db's the actual data is small, but the\n# of tables is large.\n\n\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Fri, 11 Dec 2009 14:12:45 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4.1 ubuntu karmic slow createdb" }, { "msg_contents": "On Fri, Dec 11, 2009 at 2:59 PM, Scott Mead\n<[email protected]> wrote:\n> On Fri, Dec 11, 2009 at 4:39 PM, Nikolas Everett <[email protected]> wrote:\n>>\n>>\n>>\n>> Fair enough.  I'm of the opinion that developers need to have their unit\n>> tests run fast.  If they aren't fast then your just not going to test as\n>> much as you should.  
If your unit tests *have* to createdb then you have to\n>> do whatever you have to do to get it fast.  It'd probably be better if unit\n>> tests don't create databases or alter tables at all though.\n>>\n>> Regardless of what is going on on your dev box you really should leave\n>> fsync on on your continuous integration, integration test, and QA machines.\n>> They're what your really modeling your production on anyway.\n>\n>\n>   The other common issue is that developers running with something like\n> 'fsync=off' means that they have completely unrealistic expectations of the\n> performance surrounding something.  If your developers see that when fsync\n> is on, createdb takes x seconds vs. when it's off, then they'll know that\n> basing their entire process on that probably isn't a good idea.  When\n> developers think something is lightning, they tend to base lots of stuff on\n> it, whether it's production ready or not.\n\nYeah, it's a huge mistake to give development super fast servers to\ntest on. Keep in mind production may need to handle 10k requests a\nminute / second whatever. Developers cannot generate that kind of\nload by just pointing and clicking. Our main production is on a\ncluster of 8 and 12 core machines with scads of memory and RAID-10\narrays all over the place. Development gets a 4 core machine with 8G\nram and an 8 drive RAID-6. It ain't slow, but it ain't really that\nfast either.\n", "msg_date": "Fri, 11 Dec 2009 15:12:47 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4.1 ubuntu karmic slow createdb" }, { "msg_contents": "On Fri, Dec 11, 2009 at 3:12 PM, Scott Carey <[email protected]> wrote:\n>\n> On 12/11/09 1:57 PM, \"Scott Marlowe\" <[email protected]> wrote:\n>\n>>\n>> This is my big issue.  dropping / creating databases for unit tests is\n>> overkill.  Running any DDL at all for a unit test seems wrong to me\n>> too.  Insert a row if you need it, MAYBE.  Unit tests should work with\n>> a test database that HAS the structure and database already in place.\n>>\n>> What happens if your unit tests get lose in production and drop a\n>> database, or a table.  Not good.\n>>\n>\n> Production should not have a db with the same username/pw combination as dev\n> boxes and unit tests . . .\n>\n> Unfortunately, unit-like (often more than a 'unit') tests can't always rely\n> on a test db being already set up.  If one leaves any cruft around, it might\n> break later tests later on non-deterministically.  Automated tests that\n> insert data are absolutely required somewhere if the application inserts\n> data.\n>\n> The best way to do this in postgres is to create a template database from\n> scratch with whatever DDL is needed at the start of the run, and then create\n> and drop db's as copies of that template per test or test suite.\n\nDebateable. Last job we had 44k or so unit tests, and we gave each\ndev their own db made from the main qa / unit testing db that they\ncould refresh at any time, and run the unit tests locally before\ncommitting code. Actual failures like the one you mention were very\nrare because of this approach. 
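In outline, the create-from-a-template flow mentioned above comes down to something like the following (the names are made up, and CREATE DATABASE ... TEMPLATE requires that nothing else is connected to the template at that moment):

-------------
-- once per test run: build the schema into a template database
CREATE DATABASE unittest_template;
-- connect to unittest_template and run all the DDL there ...

-- per test or per suite: cheap copy, run the tests, throw it away
CREATE DATABASE unittest_run_1 TEMPLATE unittest_template;
DROP DATABASE unittest_run_1;
-------------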
A simple ant refresh-db and they were\nready to test their code before committing it to the continuous\ntesting farm.\n", "msg_date": "Fri, 11 Dec 2009 15:19:05 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4.1 ubuntu karmic slow createdb" }, { "msg_contents": "Scott Mead wrote:\n> The other common issue is that developers running with something like \n> 'fsync=off' means that they have completely unrealistic expectations \n> of the performance surrounding something.\nRight, but the flip side here is that often the production server will \nhave hardware such as a caching RAID card that vastly improves \nperformance in this area. There's some room to cheat in order to \naccelerate the dev systems lack of such things, while still not giving a \ncompletely unrealistic view of performance.\n\nAs far as I'm concerned, using \"fsync=off\" is almost never excusable if \nyou're running 8.3 or later where \"synchronous_commit=off\" is a \npossibility. If you use that, it will usually improve the worst part of \ncommit issues substantially. And it happens in a way that's actually \nquite similar to how a caching write production server will run: small \nwrites happen instantly, but eventually bigger ones will end up \nbottlenecked at the disks anyway.\n\nIt would improve the average safety of our community members if anytime \nsomeone suggests \"fsync=off\", we strongly suggest \n\"synchronous_commit=off\" and potentially tuning its interval instead as \na middle ground, while still helping people who need to speed their \nsystems up. Saying \"never turn fsync off\" without suggesting this \nalternative is counter-productive. If you're in the sort of position \nwhere fsync is killing your performance you'll do anything to speed \nthings up (I've seen a 100:1 speed improvement) no matter how risky. \nI've ran a production system of 8.2 with fsync off, a TB of data, and no \nsafety net if a crash introduced corruption beyond a ZFS snapshot. It \nwasn't fun, but it was the only possibility to get bulk loading (there \nwas an ETL step in the middle after COPY) to happen fast enough. Using \nasync commit instead is a much better approach now that it's available.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Fri, 11 Dec 2009 17:39:54 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4.1 ubuntu karmic slow createdb" }, { "msg_contents": "Thanks all this has been a good help.\nI don't have control(or easy control) over unit tests creating/deleting\ndatabases since Im using the django framework for this job. Createdb takes\n12secs on my system(9.10 pg8.4 and ext4) which is impossibly slow for\nrunning 200unittests. Fsync got it to .2secs or so which is blazing but\nalso the speed I expected being used to 8.3 and xfs. This dev box is my\nlaptop and the data is litterally unimportant and doesn't exist longer than\n20sec but Im all about good practices. Will definately try synchronous\ncommit tonight once Im done working for the day. 
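The asynchronous commit Greg describes above is available from 8.3 on and can be turned on for the whole cluster or for just one session; roughly:

-------------
# postgresql.conf -- recently committed transactions may be lost after a
# crash, but unlike fsync=off the database itself cannot be corrupted
synchronous_commit = off
#wal_writer_delay = 200ms   # how often the WAL writer flushes async commits

-- or per session / per transaction, with no config change:
SET synchronous_commit = off;
-------------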
I've got some massive\ncopying todo later though so this will probably help in the future as well.\n\nOn Fri, Dec 11, 2009 at 5:39 PM, Greg Smith <[email protected]> wrote:\n\n> Scott Mead wrote:\n>\n>> The other common issue is that developers running with something like\n>> 'fsync=off' means that they have completely unrealistic expectations of the\n>> performance surrounding something.\n>>\n> Right, but the flip side here is that often the production server will have\n> hardware such as a caching RAID card that vastly improves performance in\n> this area. There's some room to cheat in order to accelerate the dev\n> systems lack of such things, while still not giving a completely unrealistic\n> view of performance.\n>\n> As far as I'm concerned, using \"fsync=off\" is almost never excusable if\n> you're running 8.3 or later where \"synchronous_commit=off\" is a possibility.\n> If you use that, it will usually improve the worst part of commit issues\n> substantially. And it happens in a way that's actually quite similar to how\n> a caching write production server will run: small writes happen instantly,\n> but eventually bigger ones will end up bottlenecked at the disks anyway.\n>\n> It would improve the average safety of our community members if anytime\n> someone suggests \"fsync=off\", we strongly suggest \"synchronous_commit=off\"\n> and potentially tuning its interval instead as a middle ground, while still\n> helping people who need to speed their systems up. Saying \"never turn fsync\n> off\" without suggesting this alternative is counter-productive. If you're\n> in the sort of position where fsync is killing your performance you'll do\n> anything to speed things up (I've seen a 100:1 speed improvement) no matter\n> how risky. I've ran a production system of 8.2 with fsync off, a TB of\n> data, and no safety net if a crash introduced corruption beyond a ZFS\n> snapshot. It wasn't fun, but it was the only possibility to get bulk\n> loading (there was an ETL step in the middle after COPY) to happen fast\n> enough. Using async commit instead is a much better approach now that it's\n> available.\n>\n> --\n> Greg Smith 2ndQuadrant Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.com\n>\n>\n\nThanks all this has been a good help.I don't have control(or easy control) over unit tests creating/deleting databases since Im using the django framework for this job.  Createdb takes 12secs on my system(9.10 pg8.4 and ext4)  which is impossibly slow for running 200unittests.  Fsync got it to .2secs or so which is blazing but also the speed I expected being used to 8.3 and xfs.  This dev box is my laptop and the data is litterally unimportant and doesn't exist longer than 20sec but Im all about good practices.  Will definately try synchronous commit tonight once Im done working for the day.  I've got some massive copying todo later though so this will probably help in the future as well. \nOn Fri, Dec 11, 2009 at 5:39 PM, Greg Smith <[email protected]> wrote:\nScott Mead wrote:\n\nThe other common issue is that developers running with something like 'fsync=off' means that they have completely unrealistic expectations of the performance surrounding something.\n\nRight, but the flip side here is that often the production server will have hardware such as a caching RAID card that vastly improves performance in this area.  
There's some room to cheat in order to accelerate the dev systems lack of such things, while still not giving a completely unrealistic view of performance.\n\nAs far as I'm concerned, using \"fsync=off\" is almost never excusable if you're running 8.3 or later where \"synchronous_commit=off\" is a possibility.  If you use that, it will usually improve the worst part of commit issues substantially.  And it happens in a way that's actually quite similar to how a caching write production server will run:  small writes happen instantly, but eventually bigger ones will end up bottlenecked at the disks anyway.\n\nIt would improve the average safety of our community members if anytime someone suggests \"fsync=off\", we strongly suggest \"synchronous_commit=off\" and potentially tuning its interval instead as a middle ground, while still helping people who need to speed their systems up.  Saying \"never turn fsync off\" without suggesting this alternative is counter-productive.  If you're in the sort of position where fsync is killing your performance you'll do anything to speed things up (I've seen a 100:1 speed improvement) no matter how risky.  I've ran a production system of 8.2 with fsync off, a TB of data, and no safety net if a crash introduced corruption beyond a ZFS snapshot.  It wasn't fun, but it was the only possibility to get bulk loading (there was an ETL step in the middle after COPY) to happen fast enough.  Using async commit instead is a much better approach now that it's available.\n\n\n-- \nGreg Smith    2ndQuadrant   Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected]  www.2ndQuadrant.com", "msg_date": "Fri, 11 Dec 2009 17:52:01 -0500", "msg_from": "Michael Clemmons <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 8.4.1 ubuntu karmic slow createdb" }, { "msg_contents": "On Fri, Dec 11, 2009 at 3:52 PM, Michael Clemmons\n<[email protected]> wrote:\n> Thanks all this has been a good help.\n> I don't have control(or easy control) over unit tests creating/deleting\n> databases since Im using the django framework for this job.\n\nReminds of the issues we had with Ruby on Rails and it's (at the time)\nvery mysql-centric tools that made us take a fork to large portions of\nits brain to get things like this working. Worked with a developer\nfor a day or two fixing most of the worst mysqlisms in RoR at the time\nto just get this kind of stuff working.\n\n>  Createdb takes\n> 12secs on my system(9.10 pg8.4 and ext4)  which is impossibly slow for\n> running 200unittests.\n\nWait, so each unit test createdbs by itself? Wow...\n\n>  Fsync got it to .2secs or so which is blazing but\n> also the speed I expected being used to 8.3 and xfs.  This dev box is my\n> laptop and the data is litterally unimportant and doesn't exist longer than\n> 20sec but Im all about good practices.  Will definately try synchronous\n> commit tonight once Im done working for the day.  
I've got some massive\n> copying todo later though so this will probably help in the future as well.\n\nYeah, I'd probably resort to fsync off in that circumstance too\nespecially if syn commit off didn't help that much.\n", "msg_date": "Fri, 11 Dec 2009 16:59:13 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4.1 ubuntu karmic slow createdb" }, { "msg_contents": "Hi,\n\nOn Saturday 12 December 2009 00:59:13 Scott Marlowe wrote:\n> On Fri, Dec 11, 2009 at 3:52 PM, Michael Clemmons\n> > Createdb takes\n> > 12secs on my system(9.10 pg8.4 and ext4) which is impossibly slow for\n> > running 200unittests.\n> > Fsync got it to .2secs or so which is blazing but\n> > also the speed I expected being used to 8.3 and xfs. This dev box is my\n> > laptop and the data is litterally unimportant and doesn't exist longer\n> > than 20sec but Im all about good practices. Will definately try\n> > synchronous commit tonight once Im done working for the day. I've got\n> > some massive copying todo later though so this will probably help in the\n> > future as well.\n> Yeah, I'd probably resort to fsync off in that circumstance too\n> especially if syn commit off didn't help that much.\nHow should syn commit help with creating databases?\n\nThe problem with 8.4 and creating databases is that the number of files \nincreased hugely because of the introduction of relation forks.\nIt probably wouldnt be that hard to copy all files first, then reopen and fsync \nthem. Actually that should be a patch doable in an hour or two.\n\nAndres\n", "msg_date": "Sat, 12 Dec 2009 01:19:38 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4.1 ubuntu karmic slow createdb" }, { "msg_contents": "If ppl think its worth it I'll create a ticket\n\nOn Sat, Dec 12, 2009 at 6:09 AM, Hannu Krosing <[email protected]>wrote:\n\n> On Sat, 2009-12-12 at 01:19 +0100, Andres Freund wrote:\n> > Hi,\n> >\n> > On Saturday 12 December 2009 00:59:13 Scott Marlowe wrote:\n> > > On Fri, Dec 11, 2009 at 3:52 PM, Michael Clemmons\n> > > > Createdb takes\n> > > > 12secs on my system(9.10 pg8.4 and ext4) which is impossibly slow\n> for\n> > > > running 200unittests.\n> > > > Fsync got it to .2secs or so which is blazing but\n> > > > also the speed I expected being used to 8.3 and xfs. This dev box is\n> my\n> > > > laptop and the data is litterally unimportant and doesn't exist\n> longer\n> > > > than 20sec but Im all about good practices. Will definately try\n> > > > synchronous commit tonight once Im done working for the day. I've\n> got\n> > > > some massive copying todo later though so this will probably help in\n> the\n> > > > future as well.\n> > > Yeah, I'd probably resort to fsync off in that circumstance too\n> > > especially if syn commit off didn't help that much.\n> >\n> > How should syn commit help with creating databases?\n>\n> It does not help here. Tested ;)\n>\n> > The problem with 8.4 and creating databases is that the number of files\n> > increased hugely because of the introduction of relation forks.\n>\n> Plus the fact that fsync on ext4 is really slow. some info here:\n>\n> http://ldn.linuxfoundation.org/article/filesystems-data-preservation-fsync-and-benchmarks-pt-3\n>\n> > It probably wouldnt be that hard to copy all files first, then reopen and\n> fsync\n> > them. 
Actually that should be a patch doable in an hour or two.\n>\n> Probably something worth doing, as it will speed this up on all\n> filesystems, and doubly so on ext4 and xfs.\n>\n> --\n> Hannu Krosing http://www.2ndQuadrant.com\n> PostgreSQL Scalability and Availability\n> Services, Consulting and Training\n>\n>\n>\n\nIf ppl think its worth it I'll create a ticketOn Sat, Dec 12, 2009 at 6:09 AM, Hannu Krosing <[email protected]> wrote:\nOn Sat, 2009-12-12 at 01:19 +0100, Andres Freund wrote:\n> Hi,\n>\n> On Saturday 12 December 2009 00:59:13 Scott Marlowe wrote:\n> > On Fri, Dec 11, 2009 at 3:52 PM, Michael Clemmons\n> > >  Createdb takes\n> > > 12secs on my system(9.10 pg8.4 and ext4)  which is impossibly slow for\n> > > running 200unittests.\n> > >  Fsync got it to .2secs or so which is blazing but\n> > > also the speed I expected being used to 8.3 and xfs.  This dev box is my\n> > > laptop and the data is litterally unimportant and doesn't exist longer\n> > > than 20sec but Im all about good practices.  Will definately try\n> > > synchronous commit tonight once Im done working for the day.  I've got\n> > > some massive copying todo later though so this will probably help in the\n> > > future as well.\n> > Yeah, I'd probably resort to fsync off in that circumstance too\n> > especially if syn commit off didn't help that much.\n>\n> How should syn commit help with creating databases?\n\nIt does not help here. Tested ;)\n\n> The problem with 8.4 and creating databases is that the number of files\n> increased hugely because of the introduction of relation forks.\n\nPlus the fact that fsync on ext4 is really slow. some info here:\nhttp://ldn.linuxfoundation.org/article/filesystems-data-preservation-fsync-and-benchmarks-pt-3\n\n> It probably wouldnt be that hard to copy all files first, then reopen and fsync\n> them. Actually that should be a patch doable in an hour or two.\n\nProbably something worth doing, as it will speed this up on all\nfilesystems, and doubly so on ext4 and xfs.\n\n--\nHannu Krosing   http://www.2ndQuadrant.com\nPostgreSQL Scalability and Availability\n   Services, Consulting and Training", "msg_date": "Sat, 12 Dec 2009 15:36:27 -0500", "msg_from": "Michael Clemmons <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 8.4.1 ubuntu karmic slow createdb" }, { "msg_contents": "On Saturday 12 December 2009 21:36:27 Michael Clemmons wrote:\n> If ppl think its worth it I'll create a ticket\nThanks, no need. I will post a patch tomorrow or so.\n\nAndres\n", "msg_date": "Sat, 12 Dec 2009 21:38:41 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4.1 ubuntu karmic slow createdb" }, { "msg_contents": "On Fri, Dec 11, 2009 at 5:12 PM, Scott Marlowe <[email protected]> wrote:\n> On Fri, Dec 11, 2009 at 2:59 PM, Scott Mead\n> <[email protected]> wrote:\n>> On Fri, Dec 11, 2009 at 4:39 PM, Nikolas Everett <[email protected]> wrote:\n>>>\n>>>\n>>>\n>>> Fair enough.  I'm of the opinion that developers need to have their unit\n>>> tests run fast.  If they aren't fast then your just not going to test as\n>>> much as you should.  If your unit tests *have* to createdb then you have to\n>>> do whatever you have to do to get it fast.  
It'd probably be better if unit\n>>> tests don't create databases or alter tables at all though.\n>>>\n>>> Regardless of what is going on on your dev box you really should leave\n>>> fsync on on your continuous integration, integration test, and QA machines.\n>>> They're what your really modeling your production on anyway.\n>>\n>>\n>>   The other common issue is that developers running with something like\n>> 'fsync=off' means that they have completely unrealistic expectations of the\n>> performance surrounding something.  If your developers see that when fsync\n>> is on, createdb takes x seconds vs. when it's off, then they'll know that\n>> basing their entire process on that probably isn't a good idea.  When\n>> developers think something is lightning, they tend to base lots of stuff on\n>> it, whether it's production ready or not.\n>\n> Yeah, it's a huge mistake to give development super fast servers to\n> test on.  Keep in mind production may need to handle 10k requests a\n> minute / second whatever.  Developers cannot generate that kind of\n> load by just pointing and clicking.  Our main production is on a\n> cluster of 8 and 12 core machines with scads of memory and RAID-10\n> arrays all over the place.  Development gets a 4 core machine with 8G\n> ram and an 8 drive RAID-6.  It ain't slow, but it ain't really that\n> fast either.\n\nMy development box at work is an 1.8 Ghz Celeron with 256K of CPU\ncache, 1 GB of memory, and a single IDE drive... I don't have too\nmany slow queries in there.\n\n...Robert\n", "msg_date": "Sat, 12 Dec 2009 22:56:42 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4.1 ubuntu karmic slow createdb" }, { "msg_contents": "On Saturday 12 December 2009 21:38:41 Andres Freund wrote:\n> On Saturday 12 December 2009 21:36:27 Michael Clemmons wrote:\n> > If ppl think its worth it I'll create a ticket\n> Thanks, no need. I will post a patch tomorrow or so.\nWell. It was a long day...\n\nAnyway.\nIn this patch I delay the fsync done in copy_file and simply do a second pass \nover the directory in copy_dir and fsync everything in that pass.\nIncluding the directory - which was not done before and actually might be \nnecessary in some cases.\nI added a posix_fadvise(..., FADV_DONTNEED) to make it more likely that the \ncopied file reaches storage before the fsync. Without the speed benefits were \nquite a bit smaller and essentially random (which seems sensible).\n\nThis speeds up CREATE DATABASE from ~9 seconds to something around 0.8s on my \nlaptop. Still slower than with fsync off (~0.25) but quite a worthy \nimprovement.\n\nThe benefits are obviously bigger if the template database includes anything \nadded.\n\n\nAndres\n", "msg_date": "Mon, 28 Dec 2009 23:54:51 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Faster CREATE DATABASE by delaying fsync (was 8.4.1 ubuntu karmic\n\tslow createdb)" }, { "msg_contents": "On Monday 28 December 2009 23:54:51 Andres Freund wrote:\n> On Saturday 12 December 2009 21:38:41 Andres Freund wrote:\n> > On Saturday 12 December 2009 21:36:27 Michael Clemmons wrote:\n> > > If ppl think its worth it I'll create a ticket\n> >\n> > Thanks, no need. I will post a patch tomorrow or so.\n> \n> Well. 
It was a long day...\n> \n> Anyway.\n> In this patch I delay the fsync done in copy_file and simply do a second\n> pass over the directory in copy_dir and fsync everything in that pass.\n> Including the directory - which was not done before and actually might be\n> necessary in some cases.\n> I added a posix_fadvise(..., FADV_DONTNEED) to make it more likely that the\n> copied file reaches storage before the fsync. Without the speed benefits\n> were quite a bit smaller and essentially random (which seems sensible).\n> \n> This speeds up CREATE DATABASE from ~9 seconds to something around 0.8s on\n> my laptop. Still slower than with fsync off (~0.25) but quite a worthy\n> improvement.\n> \n> The benefits are obviously bigger if the template database includes\n> anything added.\nObviously the patch would be helpfull.\n\nAndres", "msg_date": "Mon, 28 Dec 2009 23:59:43 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> This speeds up CREATE DATABASE from ~9 seconds to something around 0.8s on my\n> laptop. Still slower than with fsync off (~0.25) but quite a worthy \n> improvement.\n\nI can't help wondering whether that's real or some kind of\nplatform-specific artifact. I get numbers more like 3.5s (fsync off)\nvs 4.5s (fsync on) on a machine where I believe the disks aren't lying\nabout write-complete. It makes sense that an fsync at the end would be\na little bit faster, because it would give the kernel some additional\nfreedom in scheduling the required I/O, but it isn't cutting the total\nI/O required at all. So I find it really hard to believe a 10x speedup.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 28 Dec 2009 18:06:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "On Tuesday 29 December 2009 00:06:28 Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > This speeds up CREATE DATABASE from ~9 seconds to something around 0.8s\n> > on my laptop. Still slower than with fsync off (~0.25) but quite a\n> > worthy improvement.\n> I can't help wondering whether that's real or some kind of\n> platform-specific artifact. I get numbers more like 3.5s (fsync off)\n> vs 4.5s (fsync on) on a machine where I believe the disks aren't lying\n> about write-complete. It makes sense that an fsync at the end would be\n> a little bit faster, because it would give the kernel some additional\n> freedom in scheduling the required I/O, but it isn't cutting the total\n> I/O required at all. So I find it really hard to believe a 10x speedup.\nWell, a template database is about 5.5MB big here - that shouldnt take too \nlong when written near-sequentially?\nAs I said the real benefit only occurred after adding posix_fadvise(.., \nFADV_DONTNEED) which is somewhat plausible, because i.e. the directory entries \ndon't need to get scheduled for every file and because the kernel can reorder a \nwhole directory nearly sequentially. Without the advice it the kernel doesn't \nknow in time that it should write that data back and it wont do it for 5 \nseconds by default on linux or such...\n\nI looked at the strace output - it looks sensible timewise to me. 
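In outline, the ordering being described is roughly the following -- an illustration only, not the submitted patch (which modifies PostgreSQL's copy_file/copydir), with error handling omitted and the second pass simply walking the target directory:

-------------
#include <dirent.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static void fsync_path(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd >= 0)
    {
        fsync(fd);
        close(fd);
    }
}

void copy_dir_delayed_fsync(const char *fromdir, const char *todir)
{
    DIR           *dir = opendir(fromdir);
    struct dirent *de;
    char           src[1024], dst[1024], buf[65536];
    ssize_t        n;

    /* pass 1: copy each file and ask the kernel to start writeback now */
    while ((de = readdir(dir)) != NULL)
    {
        if (de->d_name[0] == '.')
            continue;
        snprintf(src, sizeof(src), "%s/%s", fromdir, de->d_name);
        snprintf(dst, sizeof(dst), "%s/%s", todir, de->d_name);

        int in = open(src, O_RDONLY);
        int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0600);
        while ((n = read(in, buf, sizeof(buf))) > 0)
            write(out, buf, n);
        posix_fadvise(out, 0, 0, POSIX_FADV_DONTNEED);  /* a hint, not a flush */
        close(in);
        close(out);
    }
    closedir(dir);

    /* pass 2: fsync every copied file, then the directory itself */
    dir = opendir(todir);
    while ((de = readdir(dir)) != NULL)
    {
        if (de->d_name[0] == '.')
            continue;
        snprintf(dst, sizeof(dst), "%s/%s", todir, de->d_name);
        fsync_path(dst);
    }
    closedir(dir);
    fsync_path(todir);
}
-------------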
If youre \ninterested I can give you output of that.\n\nAndres\n", "msg_date": "Tue, 29 Dec 2009 00:20:35 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "On Tuesday 29 December 2009 00:06:28 Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > This speeds up CREATE DATABASE from ~9 seconds to something around 0.8s\n> > on my laptop. Still slower than with fsync off (~0.25) but quite a\n> > worthy improvement.\n> \n> I can't help wondering whether that's real or some kind of\n> platform-specific artifact. I get numbers more like 3.5s (fsync off)\n> vs 4.5s (fsync on) on a machine where I believe the disks aren't lying\n> about write-complete. It makes sense that an fsync at the end would be\n> a little bit faster, because it would give the kernel some additional\n> freedom in scheduling the required I/O, but it isn't cutting the total\n> I/O required at all. So I find it really hard to believe a 10x speedup.\nI only comfortably have access to two smaller machines without BBU from here \n(being in the Hacker Jeopardy at the ccc congress ;-)) and both show this \nbehaviour. I guess its somewhat filesystem dependent. \n\nAndres\n", "msg_date": "Tue, 29 Dec 2009 00:31:56 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "Michael Clemmons wrote on 11.12.2009 23:52:\n> Thanks all this has been a good help.\n> I don't have control(or easy control) over unit tests creating/deleting\n> databases since Im using the django framework for this job. Createdb\n> takes 12secs on my system(9.10 pg8.4 and ext4) which is impossibly slow\n> for running 200unittests.\n\nI wonder if you could simply create one database, and then a new schema for each of the tests.\n\nAfter creating the schema you could alter the search_path for the \"unit test user\" and it would look like a completely new database.\n\nThomas\n\n\n\n", "msg_date": "Tue, 29 Dec 2009 00:57:42 +0100", "msg_from": "Thomas Kellerer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.4.1 ubuntu karmic slow createdb" }, { "msg_contents": "On Mon, Dec 28, 2009 at 10:54 PM, Andres Freund <[email protected]> wrote:\n> fsync everything in that pass.\n> Including the directory - which was not done before and actually might be\n> necessary in some cases.\n\nEr. Yes. At least on ext4 this is pretty important. I wish it weren't,\nbut it doesn't look like we're going to convince the ext4 developers\nthey're crazy any day soon and it would really suck for a database\ncreated from a template to have files in it go missin.\n\n-- \ngreg\n", "msg_date": "Tue, 29 Dec 2009 00:27:29 +0000", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1 ubuntu karmic\n\tslow createdb)" }, { "msg_contents": "On Tuesday 29 December 2009 01:27:29 Greg Stark wrote:\n> On Mon, Dec 28, 2009 at 10:54 PM, Andres Freund <[email protected]> wrote:\n> > fsync everything in that pass.\n> > Including the directory - which was not done before and actually might be\n> > necessary in some cases.\n> \n> Er. Yes. At least on ext4 this is pretty important. 
I wish it weren't,\n> but it doesn't look like we're going to convince the ext4 developers\n> they're crazy any day soon and it would really suck for a database\n> created from a template to have files in it go missin.\nActually it was necessary on ext3 as well - the window to hit the problem just \nwas much smaller, wasnt it?\n\nActually that part should possibly get backported.\n\n\nAndres\n", "msg_date": "Tue, 29 Dec 2009 01:29:34 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "On Tue, 29 Dec 2009, Greg Stark wrote:\n\n> On Mon, Dec 28, 2009 at 10:54 PM, Andres Freund <[email protected]> wrote:\n>> fsync everything in that pass.\n>> Including the directory - which was not done before and actually might be\n>> necessary in some cases.\n>\n> Er. Yes. At least on ext4 this is pretty important. I wish it weren't,\n> but it doesn't look like we're going to convince the ext4 developers\n> they're crazy any day soon and it would really suck for a database\n> created from a template to have files in it go missin.\n\nactually, as I understand it you need to do this on all filesystems except \next3, and on ext3 fsync is horribly slow because it writes out \n_everything_ that's pending, not just stuff related to the file you do the \nfsync on.\n\nDavid Lang\n", "msg_date": "Mon, 28 Dec 2009 16:30:17 -0800 (PST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Re: Faster CREATE DATABASE by delaying fsync (was\n\t8.4.1 ubuntu karmic \tslow createdb)" }, { "msg_contents": "On Tuesday 29 December 2009 01:30:17 [email protected] wrote:\n> On Tue, 29 Dec 2009, Greg Stark wrote:\n> > On Mon, Dec 28, 2009 at 10:54 PM, Andres Freund <[email protected]> \nwrote:\n> >> fsync everything in that pass.\n> >> Including the directory - which was not done before and actually might\n> >> be necessary in some cases.\n> >\n> > Er. Yes. At least on ext4 this is pretty important. I wish it weren't,\n> > but it doesn't look like we're going to convince the ext4 developers\n> > they're crazy any day soon and it would really suck for a database\n> > created from a template to have files in it go missin.\n> \n> actually, as I understand it you need to do this on all filesystems except\n> ext3, and on ext3 fsync is horribly slow because it writes out\n> _everything_ that's pending, not just stuff related to the file you do the\n> fsync on.\nI dont think its all filesystems (ext2 should not be affected...), but generally \nyoure right. At least jfs, xfs are affected as well.\n\nIts btw not necessarily nearly-safe and slow on ext3 as well (data=writeback).\n\nAndres\n", "msg_date": "Tue, 29 Dec 2009 01:43:15 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1 ubuntu\n\tkarmic =?iso-8859-1?q?=09slow?= createdb)" }, { "msg_contents": "Andres Freund wrote:\n> As I said the real benefit only occurred after adding posix_fadvise(.., \n> FADV_DONTNEED) which is somewhat plausible, because i.e. the directory entries \n> don't need to get scheduled for every file and because the kernel can reorder a \n> whole directory nearly sequentially. 
Without the advice it the kernel doesn't \n> know in time that it should write that data back and it wont do it for 5 \n> seconds by default on linux or such...\n> \nI know they just fiddled with the logic in the last release, but for \nmost of the Linux kernels out there now pdflush wakes up every 5 seconds \nby default. But typically it only worries about writing things that \nhave been in the queue for 30 seconds or more until you've filled quite \na bit of memory, so that's also an interesting number. I tried to \ndocument the main tunables here and describe how they fit together at \nhttp://www.westnet.com/~gsmith/content/linux-pdflush.htm\n\nIt would be interesting to graph the \"Dirty\" and \"Writeback\" figures in \n/proc/meminfo over time with and without this patch in place. That \nshould make it obvious what the kernel is doing differently in the two \ncases.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Mon, 28 Dec 2009 19:46:21 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Faster CREATE DATABASE by delaying fsync\n\t(was 8.4.1 ubuntu karmic slow createdb)" }, { "msg_contents": "On Tue, 29 Dec 2009, Andres Freund wrote:\n\n> On Tuesday 29 December 2009 01:30:17 [email protected] wrote:\n>> On Tue, 29 Dec 2009, Greg Stark wrote:\n>>> On Mon, Dec 28, 2009 at 10:54 PM, Andres Freund <[email protected]>\n> wrote:\n>>>> fsync everything in that pass.\n>>>> Including the directory - which was not done before and actually might\n>>>> be necessary in some cases.\n>>>\n>>> Er. Yes. At least on ext4 this is pretty important. I wish it weren't,\n>>> but it doesn't look like we're going to convince the ext4 developers\n>>> they're crazy any day soon and it would really suck for a database\n>>> created from a template to have files in it go missin.\n>>\n>> actually, as I understand it you need to do this on all filesystems except\n>> ext3, and on ext3 fsync is horribly slow because it writes out\n>> _everything_ that's pending, not just stuff related to the file you do the\n>> fsync on.\n> I dont think its all filesystems (ext2 should not be affected...), but generally\n> youre right. At least jfs, xfs are affected as well.\n\next2 definantly needs the fsync on the directory as well as the file \n(well, if the file metadata like size, change)\n\n> Its btw not necessarily nearly-safe and slow on ext3 as well (data=writeback).\n\nno, then it's just unsafe and slow ;-)\n\nDavid Lang\n", "msg_date": "Mon, 28 Dec 2009 16:46:26 -0800 (PST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Re: Faster CREATE DATABASE by delaying fsync (was\n\t8.4.1 ubuntu karmic \tslow createdb)" }, { "msg_contents": "On Tuesday 29 December 2009 01:46:21 Greg Smith wrote:\n> Andres Freund wrote:\n> > As I said the real benefit only occurred after adding posix_fadvise(..,\n> > FADV_DONTNEED) which is somewhat plausible, because i.e. the directory\n> > entries don't need to get scheduled for every file and because the kernel\n> > can reorder a whole directory nearly sequentially. Without the advice it\n> > the kernel doesn't know in time that it should write that data back and\n> > it wont do it for 5 seconds by default on linux or such...\n> It would be interesting to graph the \"Dirty\" and \"Writeback\" figures in\n> /proc/meminfo over time with and without this patch in place. 
That\n> should make it obvious what the kernel is doing differently in the two\n> cases.\nI did some analysis using blktrace (usefull tool btw) and the results show that\nthe io pattern is *significantly* different.\n\nFor one with the direct fsyncing nearly no hardware queuing is used and for\nanother nearly no requests are merged on software side.\n\nShort stats:\n\nOLD:\n\nTotal (8,0):\n Reads Queued: 2, 8KiB\t Writes Queued: 7854, 29672KiB\n Read Dispatches: 2, 8KiB\t Write Dispatches: 1926, 29672KiB\n Reads Requeued: 0\t\t Writes Requeued: 0\n Reads Completed: 2, 8KiB\t Writes Completed: 2362, 29672KiB\n Read Merges: 0, 0KiB\t Write Merges: 5492, 21968KiB\n PC Reads Queued: 0, 0KiB\t PC Writes Queued: 0, 0KiB\n PC Read Disp.: 436, 0KiB\t PC Write Disp.: 0, 0KiB\n PC Reads Req.: 0\t\t PC Writes Req.: 0\n PC Reads Compl.: 0\t\t PC Writes Compl.: 2362\n IO unplugs: 2395 \t Timer unplugs: 557\n\n\nNew:\n\nTotal (8,0):\n Reads Queued: 0, 0KiB\t Writes Queued: 1716, 5960KiB\n Read Dispatches: 0, 0KiB\t Write Dispatches: 324, 5960KiB\n Reads Requeued: 0\t\t Writes Requeued: 0\n Reads Completed: 0, 0KiB\t Writes Completed: 550, 5960KiB\n Read Merges: 0, 0KiB\t Write Merges: 1166, 4664KiB\n PC Reads Queued: 0, 0KiB\t PC Writes Queued: 0, 0KiB\n PC Read Disp.: 226, 0KiB\t PC Write Disp.: 0, 0KiB\n PC Reads Req.: 0\t\t PC Writes Req.: 0\n PC Reads Compl.: 0\t\t PC Writes Compl.: 550\n IO unplugs: 503 \t Timer unplugs: 30\n\n\nAndres\n", "msg_date": "Tue, 29 Dec 2009 03:05:39 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "Andres,\nGreat job. Looking through the emails and thinking about why this works I\nthink this patch should significantly speedup 8.4 on most any file\nsystem(obviously some more than others) unless the system has significantly\nreduced memory or a slow single core. On a Celeron with 256 memory I suspect\nit'll crash out or just hit the swap and be a worse bottleneck. Anyone\nhave something like this to test on?\n-Michael\n\nOn Mon, Dec 28, 2009 at 9:05 PM, Andres Freund <[email protected]> wrote:\n\n> On Tuesday 29 December 2009 01:46:21 Greg Smith wrote:\n> > Andres Freund wrote:\n> > > As I said the real benefit only occurred after adding posix_fadvise(..,\n> > > FADV_DONTNEED) which is somewhat plausible, because i.e. the directory\n> > > entries don't need to get scheduled for every file and because the\n> kernel\n> > > can reorder a whole directory nearly sequentially. Without the advice\n> it\n> > > the kernel doesn't know in time that it should write that data back and\n> > > it wont do it for 5 seconds by default on linux or such...\n> > It would be interesting to graph the \"Dirty\" and \"Writeback\" figures in\n> > /proc/meminfo over time with and without this patch in place. 
That\n> > should make it obvious what the kernel is doing differently in the two\n> > cases.\n> I did some analysis using blktrace (usefull tool btw) and the results show\n> that\n> the io pattern is *significantly* different.\n>\n> For one with the direct fsyncing nearly no hardware queuing is used and for\n> another nearly no requests are merged on software side.\n>\n> Short stats:\n>\n> OLD:\n>\n> Total (8,0):\n> Reads Queued: 2, 8KiB Writes Queued: 7854,\n> 29672KiB\n> Read Dispatches: 2, 8KiB Write Dispatches: 1926,\n> 29672KiB\n> Reads Requeued: 0 Writes Requeued: 0\n> Reads Completed: 2, 8KiB Writes Completed: 2362,\n> 29672KiB\n> Read Merges: 0, 0KiB Write Merges: 5492,\n> 21968KiB\n> PC Reads Queued: 0, 0KiB PC Writes Queued: 0,\n> 0KiB\n> PC Read Disp.: 436, 0KiB PC Write Disp.: 0,\n> 0KiB\n> PC Reads Req.: 0 PC Writes Req.: 0\n> PC Reads Compl.: 0 PC Writes Compl.: 2362\n> IO unplugs: 2395 Timer unplugs: 557\n>\n>\n> New:\n>\n> Total (8,0):\n> Reads Queued: 0, 0KiB Writes Queued: 1716,\n> 5960KiB\n> Read Dispatches: 0, 0KiB Write Dispatches: 324,\n> 5960KiB\n> Reads Requeued: 0 Writes Requeued: 0\n> Reads Completed: 0, 0KiB Writes Completed: 550,\n> 5960KiB\n> Read Merges: 0, 0KiB Write Merges: 1166,\n> 4664KiB\n> PC Reads Queued: 0, 0KiB PC Writes Queued: 0,\n> 0KiB\n> PC Read Disp.: 226, 0KiB PC Write Disp.: 0,\n> 0KiB\n> PC Reads Req.: 0 PC Writes Req.: 0\n> PC Reads Compl.: 0 PC Writes Compl.: 550\n> IO unplugs: 503 Timer unplugs: 30\n>\n>\n> Andres\n>\n\nAndres,Great job.  Looking through the emails and thinking about why this works I think this patch should significantly speedup 8.4 on most any file system(obviously some more than others) unless the system has significantly reduced memory or a slow single core. On a Celeron with 256 memory I suspect it'll crash out or just hit the swap  and be a worse bottleneck.  Anyone have something like this to test on?\n-MichaelOn Mon, Dec 28, 2009 at 9:05 PM, Andres Freund <[email protected]> wrote:\nOn Tuesday 29 December 2009 01:46:21 Greg Smith wrote:\n> Andres Freund wrote:\n> > As I said the real benefit only occurred after adding posix_fadvise(..,\n> > FADV_DONTNEED) which is somewhat plausible, because i.e. the directory\n> > entries don't need to get scheduled for every file and because the kernel\n> > can reorder a whole directory nearly sequentially. Without the advice it\n> > the kernel doesn't know in time that it should write that data back and\n> > it wont do it for 5 seconds by default on linux or such...\n> It would be interesting to graph the \"Dirty\" and \"Writeback\" figures in\n> /proc/meminfo over time with and without this patch in place.  
That\n> should make it obvious what the kernel is doing differently in the two\n> cases.\nI did some analysis using blktrace (usefull tool btw) and the results show that\nthe io pattern is *significantly* different.\n\nFor one with the direct fsyncing nearly no hardware queuing is used and for\nanother nearly no requests are merged on software side.\n\nShort stats:\n\nOLD:\n\nTotal (8,0):\n Reads Queued:           2,        8KiB  Writes Queued:        7854,    29672KiB\n Read Dispatches:        2,        8KiB  Write Dispatches:     1926,    29672KiB\n Reads Requeued:         0               Writes Requeued:         0\n Reads Completed:        2,        8KiB  Writes Completed:     2362,    29672KiB\n Read Merges:            0,        0KiB  Write Merges:         5492,    21968KiB\n PC Reads Queued:        0,        0KiB  PC Writes Queued:        0,        0KiB\n PC Read Disp.:        436,        0KiB  PC Write Disp.:          0,        0KiB\n PC Reads Req.:          0               PC Writes Req.:          0\n PC Reads Compl.:        0               PC Writes Compl.:     2362\n IO unplugs:          2395               Timer unplugs:         557\n\n\nNew:\n\nTotal (8,0):\n Reads Queued:           0,        0KiB  Writes Queued:        1716,     5960KiB\n Read Dispatches:        0,        0KiB  Write Dispatches:      324,     5960KiB\n Reads Requeued:         0               Writes Requeued:         0\n Reads Completed:        0,        0KiB  Writes Completed:      550,     5960KiB\n Read Merges:            0,        0KiB  Write Merges:         1166,     4664KiB\n PC Reads Queued:        0,        0KiB  PC Writes Queued:        0,        0KiB\n PC Read Disp.:        226,        0KiB  PC Write Disp.:          0,        0KiB\n PC Reads Req.:          0               PC Writes Req.:          0\n PC Reads Compl.:        0               PC Writes Compl.:      550\n IO unplugs:           503               Timer unplugs:          30\n\n\nAndres", "msg_date": "Mon, 28 Dec 2009 21:53:12 -0500", "msg_from": "Michael Clemmons <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Faster CREATE DATABASE by delaying fsync (was\n\t8.4.1 ubuntu karmic slow createdb)" }, { "msg_contents": "On Tuesday 29 December 2009 03:53:12 Michael Clemmons wrote:\n> Andres,\n> Great job. Looking through the emails and thinking about why this works I\n> think this patch should significantly speedup 8.4 on most any file\n> system(obviously some more than others) unless the system has significantly\n> reduced memory or a slow single core. On a Celeron with 256 memory I\n> suspect it'll crash out or just hit the swap and be a worse bottleneck. \n> Anyone have something like this to test on?\nWhy should it crash? 
The kernel should just block on writing and write out the \ndirty memory before continuing?\nPg is not caching anything here...\n\nAndres\n", "msg_date": "Tue, 29 Dec 2009 03:55:37 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "Maybe not crash out but in this situation.\nN=0\nwhile(N>=0):\n CREATE DATABASE new_db_N;\nSince the fsync is the part which takes the memory and time but is happening\nin the background, won't the fsyncs pile up in the background faster than they can\nbe run, filling up the memory and stack?\nThis is very likely a mistake on my part about how postgres/processes\nactually works.\n-Michael\n\nOn Mon, Dec 28, 2009 at 9:55 PM, Andres Freund <[email protected]> wrote:\n\n> On Tuesday 29 December 2009 03:53:12 Michael Clemmons wrote:\n> > Andres,\n> > Great job.  Looking through the emails and thinking about why this works\n> I\n> > think this patch should significantly speedup 8.4 on most any file\n> > system(obviously some more than others) unless the system has\n> significantly\n> > reduced memory or a slow single core. On a Celeron with 256 memory I\n> > suspect it'll crash out or just hit the swap  and be a worse bottleneck.\n> > Anyone have something like this to test on?\n> Why should it crash? The kernel should just block on writing and write out\n> the\n> dirty memory before continuing?\n> Pg is not caching anything here...\n>\n> Andres\n>\n\nMaybe not crash out but in this situation.N=0while(N>=0):    CREATE DATABASE new_db_N;Since the fsync is the part which takes the memory and time but is happening in the background, won't the fsyncs pile up in the background faster than they can be run, filling up the memory and stack?\nThis is very likely a mistake on my part about how postgres/processes actually works.-MichaelOn Mon, Dec 28, 2009 at 9:55 PM, Andres Freund <[email protected]> wrote:\nOn Tuesday 29 December 2009 03:53:12 Michael Clemmons wrote:\n> Andres,\n> Great job.  Looking through the emails and thinking about why this works I\n> think this patch should significantly speedup 8.4 on most any file\n> system(obviously some more than others) unless the system has significantly\n> reduced memory or a slow single core. On a Celeron with 256 memory I\n>  suspect it'll crash out or just hit the swap  and be a worse bottleneck.\n>  Anyone have something like this to test on?\nWhy should it crash? 
The kernel should just block on writing and write out the\ndirty memory before continuing?\nPg is not caching anything here...\n\nAndres", "msg_date": "Mon, 28 Dec 2009 22:04:06 -0500", "msg_from": "Michael Clemmons <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Faster CREATE DATABASE by delaying fsync (was\n\t8.4.1 ubuntu karmic slow createdb)" }, { "msg_contents": "On Tuesday 29 December 2009 04:04:06 Michael Clemmons wrote:\n> Maybe not crash out but in this situation.\n> N=0\n> while(N>=0):\n> CREATE DATABASE new_db_N;\n> Since the fsync is the part which takes the memory and time but is\n> happening in the background want the fsyncs pile up in the background\n> faster than can be run filling up the memory and stack.\n> This is very likely a mistake on my part about how postgres/processes\nThe difference should not be visible outside the \"CREATE DATABASE ...\" at all.\nCurrently the process simplifiedly works like:\n\n------------\nfor file in source directory:\n\tcopy_file(source/file, target/file);\n\tfsync(target/file);\n------------\n\nI changed it to:\n\n-------------\nfor file in source directory:\n\tcopy_file(source/file, target/file);\n\n\t/*please dear kernel, write this out, but dont block*/\n\tposix_fadvise(target/file, FADV_DONTNEED); \n\nfor file in source directory:\n\tfsync(target/file);\n-------------\n\nIf at any point in time there is not enough cache available to cache anything \ncopy_file() will just have to wait for the kernel to write out the data.\nfsync() does not use memory itself.\n\nAndres\n", "msg_date": "Tue, 29 Dec 2009 04:11:14 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "On Tue, Dec 29, 2009 at 2:05 AM, Andres Freund <[email protected]> wrote:\n>  Reads Completed:        2,        8KiB  Writes Completed:     2362,    29672KiB\n> New:\n>  Reads Completed:        0,        0KiB  Writes Completed:      550,     5960KiB\n\nIt looks like the new method is only doing 1/6th as much i/o. Do you\nknow what's going on there?\n\n\n-- \ngreg\n", "msg_date": "Tue, 29 Dec 2009 10:48:10 +0000", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "On Tuesday 29 December 2009 11:48:10 Greg Stark wrote:\n> On Tue, Dec 29, 2009 at 2:05 AM, Andres Freund <[email protected]> wrote:\n> > Reads Completed: 2, 8KiB Writes Completed: 2362, \n> > 29672KiB New:\n> > Reads Completed: 0, 0KiB Writes Completed: 550, \n> > 5960KiB\n> \n> It looks like the new method is only doing 1/6th as much i/o. 
Do you\n> know what's going on there?\nWhile I was surprised by the amount of difference I am not surprised at all \nthat there is a significant one - currently the fsync will write out a whole \nbunch of useless stuff every time its called (all metadata, directory structure \nand so on)\n\nThis is reproducible...\n\n6MB sounds sensible for the operation btw - the template database is around \n5MB.\n\n\nWill try to analyze later what exactly causes the additional io.\n\n\nAndres\n", "msg_date": "Tue, 29 Dec 2009 12:13:21 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "On Monday 28 December 2009 23:59:43 Andres Freund wrote:\n> On Monday 28 December 2009 23:54:51 Andres Freund wrote:\n> > On Saturday 12 December 2009 21:38:41 Andres Freund wrote:\n> > > On Saturday 12 December 2009 21:36:27 Michael Clemmons wrote:\n> > > > If ppl think its worth it I'll create a ticket\n> > >\n> > > Thanks, no need. I will post a patch tomorrow or so.\n> >\n> > Well. It was a long day...\n> >\n> > Anyway.\n> > In this patch I delay the fsync done in copy_file and simply do a second\n> > pass over the directory in copy_dir and fsync everything in that pass.\n> > Including the directory - which was not done before and actually might be\n> > necessary in some cases.\n> > I added a posix_fadvise(..., FADV_DONTNEED) to make it more likely that\n> > the copied file reaches storage before the fsync. Without the speed\n> > benefits were quite a bit smaller and essentially random (which seems\n> > sensible).\n> >\n> > This speeds up CREATE DATABASE from ~9 seconds to something around 0.8s\n> > on my laptop. Still slower than with fsync off (~0.25) but quite a\n> > worthy improvement.\n> >\n> > The benefits are obviously bigger if the template database includes\n> > anything added.\n> \n> Obviously the patch would be helpfull.\nAnd it should also be helpfull not to have annoying oversights in there. A \t\nFreeDir(xldir); is missing at the end of copydir().\n\nAndres\n", "msg_date": "Tue, 29 Dec 2009 19:30:49 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "Looking at this patch for the commitfest I have a few questions.\n\n1) You said you added an fsync of the new directory -- where is that I\ndon't see it anywhere.\n\n2) Why does the second pass to do the fsyncs read through fromdir to\nfind all the filenames. I find that odd and counterintuitive. It would\nbe much more natural to just loop through the files in the new\ndirectory. But I suppose it serves as an added paranoia check that the\nfiles are in fact still there and we're not fsyncing any files we\ndidn't just copy. I think it should still work, we should have an\nexclusive lock on the template database so there really ought to be no\ndifferences between the directory trees.\n\n3) It would be tempting to do the posix_fadvise on each chunk as we\ncopy it. That way we avoid poisoning the filesystem cache even as far\nas a 1G file. This might actually be quite significant if we're built\nwithout the 1G file chunk size. I'm a bit concerned that the code will\nbe a big more complex having to depend on a good off_t definition\nthough. 
Do we only use >1GB files on systems where off_t is capable of\nhandling >2^32 without gymnastics?\n\n-- \ngreg\n", "msg_date": "Mon, 18 Jan 2010 16:35:59 +0000", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1 ubuntu karmic\n\tslow createdb)" }, { "msg_contents": "On Mon, Jan 18, 2010 at 4:35 PM, Greg Stark <[email protected]> wrote:\n> Looking at this patch for the commitfest I have a few questions.\n\nSo I've touched this patch up a bit:\n\n1) moved the posix_fadvise call to a new fd.c function\npg_fsync_start(fd,offset,nbytes) which initiates an fsync without\nwaiting on it. Currently it's only implemented with\nposix_fadvise(DONT_NEED) but I want to look into using sync_file_range\nin the future -- it looks like this call might be good enough for our\ncheckpoints.\n\n2) advised each 64k chunk as we write it which should avoid poisoning\nthe cache if you do a large create database on an active system.\n\n3) added the promised but afaict missing fsync of the directory -- i\nthink we should actually backpatch this.\n\nBarring any objections shall I commit it like this?\n\n\n-- \ngreg\n\n\n-- \ngreg", "msg_date": "Tue, 19 Jan 2010 14:52:25 +0000", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1 ubuntu karmic\n\tslow createdb)" }, { "msg_contents": "On Tue, Jan 19, 2010 at 2:52 PM, Greg Stark <[email protected]> wrote:\n> Barring any objections shall I commit it like this?\n\nActually before we get there could someone who demonstrated the\nspeedup verify that this patch still gets that same speedup?\n\n-- \ngreg\n", "msg_date": "Tue, 19 Jan 2010 14:57:14 +0000", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1 ubuntu karmic\n\tslow createdb)" }, { "msg_contents": "On Tuesday 19 January 2010 15:52:25 Greg Stark wrote:\n> On Mon, Jan 18, 2010 at 4:35 PM, Greg Stark <[email protected]> wrote:\n> > Looking at this patch for the commitfest I have a few questions.\n> \n> So I've touched this patch up a bit:\n> \n> 1) moved the posix_fadvise call to a new fd.c function\n> pg_fsync_start(fd,offset,nbytes) which initiates an fsync without\n> waiting on it. Currently it's only implemented with\n> posix_fadvise(DONT_NEED) but I want to look into using sync_file_range\n> in the future -- it looks like this call might be good enough for our\n> checkpoints.\n> \n> 2) advised each 64k chunk as we write it which should avoid poisoning\n> the cache if you do a large create database on an active system.\n> \n> 3) added the promised but afaict missing fsync of the directory -- i\n> think we should actually backpatch this.\nYes, that was a bit stupid from me - I added the fsync for directories which \nget recursed into (by not checking if its a file) but not for the uppermost \nlevel.\nSo all directories should get fsynced right now but the topmost one.\n\nI will review the patch later when I finally will have some time off again... 
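For illustration only -- this is not the patch itself, and every helper name
here is invented -- a plain-POSIX sketch of the structure summarized above
(copy in 64KB chunks, hint writeback after each chunk, and defer all fsyncs
to a second pass that also covers the directory) could look roughly like:

-------------
/*
 * Illustrative sketch (invented names; error handling and subdirectory
 * recursion left out).  Not the actual copydir.c change.
 */
#include <sys/types.h>
#include <dirent.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define COPY_CHUNK (64 * 1024)

static void
copy_one_file(const char *src, const char *dst)
{
	char	buf[COPY_CHUNK];
	int		in = open(src, O_RDONLY);
	int		out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0600);
	ssize_t	nread;
	off_t	off = 0;

	while ((nread = read(in, buf, sizeof(buf))) > 0)
	{
		write(out, buf, nread);
#ifdef POSIX_FADV_DONTNEED
		/* hint: start writing this chunk back now, don't keep it cached */
		posix_fadvise(out, off, nread, POSIX_FADV_DONTNEED);
#endif
		off += nread;
	}
	close(in);
	close(out);
}

static void
fsync_by_name(const char *path)
{
	int		fd = open(path, O_RDONLY);

	if (fd >= 0)
	{
		fsync(fd);
		close(fd);
	}
}

static void
copy_dir_delayed_fsync(const char *fromdir, const char *todir)
{
	DIR		   *dir;
	struct dirent *de;
	char		from[1024], to[1024];

	/* pass 1: copy everything, hinting writeback as we go */
	dir = opendir(fromdir);
	while ((de = readdir(dir)) != NULL)
	{
		if (de->d_name[0] == '.')
			continue;			/* skip dot entries */
		snprintf(from, sizeof(from), "%s/%s", fromdir, de->d_name);
		snprintf(to, sizeof(to), "%s/%s", todir, de->d_name);
		copy_one_file(from, to);
	}
	closedir(dir);

	/* pass 2: pay for durability once per file, then the directory itself */
	dir = opendir(todir);
	while ((de = readdir(dir)) != NULL)
	{
		if (de->d_name[0] == '.')
			continue;
		snprintf(to, sizeof(to), "%s/%s", todir, de->d_name);
		fsync_by_name(to);
	}
	closedir(dir);
	fsync_by_name(todir);		/* the new directory entries, too */
}
-------------

The hint only gets the kernel writing early; the second pass is still what
provides the durability guarantee, so a platform where the hint is a no-op
loses the overlap but not the safety.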
\n~4h.\n\nThanks!\n\nAndres\n", "msg_date": "Tue, 19 Jan 2010 16:03:16 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> 1) moved the posix_fadvise call to a new fd.c function\n> pg_fsync_start(fd,offset,nbytes) which initiates an fsync without\n> waiting on it. Currently it's only implemented with\n> posix_fadvise(DONT_NEED) but I want to look into using sync_file_range\n> in the future -- it looks like this call might be good enough for our\n> checkpoints.\n\nThat function *seriously* needs documentation, in particular the fact\nthat it's a no-op on machines without the right kernel call. The name\nyou've chosen is very bad for those semantics. I'd pick something\nelse myself. Maybe \"pg_start_data_flush\" or something like that?\n\nOther than that quibble it seems basically sane.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Jan 2010 10:25:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "Hi Greg,\n\nOn Monday 18 January 2010 17:35:59 Greg Stark wrote: \n> 2) Why does the second pass to do the fsyncs read through fromdir to\n> find all the filenames. I find that odd and counterintuitive. It would\n> be much more natural to just loop through the files in the new\n> directory. But I suppose it serves as an added paranoia check that the\n> files are in fact still there and we're not fsyncing any files we\n> didn't just copy. I think it should still work, we should have an\n> exclusive lock on the template database so there really ought to be no\n> differences between the directory trees.\nIf it weren't safe we would already have a big problem....\n\nAndres\n", "msg_date": "Wed, 20 Jan 2010 05:01:55 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1 ubuntu\n\tkarmic slow createdb)" }, { "msg_contents": "Hi Greg,\n\nOn Tuesday 19 January 2010 15:52:25 Greg Stark wrote:\n> On Mon, Jan 18, 2010 at 4:35 PM, Greg Stark <[email protected]> wrote:\n> > Looking at this patch for the commitfest I have a few questions.\n> \n> So I've touched this patch up a bit:\n> \n> 1) moved the posix_fadvise call to a new fd.c function\n> pg_fsync_start(fd,offset,nbytes) which initiates an fsync without\n> waiting on it. Currently it's only implemented with\n> posix_fadvise(DONT_NEED) but I want to look into using sync_file_range\n> in the future -- it looks like this call might be good enough for our\n> checkpoints.\nWhy exactly should that depend on fsync? Sure, thats where most of the pain \ncomes from now but avoiding that cache poisoning wouldnt hurt otherwise as \nwell.\n\nI would rather have it called pg_flush_cache_range or such...\n\n> 2) advised each 64k chunk as we write it which should avoid poisoning\n> the cache if you do a large create database on an active system.\n> \n> 3) added the promised but afaict missing fsync of the directory -- i\n> think we should actually backpatch this.\nI think as well. 
You need it during recursing as well though (where I had \nadded it) and not only for the final directory.\n\n> Barring any objections shall I commit it like this?\nOther than the two things above it looks fine to me.\n\nThanks,\n\nAndres\n", "msg_date": "Wed, 20 Jan 2010 05:02:17 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1 ubuntu karmic\n\tslow createdb)" }, { "msg_contents": "On Tuesday 19 January 2010 15:57:14 Greg Stark wrote:\n> On Tue, Jan 19, 2010 at 2:52 PM, Greg Stark <[email protected]> wrote:\n> > Barring any objections shall I commit it like this?\n> \n> Actually before we get there could someone who demonstrated the\n> speedup verify that this patch still gets that same speedup?\nAt least on the three machines I tested last time the result is still in the \nsame ballpark.\n\nAndres\n", "msg_date": "Wed, 20 Jan 2010 05:13:03 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "Greg Stark wrote:\n> On Tue, Jan 19, 2010 at 2:52 PM, Greg Stark <[email protected]> wrote:\n> \n>> Barring any objections shall I commit it like this?\n>> \n>\n> Actually before we get there could someone who demonstrated the\n> speedup verify that this patch still gets that same speedup?\n> \n\nI think the final version of this patch could use at least one more \nperformance checking report that it does something useful. We got a lot \nof data from Andres, but do we know that the improvements here hold for \nothers too? I can take a look at it later this week, I have some \ninterest in this area.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n\n\n\n\n\n\nGreg Stark wrote:\n\nOn Tue, Jan 19, 2010 at 2:52 PM, Greg Stark <[email protected]> wrote:\n \n\nBarring any objections shall I commit it like this?\n \n\n\nActually before we get there could someone who demonstrated the\nspeedup verify that this patch still gets that same speedup?\n \n\n\nI think the final version of this patch could use at least one more\nperformance checking report that it does something useful.  We got a\nlot of data from Andres, but do we know that the improvements here hold\nfor others too?  I can take a look at it later this week, I have some\ninterest in this area.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com", "msg_date": "Wed, 20 Jan 2010 00:21:07 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic \tslow createdb)" }, { "msg_contents": "Greg Stark wrote:\n> Actually before we get there could someone who demonstrated the\n> speedup verify that this patch still gets that same speedup?\n> \n\nLet's step back a second and get to the bottom of why some people are \nseeing this and others aren't. The original report here suggested this \nwas an ext4 issue. As I pointed out recently on the performance list, \nthe reason for that is likely that the working write-barrier support for \next4 means it's passing through the fsync to \"lying\" hard drives via a \nproper cache flush, which didn't happen on your typical ext3 install. 
\nGiven that, I'd expect I could see the same issue with ext3 given a \ndrive with its write cache turned off, so that the theory I started \ntrying to prove before seeing the patch operate.\n\nWhat I did was create a little test program that created 5 databases and \nthen dropped them:\n\n\\timing\ncreate database a;\ncreate database b;\ncreate database c;\ncreate database d;\ncreate database e;\ndrop database a;\ndrop database b;\ndrop database c;\ndrop database d;\ndrop database e;\n\n(All of the drop times were very close by the way; around 100ms, nothing \nparticularly interesting there)\n\nIf I have my system's boot drive (attached to the motherboard, not on \nthe caching controller) in its regular, lying mode with write cache on, \nthe creates take the following times:\n\nTime: 713.982 ms Time: 659.890 ms Time: 590.842 ms Time: 675.506 ms \nTime: 645.521 ms\n\nA second run gives similar results; seems quite repeatable for every \ntest I ran so I'll just show one run of each.\n\nIf I then turn off the write-cache on the drive:\n\n$ sudo hdparm -W 0 /dev/sdb\n\nAnd repeat, these times show up instead:\n\nTime: 6781.205 ms Time: 6805.271 ms Time: 6947.037 ms Time: 6938.644 \nms Time: 7346.838 ms\n\nSo there's the problem case reproduced, right on regular old ext3 and \nUbuntu Jaunty: around 7 seconds to create a database, not real impressive.\n\nApplying the last patch you attached, with the cache on, I see this:\n\nTime: 396.105 ms Time: 389.984 ms Time: 469.800 ms Time: 386.043 ms \nTime: 441.269 ms\n\nAnd if I then turn the write cache off, back to slow times, but much better:\n\nTime: 2162.687 ms Time: 2174.057 ms Time: 2215.785 ms Time: 2174.100 \nms Time: 2190.811 ms\n\nThat makes the average times I'm seeing on my server:\n\nHEAD Cached: 657 ms Uncached: 6964 ms\nPatched Cached: 417 ms Uncached: 2183 ms\n\nModest speedup even with a caching drive, and a huge speedup in the case \nwhen you have one with slow fsync. Looks to me that if you address \nTom's concern about documentation and function naming, comitting this \npatch will certainly deliver as promised on the performance side. Maybe \n2 seconds is still too long for some people, but it's at least a whole \nlot better.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.co\n\n", "msg_date": "Wed, 27 Jan 2010 02:21:44 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic \tslow createdb)" }, { "msg_contents": "On Tue, Jan 19, 2010 at 3:25 PM, Tom Lane <[email protected]> wrote:\n> That function *seriously* needs documentation, in particular the fact\n> that it's a no-op on machines without the right kernel call.  The name\n> you've chosen is very bad for those semantics.  I'd pick something\n> else myself.  Maybe \"pg_start_data_flush\" or something like that?\n>\n\nI would like to make one token argument in favour of the name I\npicked. If it doesn't convince I'll change it since we can always\nrevisit the API down the road.\n\nI envision having two function calls, pg_fsync_start() and\npg_fsync_finish(). The latter will wait until the data synced in the\nfirst call is actually synced. 
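A rough sketch of that two-call shape, and of how a caller such as copydir
might use it -- the names follow the ones above, but the bodies here are
throwaway placeholders, not a real implementation:

-------------
#include <sys/types.h>
#include <unistd.h>

/*
 * Hypothetical pair: the first call only tries to get writeback started and
 * is allowed to do nothing; the second is the one that actually waits and
 * guarantees durability.
 */
static void
pg_fsync_start(int fd, off_t offset, off_t nbytes)
{
	/* a platform-specific "begin writeback" hint would go here */
	(void) fd; (void) offset; (void) nbytes;
}

static int
pg_fsync_finish(int fd, off_t offset, off_t nbytes)
{
	(void) offset; (void) nbytes;
	return fsync(fd);			/* the actual durability point */
}

/*
 * Caller's view: start writeback on every file first, then collect the
 * waits, so the expensive flushes overlap instead of running one by one.
 */
static int
sync_files(const int *fds, int nfds)
{
	int		i, rc = 0;

	for (i = 0; i < nfds; i++)
		pg_fsync_start(fds[i], 0, 0);	/* 0/0 meaning "whole file" here */
	for (i = 0; i < nfds; i++)
		if (pg_fsync_finish(fds[i], 0, 0) != 0)
			rc = -1;
	return rc;
}
-------------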
The fall-back if there's no\nimplementation of this would be for fsync_start() to be a noop (or\nsomething unreliable like posix_fadvise) and fsync_finish() to just be\na regular fsync.\n\nI think we can accomplish this with sync_file_range() but I need to\nread up on how it actually works a bit more. In this case it doesn't\nmake a difference since when we call fsync_finish() it's going to be\nfor the entire file and nothing else will have been writing to these\nfiles. But for wal writing and checkpointing it might have very\ndifferent performance characteristics.\n\nThe big objection to this is that then we don't really have an api for\nFADV_DONT_NEED which is more about cache policy than about syncing to\ndisk. So for example a sequential scan might want to indicate that it\nisn't planning on reading the buffers it's churning through but\ndoesn't want to force them to be written sooner than otherwise and is\nnever going to call fsync_finish().\n\n\n\n-- \ngreg\n", "msg_date": "Fri, 29 Jan 2010 18:56:23 +0000", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1 ubuntu\n\tkarmic slow createdb)" }, { "msg_contents": "On Fri, Jan 29, 2010 at 1:56 PM, Greg Stark <[email protected]> wrote:\n> On Tue, Jan 19, 2010 at 3:25 PM, Tom Lane <[email protected]> wrote:\n>> That function *seriously* needs documentation, in particular the fact\n>> that it's a no-op on machines without the right kernel call.  The name\n>> you've chosen is very bad for those semantics.  I'd pick something\n>> else myself.  Maybe \"pg_start_data_flush\" or something like that?\n>>\n>\n> I would like to make one token argument in favour of the name I\n> picked. If it doesn't convince I'll change it since we can always\n> revisit the API down the road.\n>\n> I envision having two function calls, pg_fsync_start() and\n> pg_fsync_finish(). The latter will wait until the data synced in the\n> first call is actually synced. The fall-back if there's no\n> implementation of this would be for fsync_start() to be a noop (or\n> something unreliable like posix_fadvise) and fsync_finish() to just be\n> a regular fsync.\n>\n> I think we can accomplish this with sync_file_range() but I need to\n> read up on how it actually works a bit more. In this case it doesn't\n> make a difference since when we call fsync_finish() it's going to be\n> for the entire file and nothing else will have been writing to these\n> files. But for wal writing and checkpointing it might have very\n> different performance characteristics.\n>\n> The big objection to this is that then we don't really have an api for\n> FADV_DONT_NEED which is more about cache policy than about syncing to\n> disk. So for example a sequential scan might want to indicate that it\n> isn't planning on reading the buffers it's churning through but\n> doesn't want to force them to be written sooner than otherwise and is\n> never going to call fsync_finish().\n\nI took a look at this patch today and I agree with Tom that\npg_fsync_start() is a very confusing name. I don't know what the\nright name is, but this doesn't fsync so I don't think it shuld have\nfsync in the name. Maybe something like pg_advise_abandon() or\npg_abandon_cache(). The current name is really wishful thinking:\nyou're hoping that it will make the kernel start the fsync, but it\nmight not. 
I think pg_start_data_flush() is similarly optimistic.\n\n...Robert\n", "msg_date": "Tue, 2 Feb 2010 12:36:12 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "On Tuesday 02 February 2010 18:36:12 Robert Haas wrote:\n> On Fri, Jan 29, 2010 at 1:56 PM, Greg Stark <[email protected]> wrote:\n> > On Tue, Jan 19, 2010 at 3:25 PM, Tom Lane <[email protected]> wrote:\n> >> That function *seriously* needs documentation, in particular the fact\n> >> that it's a no-op on machines without the right kernel call. The name\n> >> you've chosen is very bad for those semantics. I'd pick something\n> >> else myself. Maybe \"pg_start_data_flush\" or something like that?\n> > \n> > I would like to make one token argument in favour of the name I\n> > picked. If it doesn't convince I'll change it since we can always\n> > revisit the API down the road.\n> > \n> > I envision having two function calls, pg_fsync_start() and\n> > pg_fsync_finish(). The latter will wait until the data synced in the\n> > first call is actually synced. The fall-back if there's no\n> > implementation of this would be for fsync_start() to be a noop (or\n> > something unreliable like posix_fadvise) and fsync_finish() to just be\n> > a regular fsync.\n> > \n> > I think we can accomplish this with sync_file_range() but I need to\n> > read up on how it actually works a bit more. In this case it doesn't\n> > make a difference since when we call fsync_finish() it's going to be\n> > for the entire file and nothing else will have been writing to these\n> > files. But for wal writing and checkpointing it might have very\n> > different performance characteristics.\n> > \n> > The big objection to this is that then we don't really have an api for\n> > FADV_DONT_NEED which is more about cache policy than about syncing to\n> > disk. So for example a sequential scan might want to indicate that it\n> > isn't planning on reading the buffers it's churning through but\n> > doesn't want to force them to be written sooner than otherwise and is\n> > never going to call fsync_finish().\n> \n> I took a look at this patch today and I agree with Tom that\n> pg_fsync_start() is a very confusing name. I don't know what the\n> right name is, but this doesn't fsync so I don't think it shuld have\n> fsync in the name. Maybe something like pg_advise_abandon() or\n> pg_abandon_cache(). The current name is really wishful thinking:\n> you're hoping that it will make the kernel start the fsync, but it\n> might not. I think pg_start_data_flush() is similarly optimistic.\nWhat about: pg_fsync_prepare(). That gives the reason why were doing that and \ndoesnt promise that it is actually doing an fsync.\nI dislike really having \"cache\" in the name, because the primary aim is not to \ndiscard the cache...\n\nAndres\n", "msg_date": "Tue, 2 Feb 2010 18:43:15 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On Tuesday 02 February 2010 18:36:12 Robert Haas wrote:\n>> I took a look at this patch today and I agree with Tom that\n>> pg_fsync_start() is a very confusing name. I don't know what the\n>> right name is, but this doesn't fsync so I don't think it shuld have\n>> fsync in the name. 
Maybe something like pg_advise_abandon() or\n>> pg_abandon_cache(). The current name is really wishful thinking:\n>> you're hoping that it will make the kernel start the fsync, but it\n>> might not. I think pg_start_data_flush() is similarly optimistic.\n\n> What about: pg_fsync_prepare().\n\nprepare_for_fsync()?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 02 Feb 2010 12:50:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "On Tue, Feb 2, 2010 at 12:50 PM, Tom Lane <[email protected]> wrote:\n> Andres Freund <[email protected]> writes:\n>> On Tuesday 02 February 2010 18:36:12 Robert Haas wrote:\n>>> I took a look at this patch today and I agree with Tom that\n>>> pg_fsync_start() is a very confusing name.  I don't know what the\n>>> right name is, but this doesn't fsync so I don't think it shuld have\n>>> fsync in the name.  Maybe something like pg_advise_abandon() or\n>>> pg_abandon_cache().  The current name is really wishful thinking:\n>>> you're hoping that it will make the kernel start the fsync, but it\n>>> might not.  I think pg_start_data_flush() is similarly optimistic.\n>\n>> What about: pg_fsync_prepare().\n>\n> prepare_for_fsync()?\n\nIt still seems mis-descriptive to me. Couldn't the same routine be\nused simply to abandon undirtied data that we no longer care about\ncaching?\n\n...Robert\n", "msg_date": "Tue, 2 Feb 2010 13:14:40 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "On Tuesday 02 February 2010 19:14:40 Robert Haas wrote:\n> On Tue, Feb 2, 2010 at 12:50 PM, Tom Lane <[email protected]> wrote:\n> > Andres Freund <[email protected]> writes:\n> >> On Tuesday 02 February 2010 18:36:12 Robert Haas wrote:\n> >>> I took a look at this patch today and I agree with Tom that\n> >>> pg_fsync_start() is a very confusing name. I don't know what the\n> >>> right name is, but this doesn't fsync so I don't think it shuld have\n> >>> fsync in the name. Maybe something like pg_advise_abandon() or\n> >>> pg_abandon_cache(). The current name is really wishful thinking:\n> >>> you're hoping that it will make the kernel start the fsync, but it\n> >>> might not. I think pg_start_data_flush() is similarly optimistic.\n> >> \n> >> What about: pg_fsync_prepare().\n> > \n> > prepare_for_fsync()?\n> \n> It still seems mis-descriptive to me. 
Couldn't the same routine be\n> used simply to abandon undirtied data that we no longer care about\n> caching?\nFor now it could - but it very well might be converted to sync_file_range or \nsimilar, which would have different \"sideeffects\".\n\nAs the potential code duplication is rather small I would prefer to describe \nthe prime effect not the sideeffects...\n\nAndres\n", "msg_date": "Tue, 2 Feb 2010 19:34:07 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "On Tue, Feb 2, 2010 at 1:34 PM, Andres Freund <[email protected]> wrote:\n> For now it could - but it very well might be converted to sync_file_range or\n> similar, which would have different \"sideeffects\".\n>\n> As the potential code duplication is rather small I would prefer to describe\n> the prime effect not the sideeffects...\n\nHmm, in that case, I think the problem is that this function has no\ncomment explaining its intended charter.\n\n...Robert\n", "msg_date": "Tue, 2 Feb 2010 14:06:32 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "On Tuesday 02 February 2010 20:06:32 Robert Haas wrote:\n> On Tue, Feb 2, 2010 at 1:34 PM, Andres Freund <[email protected]> wrote:\n> > For now it could - but it very well might be converted to sync_file_range\n> > or similar, which would have different \"sideeffects\".\n> > \n> > As the potential code duplication is rather small I would prefer to\n> > describe the prime effect not the sideeffects...\n> \n> Hmm, in that case, I think the problem is that this function has no\n> comment explaining its intended charter.\nI agree there. Greg, do you want to update the patch with some comments or \nshall I?\n\nAndres\n", "msg_date": "Tue, 2 Feb 2010 20:08:12 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> Hmm, in that case, I think the problem is that this function has no\n> comment explaining its intended charter.\n\nThat's certainly a big problem, but a comment won't fix the fact that\nthe name is misleading. We need both a comment and a name change.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 02 Feb 2010 14:33:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "On Tue, Feb 2, 2010 at 2:33 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> Hmm, in that case, I think the problem is that this function has no\n>> comment explaining its intended charter.\n>\n> That's certainly a big problem, but a comment won't fix the fact that\n> the name is misleading.  
We need both a comment and a name change.\n\nI think you're probably right, but it's not clear what the new name\nshould be until we have a comment explaining what the function is\nresponsible for.\n\n...Robert\n", "msg_date": "Tue, 2 Feb 2010 14:45:46 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "On Tue, Feb 2, 2010 at 7:45 PM, Robert Haas <[email protected]> wrote:\n> I think you're probably right, but it's not clear what the new name\n> should be until we have a comment explaining what the function is\n> responsible for.\n\nSo I wrote some comments but wasn't going to repost the patch with the\nunchanged name without explanation... But I think you're right though\nI was looking at it the other way around. I want to have an API for a\ntwo-stage sync and of course if I do that I'll comment it to explain\nthat clearly.\n\nThe gist of the comments was that the function is preparing to fsync\nto initiate the i/o early and allow the later fsync to fast -- but\nalso at the same time have the beneficial side-effect of avoiding\ncache poisoning. It's not clear that the two are necessarily linked\nthough. Perhaps we need two separate apis, though it'll be hard to\nkeep them separate on all platforms.\n\n-- \ngreg\n", "msg_date": "Wed, 3 Feb 2010 11:53:58 +0000", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "On 02/03/10 12:53, Greg Stark wrote:\n> On Tue, Feb 2, 2010 at 7:45 PM, Robert Haas<[email protected]> wrote:\n>> I think you're probably right, but it's not clear what the new name\n>> should be until we have a comment explaining what the function is\n>> responsible for.\n>\n> So I wrote some comments but wasn't going to repost the patch with the\n> unchanged name without explanation... But I think you're right though\n> I was looking at it the other way around. I want to have an API for a\n> two-stage sync and of course if I do that I'll comment it to explain\n> that clearly.\n>\n> The gist of the comments was that the function is preparing to fsync\n> to initiate the i/o early and allow the later fsync to fast -- but\n> also at the same time have the beneficial side-effect of avoiding\n> cache poisoning. It's not clear that the two are necessarily linked\n> though. 
Perhaps we need two separate apis, though it'll be hard to\n> keep them separate on all platforms.\nI vote for two seperate apis - sure, there will be some unfortunate \noverlap for most unixoid platforms but its sure better possibly to allow \nadding more platforms later at a centralized place than having to \nanalyze every place where the api is used.\n\nAndres\n", "msg_date": "Wed, 03 Feb 2010 13:03:04 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "On Wed, Feb 3, 2010 at 6:53 AM, Greg Stark <[email protected]> wrote:\n> On Tue, Feb 2, 2010 at 7:45 PM, Robert Haas <[email protected]> wrote:\n>> I think you're probably right, but it's not clear what the new name\n>> should be until we have a comment explaining what the function is\n>> responsible for.\n>\n> So I wrote some comments but wasn't going to repost the patch with the\n> unchanged name without explanation... But I think you're right though\n> I was looking at it the other way around. I want to have an API for a\n> two-stage sync and of course if I do that I'll comment it to explain\n> that clearly.\n>\n> The gist of the comments was that the function is preparing to fsync\n> to initiate the i/o early and allow the later fsync to fast -- but\n> also at the same time have the beneficial side-effect of avoiding\n> cache poisoning. It's not clear that the two are necessarily linked\n> though. Perhaps we need two separate apis, though it'll be hard to\n> keep them separate on all platforms.\n\nWell, maybe we should start with a discussion of what kernel calls\nyou're aware of on different platforms and then we could try to put an\nAPI around it. I mean, right now all you've got is\nPOSIX_FADV_DONTNEED, so given just that I feel like the API could\nsimply be pg_dontneed() or something. It's hard to design a general\nframework based on one example.\n\n...Robert\n", "msg_date": "Wed, 3 Feb 2010 08:42:57 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "On 02/03/10 14:42, Robert Haas wrote:\n> On Wed, Feb 3, 2010 at 6:53 AM, Greg Stark<[email protected]> wrote:\n>> On Tue, Feb 2, 2010 at 7:45 PM, Robert Haas<[email protected]> wrote:\n>>> I think you're probably right, but it's not clear what the new name\n>>> should be until we have a comment explaining what the function is\n>>> responsible for.\n>>\n>> So I wrote some comments but wasn't going to repost the patch with the\n>> unchanged name without explanation... But I think you're right though\n>> I was looking at it the other way around. I want to have an API for a\n>> two-stage sync and of course if I do that I'll comment it to explain\n>> that clearly.\n>>\n>> The gist of the comments was that the function is preparing to fsync\n>> to initiate the i/o early and allow the later fsync to fast -- but\n>> also at the same time have the beneficial side-effect of avoiding\n>> cache poisoning. It's not clear that the two are necessarily linked\n>> though. Perhaps we need two separate apis, though it'll be hard to\n>> keep them separate on all platforms.\n>\n> Well, maybe we should start with a discussion of what kernel calls\n> you're aware of on different platforms and then we could try to put an\n> API around it.\nIn linux there is sync_file_range. 
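For reference, a minimal Linux-only illustration of that call (the wrapper
name is made up, and this flushes file data only -- it is not a substitute
for fsyncing the metadata):

-------------
#define _GNU_SOURCE
#include <fcntl.h>

/* Ask the kernel to start writing a byte range back, without waiting. */
static int
start_range_writeback(int fd, off_t offset, off_t nbytes)
{
	/*
	 * SYNC_FILE_RANGE_WRITE initiates writeback of dirty pages in the range
	 * and returns immediately; OR-ing in SYNC_FILE_RANGE_WAIT_AFTER would
	 * make it block until that writeback completes.
	 */
	return sync_file_range(fd, offset, nbytes, SYNC_FILE_RANGE_WRITE);
}
-------------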
On newer Posixish systems one can \nemulate that with mmap() and msync() (in batches obviously).\n\nNo idea about windows.\n\nAndres\n", "msg_date": "Wed, 03 Feb 2010 15:19:49 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "Andres Freund wrote:\n> On 02/03/10 14:42, Robert Haas wrote:\n>> Well, maybe we should start with a discussion of what kernel calls\n>> you're aware of on different platforms and then we could try to put an\n>> API around it.\n> In linux there is sync_file_range. On newer Posixish systems one can \n> emulate that with mmap() and msync() (in batches obviously).\n>\n> No idea about windows.\n\nThere's a series of parameters you can pass into CreateFile: \nhttp://msdn.microsoft.com/en-us/library/aa363858(VS.85).aspx\n\nA lot of these are already mapped inside of src/port/open.c in a pretty \nstraightforward way from the POSIX-oriented interface:\n\nO_RDWR,O_WRONLY -> GENERIC_WRITE, GENERIC_READ\nO_RANDOM -> FILE_FLAG_RANDOM_ACCESS\nO_SEQUENTIAL -> FILE_FLAG_SEQUENTIAL_SCAN\nO_SHORT_LIVED -> FILE_ATTRIBUTE_TEMPORARY\nO_TEMPORARY -> FILE_FLAG_DELETE_ON_CLOSE\nO_DIRECT -> FILE_FLAG_NO_BUFFERING\nO_DSYNC -> FILE_FLAG_WRITE_THROUGH\n\nYou have to read the whole \"Caching Behavior\" section to see exactly how \nall of those interact, and even then notes like \nhttp://support.microsoft.com/kb/99794 are needed to follow the fine \npoints of things like FILE_FLAG_NO_BUFFERING vs. FILE_FLAG_WRITE_THROUGH.\n\nSo anything that's setting those POSIX open flags better than before is \ngetting the benefit of that improvement on Windows, too. But that's not \nquite the same as the changes using fadvise to provide better targeted \ncache control hints.\n\nI'm getting the impression that doing much better on Windows might fall \ninto the same sort of category as Solaris, where the primary interface \nfor this sort of thing is to use an AIO implementation instead: \nhttp://msdn.microsoft.com/en-us/library/aa365683(VS.85).aspx\n\nThe effective_io_concurrency feature had proof of concept test programs \nthat worked using AIO, but actually following through on that \nimplementation would require a major restructuring of how the database \ninteracts with the OS in terms of reads and writes of blocks. It looks \nto me like doing something similar to sync_file_range on Windows would \nbe similarly difficult.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n", "msg_date": "Sat, 06 Feb 2010 00:03:30 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "On Saturday 06 February 2010 06:03:30 Greg Smith wrote:\n> Andres Freund wrote:\n> > On 02/03/10 14:42, Robert Haas wrote:\n> >> Well, maybe we should start with a discussion of what kernel calls\n> >> you're aware of on different platforms and then we could try to put an\n> >> API around it.\n> > \n> > In linux there is sync_file_range. 
On newer Posixish systems one can\n> > emulate that with mmap() and msync() (in batches obviously).\n> > \n> > No idea about windows.\n> The effective_io_concurrency feature had proof of concept test programs\n> that worked using AIO, but actually following through on that\n> implementation would require a major restructuring of how the database\n> interacts with the OS in terms of reads and writes of blocks. It looks\n> to me like doing something similar to sync_file_range on Windows would\n> be similarly difficult.\nLooking a bit arround it seems one could achieve something approximediately \nsimilar to pg_prepare_fsync() by using\nCreateFileMapping && MapViewOfFile && FlushViewOfFile \n\nIf I understand it correctly that will flush, but not wait. Unfortunately you \ncant event make it wait, so its not possible to implement sync_file_range or \nsimilar fully.\n\nAndres\n", "msg_date": "Sat, 6 Feb 2010 13:03:50 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "On Sat, Feb 6, 2010 at 7:03 AM, Andres Freund <[email protected]> wrote:\n> On Saturday 06 February 2010 06:03:30 Greg Smith wrote:\n>> Andres Freund wrote:\n>> > On 02/03/10 14:42, Robert Haas wrote:\n>> >> Well, maybe we should start with a discussion of what kernel calls\n>> >> you're aware of on different platforms and then we could try to put an\n>> >> API around it.\n>> >\n>> > In linux there is sync_file_range. On newer Posixish systems one can\n>> > emulate that with mmap() and msync() (in batches obviously).\n>> >\n>> > No idea about windows.\n>> The effective_io_concurrency feature had proof of concept test programs\n>> that worked using AIO, but actually following through on that\n>> implementation would require a major restructuring of how the database\n>> interacts with the OS in terms of reads and writes of blocks.  It looks\n>> to me like doing something similar to sync_file_range on Windows would\n>> be similarly difficult.\n> Looking a bit arround it seems one could achieve something approximediately\n> similar to pg_prepare_fsync() by using\n> CreateFileMapping && MapViewOfFile && FlushViewOfFile\n>\n> If I understand it correctly that will flush, but not wait. Unfortunately you\n> cant event make it wait, so its not possible to implement sync_file_range or\n> similar fully.\n\nWell it seems that what we're trying to implement is more like\nit_would_be_nice_if_you_would_start_syncing_this_file_range_but_its_ok_if_you_dont(),\nso maybe that would work.\n\nAnyway, is there something that we can agree on and get committed here\nfor 9.0, or should we postpone this to 9.1? It seems simple enough\nthat we ought to be able to get it done, but we're running out of time\nand we don't seem to have a clear vision here yet...\n\n...Robert\n", "msg_date": "Sun, 7 Feb 2010 00:13:15 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "Robert Haas wrote:\n> Well it seems that what we're trying to implement is more like\n> it_would_be_nice_if_you_would_start_syncing_this_file_range_but_its_ok_if_you_dont(),\n> so maybe that would work.\n>\n> Anyway, is there something that we can agree on and get committed here\n> for 9.0, or should we postpone this to 9.1? 
It seems simple enough\n> that we ought to be able to get it done, but we're running out of time\n> and we don't seem to have a clear vision here yet...\n> \n\nThis is turning into yet another one of those situations where something \nsimple and useful is being killed by trying to generalize it way more \nthan it needs to be, given its current goals and its lack of external \ninterfaces. There's no catversion bump or API breakage to hinder future \nrefactoring if this isn't optimally designed internally from day one.\n\nThe feature is valuable and there seems at least one spot where it may \nbe resolving the possibility of a subtle OS interaction bug by being \nmore thorough in the way that it writes and syncs. The main contention \nseems to be over naming and completely optional additional abstraction. \nI consider the whole \"let's make this cover every type of complicated \nsync on every platform\" goal interesting and worthwhile, but it's \ncompletely optional for this release. The stuff being fretted over now \nis ultimately an internal interface that can be refactored at will in \nlater releases with no user impact.\n\nIf the goal here could be shifted back to finding the minimal level of \nabstraction that doesn't seem completely wrong, then updating the \nfunction names and comments to match that more closely, this could \nreturn to committable. That's all I thought was left to do when I moved \nit to \"ready for committer\", and as far as I've seen this expanded scope \nof discussion has just moved backwards from that point.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Sun, 07 Feb 2010 04:23:14 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "Greg Smith <[email protected]> writes:\n> This is turning into yet another one of those situations where something \n> simple and useful is being killed by trying to generalize it way more \n> than it needs to be, given its current goals and its lack of external \n> interfaces. There's no catversion bump or API breakage to hinder future \n> refactoring if this isn't optimally designed internally from day one.\n\nI agree that it's too late in the cycle for any major redesign of the\npatch. But is it too much to ask to use a less confusing name for the\nfunction?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 07 Feb 2010 11:24:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "On Sun, Feb 7, 2010 at 11:24 AM, Tom Lane <[email protected]> wrote:\n> Greg Smith <[email protected]> writes:\n>> This is turning into yet another one of those situations where something\n>> simple and useful is being killed by trying to generalize it way more\n>> than it needs to be, given its current goals and its lack of external\n>> interfaces.  There's no catversion bump or API breakage to hinder future\n>> refactoring if this isn't optimally designed internally from day one.\n>\n> I agree that it's too late in the cycle for any major redesign of the\n> patch.  But is it too much to ask to use a less confusing name for the\n> function?\n\n+1. 
Let's just rename the thing, add some comments, and call it good.\n\n...Robert\n", "msg_date": "Sun, 7 Feb 2010 13:23:10 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "On Sunday 07 February 2010 19:23:10 Robert Haas wrote:\n> On Sun, Feb 7, 2010 at 11:24 AM, Tom Lane <[email protected]> wrote:\n> > Greg Smith <[email protected]> writes:\n> >> This is turning into yet another one of those situations where something\n> >> simple and useful is being killed by trying to generalize it way more\n> >> than it needs to be, given its current goals and its lack of external\n> >> interfaces. There's no catversion bump or API breakage to hinder future\n> >> refactoring if this isn't optimally designed internally from day one.\n> > \n> > I agree that it's too late in the cycle for any major redesign of the\n> > patch. But is it too much to ask to use a less confusing name for the\n> > function?\n> \n> +1. Let's just rename the thing, add some comments, and call it good.\nWill post a updated patch in the next hours unless somebody beats me too it.\n\nAndres\n", "msg_date": "Sun, 7 Feb 2010 19:27:02 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "On Sunday 07 February 2010 19:27:02 Andres Freund wrote:\n> On Sunday 07 February 2010 19:23:10 Robert Haas wrote:\n> > On Sun, Feb 7, 2010 at 11:24 AM, Tom Lane <[email protected]> wrote:\n> > > Greg Smith <[email protected]> writes:\n> > >> This is turning into yet another one of those situations where\n> > >> something simple and useful is being killed by trying to generalize\n> > >> it way more than it needs to be, given its current goals and its lack\n> > >> of external interfaces. There's no catversion bump or API breakage\n> > >> to hinder future refactoring if this isn't optimally designed\n> > >> internally from day one.\n> > > \n> > > I agree that it's too late in the cycle for any major redesign of the\n> > > patch. But is it too much to ask to use a less confusing name for the\n> > > function?\n> > \n> > +1. Let's just rename the thing, add some comments, and call it good.\n> \n> Will post a updated patch in the next hours unless somebody beats me too\n> it.\nHere we go.\n\nI left the name at my suggestion pg_fsync_prepare instead of Tom's \nprepare_for_fsync because it seemed more consistend with the naming in the \nrest of the file. 
Obviously feel free to adjust.\n\nI personally think the fsync on the directory should be added to the stable \nbranches - other opinions?\nIf wanted I can prepare patches for that.\n\nAndres", "msg_date": "Mon, 8 Feb 2010 02:31:42 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "Andres Freund escribi�:\n\n> I personally think the fsync on the directory should be added to the stable \n> branches - other opinions?\n> If wanted I can prepare patches for that.\n\nYeah, it seems there are two patches here -- one is the addition of\nfsync_fname() and the other is the fsync_prepare stuff.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Mon, 8 Feb 2010 00:09:01 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Faster CREATE DATABASE by delaying fsync (was\n\t8.4.1 ubuntu karmic slow createdb)" }, { "msg_contents": "On Sun, Feb 7, 2010 at 10:09 PM, Alvaro Herrera\n<[email protected]> wrote:\n> Andres Freund escribió:\n>> I personally think the fsync on the directory should be added to the stable\n>> branches - other opinions?\n>> If wanted I can prepare patches for that.\n>\n> Yeah, it seems there are two patches here -- one is the addition of\n> fsync_fname() and the other is the fsync_prepare stuff.\n\nAndres, you want to take a crack at splitting this up?\n\n...Robert\n", "msg_date": "Sun, 7 Feb 2010 23:53:23 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "On Monday 08 February 2010 05:53:23 Robert Haas wrote:\n> On Sun, Feb 7, 2010 at 10:09 PM, Alvaro Herrera\n> \n> <[email protected]> wrote:\n> > Andres Freund escribió:\n> >> I personally think the fsync on the directory should be added to the\n> >> stable branches - other opinions?\n> >> If wanted I can prepare patches for that.\n> > \n> > Yeah, it seems there are two patches here -- one is the addition of\n> > fsync_fname() and the other is the fsync_prepare stuff.\n> \n> Andres, you want to take a crack at splitting this up?\nWill do. Later today or tomorrow morning.\n\nAndres\n", "msg_date": "Mon, 8 Feb 2010 08:13:41 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "On Mon, Feb 8, 2010 at 4:53 AM, Robert Haas <[email protected]> wrote:\n> On Sun, Feb 7, 2010 at 10:09 PM, Alvaro Herrera\n>> Yeah, it seems there are two patches here -- one is the addition of\n>> fsync_fname() and the other is the fsync_prepare stuff.\n\nSorry, I'm just catching up on my mail from FOSDEM this past weekend.\n\nI had come to the same conclusion as Greg that I might as well just\ncommit it with Tom's \"pg_flush_data()\" name and we can decide later if\nand when we have pg_fsync_start()/pg_fsync_finish() whether it's worth\nkeeping two apis or not.\n\nSo I was just going to commit it like that but I discovered last week\nthat I don't have cvs write access set up yet. I'll commit it as soon\nas I generate a new ssh key and Dave installs it, etc. 
I intentionally\npicked a small simple patch that nobody was waiting on because I knew\nthere was a risk of delays like this and the paperwork. I'm nearly\nthere.\n\n-- \ngreg\n", "msg_date": "Mon, 8 Feb 2010 18:34:01 +0000", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "On Monday 08 February 2010 19:34:01 Greg Stark wrote:\n> On Mon, Feb 8, 2010 at 4:53 AM, Robert Haas <[email protected]> wrote:\n> > On Sun, Feb 7, 2010 at 10:09 PM, Alvaro Herrera\n> > \n> >> Yeah, it seems there are two patches here -- one is the addition of\n> >> fsync_fname() and the other is the fsync_prepare stuff.\n> \n> Sorry, I'm just catching up on my mail from FOSDEM this past weekend.\n> \n> I had come to the same conclusion as Greg that I might as well just\n> commit it with Tom's \"pg_flush_data()\" name and we can decide later if\n> and when we have pg_fsync_start()/pg_fsync_finish() whether it's worth\n> keeping two apis or not.\n> \n> So I was just going to commit it like that but I discovered last week\n> that I don't have cvs write access set up yet. I'll commit it as soon\n> as I generate a new ssh key and Dave installs it, etc. I intentionally\n> picked a small simple patch that nobody was waiting on because I knew\n> there was a risk of delays like this and the paperwork. I'm nearly\n> there.\nDo you still want me to split the patches into two or do you want to do it \nyourself?\nOne in multiple versions for the directory fsync and another one for 9.0?\n\nAndres\n", "msg_date": "Mon, 8 Feb 2010 20:29:46 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "On Monday 08 February 2010 05:53:23 Robert Haas wrote:\n> On Sun, Feb 7, 2010 at 10:09 PM, Alvaro Herrera\n> \n> <[email protected]> wrote:\n> > Andres Freund escribió:\n> >> I personally think the fsync on the directory should be added to the\n> >> stable branches - other opinions?\n> >> If wanted I can prepare patches for that.\n> > \n> > Yeah, it seems there are two patches here -- one is the addition of\n> > fsync_fname() and the other is the fsync_prepare stuff.\n> \n> Andres, you want to take a crack at splitting this up?\nI hope I didnt duplicate Gregs work, but I didnt hear back from him, so...\n\nEverything <8.1 is hopeless because cp is used there... I didnt see it worth \nto replace that. 
The patch applies cleanly for 8.1 to 8.4 and survives the \nregression tests\n\nGiven pg's heavy commit model I didnt see a point to split the patch for 9.0 \nas well...\n\nAndres", "msg_date": "Thu, 11 Feb 2010 03:27:30 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "On Wed, Feb 10, 2010 at 9:27 PM, Andres Freund <[email protected]> wrote:\n> On Monday 08 February 2010 05:53:23 Robert Haas wrote:\n>> On Sun, Feb 7, 2010 at 10:09 PM, Alvaro Herrera\n>>\n>> <[email protected]> wrote:\n>> > Andres Freund escribió:\n>> >> I personally think the fsync on the directory should be added to the\n>> >> stable branches - other opinions?\n>> >> If wanted I can prepare patches for that.\n>> >\n>> > Yeah, it seems there are two patches here -- one is the addition of\n>> > fsync_fname() and the other is the fsync_prepare stuff.\n>>\n>> Andres, you want to take a crack at splitting this up?\n> I hope I didnt duplicate Gregs work, but I didnt hear back from him, so...\n>\n> Everything <8.1 is hopeless because cp is used there... I didnt see it worth\n> to replace that. The patch applies cleanly for 8.1 to 8.4 and survives the\n> regression tests\n>\n> Given pg's heavy commit model I didnt see a point to split the patch for 9.0\n> as well...\n\nI'd probably argue for committing this patch to both HEAD and the\nback-branches, and doing a second commit with the remaining stuff for\nHEAD only, but I don't care very much.\n\nGreg Stark, have you managed to get your access issues sorted out? If\nyou like, I can do the actual commit on this one.\n\n...Robert\n", "msg_date": "Fri, 12 Feb 2010 10:49:16 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "On Fri, Feb 12, 2010 at 3:49 PM, Robert Haas <[email protected]> wrote:\n> Greg Stark, have you managed to get your access issues sorted out?  If\n\nYep, will look at this today.\n\n\n-- \ngreg\n", "msg_date": "Sun, 14 Feb 2010 14:03:44 +0000", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "On Sun, Feb 14, 2010 at 2:03 PM, Greg Stark <[email protected]> wrote:\n> On Fri, Feb 12, 2010 at 3:49 PM, Robert Haas <[email protected]> wrote:\n>> Greg Stark, have you managed to get your access issues sorted out?  If\n>\n> Yep, will look at this today.\n\nSo I think we have a bigger problem than just copydir.c. It seems to\nme we should be fsyncing the table space data directories on every\ncheckpoint. Otherwise any newly created relations or removed relations\ncould disappear even though the data in them was fsynced. I'm thinking\nI should add an _mdfd_opentblspc(reln) call which returns a file\ndescriptor for the tablespace and have mdsync() use that to sync the\ndirectory whenever it fsyncs a relation. 
It would be nice to remember\nwhich tablespaces have been fsynced and only fsync them once though,\nthat would need another hash table just for tablespaces.\n\nWe probably also need to fsync the pg_xlog directory every time we\ncreate or rename an xlog segment.\n\nAre there any other places we do directory operations which we need to\nbe permanent?\n\n\n-- \ngreg\n", "msg_date": "Sun, 14 Feb 2010 15:31:58 +0000", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> So I think we have a bigger problem than just copydir.c. It seems to\n> me we should be fsyncing the table space data directories on every\n> checkpoint.\n\nIs there any evidence that anyone anywhere has ever lost data because\nof a lack of directory fsyncs? I sure don't recall any bug reports\nthat seem to match that theory.\n\nIt seems to me that we're talking about a huge hit in both code\ncomplexity and performance to deal with a problem that doesn't actually\noccur in the field; and which furthermore is trivially solved on any\nmodern filesystem by choosing the right filesystem options. Why don't\nwe just document those options, instead?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 14 Feb 2010 12:11:39 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1 ubuntu\n\tkarmic slow createdb)" }, { "msg_contents": "On Sunday 14 February 2010 18:11:39 Tom Lane wrote:\n> Greg Stark <[email protected]> writes:\n> > So I think we have a bigger problem than just copydir.c. It seems to\n> > me we should be fsyncing the table space data directories on every\n> > checkpoint.\n> \n> Is there any evidence that anyone anywhere has ever lost data because\n> of a lack of directory fsyncs? I sure don't recall any bug reports\n> that seem to match that theory.\nI have actually seen the issue during create database at least. In a \nvirtualized hw though...\n~1GB template database, lots and lots of small tables, the crash occured maybe \na minute after CREATE DB, filesystem was xfs, kernel 2.6.30.y.\n \n> It seems to me that we're talking about a huge hit in both code\n> complexity and performance to deal with a problem that doesn't actually\n> occur in the field; and which furthermore is trivially solved on any\n> modern filesystem by choosing the right filesystem options. Why don't\n> we just document those options, instead?\nWhich options would that be? I am not aware that there any for any of the \nrecent linux filesystems.\nWell, except \"sync\" that is, but that sure would be more of a performance hit \nthan fsyncing the directory...\n\nAndres\n", "msg_date": "Sun, 14 Feb 2010 18:27:00 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1 ubuntu\n\tkarmic slow createdb)" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> On Sunday 14 February 2010 18:11:39 Tom Lane wrote:\n>> It seems to me that we're talking about a huge hit in both code\n>> complexity and performance to deal with a problem that doesn't actually\n>> occur in the field; and which furthermore is trivially solved on any\n>> modern filesystem by choosing the right filesystem options. Why don't\n>> we just document those options, instead?\n\n> Which options would that be? 
I am not aware that there any for any of the \n> recent linux filesystems.\n\nShouldn't journaling of metadata be sufficient?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 14 Feb 2010 12:37:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1 ubuntu\n\tkarmic slow createdb)" }, { "msg_contents": "* Tom Lane:\n\n>> Which options would that be? I am not aware that there any for any of the \n>> recent linux filesystems.\n>\n> Shouldn't journaling of metadata be sufficient?\n\nYou also need to enforce ordering between the directory update and the\nfile update. The file metadata is flushed with fsync(), but the\ndirectory isn't. On some systems, all directory operations are\nsynchronous, but not on Linux.\n", "msg_date": "Sun, 14 Feb 2010 21:24:24 +0100", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Faster CREATE DATABASE by delaying fsync" }, { "msg_contents": "On 02/14/2010 03:24 PM, Florian Weimer wrote:\n> * Tom Lane:\n> \n>>> Which options would that be? I am not aware that there any for any of the\n>>> recent linux filesystems.\n>>> \n>> Shouldn't journaling of metadata be sufficient?\n>> \n> You also need to enforce ordering between the directory update and the\n> file update. The file metadata is flushed with fsync(), but the\n> directory isn't. On some systems, all directory operations are\n> synchronous, but not on Linux.\n> \n\n dirsync\n All directory updates within the filesystem should be \ndone syn-\n chronously. This affects the following system calls: \ncreat,\n link, unlink, symlink, mkdir, rmdir, mknod and rename.\n\nThe widely reported problems, though, did not tend to be a problem with \ndirectory changes written too late - but directory changes being written \ntoo early. That is, the directory change is written to disk, but the \nfile content is not. This is likely because of the \"ordered journal\" \nmode widely used in ext3/ext4 where metadata changes are journalled, but \nfile pages are not journalled. Therefore, it is important for some \noperations, that the file pages are pushed to disk using fsync(file), \nbefore the metadata changes are journalled.\n\nIn theory there is some open hole where directory updates need to be \nsynchronized with file updates, as POSIX doesn't enforce this ordering, \nand we can't trust that all file systems implicitly order things \ncorrectly, but in practice, I don't see this sort of problem happening.\n\nIf you are concerned, enable dirsync.\n\nCheers,\nmark\n\n-- \nMark Mielke<[email protected]>\n\n", "msg_date": "Sun, 14 Feb 2010 15:41:02 -0500", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Faster CREATE DATABASE by delaying fsync" }, { "msg_contents": "On Sunday 14 February 2010 21:41:02 Mark Mielke wrote:\n> On 02/14/2010 03:24 PM, Florian Weimer wrote:\n> > * Tom Lane:\n> >>> Which options would that be? I am not aware that there any for any of\n> >>> the recent linux filesystems.\n> >> \n> >> Shouldn't journaling of metadata be sufficient?\n> > \n> > You also need to enforce ordering between the directory update and the\n> > file update. The file metadata is flushed with fsync(), but the\n> > directory isn't. On some systems, all directory operations are\n> > synchronous, but not on Linux.\n> \n> dirsync\n> All directory updates within the filesystem should be\n> done syn-\n> chronously. 
This affects the following system calls:\n> creat,\n> link, unlink, symlink, mkdir, rmdir, mknod and rename.\n> \n> The widely reported problems, though, did not tend to be a problem with\n> directory changes written too late - but directory changes being written\n> too early. That is, the directory change is written to disk, but the\n> file content is not. This is likely because of the \"ordered journal\"\n> mode widely used in ext3/ext4 where metadata changes are journalled, but\n> file pages are not journalled. Therefore, it is important for some\n> operations, that the file pages are pushed to disk using fsync(file),\n> before the metadata changes are journalled.\nWell, but thats not a problem with pg as it fsyncs the file contents.\n\n> In theory there is some open hole where directory updates need to be\n> synchronized with file updates, as POSIX doesn't enforce this ordering,\n> and we can't trust that all file systems implicitly order things\n> correctly, but in practice, I don't see this sort of problem happening.\nI can try to reproduce it if you want...\n\n> If you are concerned, enable dirsync.\nIf the filesystem already behaves that way a fsync on it should be fairly \ncheap. If it doesnt behave that way doing it is correct...\n\nBesides there is no reason to fsync the directory before the checkpoint, so \ndirsync would require a higher cost than doing it correctly.\n\nAndres\n", "msg_date": "Sun, 14 Feb 2010 21:49:09 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Faster CREATE DATABASE by delaying fsync" }, { "msg_contents": "On Sun, Feb 14, 2010 at 10:31 AM, Greg Stark <[email protected]> wrote:\n> On Sun, Feb 14, 2010 at 2:03 PM, Greg Stark <[email protected]> wrote:\n>> On Fri, Feb 12, 2010 at 3:49 PM, Robert Haas <[email protected]> wrote:\n>>> Greg Stark, have you managed to get your access issues sorted out?  If\n>>\n>> Yep, will look at this today.\n>\n> So I think we have a bigger problem than just copydir.c. It seems to\n> me we should be fsyncing the table space data directories on every\n> checkpoint. Otherwise any newly created relations or removed relations\n> could disappear even though the data in them was fsynced. I'm thinking\n> I should add an _mdfd_opentblspc(reln) call which returns a file\n> descriptor for the tablespace and have mdsync() use that to sync the\n> directory whenever it fsyncs a relation. It would be nice to remember\n> which tablespaces have been fsynced and only fsync them once though,\n> that would need another hash table just for tablespaces.\n>\n> We probably also need to fsync the pg_xlog directory every time we\n> create or rename an xlog segment.\n>\n> Are there any other places we do directory operations which we need to\n> be permanent?\n\nI agree with Tom that we need to see some actual reproducible test\ncases where this is an issue before we go too crazy with it. In\ntheory what you're talking about could also happen when extending a\nrelation, if we extend into a new file; but I think we need to\nconvince ourselves that it really happens before we make any more\nchanges.\n\nOn a pragmatic note, if this does turn out to be a problem, it's a\nbug: and we can and do fix bugs whenever we discover them. But the\nother part of this patch - to speed up createdb - is a feature - and\nwe are very rapidly running out of time for 9.0 features. 
So I'd like\nto vote for getting the feature part of this committed (assuming it's\nin good shape, of course) and we can continue to investigate the other\nissues but without quite as much urgency.\n\n...Robert\n", "msg_date": "Sun, 14 Feb 2010 15:57:08 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "On Sunday 14 February 2010 21:57:08 Robert Haas wrote:\n> On Sun, Feb 14, 2010 at 10:31 AM, Greg Stark <[email protected]> wrote:\n> > On Sun, Feb 14, 2010 at 2:03 PM, Greg Stark <[email protected]> wrote:\n> >> On Fri, Feb 12, 2010 at 3:49 PM, Robert Haas <[email protected]> \nwrote:\n> >>> Greg Stark, have you managed to get your access issues sorted out? If\n> >> \n> >> Yep, will look at this today.\n> > \n> > So I think we have a bigger problem than just copydir.c. It seems to\n> > me we should be fsyncing the table space data directories on every\n> > checkpoint. Otherwise any newly created relations or removed relations\n> > could disappear even though the data in them was fsynced. I'm thinking\n> > I should add an _mdfd_opentblspc(reln) call which returns a file\n> > descriptor for the tablespace and have mdsync() use that to sync the\n> > directory whenever it fsyncs a relation. It would be nice to remember\n> > which tablespaces have been fsynced and only fsync them once though,\n> > that would need another hash table just for tablespaces.\n> > \n> > We probably also need to fsync the pg_xlog directory every time we\n> > create or rename an xlog segment.\n> > \n> > Are there any other places we do directory operations which we need to\n> > be permanent?\n> \n> I agree with Tom that we need to see some actual reproducible test\n> cases where this is an issue before we go too crazy with it. In\n> theory what you're talking about could also happen when extending a\n> relation, if we extend into a new file; but I think we need to\n> convince ourselves that it really happens before we make any more\n> changes.\nOk, will try to reproduce.\n\n> On a pragmatic note, if this does turn out to be a problem, it's a\n> bug: and we can and do fix bugs whenever we discover them. But the\n> other part of this patch - to speed up createdb - is a feature - and\n> we are very rapidly running out of time for 9.0 features. So I'd like\n> to vote for getting the feature part of this committed (assuming it's\n> in good shape, of course) and we can continue to investigate the other\n> issues but without quite as much urgency.\nSound sensible.\n\nAndres\n", "msg_date": "Sun, 14 Feb 2010 22:43:23 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1 ubuntu\n\tkarmic slow createdb)" }, { "msg_contents": "On Sun, Feb 14, 2010 at 8:57 PM, Robert Haas <[email protected]> wrote:\n> On a pragmatic note, if this does turn out to be a problem, it's a\n> bug: and we can and do fix bugs whenever we discover them.  But the\n> other part of this patch - to speed up createdb - is a feature - and\n> we are very rapidly running out of time for 9.0 features.  So I'd like\n> to vote for getting the feature part of this committed (assuming it's\n> in good shape, of course) and we can continue to investigate the other\n> issues but without quite as much urgency.\n\nNo problem, I already committed the part that overlaps so I can commit\nthe rest now. 
I just want to take extra care given how much wine I've\nalready had tonight...\n\nIncidentally, sorry Andres, I forgot to credit you in the first commit.\n-- \ngreg\n", "msg_date": "Sun, 14 Feb 2010 23:33:54 +0000", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Faster CREATE DATABASE by delaying fsync (was 8.4.1\n\tubuntu karmic slow createdb)" }, { "msg_contents": "On 02/14/2010 03:49 PM, Andres Freund wrote:\n> On Sunday 14 February 2010 21:41:02 Mark Mielke wrote:\n> \n>> The widely reported problems, though, did not tend to be a problem with\n>> directory changes written too late - but directory changes being written\n>> too early. That is, the directory change is written to disk, but the\n>> file content is not. This is likely because of the \"ordered journal\"\n>> mode widely used in ext3/ext4 where metadata changes are journalled, but\n>> file pages are not journalled. Therefore, it is important for some\n>> operations, that the file pages are pushed to disk using fsync(file),\n>> before the metadata changes are journalled.\n>> \n> Well, but thats not a problem with pg as it fsyncs the file contents.\n> \n\nExactly. Not a problem.\n\n>> If you are concerned, enable dirsync.\n>> \n> If the filesystem already behaves that way a fsync on it should be fairly\n> cheap. If it doesnt behave that way doing it is correct...\n> \n\nWell, I disagree, as the whole point of this thread is that fsync() is \n*not* cheap. :-)\n\n> Besides there is no reason to fsync the directory before the checkpoint, so\n> dirsync would require a higher cost than doing it correctly.\n> \n\nUsing \"ordered\" metadata journaling has approximately the same effect. \nProvided that the data is fsync()'d before the metadata is required, \neither the metadata is recorded in the journal, in which case the data \nis accessible, or the metadata is NOT recorded in the journal, in which \ncase, the files will appear missing. The races that theoretically exist \nwould be in situations where the data of one file references a separate \nfile that does not yet exist.\n\nYou said you would try and reproduce - are you going to try and \nreproduce on ext3/ext4 with ordered journalling enabled? I think \nreproducing outside of a case such as CREATE DATABASE would be \ndifficult. It would have to be something like:\n\n open(O_CREAT)/write()/fsync()/close() of new data file, where data \ngets written, but directory data is not yet written out to journal\n open()/.../write()/fsync()/close() of existing file to point to new \ndata file, but directory data is still not yet written out to journal\n crash\n\nIn this case, \"dirsync\" should be effective at closing this hole.\n\nAs for cost? Well, most PostgreSQL data is stored within file content, \nnot directory metadata. I think \"dirsync\" might slow down some \noperations like CREATE DATABASE or \"rm -fr\", but I would not expect it \nto effect day-to-day performance of the database under real load. Many \noperating systems enable the equivalent of \"dirsync\" by default. 
I \nbelieve Solaris does this, for example, and other than slowing down \"rm \n-fr\", I don't recall any real complaints about the cost of \"dirsync\".\n\nAfter writing the above, I'm seriously considering adding \"dirsync\" to \nmy /db mounts that hold PostgreSQL and MySQL data.\n\nCheers,\nmark\n\n-- \nMark Mielke<[email protected]>\n\n", "msg_date": "Sun, 14 Feb 2010 19:08:10 -0500", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Faster CREATE DATABASE by delaying fsync" } ]
[ { "msg_contents": "Problem: Function call typically takes 2-3 millisecond but at times exceeding 2-3 Minutes. \n===================================================== \n>From the DB logs it seems when multiple processes are trying to execute the function , \nexecution takes sequentially rather than parallel, which means Nth thread will have to wait for (N-1)*ExecutionTime before getting its turn \n\nIs my observation correct? If yes then what is the solution for this? If not where/how to find the exact cause of the above problem? \n===================================================== \nDB Version: 8.2 \nFunction Details: \n--returns numeric, takes 10 parameters \n--select query to validate data \n--row level lock for select and validate \n--bare minimum business logic \n--update data \n--couple of inserts for transaction logs/account management \n--also note that few of the tables have audit triggers causing the row to be inserted in audit table with the action (only Update/Insert/Delete) \n\n\nProblem: Function call typically takes 2-3 millisecond but at times exceeding 2-3 Minutes. =====================================================From the DB logs it seems when multiple processes are trying to execute the function,execution takes sequentially rather than parallel, which means Nth thread will have to wait for (N-1)*ExecutionTime before getting its turnIs my observation correct? If yes then what is the solution for this? If not where/how to find the exact cause of the above problem?=====================================================DB Version: 8.2Function Details:--returns numeric, takes 10 parameters--select query to validate data--row level lock for select and validate--bare minimum business logic--update data--couple of inserts for transaction logs/account management--also note that few of the tables have audit triggers causing the row to be inserted in audit table with the action (only Update/Insert/Delete)", "msg_date": "Wed, 16 Dec 2009 12:16:36 +0530 (IST)", "msg_from": "Vishal Gupta <[email protected]>", "msg_from_op": true, "msg_subject": "Parallel Function calls using multiple processes" }, { "msg_contents": "2009/12/16 Vishal Gupta <[email protected]>:\n> Problem: Function call typically takes 2-3 millisecond but at times\n> exceeding 2-3 Minutes.\n> =====================================================\n> From the DB logs it seems when multiple processes are trying to execute the\n> function,\n> execution takes sequentially rather than parallel, which means Nth thread\n> will have to wait for (N-1)*ExecutionTime before getting its turn\n\nit's depend - if there are some locks then yes.\n\nbut reason could be a slow query inside procedure too - look on\npg_stat_activity table, if there are processes waiting for lock.\n\nsee http://old.nabble.com/Query-is-slow-when-executing-in-procedure-td26490782.html\n\nRegards\nPavel Stehule\n\n\n>\n> Is my observation correct? If yes then what is the solution for this? 
If not\n> where/how to find the exact cause of the above problem?\n> =====================================================\n> DB Version: 8.2\n> Function Details:\n> --returns numeric, takes 10 parameters\n> --select query to validate data\n> --row level lock for select and validate\n> --bare minimum business logic\n> --update data\n> --couple of inserts for transaction logs/account management\n> --also note that few of the tables have audit triggers causing the row to be\n> inserted in audit table with the action (only Update/Insert/Delete)\n>\n>\n", "msg_date": "Wed, 16 Dec 2009 08:34:48 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel Function calls using multiple processes" } ]
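The function under discussion is never actually posted, so here is a minimal sketch of the pattern being described — a plpgsql function that validates and locks a single row by primary key with SELECT ... FOR UPDATE. All function, table and column names below are invented for illustration; the point is that with this shape only concurrent calls for the same key should queue on the row lock, while calls for different keys can run in parallel.

CREATE OR REPLACE FUNCTION debit_account(p_id bigint, p_amount numeric)
RETURNS numeric AS $$
DECLARE
    v_balance numeric;
BEGIN
    -- validate and take a row-level lock on one row only (no table lock)
    SELECT balance INTO v_balance
      FROM account
     WHERE id = p_id
       FOR UPDATE;

    IF v_balance IS NULL OR v_balance < p_amount THEN
        RAISE EXCEPTION 'validation failed for account %', p_id;
    END IF;

    -- bare minimum business logic, then update and write the transaction log
    UPDATE account SET balance = balance - p_amount WHERE id = p_id;
    INSERT INTO transaction_log (account_id, amount) VALUES (p_id, -p_amount);

    RETURN v_balance - p_amount;
END;
$$ LANGUAGE plpgsql;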
[ { "msg_contents": "Dear Pavel, \n\nThanks for quick response. \n\nYes I am using explicit locks but only at row-level, not at table level, will this cause sequential execution? \nselect ...on primary key..... for update; \nMy objective is to execute this function in parallel for different PK entities, for same primary key value, sequential is fine. \n\nAlso I have noted that though the function execution after the code calls is taking more than 2-3 minutes but , \nwhen I have put in notice statements for start/end of function it has only taken a second in pg_log. \nCode is definitely not the problem here, as earlier function logic was getting executed as multiple queries \nfrom the code and were working fine for even higher load before I moved to function for the same. This problem started after moving to function only. \n\nI doubt that query is the reason for slowness, and it seems to be more related to database load at times, because it happens 3-4 times out of every 4000 transactions. \npg_stat doesn't show any locks at the moment, but might not reflect the actual scenario as currently the transactions are working fine. \n\n\n\n\nRegards, \nVishal Gupta - 9910991635 \n\n\n----- Original Message ----- \nFrom: \"Pavel Stehule\" <[email protected]> \nTo: \"Vishal Gupta\" <[email protected]> \nCc: [email protected] \nSent: Wednesday, December 16, 2009 1:04:48 PM GMT +05:30 Chennai, Kolkata, Mumbai, New Delhi \nSubject: Re: [PERFORM] Parallel Function calls using multiple processes \n\n2009/12/16 Vishal Gupta <[email protected]>: \n> Problem: Function call typically takes 2-3 millisecond but at times \n> exceeding 2-3 Minutes. \n> ===================================================== \n> From the DB logs it seems when multiple processes are trying to execute the \n> function, \n> execution takes sequentially rather than parallel, which means Nth thread \n> will have to wait for (N-1)*ExecutionTime before getting its turn \n\nit's depend - if there are some locks then yes. \n\nbut reason could be a slow query inside procedure too - look on \npg_stat_activity table, if there are processes waiting for lock. \n\nsee http://old.nabble.com/Query-is-slow-when-executing-in-procedure-td26490782.html \n\nRegards \nPavel Stehule \n\n\n> \n> Is my observation correct? If yes then what is the solution for this? If not \n> where/how to find the exact cause of the above problem? \n> ===================================================== \n> DB Version: 8.2 \n> Function Details: \n> --returns numeric, takes 10 parameters \n> --select query to validate data \n> --row level lock for select and validate \n> --bare minimum business logic \n> --update data \n> --couple of inserts for transaction logs/account management \n> --also note that few of the tables have audit triggers causing the row to be \n> inserted in audit table with the action (only Update/Insert/Delete) \n> \n> \n\nDear Pavel,Thanks for quick response.Yes I am using explicit locks but only at row-level, not at table level, will this cause sequential execution?select ...on primary key..... for update;My objective is to execute this function in parallel for different PK entities, for same primary key value, sequential is fine.Also I have noted that though the function execution after the code calls is taking more than 2-3 minutes but , when I have put in notice statements for start/end of function it has only taken a second in pg_log. 
Code is definitely not the problem here, as earlier function logic was getting executed as multiple queriesfrom the code and were working fine for even higher load before I moved to function for the same. This problem started after moving to function only.I doubt that query is the reason for slowness, and it seems to be more related to database load at times, because it happens 3-4 times out of every 4000 transactions.pg_stat doesn't show any locks at the moment, but might not reflect the actual scenario as currently the transactions are working fine.Regards,Vishal Gupta - 9910991635----- Original Message -----From: \"Pavel Stehule\" <[email protected]>To: \"Vishal Gupta\" <[email protected]>Cc: [email protected]: Wednesday, December 16, 2009 1:04:48 PM GMT +05:30 Chennai, Kolkata, Mumbai, New DelhiSubject: Re: [PERFORM] Parallel Function calls using multiple processes2009/12/16 Vishal Gupta <[email protected]>:> Problem: Function call typically takes 2-3 millisecond but at times> exceeding 2-3 Minutes.> =====================================================> From the DB logs it seems when multiple processes are trying to execute the> function,> execution takes sequentially rather than parallel, which means Nth thread> will have to wait for (N-1)*ExecutionTime before getting its turnit's depend - if there are some locks then yes.but reason could be a slow query inside procedure too - look onpg_stat_activity table, if there are processes waiting for lock.see http://old.nabble.com/Query-is-slow-when-executing-in-procedure-td26490782.htmlRegardsPavel Stehule>> Is my observation correct? If yes then what is the solution for this? If not> where/how to find the exact cause of the above problem?> =====================================================> DB Version: 8.2> Function Details:> --returns numeric, takes 10 parameters> --select query to validate data> --row level lock for select and validate> --bare minimum business logic> --update data> --couple of inserts for transaction logs/account management> --also note that few of the tables have audit triggers causing the row to be> inserted in audit table with the action (only Update/Insert/Delete)>>", "msg_date": "Wed, 16 Dec 2009 13:47:01 +0530 (IST)", "msg_from": "Vishal Gupta <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Parallel Function calls using multiple processes" }, { "msg_contents": "2009/12/16 Vishal Gupta <[email protected]>:\n> Dear Pavel,\n>\n> Thanks for quick response.\n>\n> Yes I am using explicit locks but only at row-level, not at table level,\n> will this cause sequential execution?\n> select ...on primary key..... for update;\n> My objective is to execute this function in parallel for different PK\n> entities, for same primary key value, sequential is fine.\n>\n> Also I have noted that though the function execution after the code calls is\n> taking more than 2-3 minutes but ,\n> when I have put in notice statements for start/end of function it has only\n> taken a second in pg_log.\n> Code is definitely not the problem here, as earlier function logic was\n> getting executed as multiple queries\n> from the code and were working fine for even higher load before I moved to\n> function for the same. 
This problem started after moving to function only.\n>\n> I doubt that query is the reason for slowness, and it seems to be more\n> related to database load at times, because it happens 3-4 times out of every\n> 4000 transactions.\n> pg_stat doesn't show any locks at the moment, but might not reflect the\n> actual scenario as currently the transactions are working fine.\n\nit should be a different problem - in bgwriter configuration. There\ncould help migration on 8.3, maybe.\n\nhttp://old.nabble.com/Checkpoint-tuning-on-8.2.4-td17685494.html\n\nRegards\nPavel Stehule\n\n\n\n>\n> Regards,\n> Vishal Gupta - 9910991635\n>\n>\n> ----- Original Message -----\n> From: \"Pavel Stehule\" <[email protected]>\n> To: \"Vishal Gupta\" <[email protected]>\n> Cc: [email protected]\n> Sent: Wednesday, December 16, 2009 1:04:48 PM GMT +05:30 Chennai, Kolkata,\n> Mumbai, New Delhi\n> Subject: Re: [PERFORM] Parallel Function calls using multiple processes\n>\n> 2009/12/16 Vishal Gupta <[email protected]>:\n>> Problem: Function call typically takes 2-3 millisecond but at times\n>> exceeding 2-3 Minutes.\n>> =====================================================\n>> From the DB logs it seems when multiple processes are trying to execute\n>> the\n>> function,\n>> execution takes sequentially rather than parallel, which means Nth thread\n>> will have to wait for (N-1)*ExecutionTime before getting its turn\n>\n> it's depend - if there are some locks then yes.\n>\n> but reason could be a slow query inside procedure too - look on\n> pg_stat_activity table, if there are processes waiting for lock.\n>\n> see\n> http://old.nabble.com/Query-is-slow-when-executing-in-procedure-td26490782.html\n>\n> Regards\n> Pavel Stehule\n>\n>\n>>\n>> Is my observation correct? If yes then what is the solution for this? If\n>> not\n>> where/how to find the exact cause of the above problem?\n>> =====================================================\n>> DB Version: 8.2\n>> Function Details:\n>> --returns numeric, takes 10 parameters\n>> --select query to validate data\n>> --row level lock for select and validate\n>> --bare minimum business logic\n>> --update data\n>> --couple of inserts for transaction logs/account management\n>> --also note that few of the tables have audit triggers causing the row to\n>> be\n>> inserted in audit table with the action (only Update/Insert/Delete)\n>>\n>>\n>\n", "msg_date": "Wed, 16 Dec 2009 09:38:22 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Parallel Function calls using multiple processes" } ]
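With two explanations now on the table — sessions waiting on row locks versus checkpoint/bgwriter stalls on 8.2 — here is a rough sketch of what one might check while a slow call is actually in progress (using the 8.2-era column and parameter names; later releases rename or replace some of these):

-- is anything blocked on a lock right now?
SELECT procpid, waiting, current_query
  FROM pg_stat_activity
 WHERE waiting;

-- checkpoint-related settings worth reviewing on 8.2: very small
-- checkpoint_segments or checkpoint_timeout values make checkpoint I/O
-- spikes more frequent, and 8.3 adds checkpoint_completion_target to
-- spread that I/O out, which is part of why upgrading is suggested above
SHOW checkpoint_segments;
SHOW checkpoint_timeout;
SHOW checkpoint_warning;
SHOW bgwriter_delay;
SHOW bgwriter_lru_maxpages;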
[ { "msg_contents": "Apparently the latest version of MySQL has solved this problem: http://www.xaprb.com/blog/2006/06/28/why-large-in-clauses-are-problematic/\n\nBut I am running PostgreSQL v8.3 and am observing generally that SELECT ... WHERE ... IN (a, b, c, ...) is much slower than SELECT ... INNER JOIN (SELECT a UNION ALL SELECT b UNION ALL SELECT c ...)\n\nWhy doesn't the optimizer automatically transform IN clauses to INNER JOINs in this fashion?\n\n\n\n \n", "msg_date": "Thu, 17 Dec 2009 07:23:48 -0800 (PST)", "msg_from": "Thomas Hamilton <[email protected]>", "msg_from_op": true, "msg_subject": "Automatic optimization of IN clauses via INNER JOIN" }, { "msg_contents": "Thomas Hamilton <[email protected]> writes:\n> Apparently the latest version of MySQL has solved this problem: http://www.xaprb.com/blog/2006/06/28/why-large-in-clauses-are-problematic/\n> But I am running PostgreSQL v8.3 and am observing generally that SELECT ... WHERE ... IN (a, b, c, ...) is much slower than SELECT ... INNER JOIN (SELECT a UNION ALL�SELECT b UNION ALL SELECT c ...)\n\n> Why doesn't the optimizer automatically transform IN clauses to INNER JOINs in this fashion?\n\nDid you read all the comments on that three-year-old article?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 17 Dec 2009 10:32:53 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Automatic optimization of IN clauses via INNER JOIN " }, { "msg_contents": "Yes, I see the one note that running Analyze can improve the performance.\n\nBut in our testing under the same optimization and conditions INNER JOIN is significantly outperforming IN.\n\n\n\n----- Original Message ----\nFrom: Tom Lane [email protected]\n\nThomas Hamilton <[email protected]> writes:\n> Apparently the latest version of MySQL has solved this problem: http://www.xaprb.com/blog/2006/06/28/why-large-in-clauses-are-problematic/\n> But I am running PostgreSQL v8.3 and am observing generally that SELECT ... WHERE ... IN (a, b, c, ...) is much slower than SELECT ... INNER JOIN (SELECT a UNION ALL SELECT b UNION ALL SELECT c ...)\n\n> Why doesn't the optimizer automatically transform IN clauses to INNER JOINs in this fashion?\n\nDid you read all the comments on that three-year-old article?\n\n            regards, tom lane\n\n\n\n \n", "msg_date": "Thu, 17 Dec 2009 07:45:30 -0800 (PST)", "msg_from": "Thomas Hamilton <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Automatic optimization of IN clauses via INNER JOIN" }, { "msg_contents": "Thomas Hamilton <[email protected]> writes:\n> But in our testing�under the same optimization and conditions INNER JOIN is significantly outperforming IN.\n\n[ shrug... ] You haven't provided any details, so it's impossible to\noffer any useful advice.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 17 Dec 2009 10:57:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Automatic optimization of IN clauses via INNER JOIN " }, { "msg_contents": "On Thu, Dec 17, 2009 at 10:23 AM, Thomas Hamilton\n<[email protected]> wrote:\n> Apparently the latest version of MySQL has solved this problem: http://www.xaprb.com/blog/2006/06/28/why-large-in-clauses-are-problematic/\n>\n> But I am running PostgreSQL v8.3 and am observing generally that SELECT ... WHERE ... IN (a, b, c, ...) is much slower than SELECT ... INNER JOIN (SELECT a UNION ALL SELECT b UNION ALL SELECT c ...)\n\nThat's certainly not MY observation. 
It would be interesting to see\nwhat's going on in your case but you'll need to provide more details.\n\nhttp://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n\n...Robert\n", "msg_date": "Thu, 17 Dec 2009 13:05:28 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Automatic optimization of IN clauses via INNER JOIN" }, { "msg_contents": "On Thu, Dec 17, 2009 at 6:05 PM, Robert Haas <[email protected]> wrote:\n> On Thu, Dec 17, 2009 at 10:23 AM, Thomas Hamilton\n> <[email protected]> wrote:\n>> Apparently the latest version of MySQL has solved this problem: http://www.xaprb.com/blog/2006/06/28/why-large-in-clauses-are-problematic/\n>>\n>> But I am running PostgreSQL v8.3 and am observing generally that SELECT ... WHERE ... IN (a, b, c, ...) is much slower than SELECT ... INNER JOIN (SELECT a UNION ALL SELECT b UNION ALL SELECT c ...)\n>\n> That's certainly not MY observation.  It would be interesting to see\n> what's going on in your case but you'll need to provide more details.\n>\n> http://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n>\n\nI asked the same question many times, and answer was always the same -\nthere's no point in doing that...\nwell... I've been asked by folks at work, the same thing (for typical\nengineer, grasping the idea of join can be hard sometimes...).\n\n\n\n\n-- \nGJ\n", "msg_date": "Thu, 17 Dec 2009 20:05:28 +0000", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Automatic optimization of IN clauses via INNER JOIN" }, { "msg_contents": "On 17/12/2009 11:57 PM, Tom Lane wrote:\n> Thomas Hamilton<[email protected]> writes:\n>> But in our testing under the same optimization and conditions INNER JOIN is significantly outperforming IN.\n>\n> [ shrug... ] You haven't provided any details, so it's impossible to\n> offer any useful advice.\n\nIn other words: can we discuss this with reference to a specific case? \nPlease provide your queries, your EXPLAIN ANALYZE output, and other \nrelevant details as per:\n\n http://wiki.postgresql.org/wiki/SlowQueryQuestions\n\nI'd be interested in knowing whether the planner can perform such \ntransformations and if so why it doesn't myself. I have the vague \nfeeling there may be semantic differences in the handling of NULL but I \ncan't currently seem to puzzle them out.\n\n--\nCraig Ringer\n", "msg_date": "Fri, 18 Dec 2009 10:20:14 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Automatic optimization of IN clauses via INNER JOIN" }, { "msg_contents": "On Thu, Dec 17, 2009 at 9:20 PM, Craig Ringer\n<[email protected]> wrote:\n> On 17/12/2009 11:57 PM, Tom Lane wrote:\n>>\n>> Thomas Hamilton<[email protected]>  writes:\n>>>\n>>> But in our testing under the same optimization and conditions INNER JOIN\n>>> is significantly outperforming IN.\n>>\n>> [ shrug... ]  You haven't provided any details, so it's impossible to\n>> offer any useful advice.\n>\n> In other words: can we discuss this with reference to a specific case?\n> Please provide your queries, your EXPLAIN ANALYZE output, and other relevant\n> details as per:\n>\n>  http://wiki.postgresql.org/wiki/SlowQueryQuestions\n>\n> I'd be interested in knowing whether the planner can perform such\n> transformations and if so why it doesn't myself. 
I have the vague feeling\n> there may be semantic differences in the handling of NULL but I can't\n> currently seem to puzzle them out.\n\nNOT IN is the only that really kills you as far as optimization is\nconcerned. IN can be transformed to a join. NOT IN forces a NOT\n(subplan)-type plan, which bites - hard.\n\n...Robert\n", "msg_date": "Fri, 18 Dec 2009 09:18:14 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Automatic optimization of IN clauses via INNER JOIN" }, { "msg_contents": "On Fri, Dec 18, 2009 at 2:18 PM, Robert Haas <[email protected]> wrote:\n\n> NOT IN is the only that really kills you as far as optimization is\n> concerned.  IN can be transformed to a join.  NOT IN forces a NOT\n> (subplan)-type plan, which bites - hard.\n\nin a well designed database (read: not abusing NULLs) - it can be done\nwith joins too.\n\n\n\n-- \nGJ\n", "msg_date": "Fri, 18 Dec 2009 14:24:00 +0000", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Automatic optimization of IN clauses via INNER JOIN" }, { "msg_contents": "2009/12/18 Grzegorz Jaśkiewicz <[email protected]>:\n> On Fri, Dec 18, 2009 at 2:18 PM, Robert Haas <[email protected]> wrote:\n>\n>> NOT IN is the only that really kills you as far as optimization is\n>> concerned.  IN can be transformed to a join.  NOT IN forces a NOT\n>> (subplan)-type plan, which bites - hard.\n>\n> in a well designed database (read: not abusing NULLs) - it can be done\n> with joins too.\n\nBut not by PostgreSQL, or so I believe.\n\n...Robert\n", "msg_date": "Fri, 18 Dec 2009 10:23:29 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Automatic optimization of IN clauses via INNER JOIN" }, { "msg_contents": "2009/12/18 Robert Haas <[email protected]>:\n> 2009/12/18 Grzegorz Jaśkiewicz <[email protected]>:\n>> On Fri, Dec 18, 2009 at 2:18 PM, Robert Haas <[email protected]> wrote:\n>>\n>>> NOT IN is the only that really kills you as far as optimization is\n>>> concerned.  IN can be transformed to a join.  NOT IN forces a NOT\n>>> (subplan)-type plan, which bites - hard.\n>>\n>> in a well designed database (read: not abusing NULLs) - it can be done\n>> with joins too.\n>\n> But not by PostgreSQL, or so I believe.\n\nusing left join ?\n\n\n\n-- \nGJ\n", "msg_date": "Fri, 18 Dec 2009 15:24:46 +0000", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Automatic optimization of IN clauses via INNER JOIN" }, { "msg_contents": "2009/12/18 Grzegorz Jaśkiewicz <[email protected]>:\n> 2009/12/18 Robert Haas <[email protected]>:\n>> 2009/12/18 Grzegorz Jaśkiewicz <[email protected]>:\n>>> On Fri, Dec 18, 2009 at 2:18 PM, Robert Haas <[email protected]> wrote:\n>>>\n>>>> NOT IN is the only that really kills you as far as optimization is\n>>>> concerned.  IN can be transformed to a join.  NOT IN forces a NOT\n>>>> (subplan)-type plan, which bites - hard.\n>>>\n>>> in a well designed database (read: not abusing NULLs) - it can be done\n>>> with joins too.\n>>\n>> But not by PostgreSQL, or so I believe.\n>\n> using left join ?\n\nIf at least one column in the subselect is strict, you can rewrite it\nthat way yourself, but the optimizer won't do it. 
I wish it did, but I\ndon't wish it badly enough to have written the code myself, and\napparently neither does anyone else.\n\n...Robert\n", "msg_date": "Fri, 18 Dec 2009 19:22:29 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Automatic optimization of IN clauses via INNER JOIN" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> If at least one column in the subselect is strict, you can rewrite it\n> that way yourself, but the optimizer won't do it. I wish it did, but I\n> don't wish it badly enough to have written the code myself, and\n> apparently neither does anyone else.\n\nI was thinking about this earlier today. It's a bit of a PITA because\nwe need the information very early in the planner, before it's done much\nanalysis. So for example we might find ourselves duplicating the work\nthat will happen later to determine which tables are nullable by outer\njoins. I think this would be all right as long as we ensure that it's\nonly done when there's a chance for a win (ie, no extra cycles if\nthere's not actually a NOT IN present). It could still be an\nunpleasantly large amount of new code though.\n\nWouldn't we need to enforce that *all* columns of the subselect are\nnon-null, rather than *any*?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 Dec 2009 19:32:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Automatic optimization of IN clauses via INNER JOIN " }, { "msg_contents": "On Fri, Dec 18, 2009 at 7:32 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> If at least one column in the subselect is strict, you can rewrite it\n>> that way yourself, but the optimizer won't do it. I wish it did, but I\n>> don't wish it badly enough to have written the code myself, and\n>> apparently neither does anyone else.\n>\n> I was thinking about this earlier today.  It's a bit of a PITA because\n> we need the information very early in the planner, before it's done much\n> analysis.  So for example we might find ourselves duplicating the work\n> that will happen later to determine which tables are nullable by outer\n> joins.  I think this would be all right as long as we ensure that it's\n> only done when there's a chance for a win (ie, no extra cycles if\n> there's not actually a NOT IN present).  It could still be an\n> unpleasantly large amount of new code though.\n\nI haven't looked at the code (I'm not even sure where you're thinking\nthis would need to happen) but is there any way that we can do this\nand usefully hold onto the results for future use?\n\n> Wouldn't we need to enforce that *all* columns of the subselect are\n> non-null, rather than *any*?\n\n[ thinks about it ]\n\nYes.\n\n...Robert\n", "msg_date": "Fri, 18 Dec 2009 22:33:26 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Automatic optimization of IN clauses via INNER JOIN" } ]
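To make the hand rewrite discussed above concrete (all table and column names invented): a plain IN subquery is already turned into a join by the planner, but a NOT IN such as

SELECT o.*
  FROM orders o
 WHERE o.customer_id NOT IN (SELECT b.customer_id FROM blacklist b);

can only be rewritten by hand as an anti-join when blacklist.customer_id is known to be non-null:

SELECT o.*
  FROM orders o
  LEFT JOIN blacklist b ON b.customer_id = o.customer_id
 WHERE b.customer_id IS NULL;

The non-null requirement is the semantic catch: if the subselect produces even one NULL, the NOT IN form returns no rows at all (every non-matching comparison evaluates to unknown), while the LEFT JOIN form still returns the non-matching rows. NOT EXISTS is the other common hand-written alternative, and 8.4 is able to plan it as an anti-join.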
[ { "msg_contents": "Hello.\n\nI have a problem I don't understand. I hope it's a simple problem and I'm\njust stupid.\n\nWhen I make a subquery Postgres don't care about my indexes and makes\na seq scan instead of a index scan. Why?\n\nIs it possible that the subquery change the datatype and by this make\na index scan impossible? Can I somehow see the datatypes used by the\nquery?\n\nBelow is the test I'm running.\n\n/ Karl Larsson\n\n\nCREATE TABLE table_one (\n id bigint PRIMARY KEY NOT NULL\n);\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index\n\"table_one_pkey\" for table \"table_one\"\n\nCREATE TABLE table_two (\n id bigint PRIMARY KEY NOT NULL\n);\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index\n\"table_two_pkey\" for table \"table_two\"\n\n\n\n\nINSERT INTO table_one VALUES (4);\nINSERT INTO table_one VALUES (3);\nINSERT INTO table_one VALUES (5);\nINSERT INTO table_one VALUES (2);\nINSERT INTO table_one VALUES (6);\nINSERT INTO table_one VALUES (1);\n\nINSERT INTO table_two VALUES (14);\nINSERT INTO table_two VALUES (12);\nINSERT INTO table_two VALUES (10);\nINSERT INTO table_two VALUES (8);\nINSERT INTO table_two VALUES (6);\nINSERT INTO table_two VALUES (4);\nINSERT INTO table_two VALUES (2);\n\n\n\nEXPLAIN ANALYZE\nSELECT t2.id\nFROM table_two AS t2, (\n SELECT id\n FROM table_one AS t1\n WHERE t1.id < 6\n ) AS foo\nWHERE t2.id = foo.id;\n\n\n\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=35.44..78.58 rows=647 width=8) (actual time=0.076..0.088\nrows=2 loops=1)\n Hash Cond: (t2.id = t1.id)\n -> Seq Scan on table_two t2 (cost=0.00..29.40 rows=1940 width=8)\n(actual time=0.007..0.021 rows=7 loops=1)\n -> Hash (cost=27.35..27.35 rows=647 width=8) (actual time=0.038..0.038\nrows=5 loops=1)\n -> Bitmap Heap Scan on table_one t1 (cost=9.26..27.35 rows=647\nwidth=8) (actual time=0.014..0.022 rows=5 loops=1)\n Recheck Cond: (id < 6)\n -> Bitmap Index Scan on table_one_pkey (cost=0.00..9.10\nrows=647 width=0) (actual time=0.008..0.008 rows=5 loops=1)\n Index Cond: (id < 6)\n Total runtime: 0.133 ms\n\nHello.I have a problem I don't understand. I hope it's a simple problem and I'mjust stupid.When I make a subquery Postgres don't care about my indexes and makesa seq scan instead of a index scan. Why?\nIs it possible that the subquery change the datatype and by this makea index scan impossible? 
Can I somehow see the datatypes used by thequery?Below is the test I'm running./ Karl Larsson\nCREATE TABLE table_one (    id bigint PRIMARY KEY NOT NULL);NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index\"table_one_pkey\" for table \"table_one\"CREATE TABLE table_two (\n\n    id bigint PRIMARY KEY NOT NULL);NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index\"table_two_pkey\" for table \"table_two\"INSERT INTO table_one VALUES (4);\n\nINSERT INTO table_one VALUES (3);INSERT INTO table_one VALUES (5);INSERT INTO table_one VALUES (2);INSERT INTO table_one VALUES (6);INSERT INTO table_one VALUES (1);INSERT INTO table_two VALUES (14);\n\nINSERT INTO table_two VALUES (12);INSERT INTO table_two VALUES (10);INSERT INTO table_two VALUES (8);INSERT INTO table_two VALUES (6);INSERT INTO table_two VALUES (4);INSERT INTO table_two VALUES (2);\nEXPLAIN ANALYZESELECT t2.idFROM table_two AS t2, (    SELECT id    FROM table_one AS t1    WHERE t1.id < 6\n  ) AS fooWHERE t2.id = foo.id;\n                                                             QUERY PLAN-------------------------------------------------------------------------------------------------------------------------------------\n\n Hash Join  (cost=35.44..78.58 rows=647 width=8) (actual time=0.076..0.088 rows=2 loops=1)   Hash Cond: (t2.id = t1.id)   ->  Seq Scan on table_two t2  (cost=0.00..29.40 rows=1940 width=8) (actual time=0.007..0.021 rows=7 loops=1)\n\n   ->  Hash  (cost=27.35..27.35 rows=647 width=8) (actual time=0.038..0.038 rows=5 loops=1)         ->  Bitmap Heap Scan on table_one t1  (cost=9.26..27.35 rows=647 width=8) (actual time=0.014..0.022 rows=5 loops=1)\n\n               Recheck Cond: (id < 6)              \n->  Bitmap Index Scan on table_one_pkey  (cost=0.00..9.10 rows=647\nwidth=0) (actual time=0.008..0.008 rows=5 loops=1)                     Index Cond: (id < 6)\n Total runtime: 0.133 ms", "msg_date": "Fri, 18 Dec 2009 00:22:15 +0100", "msg_from": "Karl Larsson <[email protected]>", "msg_from_op": true, "msg_subject": "seq scan instead of index scan" }, { "msg_contents": "On Thu, Dec 17, 2009 at 4:22 PM, Karl Larsson <[email protected]> wrote:\n> Hello.\n>\n> I have a problem I don't understand. I hope it's a simple problem and I'm\n> just stupid.\n>\n> When I make a subquery Postgres don't care about my indexes and makes\n> a seq scan instead of a index scan. Why?\n\nPostgreSQL uses an intelligent query planner that predicets how many\nrows it will get back for each plan and chooses accordingly. Since a\nfew dozen rows will all likely fit in the same block, it's way faster\nto sequentially scan the table than to use an index scan.\n\nNote that pgsql always has to go back to the original table to get the\nrows anyway, since visibility info is not stored in the indexes.\n", "msg_date": "Thu, 17 Dec 2009 16:26:36 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seq scan instead of index scan" }, { "msg_contents": "Karl Larsson <[email protected]> wrote: \n \n> When I make a subquery Postgres don't care about my indexes and\n> makes a seq scan instead of a index scan. Why?\n \n> Total runtime: 0.133 ms\n \nBecause it thinks that it's faster that way with the particular data\nyou now have in your tables. With more data, it might think some\nother plan is faster. 
It's running in less than 1/7500 second --\nhow sure are you that it would be significantly faster another way?\n \n-Kevin\n", "msg_date": "Thu, 17 Dec 2009 17:29:14 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seq scan instead of index scan" }, { "msg_contents": "On Fri, Dec 18, 2009 at 12:26 AM, Scott Marlowe <[email protected]>wrote:\n\n> On Thu, Dec 17, 2009 at 4:22 PM, Karl Larsson <[email protected]>\n> wrote:\n> > Hello.\n> >\n> > I have a problem I don't understand. I hope it's a simple problem and I'm\n> > just stupid.\n> >\n> > When I make a subquery Postgres don't care about my indexes and makes\n> > a seq scan instead of a index scan. Why?\n>\n> PostgreSQL uses an intelligent query planner that predicets how many\n> rows it will get back for each plan and chooses accordingly. Since a\n> few dozen rows will all likely fit in the same block, it's way faster\n> to sequentially scan the table than to use an index scan.\n>\n> Note that pgsql always has to go back to the original table to get the\n> rows anyway, since visibility info is not stored in the indexes.\n>\n\nI forgot to mention that I have a reel problem with 937(and growing) rows\nof data. My test tables\nand test query is just to exemplify my problem. But I'll extend table_two\nand see if it change anything.\n\n/ Karl Larsson\n\nOn Fri, Dec 18, 2009 at 12:26 AM, Scott Marlowe <[email protected]> wrote:\nOn Thu, Dec 17, 2009 at 4:22 PM, Karl Larsson <[email protected]> wrote:\n> Hello.\n>\n> I have a problem I don't understand. I hope it's a simple problem and I'm\n> just stupid.\n>\n> When I make a subquery Postgres don't care about my indexes and makes\n> a seq scan instead of a index scan. Why?\n\nPostgreSQL uses an intelligent query planner that predicets how many\nrows it will get back for each plan and chooses accordingly.  Since a\nfew dozen rows will all likely fit in the same block, it's way faster\nto sequentially scan the table than to use an index scan.\n\nNote that pgsql always has to go back to the original table to get the\nrows anyway, since visibility info is not stored in the indexes.I forgot to mention  that I have a reel problem with 937(and growing) rows of data. My test tables and test query is just to exemplify my problem. But I'll extend table_two and see if it change anything.\n/ Karl Larsson", "msg_date": "Fri, 18 Dec 2009 00:46:32 +0100", "msg_from": "Karl Larsson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: seq scan instead of index scan" }, { "msg_contents": "Karl Larsson wrote:\n> When I make a subquery Postgres don't care about my indexes and makes\n> a seq scan instead of a index scan. Why?\nData set is just too small for it to matter. 
Watch what happens if I \ncontinue from what you posted with much bigger tables:\n\npostgres=# truncate table table_one;\nTRUNCATE TABLE\npostgres=# truncate table table_two;\nTRUNCATE TABLE\npostgres=# insert into table_one (select generate_series(1,100000));\nINSERT 0 100000\npostgres=# insert into table_two (select generate_series(1,100000));\nINSERT 0 100000\npostgres=# analyze;\nANALYZE\npostgres=# EXPLAIN ANALYZE\nSELECT t2.id\nFROM table_two AS t2, (\n SELECT id\n FROM table_one AS t1\n WHERE t1.id < 6\n ) AS foo\nWHERE t2.id = foo.id;\n QUERY \nPLAN \n------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..91.35 rows=10 width=8) (actual \ntime=0.024..0.048 rows=5 loops=1)\n -> Index Scan using table_one_pkey on table_one t1 (cost=0.00..8.44 \nrows=10 width=8) (actual time=0.009..0.013 rows=5 loops=1)\n Index Cond: (id < 6)\n -> Index Scan using table_two_pkey on table_two t2 (cost=0.00..8.28 \nrows=1 width=8) (actual time=0.005..0.005 rows=1 loops=5)\n Index Cond: (t2.id = t1.id)\n Total runtime: 0.097 ms\n(6 rows)\n\nThere's the index scan on both tables that you were expecting.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Thu, 17 Dec 2009 19:10:56 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seq scan instead of index scan" }, { "msg_contents": "On Thu, Dec 17, 2009 at 4:46 PM, Karl Larsson <[email protected]> wrote:\n>\n>\n> On Fri, Dec 18, 2009 at 12:26 AM, Scott Marlowe <[email protected]>\n> wrote:\n>>\n>> On Thu, Dec 17, 2009 at 4:22 PM, Karl Larsson <[email protected]>\n>> wrote:\n>> > Hello.\n>> >\n>> > I have a problem I don't understand. I hope it's a simple problem and\n>> > I'm\n>> > just stupid.\n>> >\n>> > When I make a subquery Postgres don't care about my indexes and makes\n>> > a seq scan instead of a index scan. Why?\n>>\n>> PostgreSQL uses an intelligent query planner that predicets how many\n>> rows it will get back for each plan and chooses accordingly.  Since a\n>> few dozen rows will all likely fit in the same block, it's way faster\n>> to sequentially scan the table than to use an index scan.\n>>\n>> Note that pgsql always has to go back to the original table to get the\n>> rows anyway, since visibility info is not stored in the indexes.\n>\n> I forgot to mention  that I have a reel problem with 937(and growing) rows\n> of data. My test tables\n> and test query is just to exemplify my problem. But I'll extend table_two\n> and see if it change anything.\n\nBest bet is to post the real problem, not a semi-representational made\nup one. Unless the made up \"test case\" is truly representative and\nrecreates the failure pretty much the same was as the original.\n", "msg_date": "Thu, 17 Dec 2009 17:11:19 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seq scan instead of index scan" }, { "msg_contents": "On Fri, Dec 18, 2009 at 1:10 AM, Greg Smith <[email protected]> wrote:\n\n> Karl Larsson wrote:\n>\n>> When I make a subquery Postgres don't care about my indexes and makes\n>> a seq scan instead of a index scan. Why?\n>>\n> Data set is just too small for it to matter. 
Watch what happens if I\n> continue from what you posted with much bigger tables:\n>\n> postgres=# truncate table table_one;\n> TRUNCATE TABLE\n> postgres=# truncate table table_two;\n> TRUNCATE TABLE\n> postgres=# insert into table_one (select generate_series(1,100000));\n> INSERT 0 100000\n> postgres=# insert into table_two (select generate_series(1,100000));\n> INSERT 0 100000\n> postgres=# analyze;\n> ANALYZE\n> postgres=# EXPLAIN ANALYZE\n>\n> SELECT t2.id\n> FROM table_two AS t2, (\n> SELECT id\n> FROM table_one AS t1\n> WHERE t1.id < 6\n> ) AS foo\n> WHERE t2.id = foo.id;\n> QUERY PLAN\n>\n> ------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=0.00..91.35 rows=10 width=8) (actual time=0.024..0.048\n> rows=5 loops=1)\n> -> Index Scan using table_one_pkey on table_one t1 (cost=0.00..8.44\n> rows=10 width=8) (actual time=0.009..0.013 rows=5 loops=1)\n> Index Cond: (id < 6)\n> -> Index Scan using table_two_pkey on table_two t2 (cost=0.00..8.28\n> rows=1 width=8) (actual time=0.005..0.005 rows=1 loops=5)\n> Index Cond: (t2.id = t1.id)\n> Total runtime: 0.097 ms\n> (6 rows)\n>\n> There's the index scan on both tables that you were expecting.<http://www.2ndQuadrant.com>\n>\n\nTrue. Thank you. I'll try this on my reel problem as well but I have a gut\nfeeling it\nwon't work there since those tables are bigger.\n\n/ Karl Larsson\n\nOn Fri, Dec 18, 2009 at 1:10 AM, Greg Smith <[email protected]> wrote:\nKarl Larsson wrote:\n\nWhen I make a subquery Postgres don't care about my indexes and makes\na seq scan instead of a index scan. Why?\n\nData set is just too small for it to matter.  Watch what happens if I continue from what you posted with much bigger tables:\n\npostgres=# truncate table table_one;\nTRUNCATE TABLE\npostgres=# truncate table table_two;\nTRUNCATE TABLE\npostgres=# insert into table_one (select generate_series(1,100000));\nINSERT 0 100000\npostgres=# insert into table_two (select generate_series(1,100000));\nINSERT 0 100000\npostgres=# analyze;\nANALYZE\npostgres=# EXPLAIN ANALYZE\nSELECT t2.id\nFROM table_two AS t2, (\n   SELECT id\n   FROM table_one AS t1\n   WHERE t1.id < 6\n ) AS foo\nWHERE t2.id = foo.id;\n                                                            QUERY PLAN                                                            ------------------------------------------------------------------------------------------------------------------------------------\n\nNested Loop  (cost=0.00..91.35 rows=10 width=8) (actual time=0.024..0.048 rows=5 loops=1)\n  ->  Index Scan using table_one_pkey on table_one t1  (cost=0.00..8.44 rows=10 width=8) (actual time=0.009..0.013 rows=5 loops=1)\n        Index Cond: (id < 6)\n  ->  Index Scan using table_two_pkey on table_two t2  (cost=0.00..8.28 rows=1 width=8) (actual time=0.005..0.005 rows=1 loops=5)\n        Index Cond: (t2.id = t1.id)\nTotal runtime: 0.097 ms\n(6 rows)\n\nThere's the index scan on both tables that you were expecting.True. Thank you. I'll try this on my reel problem as well but I have a gut feeling it\nwon't work there since those tables are bigger. 
/ Karl Larsson", "msg_date": "Fri, 18 Dec 2009 02:10:32 +0100", "msg_from": "Karl Larsson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: seq scan instead of index scan" }, { "msg_contents": "On Thu, Dec 17, 2009 at 6:10 PM, Karl Larsson <[email protected]> wrote:\n> On Fri, Dec 18, 2009 at 1:10 AM, Greg Smith <[email protected]> wrote:\n>>\n>> Karl Larsson wrote:\n>>>\n>>> When I make a subquery Postgres don't care about my indexes and makes\n>>> a seq scan instead of a index scan. Why?\n>>\n>> Data set is just too small for it to matter.  Watch what happens if I\n>> continue from what you posted with much bigger tables:\n>>\n>> postgres=# truncate table table_one;\n>> TRUNCATE TABLE\n>> postgres=# truncate table table_two;\n>> TRUNCATE TABLE\n>> postgres=# insert into table_one (select generate_series(1,100000));\n>> INSERT 0 100000\n>> postgres=# insert into table_two (select generate_series(1,100000));\n>> INSERT 0 100000\n>> postgres=# analyze;\n>> ANALYZE\n>> postgres=# EXPLAIN ANALYZE\n>> SELECT t2.id\n>> FROM table_two AS t2, (\n>>   SELECT id\n>>   FROM table_one AS t1\n>>   WHERE t1.id < 6\n>>  ) AS foo\n>> WHERE t2.id = foo.id;\n>>                                                            QUERY PLAN\n>>\n>>  ------------------------------------------------------------------------------------------------------------------------------------\n>> Nested Loop  (cost=0.00..91.35 rows=10 width=8) (actual time=0.024..0.048\n>> rows=5 loops=1)\n>>  ->  Index Scan using table_one_pkey on table_one t1  (cost=0.00..8.44\n>> rows=10 width=8) (actual time=0.009..0.013 rows=5 loops=1)\n>>        Index Cond: (id < 6)\n>>  ->  Index Scan using table_two_pkey on table_two t2  (cost=0.00..8.28\n>> rows=1 width=8) (actual time=0.005..0.005 rows=1 loops=5)\n>>        Index Cond: (t2.id = t1.id)\n>> Total runtime: 0.097 ms\n>> (6 rows)\n>>\n>> There's the index scan on both tables that you were expecting.\n>\n> True. Thank you. I'll try this on my reel problem as well but I have a gut\n> feeling it\n> won't work there since those tables are bigger.\n\nRun it with explain analyze on the real table / SQL query and if it\ndoesn't run well, post it here. Note you can do a lot to tune the\nquery planner, with things like random_page_cost, cpu_* cost\nparameters, effective_cache_size and so on. For troubleshooting\npurposes you can use set enable_method=off where method can be things\nlike indexscan, nestloop, and so on. Use show all to see them.\n", "msg_date": "Thu, 17 Dec 2009 18:16:47 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seq scan instead of index scan" }, { "msg_contents": "> Best bet is to post the real problem, not a semi-representational made\n> up one. Unless the made up \"test case\" is truly representative and\n> recreates the failure pretty much the same was as the original.\n\nI agree at some level but I generally believe other people won't read\na big mail like that. In this case it might come to a big post from me\none day soon. :-)\n\nThanks to all who helped me.\n\n/ Karl Larsson\n\n> Best bet is to post the real problem, not a semi-representational made > up one.  Unless the made up \"test case\" is truly representative and >  recreates the failure pretty much the same was as the original.\nI agree at some level but I generally believe other people won't read a big mail like that. In this case it might come to a big post from me one day soon. 
:-)Thanks to all who helped me.\n/ Karl Larsson", "msg_date": "Fri, 18 Dec 2009 02:17:18 +0100", "msg_from": "Karl Larsson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: seq scan instead of index scan" }, { "msg_contents": "On Thu, Dec 17, 2009 at 6:17 PM, Karl Larsson <[email protected]> wrote:\n>> Best bet is to post the real problem, not a semi-representational made\n>> up one.  Unless the made up \"test case\" is truly representative and\n>>  recreates the failure pretty much the same was as the original.\n>\n> I agree at some level but I generally believe other people won't read\n> a big mail like that. In this case it might come to a big post from me\n> one day soon. :-)\n\nYou're on the one mailing list where they will read big posts. It's\nbest if you can attach the explain analyze output as an attachment\ntho, to keep it's format readable.\n", "msg_date": "Thu, 17 Dec 2009 18:37:36 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seq scan instead of index scan" }, { "msg_contents": "Greg Smith <[email protected]> writes:\n> Karl Larsson wrote:\n>> When I make a subquery Postgres don't care about my indexes and makes\n>> a seq scan instead of a index scan. Why?\n\n> Data set is just too small for it to matter. Watch what happens if I \n> continue from what you posted with much bigger tables:\n> ...\n> There's the index scan on both tables that you were expecting.\n\nAnd if you go much past that, it's likely to switch *away* from\nindexscans again (eg, to a hash join, which has no use for ordered\ninput). This is not wrong. Indexes have their place but they are not\nthe solution for every query.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 Dec 2009 01:27:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seq scan instead of index scan " } ]
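A quick way to see the behaviour discussed in this thread on one's own data is to toggle the planner switches Scott mentions for a single session and compare the plans. The sketch below reuses the thread's table_two; it is for diagnosis only, and the cost numbers will differ on every installation.

EXPLAIN ANALYZE SELECT * FROM table_two WHERE id < 6;

SET enable_seqscan = off;   -- diagnosis only; do not leave this off in production
EXPLAIN ANALYZE SELECT * FROM table_two WHERE id < 6;
RESET enable_seqscan;

SHOW ALL;                   -- lists the enable_* switches and cost parameters referred to above

If the forced index plan is not actually faster in the EXPLAIN ANALYZE timings, the planner's original choice of a sequential scan was the right one.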
[ { "msg_contents": "Hello!\n\nThere are many questions on internet about whether it is possible to\noptimize \"Bitmap Heap Scan\" somehow without answer, so seems like\nproblem is rather important.\n\nThe query I need to optimize is:\n\nEXPLAIN SELECT date_trunc('day', d.created_at) AS day, COUNT(*) AS\ndownload FROM downloads d WHERE d.file_id in (select id from files\nwhere owner_id = 443) AND d.download_status != 0 AND d.created_at >=\n'2009-12-05' AND d.created_at < '2009-12-16' GROUP BY 1;\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=15809.49..17126.20 rows=87781 width=8)\n -> Hash Semi Join (cost=5809.51..15368.11 rows=88276 width=8)\n Hash Cond: (d.file_id = files.id)\n -> Index Scan using idx_downloads_created_at on downloads d\n(cost=0.00..7682.73 rows=88276 width=16)\n Index Cond: ((created_at >= '2009-12-05\n00:00:00'::timestamp without time zone) AND (created_at < '2009-12-16\n00:00:00'::timestamp without time zone))\n -> Hash (cost=5741.51..5741.51 rows=5440 width=8)\n -> Bitmap Heap Scan on files (cost=106.42..5741.51\nrows=5440 width=8)\n Recheck Cond: (owner_id = 443)\n -> Bitmap Index Scan on idx_files_owner\n(cost=0.00..105.06 rows=5440 width=0)\n Index Cond: (owner_id = 443)\n\nThe problem here is that we are forced to fetch \"files\" in Bitmap Heap Scan.\nBut actually there is no need for the whole \"files\" record. The\nnecessary data is only \"files\" ids.\n\nThe idea is to avoid fetching data from \"files\" table, and get the ids\nfrom index! (probably it is a little bit tricky, but it is a\nperformance area...)\n\nI created an index with following command:\ncreate index idx_files_owner_id ON files (owner_id, id);\nand even tried to remove old index to enforce postgresql to use newly\ncreated index.\nBut postgresql still do Bitmap Heap Scan.\n\n(The other idea is to use raw_id as a primary key of \"files\" table to\ndon't extend index. But I don't know whether it is possible at all or\nthis idea have some drawbacks)\n\nI think it worth to learn postgreql to do this trick especially taking\ninto account there are many questions about whether it is possible to\noptimize such a queries.\n\nIf there is an known solution to this problem please provide a link to it.\n\nWith best regards,\nMichael Mikhulya.\n", "msg_date": "Fri, 18 Dec 2009 18:44:41 +0300", "msg_from": "\"Michael N. Mikhulya\" <[email protected]>", "msg_from_op": true, "msg_subject": "Idea how to get rid of Bitmap Heap Scan" }, { "msg_contents": "On Fri, 18 Dec 2009, Michael N. Mikhulya wrote:\n> The problem here is that we are forced to fetch \"files\" in Bitmap Heap Scan.\n> But actually there is no need for the whole \"files\" record. The\n> necessary data is only \"files\" ids.\n>\n> The idea is to avoid fetching data from \"files\" table, and get the ids\n> from index! (probably it is a little bit tricky, but it is a\n> performance area...)\n\nUnfortunately, the index does not contain enough information to accomplish \nthis. This is due to Postgres' advanced concurrency control system. 
\nPostgres needs to fetch the actual rows from the files table in order to \ncheck whether that row is visible in the current transaction, and a Bitmap \nIndex Scan is the fastest way to do this.\n\nYou can speed this up in Postgres 8.4 by having a RAID array and setting \nthe effective_concurrency configuration to the number of spindles in the \nRAID array, or by having gobs of RAM and keeping everything in cache.\n\nMatthew\n\n-- \n A good programmer is one who looks both ways before crossing a one-way street.\n Considering the quality and quantity of one-way streets in Cambridge, it\n should be no surprise that there are so many good programmers there.\n", "msg_date": "Fri, 18 Dec 2009 15:51:11 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Idea how to get rid of Bitmap Heap Scan" }, { "msg_contents": "Thank you very much. I catch the point why it is done so.\n\nBut I'm curious whether it is still possible to don't fetch data from\nfiles table just because inappropriate ids (e.g. removed ones) will\nnot produce any wrong effect just because them indirectly \"checked\" on\ndownloads table?\nHere I mean that if we get id (from index) for file which is actually\nremoved, then we will not find anything in downloads table.\nProbably my knowledge about MVCC is too little to see whole picture,\nso if it is not hard to you please point the \"failure\" scenario (when\nwe get wrong result) or locking issue, ...\n\nMichael Mikhulya\n\n> Unfortunately, the index does not contain enough information to accomplish\n> this. This is due to Postgres' advanced concurrency control system. Postgres\n> needs to fetch the actual rows from the files table in order to check\n> whether that row is visible in the current transaction, and a Bitmap Index\n> Scan is the fastest way to do this.\n>\n> You can speed this up in Postgres 8.4 by having a RAID array and setting the\n> effective_concurrency configuration to the number of spindles in the RAID\n> array, or by having gobs of RAM and keeping everything in cache.\n>\n> Matthew\n>\n> --\n> A good programmer is one who looks both ways before crossing a one-way\n> street.\n> Considering the quality and quantity of one-way streets in Cambridge, it\n> should be no surprise that there are so many good programmers there.\n>\n", "msg_date": "Fri, 18 Dec 2009 19:18:10 +0300", "msg_from": "\"Michael N. Mikhulya\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Idea how to get rid of Bitmap Heap Scan" }, { "msg_contents": "On Fri, Dec 18, 2009 at 4:18 PM, Michael N. Mikhulya\n<[email protected]> wrote:\n> Thank you very much. I catch the point why it is done so.\n>\n> But I'm curious whether it is still possible to don't fetch data from\n> files table just because inappropriate ids (e.g. removed ones) will\n> not produce any wrong effect just because them indirectly \"checked\" on\n> downloads table?\n> Here I mean that if we get id (from index) for file which is actually\n> removed, then we will not find anything in downloads table.\n> Probably my knowledge about MVCC is too little to see whole picture,\n> so if it is not hard to you please point the \"failure\" scenario (when\n> we get wrong result) or locking issue, ...\n\n\nYup this ought to be possible and fruitful, I believe Heikki already\nproduced a partial patch to this end. If you're interested in working\non it you could skim back in the logs and start with that. 
I don't\nrecall any special keywords to search on but it might be in one of the\nthreads for the \"visibility map\" or it might be under \"index-only\nscans\".\n\nA word of warning, in my experience the hardest part for changes like\nthis isn't the executor changes (which in this case wouldn't be far\nfrom easy) but the planner changes to detect when this new plan would\nbe better.\n\n-- \ngreg\n", "msg_date": "Fri, 18 Dec 2009 17:29:15 +0000", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Idea how to get rid of Bitmap Heap Scan" }, { "msg_contents": "On Fri, Dec 18, 2009 at 12:29 PM, Greg Stark <[email protected]> wrote:\n> A word of warning, in my experience the hardest part for changes like\n> this isn't the executor changes (which in this case wouldn't be far\n> from easy) but the planner changes to detect when this new plan would\n> be better.\n\nThere's also the problem of making the visibility map crash-safe. I\nthink I heard you might have some ideas on that one - has it been\ndiscussed on -hackers?\n\n...Robert\n", "msg_date": "Sat, 19 Dec 2009 21:11:55 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Idea how to get rid of Bitmap Heap Scan" }, { "msg_contents": "On Sun, Dec 20, 2009 at 2:11 AM, Robert Haas <[email protected]> wrote:\n> On Fri, Dec 18, 2009 at 12:29 PM, Greg Stark <[email protected]> wrote:\n>> A word of warning, in my experience the hardest part for changes like\n>> this isn't the executor changes (which in this case wouldn't be far\n>> from easy) but the planner changes to detect when this new plan would\n>> be better.\n>\n> There's also the problem of making the visibility map crash-safe.  I\n> think I heard you might have some ideas on that one - has it been\n> discussed on -hackers?\n\nNot sure what ideas you mean.\n\nIn the original poster's plan that isn't an issue. We could scan the\nindex, perform the joins and restriction clauses, and only check the\nvisibility on the resulting tuples which slip through them all. That\nwould be possible even without crash-safe visibility bits.\n\n-- \ngreg\n", "msg_date": "Sun, 20 Dec 2009 11:37:45 +0000", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Idea how to get rid of Bitmap Heap Scan" }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> In the original poster's plan that isn't an issue. We could scan the\n> index, perform the joins and restriction clauses, and only check the\n> visibility on the resulting tuples which slip through them all. That\n> would be possible even without crash-safe visibility bits.\n\nYeah, this was floated years ago as being a potentially interesting\napproach when all the join-condition fields are indexed. You end up\nnever having to fetch rows that don't pass the join.\n\nIt certainly seems reasonably straightforward on the executor side.\nAs Greg said, the hard part is planning it sanely.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 20 Dec 2009 11:26:38 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Idea how to get rid of Bitmap Heap Scan " }, { "msg_contents": "On Sun, Dec 20, 2009 at 11:26 AM, Tom Lane <[email protected]> wrote:\n> Greg Stark <[email protected]> writes:\n>> In the original poster's plan that isn't an issue. We could scan the\n>> index, perform the joins and restriction clauses, and only check the\n>> visibility on the resulting tuples which slip through them all. 
That\n>> would be possible even without crash-safe visibility bits.\n>\n> Yeah, this was floated years ago as being a potentially interesting\n> approach when all the join-condition fields are indexed.  You end up\n> never having to fetch rows that don't pass the join.\n>\n> It certainly seems reasonably straightforward on the executor side.\n> As Greg said, the hard part is planning it sanely.\n\nYeah, but that seems REALLY hard. First, there's the difficulty of\nactually generating all the paths and costing them appropriately. A\nplan to perform an index scan but defer the heap fetches until later\nhas a hidden cost associated with it: the heap fetches will cost\nsomething, but we don't know how much until we get the row estimate\nfor the node where we choose to implement them. Without knowing that\ncost, it's hard to be confident in discarding other plans that are\napparently more expensive. That's probably solvable by adopting a\nmore sophisticated method for comparing costs, but that gets you to\nthe second problem, which is doing all of this with reasonable\nperformance. You're going to have a lot more paths than we do now,\nand there will be many queries for which there are only trivial cost\ndifferences between them (like any query where most or all of the\njoins have a selectivity of exactly 1.0, which is a very common case\nfor me).\n\n...Robert\n", "msg_date": "Sun, 20 Dec 2009 20:56:58 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Idea how to get rid of Bitmap Heap Scan" } ]
[ { "msg_contents": "We're looking to upgrade our database hardware so that it can sustain \nus while we re-architect some of the more fundamental issues with our \napplications. The first thing to spend money on is usually disks, but \nour database currently lives almost entirely on flash storage, so \nthat's already nice and fast. My question is, what we should spend \nmoney on next?\n\nWith most data stored in flash, does it still make sense to buy as \nmuch ram as possible? RAM is still faster than flash, but while it's \ncheap, it isn't free, and our database is a couple hundred GB in size.\n\nWe also have several hundred active sessions. Does it makes sense to \nsacrifice some memory speed and go with 4 6-core Istanbul processors? \nOr does it make more sense to limit ourselves to 2 4-core Nehalem \nsockets and get Intel's 1333 MHz DDR3 memory and faster cores?\n\nOur queries are mostly simple, but we have a lot of them, and their \nlocality tends to be low. FWIW, about half are selects.\n\nDoes anybody have any experience with these kinds of tradeoffs in the \nabsence of spinning media? Any insight would be much appreciated. From \nthe information I have right now, trying to figuring out how to \noptimally spend our budget feels like a shot in the dark.\n\nThanks!\n", "msg_date": "Wed, 23 Dec 2009 14:10:55 -0800", "msg_from": "Ben Chobot <[email protected]>", "msg_from_op": true, "msg_subject": "hardware priority for an SSD database?" }, { "msg_contents": "Ben Chobot wrote:\n> With most data stored in flash, does it still make sense to buy as \n> much ram as possible? RAM is still faster than flash, but while it's \n> cheap, it isn't free, and our database is a couple hundred GB in size.\n\nDepends on the actual working set of data you run into on a regular \nbasis. If adding more RAM makes it possible to fit that where it didn't \nbefore, that can be a huge win even over SSD. RAM is still around an \norder of magnitude faster than flash (>2500MB/s vs. <200MB/s \ntypically). I'll normally stare at what's in the buffer cache to get an \nidea what the app likes to cache most to try and estimate the size of \nthe working set better.\n\n> We also have several hundred active sessions. Does it makes sense to \n> sacrifice some memory speed and go with 4 6-core Istanbul processors? \n> Or does it make more sense to limit ourselves to 2 4-core Nehalem \n> sockets and get Intel's 1333 MHz DDR3 memory and faster cores?\n\nThis is hard to say, particularly when you mix in the cost difference \nbetween the two solutions. Yours is one of the situations where AMD's \nstuff might work out very well for you on a bang-per-buck basis though; \nit's certainly not one of the ones where it's a clear win for Intel \n(which I do see sometimes).\n\n\n> Does anybody have any experience with these kinds of tradeoffs in the \n> absence of spinning media? Any insight would be much appreciated. From \n> the information I have right now, trying to figuring out how to \n> optimally spend our budget feels like a shot in the dark.\n\nThere are no easy answers or general guidelines here. There are only \ntwo ways I've ever found to get useful results in this area:\n\n1) Try some eval hardware (vendor load, friendly borrowing, etc.) and \nbenchmark with your app.\n\n2) Cripple an existing system to get more sensitivity analysis points. 
\nFor example, if you have a 16GB server, you might do some benchmarking, \nreduce to 8GB, and see how much that changed things, to get an idea how \nsensitive your app is to memory size changes. You can do similar tests \nunderclocking/disabling CPUs, underclocking RAM, and lowering the speed \nof the drives. For example, if you reduce the amount of RAM, but \nperformance doesn't change much, while decreasing RAM clock drops it a \nlot, that's pretty good evidence you'd prefer spending on faster RAM \nthan more of it.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Wed, 23 Dec 2009 19:26:06 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hardware priority for an SSD database?" } ]
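Greg's suggestion of looking at what the buffer cache holds, as a way to size the working set, can be made concrete with the contrib module pg_buffercache. A minimal sketch, assuming the module's SQL script has been installed in the target database (it ships with 8.4 but is not loaded by default) and the default 8 kB block size:

SELECT c.relname,
       count(*) AS buffers,
       pg_size_pretty(count(*) * 8192) AS cached
FROM pg_buffercache b
JOIN pg_class c ON c.relfilenode = b.relfilenode
JOIN pg_database d ON d.oid = b.reldatabase
WHERE d.datname = current_database()
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 20;

If the relations at the top of this list already fit comfortably in the planned RAM, extra memory buys little; a large, not-fully-cached working set is the case where Greg's point about more RAM being a big win applies.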
[ { "msg_contents": "Hi,\n\nwe experience some strange performance problems, we've already found a \nworkaround for us, but are curious if it's a known problem of the optimizer.\n\nTested with the following Postgres Version: 8.2.15 and 8.3.9\nAUTOVACUUM is enabled, explicit VACUUM and REINDEX both tables and the\nwhole DB.\n\neffective_cache_size = 3096MB\ndefault_statistics_target = 100\nshared_buffers = 1024MB\nwork_mem = 64MB\n\nTable Schema:\n\nTable \"click\"\n Column | Type | Modifiers\n-----------------+---------+-----------\n click_id | integer | not null\n member_id | integer |\n link_id | integer |\n click_timestamp | bigint |\n remote_host | text |\n user_agent | text |\nIndexes:\n \"click_pkey\" PRIMARY KEY, btree (click_id)\n \"idx_click_1\" btree (link_id)\n \"idx_click_2\" btree (click_timestamp)\n\nTable \"link\"\n Column | Type | Modifiers\n------------+---------+-----------\n link_id | integer | not null\n link_url | text |\n task_id | integer |\n link_type | integer |\n action_id | integer |\n link_alias | text |\n deleted | boolean |\n deletable | boolean |\nIndexes:\n \"link_pkey\" PRIMARY KEY, btree (link_id)\n \"idx_link_1\" btree (task_id)\n\nRows in click table contains:\t22874089\nRows in link table contains:\t4220601\n\n\nThe following query is slow when index scan is enabled:\n\nSELECT\nlink.link_alias,link.link_type,COUNT(click.click_id),COUNT(distinct\nclick.member_id) FROM link LEFT JOIN click ON link.link_id=click.link_id\nWHERE (link.link_type=8 OR link.link_type=9) AND link.task_id=1556 AND\n(link.deletable IS NULL OR link.deletable=false)GROUP BY\nlink.link_type,link.link_alias LIMIT 1000\n\n\nExplain with index scan enabled:\n\nexplain analyze SELECT\nlink.link_alias,link.link_type,COUNT(click.click_id),COUNT(distinct\nclick.member_id) FROM link LEFT JOIN click ON link.link_id=click.link_id\nWHERE (link.link_type=8 OR link.link_type=9) AND link.task_id=1556 AND\n(link.deletable IS NULL OR link.deletable=false)GROUP BY\nlink.link_type,link.link_alias LIMIT 1000;\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=1416936.47..1417849.48 rows=1 width=30) (actual\ntime=277062.951..277073.144 rows=12 loops=1)\n -> GroupAggregate (cost=1416936.47..1417849.48 rows=1 width=30)\n(actual time=277062.949..277073.126 rows=12 loops=1)\n -> Sort (cost=1416936.47..1417119.07 rows=73040 width=30)\n(actual time=277062.820..277066.219 rows=6445 loops=1)\n Sort Key: link.link_type, link.link_alias\n Sort Method: quicksort Memory: 696kB\n -> Merge Right Join (cost=1604.91..1411036.15\nrows=73040 width=30) (actual time=277027.644..277050.946 rows=6445 loops=1)\n Merge Cond: (click.link_id = link.link_id)\n -> Index Scan using idx_click_1 on click\n(cost=0.00..1351150.42 rows=22874088 width=12) (actual\ntime=6.915..263327.439 rows=22409997 loops=1)\n -> Sort (cost=1604.91..1638.61 rows=13477\nwidth=26) (actual time=12.172..15.640 rows=6445 loops=1)\n Sort Key: link.link_id\n Sort Method: quicksort Memory: 33kB\n -> Index Scan using idx_link_1 on link\n(cost=0.00..680.51 rows=13477 width=26) (actual time=5.707..12.043\nrows=126 loops=1)\n Index Cond: (task_id = 1556)\n Filter: (((deletable IS NULL) OR (NOT\ndeletable)) AND ((link_type = 8) OR (link_type = 9)))\n Total runtime: 277082.204 ms\n(15 rows)\n\n\nExplain with \"set enable_indexscan=false;\"\n\nexplain analyze 
SELECT\nlink.link_alias,link.link_type,COUNT(click.click_id),COUNT(distinct\nclick.member_id) FROM link LEFT JOIN click ON link.link_id=click.link_id\nWHERE (link.link_type=8 OR link.link_type=9) AND link.task_id=1556 AND\n(link.deletable IS NULL OR link.deletable=false)GROUP BY\nlink.link_type,link.link_alias LIMIT 1000;\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=2577764.28..2578677.29 rows=1 width=30) (actual\ntime=51713.324..51723.517 rows=12 loops=1)\n -> GroupAggregate (cost=2577764.28..2578677.29 rows=1 width=30)\n(actual time=51713.322..51723.499 rows=12 loops=1)\n -> Sort (cost=2577764.28..2577946.88 rows=73040 width=30)\n(actual time=51713.191..51716.600 rows=6445 loops=1)\n Sort Key: link.link_type, link.link_alias\n Sort Method: quicksort Memory: 696kB\n -> Hash Left Join (cost=1140942.18..2571863.96\nrows=73040 width=30) (actual time=45276.194..51702.053 rows=6445 loops=1)\n Hash Cond: (link.link_id = click.link_id)\n -> Bitmap Heap Scan on link\n(cost=253.20..34058.86 rows=13477 width=26) (actual time=0.044..0.168\nrows=126 loops=1)\n Recheck Cond: (task_id = 1556)\n Filter: (((deletable IS NULL) OR (NOT\ndeletable)) AND ((link_type = 8) OR (link_type = 9)))\n -> Bitmap Index Scan on idx_link_1\n(cost=0.00..249.83 rows=13482 width=0) (actual time=0.030..0.030\nrows=128 loops=1)\n Index Cond: (task_id = 1556)\n -> Hash (cost=743072.88..743072.88 rows=22874088\nwidth=12) (actual time=45274.316..45274.316 rows=22874089 loops=1)\n -> Seq Scan on click (cost=0.00..743072.88\nrows=22874088 width=12) (actual time=0.024..17333.860 rows=22874089 loops=1)\n Total runtime: 51728.643 ms\n(15 rows)\n\n\n\nWe can't drop the index, because all other queries on the click table\nare 10-100 times faster if index is enabled.\n\nWe have worked around with following SQL to emulate the LEFT JOIN, which\nreturns the same result.\n\nexplain analyze SELECT\nlink.link_alias,link.link_type,COUNT(click.click_id),COUNT(distinct\nclick.member_id) FROM link JOIN click ON link.link_id=click.link_id\nWHERE (link.link_type=8 OR link.link_type=9) AND link.task_id=1556 AND\n(link.deletable IS NULL OR link.deletable=false)GROUP BY\nlink.link_type,link.link_alias\nUNION SELECT link_alias,link_type,0,0 from link where (link_type=8 OR\nlink_type=9) AND task_id=1556 AND (deletable IS NULL OR deletable=false)\nand link_alias not in ( SELECT link.link_alias FROM link JOIN click ON\nlink.link_id=click.link_id WHERE (link.link_type=8 OR link.link_type=9)\nAND link.task_id=1556 AND (link.deletable IS NULL OR\nlink.deletable=false)) GROUP BY link_type,link_alias LIMIT 1000;\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=2011715.37..2011715.40 rows=2 width=30) (actual\ntime=56449.978..56450.016 rows=12 loops=1)\n -> Unique (cost=2011715.37..2011715.40 rows=2 width=30) (actual\ntime=56449.974..56449.995 rows=12 loops=1)\n -> Sort (cost=2011715.37..2011715.38 rows=2 width=30)\n(actual time=56449.972..56449.978 rows=12 loops=1)\n Sort Key: link.link_alias, link.link_type,\n(count(click.click_id)), (count(DISTINCT click.member_id))\n Sort Method: quicksort Memory: 25kB\n -> Append (cost=1007886.06..2011715.36 rows=2\nwidth=30) (actual time=28207.665..56449.932 rows=12 loops=1)\n -> GroupAggregate 
(cost=1007886.06..1008799.08\nrows=1 width=30) (actual time=28207.664..28217.739 rows=11 loops=1)\n -> Sort (cost=1007886.06..1008068.66\nrows=73040 width=30) (actual time=28207.562..28210.936 rows=6369 loops=1)\n Sort Key: link.link_type, link.link_alias\n Sort Method: quicksort Memory: 690kB\n -> Hash Join\n(cost=848.97..1001985.74 rows=73040 width=30) (actual\ntime=11933.222..28196.805 rows=6369 loops=1)\n Hash Cond: (click.link_id =\nlink.link_id)\n -> Seq Scan on click\n(cost=0.00..743072.88 rows=22874088 width=12) (actual\ntime=0.030..14572.729 rows=22874089 loops=1)\n -> Hash (cost=680.51..680.51\nrows=13477 width=26) (actual time=0.248..0.248 rows=126 loops=1)\n -> Index Scan using\nidx_link_1 on link (cost=0.00..680.51 rows=13477 width=26) (actual\ntime=0.025..0.143 rows=126 loops=1)\n Index Cond: (task_id\n= 1556)\n Filter: (((deletable\nIS NULL) OR (NOT deletable)) AND ((link_type = 8) OR (link_type = 9)))\n -> Subquery Scan \"*SELECT* 2\"\n(cost=1002916.26..1002916.28 rows=1 width=22) (actual\ntime=28232.176..28232.178 rows=1 loops=1)\n -> HashAggregate\n(cost=1002916.26..1002916.27 rows=1 width=22) (actual\ntime=28232.161..28232.162 rows=1 loops=1)\n -> Index Scan using idx_link_1 on\nlink (cost=1002168.34..1002882.56 rows=6739 width=22) (actual\ntime=28232.077..28232.147 rows=1 loops=1)\n Index Cond: (task_id = 1556)\n Filter: (((deletable IS NULL) OR\n(NOT deletable)) AND (NOT (hashed subplan)) AND ((link_type = 8) OR\n(link_type = 9)))\n SubPlan\n -> Hash Join\n(cost=848.97..1001985.74 rows=73040 width=18) (actual\ntime=11931.673..28226.561 rows=6369 loops=1)\n Hash Cond:\n(click.link_id = link.link_id)\n -> Seq Scan on click\n(cost=0.00..743072.88 rows=22874088 width=4) (actual\ntime=0.022..14581.208 rows=22874089 loops=1)\n -> Hash\n(cost=680.51..680.51 rows=13477 width=22) (actual time=0.240..0.240\nrows=126 loops=1)\n -> Index Scan\nusing idx_link_1 on link (cost=0.00..680.51 rows=13477 width=22)\n(actual time=0.015..0.131 rows=126 loops=1)\n Index Cond:\n(task_id = 1556)\n Filter:\n(((deletable IS NULL) OR (NOT deletable)) AND ((link_type = 8) OR\n(link_type = 9)))\n Total runtime: 56450.254 ms\n(31 rows)\n\n\nCiao,\nMichael\n\n\n\n\n", "msg_date": "Thu, 24 Dec 2009 10:38:21 +0100", "msg_from": "Michael Ruf <[email protected]>", "msg_from_op": true, "msg_subject": "Optimizer use of index slows down query by factor" }, { "msg_contents": "Michael Ruf <[email protected]> writes:\n> we experience some strange performance problems, we've already found a \n> workaround for us, but are curious if it's a known problem of the optimizer.\n\nI think you need to see about getting this rowcount estimate to be more\naccurate:\n\n> -> Index Scan using idx_link_1 on link\n> (cost=0.00..680.51 rows=13477 width=26) (actual time=5.707..12.043\n> rows=126 loops=1)\n> Index Cond: (task_id = 1556)\n> Filter: (((deletable IS NULL) OR (NOT\n> deletable)) AND ((link_type = 8) OR (link_type = 9)))\n\nIf it realized there'd be only 126 rows out of that scan, it'd probably\nhave gone for a nestloop join against the big table, which I think would\nbe noticeably faster than either of the plans you show here.\n\nYou already did crank up default_statistics_target, so I'm not sure if\nraising it further would help any. What I'd suggest is trying to avoid\nusing non-independent AND/OR conditions. For instance recasting the\nfirst OR as just \"deletable is not true\" would probably result in a\nbetter estimate. 
The size of the error seems to be more than that would\naccount for though, so I suspect that the deletable and link_type\nconditions are interdependent. Is it practical to recast your data\nrepresentation to avoid that?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 24 Dec 2009 10:46:25 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer use of index slows down query by factor " }, { "msg_contents": "Hi,\n\nTom Lane wrote:\n >\n > I think you need to see about getting this rowcount estimate to be more\n > accurate:\n >\n >> -> Index Scan using idx_link_1 on link\n >> (cost=0.00..680.51 rows=13477 width=26) (actual time=5.707..12.043\n >> rows=126 loops=1)\n >> Index Cond: (task_id = 1556)\n >> Filter: (((deletable IS NULL) OR (NOT\n >> deletable)) AND ((link_type = 8) OR (link_type = 9)))\n >\n > If it realized there'd be only 126 rows out of that scan, it'd probably\n > have gone for a nestloop join against the big table, which I think would\n > be noticeably faster than either of the plans you show here.\n >\n > You already did crank up default_statistics_target, so I'm not sure if\n > raising it further would help any.\n\nAfter i've increased the statistic target for the specific column on the \nlink table \"alter table link alter task_id set statistics 200;\", the sql \nruns fine ( < 1 second ):\n\nLimit (cost=448478.40..448492.17 rows=1 width=30) (actual \ntime=850.698..860.838 rows=12 loops=1)\n -> GroupAggregate (cost=448478.40..448492.17 rows=1 width=30) \n(actual time=850.695..860.824 rows=12 loops=1)\n -> Sort (cost=448478.40..448481.15 rows=1100 width=30) \n(actual time=850.569..853.985 rows=6445 loops=1)\n Sort Key: link.link_type, link.link_alias\n Sort Method: quicksort Memory: 696kB\n -> Nested Loop Left Join (cost=0.00..448422.84 \nrows=1100 width=30) (actual time=819.519..838.422 rows=6445 loops=1)\n -> Seq Scan on link (cost=0.00..142722.52 \nrows=203 width=26) (actual time=819.486..820.016 rows=126 loops=1)\n Filter: (((deletable IS NULL) OR (NOT \ndeletable)) AND (task_id = 1556) AND ((link_type = 8) OR (link_type = 9)))\n -> Index Scan using idx_click_1 on click \n(cost=0.00..1370.01 rows=10872 width=12) (actual time=0.003..0.088 \nrows=51 loops=126)\n Index Cond: (link.link_id = click.link_id)\n Total runtime: 860.929 ms\n\n\n > What I'd suggest is trying to avoid\n > using non-independent AND/OR conditions. For instance recasting the\n > first OR as just \"deletable is not true\" would probably result in a\n > better estimate. The size of the error seems to be more than that would\n > account for though, so I suspect that the deletable and link_type\n > conditions are interdependent. Is it practical to recast your data\n > representation to avoid that?\n >\n\nI've tried that, but with no positive/negative effects.\n\nThanks for your help.\n\nMichael\n", "msg_date": "Thu, 07 Jan 2010 08:49:00 +0100", "msg_from": "Michael Ruf <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimizer use of index slows down query by factor" } ]
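For reference, the per-column change that resolved this thread, together with a way of checking what the planner now estimates; the target of 200 is simply the value that worked for this particular data distribution:

ALTER TABLE link ALTER COLUMN task_id SET STATISTICS 200;
ANALYZE link;    -- the new target only takes effect once the table is re-analyzed

SELECT n_distinct, most_common_vals
FROM pg_stats
WHERE tablename = 'link' AND attname = 'task_id';

EXPLAIN ANALYZE SELECT * FROM link WHERE task_id = 1556;

With an accurate row estimate for task_id = 1556, the planner picks the nested loop against the click table on its own, as shown in the final plan above.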
[ { "msg_contents": "Hi there,\n\nI've got a small question about multicolumn indexes.\n\nI have a table with ~5M rows (43 bytes per column - is that relevant?) \n(but eventually it could grow up to 50M rows), used to store e-mail \nlogs. I am trying to build a web frontend to search mails in this table.\n\nI usually want only the last mails processed by my mail system, so \ntypically all my queries end with:\n... ORDER BY time LIMIT 50;\n\nBefore that, I have usually have a WHERE clause on a indexed column. \nExample of a query I might have:\n\nSELECT id FROM mail WHERE from_address LIKE 'bill%'\nORDER BY time DESC LIMIT 50;\n\nI observed that the ordering adds a significant overhead to my queries - \nthis seems quite logical, because of the ORDER BY which has to inspect \nevery row matching the WHERE clause.\n\nThe approach taken by the query planner is one of the following:\n\n1) if it thinks there are \"not so much\" rows containg 'bill' as prefix \nof the 'from_address' column, it performs an index scan (or a bitmap \nindex scan) using my index on 'from_address', then sorts all results \naccording to the 'time' column.\n\n2) if it thinks there are \"many\" rows containing 'bill' as prefix of the \n'from_address' column, it performs an reverse index scan using my index \non 'time', and looks \"sequentially\" if the 'from_address' column \ncontains 'bill' as prefix.\n\nThe problem is that \"not so much\" is in my case approx 10K rows \nsometimes. It seems to be pretty costly to perform an (bitmap) index \nscan over all these rows. As I only want the first few rows anyway \n(LIMIT 50), I thought that there had to be some better solution.\n\nThe solution I had in mind was to create a multicolumn index over \n'from_address' and 'time':\n\nCREATE INDEX idx_from_time ON mail (from_address, time DESC);\n\nso that it could directly use the 'time' ordering and lookup only the \nfirst 50 rows using the index.\n\nbut... it doesn't work :-) i.e. my multicolumn index is never used. So:\n- do you guys have any ideas why it doesn't work?\n- do you see an alternative solution?\n\nInfos:\n- I use PostgreSQL 8.4.2\n- I regularly VACUUM and ANALYZE my db. Statistics look OK.\n- I'm relatively new to PostgreSQL, so maybe this question is trivial?\n\nThanks in advance, and happy holidays!\n\n-- \nlucas maystre\ntrainee\n\nopen systems ag\nraeffelstrasse 29\nch-8045 zurich\nt: +41 44 455 74 00\nf: +41 44 455 74 01\[email protected]\n\nhttp://www.open.ch\n", "msg_date": "Thu, 24 Dec 2009 10:54:32 +0100", "msg_from": "Lucas Maystre <[email protected]>", "msg_from_op": true, "msg_subject": "Multicolumn index - WHERE ... ORDER BY" }, { "msg_contents": "Lucas Maystre <[email protected]> writes:\n> Example of a query I might have:\n> SELECT id FROM mail WHERE from_address LIKE 'bill%'\n> ORDER BY time DESC LIMIT 50;\n\n> The solution I had in mind was to create a multicolumn index over \n> 'from_address' and 'time':\n> CREATE INDEX idx_from_time ON mail (from_address, time DESC);\n> so that it could directly use the 'time' ordering and lookup only the \n> first 50 rows using the index.\n\n> but... it doesn't work :-) i.e. my multicolumn index is never used. So:\n> - do you guys have any ideas why it doesn't work?\n\nThe from_address condition isn't simple equality, so the output of a\nscan wouldn't be sorted by time --- it would have subranges that are\nsorted, but that's no help overall. You still have to read the whole\nscan output and re-sort. 
So this index has no advantage over the\nsmaller index on just from_address.\n\n> - do you see an alternative solution?\n\nThere might be some use in an index on (time, from_address). That\ngives the correct time ordering, and at least the prefix part of the\nfrom_address condition can be checked in the index without visiting the\nheap.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 24 Dec 2009 10:56:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multicolumn index - WHERE ... ORDER BY " } ]
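A sketch of the index Tom suggests, with the query from the original question; the index name is made up, and whether the planner actually chooses this plan depends on how frequent the matching prefix is:

CREATE INDEX idx_mail_time_from ON mail (time, from_address);

EXPLAIN ANALYZE
SELECT id
FROM mail
WHERE from_address LIKE 'bill%'
ORDER BY time DESC
LIMIT 50;

A backward scan of this index already returns rows in time DESC order, so execution can stop once 50 rows have passed the prefix test. The trade-off is the mirror image of the original problem: when very few rows match the prefix, the scan may have to walk a large part of the index before it collects 50 of them.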
[ { "msg_contents": "Hi all,\n\nI'm trying to figure out which HW configuration with 3 SATA drives is \nthe best in terms of reliability and performance for Postgres database.\n\nI'm thinking to connect two drives in RAID 0, and to keep the database \n(and WAL) on these disks - to improve the write performance of the SATA \ndrives.\n\nThe third drive will be used to reduce the cost of the RAID 0 failure \nwithout reducing the performance. Say, I could configure Postgres to use \nthe third drive as backup for WAL files, with archive_timeout set to 15 \nminutes. Daily backups will be created on different server. Loss of last \n15 minute updates is something the customer can afford. Also, one day \nrestore time is case of failure is also affordable (to reinstall the OS, \nPostgres, restore backup, and load WALs).\n\nThe server will be remotely administered, that is why I'm not going for \nRAID 1, 1+0 or some other solution for which, I beleive, the local \nadministion is crucial.\n\nServer must be low budget, that is why I'm avoiding SAS drives. We will \nuse CentOS Linux and Postgres 8.4. The database will have 90% of read \nactions, and 10% of writes.\n\nI would like to hear your opinion, is this reasonable or I should \nreconsider RAID 1?\n\nRegards,\nOgnjen\n", "msg_date": "Thu, 24 Dec 2009 11:37:41 +0100", "msg_from": "Ognjen Blagojevic <[email protected]>", "msg_from_op": true, "msg_subject": "SATA drives performance" }, { "msg_contents": "A couple of thoughts occur to me:\n\n1. For reads, RAID 1 should also be good: it will allow a read to occur\nfrom whichever disk can provide the data fastest.\n\n2. Also, for reads, the more RAM you have, the better (for caching). I'd\nsuspect that another 8GB of RAM is a better expenditure than a 2nd drive\nin many cases.\n\n3. RAID 0 is twice as unreliable as no raid. I'd recommend using RAID 1\nintead. If you use the Linux software mdraid, remote admin is easy.\n\n4. If you can tolerate the risk of the most recent transactions being\nlost, look at asynchronous commit. Likewise, you *might* consider\noperating with a write cache enabled. Otherwise, the time for\nfdatasync() is what's critical.\n\n5. For a 2-disk setup, I think that main DB on one, with WAL on the\nother will beat having everything on a single RAID0.\n\n6. The WAL is relatively small: you might consider a (cheap) solid-state\ndisk for it.\n\n7. If you have 3 equal disks, try doing some experiments. My inclination\nwould be to set them all up with ext4, then have the first disk set up\nas a split between OS and WAL; the 2nd disk set up for\n/var/lib/postgresql, and the 3rd disk as a backup for everything (and a\nspare OS with SSH access).\n\n8. Lastly, if you need remote administration, and can justify another\n£100 or so, the HP \"iLO\" (integrated lights out) cards are rather\nuseful: these effectively give you VNC without OS support, even for the\nBIOS.\n\nBest wishes,\n\nRichard\n\n\nOgnjen Blagojevic wrote:\n> Hi all,\n> \n> I'm trying to figure out which HW configuration with 3 SATA drives is \n> the best in terms of reliability and performance for Postgres database.\n> \n> I'm thinking to connect two drives in RAID 0, and to keep the database \n> (and WAL) on these disks - to improve the write performance of the SATA \n> drives.\n> \n> The third drive will be used to reduce the cost of the RAID 0 failure \n> without reducing the performance. Say, I could configure Postgres to use \n> the third drive as backup for WAL files, with archive_timeout set to 15 \n> minutes. 
Daily backups will be created on different server. Loss of last \n> 15 minute updates is something the customer can afford. Also, one day \n> restore time is case of failure is also affordable (to reinstall the OS, \n> Postgres, restore backup, and load WALs).\n> \n> The server will be remotely administered, that is why I'm not going for \n> RAID 1, 1+0 or some other solution for which, I beleive, the local \n> administion is crucial.\n> \n> Server must be low budget, that is why I'm avoiding SAS drives. We will \n> use CentOS Linux and Postgres 8.4. The database will have 90% of read \n> actions, and 10% of writes.\n> \n> I would like to hear your opinion, is this reasonable or I should \n> reconsider RAID 1?\n> \n> Regards,\n> Ognjen\n> \n\n", "msg_date": "Thu, 24 Dec 2009 14:40:38 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SATA drives performance" }, { "msg_contents": "2009/12/24 Ognjen Blagojevic <[email protected]>:\n> Hi all,\n>\n> I'm trying to figure out which HW configuration with 3 SATA drives is the\n> best in terms of reliability and performance for Postgres database.\n>\n> I'm thinking to connect two drives in RAID 0, and to keep the database (and\n> WAL) on these disks - to improve the write performance of the SATA drives.\n>\n> The third drive will be used to reduce the cost of the RAID 0 failure\n> without reducing the performance. Say, I could configure Postgres to use the\n> third drive as backup for WAL files, with archive_timeout set to 15 minutes.\n> Daily backups will be created on different server. Loss of last 15 minute\n> updates is something the customer can afford. Also, one day restore time is\n> case of failure is also affordable (to reinstall the OS, Postgres, restore\n> backup, and load WALs).\n>\n> The server will be remotely administered, that is why I'm not going for RAID\n> 1, 1+0 or some other solution for which, I beleive, the local administion is\n> crucial.\n>\n> Server must be low budget, that is why I'm avoiding SAS drives. We will use\n> CentOS Linux and Postgres 8.4. The database will have 90% of read actions,\n> and 10% of writes.\n>\n> I would like to hear your opinion, is this reasonable or I should reconsider\n> RAID 1?\n\nIf you're running RAID-0 and suffer a drive failure, the system\nbecomes somewhat less cheaper because you now have to rescue it and\nget it up and running again. I.e. you've moved your cost from\nhardware to your time.\n\nI'd recommend RAID-1 with a 3 disk mirror. Linux now knows to read\nfrom > 1 drive at a time even for a single user to get very good read\nbandwidth ( I routinely see read speeds on a pair of WD Black 7200 RPM\nSATA drives approaching 200MB/s (they are ~100MB/s each). Your\nredundancy is increased, so that should one drive fail you're still\ncompletely redundant. Also 1TB drives are CHEAP nowadays, even the WD\nblacks and similar drives from other manufacturers. 
If you need more\nstorage than a single 1TB drive can provide, then you'll need some\nother answer.\n", "msg_date": "Thu, 24 Dec 2009 07:57:31 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SATA drives performance" }, { "msg_contents": "\nHello,\n\nInstead of using 3 disks in RAID-0 and one without RAID for archive, I\nwould rather invest into one extra disk and have either a RAID 1+0\nsetup or use two disks in RAID-1 for the WAL and two disks in RAID-1\nfor the main database (I'm not sure which perform better between those\ntwo solutions).\n\nRAID-1 will give you about twice as fast reads as no RAID (and RAID\n1+0 will give you twice as fast as RAID 0), with no significant\npenalty for writing, and it'll save a lot of manpower in case on disk\ndies.\n\nIf you can afford hot-swappable disks, you can even replace a failed\ndisk live, in a few minutes, with no failure at software level.\n\nEverything can be remotely setup, including adding/removing a disk\nfrom RAID array, if you use Linux software RAID (mdadm), except of\ncourse the physical swap of the disk, but that can be done by a\nnon-technician.\n\nThis solution costs only one extra disk (which is quite cheap\nnowadays) and will deliver enhanced performances and save a lot of\nmanpower and downtime in case of disk breaking.\n\nRegards,\n\n-- \nGa�l Le Mignot - [email protected]\nPilot Systems - 9, rue Desargues - 75011 Paris\nTel : +33 1 44 53 05 55 - www.pilotsystems.net\nG�rez vos contacts et vos newsletters : www.cockpit-mailing.com\n", "msg_date": "Thu, 24 Dec 2009 16:44:14 +0100", "msg_from": "[email protected] (=?iso-8859-1?Q?Ga=EBl?= Le Mignot)", "msg_from_op": false, "msg_subject": "Re: SATA drives performance" }, { "msg_contents": "Richard Neill wrote:\n> 3. RAID 0 is twice as unreliable as no raid. I'd recommend using RAID 1\n> intead. If you use the Linux software mdraid, remote admin is easy.\n\nThe main thing to be wary of with Linux software RAID-1 is that you \nconfigure things so that both drives are capable of booting the system. \nIt's easy to mirror the data, but not the boot loader and the like.\n\n\n> 7. If you have 3 equal disks, try doing some experiments. My inclination\n> would be to set them all up with ext4...\n\nI have yet to yet a single positive thing about using ext4 for \nPostgreSQL. Stick with ext3, where the problems you might run into are \nat least well understood and performance is predictable.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Thu, 24 Dec 2009 10:51:41 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SATA drives performance" }, { "msg_contents": "Ga�l Le Mignot wrote:\n> This solution costs only one extra disk (which is quite cheap\n> nowadays)\n\nI would wager that the system being used here only has enough space to \nhouse 3 drives, thus the question, which means that adding a fourth \ndrive probably requires buying a whole new server. Nowadays the drives \nthemselves are rarely the limiting factor on how many people use, since \nyou can get a stack of them for under $100 each. 
Instead the limit for \nsmall servers is always based on the physical enclosure and then \npotentially the number of available drive ports.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Thu, 24 Dec 2009 11:42:27 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SATA drives performance" }, { "msg_contents": "On 12/24/2009 10:51 AM, Greg Smith wrote:\n>> 7. If you have 3 equal disks, try doing some experiments. My inclination\n>> would be to set them all up with ext4...\n>\n> I have yet to yet a single positive thing about using ext4 for \n> PostgreSQL. Stick with ext3, where the problems you might run into \n> are at least well understood and performance is predictable.\n>\n\nHi Greg:\n\nCan you be more specific? I am using ext4 without problems than I have \ndiscerned - but mostly for smaller databases (~10 databases, one almost \nabout 1 Gbyte, most under 500 Mbytes).\n\nIs it the delayed allocation feature that is of concern? I believe this \nfeature is in common with other file systems such as XFS, and provided \nthat the caller is doing things \"properly\" according to POSIX and/or the \nfile system authors understanding of POSIX, which includes \nfsync()/fdatasync()/O_DIRECT (which PostgreSQL does?), everything is fine?\n\nFile systems failures have been pretty rare for me lately, so it's hard \nto say for sure whether my setup is really running well until it does \nfail one day and I find out. (Not too concerned, though, as I keep off \nsite pg_dump backups of the database on a regular schedule - the \ndatabases are small enough to afford this :-) )\n\nCheers,\nmark\n\n-- \nMark Mielke<[email protected]>\n\n", "msg_date": "Thu, 24 Dec 2009 11:42:46 -0500", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SATA drives performance" }, { "msg_contents": "Mark Mielke wrote:\n> Can you be more specific? I am using ext4 without problems than I have \n> discerned - but mostly for smaller databases (~10 databases, one \n> almost about 1 Gbyte, most under 500 Mbytes).\n\nEvery time I do hear about ext4, so far it's always in the context of \nsomething that doesn't work well--not hearing about improvements yet. \nFor example, there was a thread on this list earlier this month titled \n\"8.4.1 ubuntu karmic slow createdb\" that had a number of people chime \nsaying they weren't happy with ext4 for various reasons.\n\nAlso, I have zero faith in the ability of the Linux kernel development \nprocess to produce stable code anymore, they're just messing with too \nmany things every single day. Any major new features that come out of \nthere I assume need a year or two to stabilize before I'll put a \nproduction server on them and feel safe, because that this point a \n\"stable release\" means nothing in terms of kernel QA. Something major \nlike a filesystem introduction would be closer to the two year estimate \nside. We're not even remotely close to stable yet with ext4 when stuff \nlike http://bugzilla.kernel.org/show_bug.cgi?id=14354 is still going \non. My rough estimate is that ext4 becomes usable and free of major \nbugs in late 2010, best case. 
At this point anyone who deploys it is \nstill playing with fire.\n\n> File systems failures have been pretty rare for me lately, so it's \n> hard to say for sure whether my setup is really running well until it \n> does fail one day and I find out.\n\nAll of the ext4 issues I've heard of that worry me are either a) \nperformance related and due to the barrier code not doing what was \nexpected, or b) crash related. No number of anecdotal \"it works for me\" \nreports can make up for those classes of issue because you will only see \nboth under very specific circumstances. I'm glad you have a good backup \nplan though.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n", "msg_date": "Thu, 24 Dec 2009 12:05:42 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SATA drives performance" }, { "msg_contents": "\nGreg Smith wrote:\n> Richard Neill wrote:\n>> 3. RAID 0 is twice as unreliable as no raid. I'd recommend using RAID 1\n>> intead. If you use the Linux software mdraid, remote admin is easy.\n> \n> The main thing to be wary of with Linux software RAID-1 is that you \n> configure things so that both drives are capable of booting the system. \n> It's easy to mirror the data, but not the boot loader and the like.\n\nGood point. I actually did this on a home PC (2 disks in RAID 1). The\nsolution is simple: just \"grub-install /dev/sda; grub-install /dev/sdb\"\nand that's all you have to do, provided that /boot is on the raid array.\n\nOf course, with a server machine, it's nearly impossible to use mdadm\nraid: you are usually compelled to use a hardware raid card. Those are a\npain, and less configurable, but it will take care of the bootloader issue.\n\nObviously, test it both ways.\n\n\n> \n> \n>> 7. If you have 3 equal disks, try doing some experiments. My inclination\n>> would be to set them all up with ext4...\n> \n> I have yet to yet a single positive thing about using ext4 for \n> PostgreSQL. Stick with ext3, where the problems you might run into are \n> at least well understood and performance is predictable.\n\nI did some measurements on fdatasync() performance for ext2,ext3,ext4.\n\nI found ext2 was fastest, ext4 was twice as slow as ext2, and ext3 was\nabout 5 times slower than ext2. Also, ext4 is doesn't having an\nappallingly slow fsck.\n\nWe've had pretty good results from ext4.\n\nRichard\n\n\n\n\n", "msg_date": "Thu, 24 Dec 2009 17:12:40 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SATA drives performance" }, { "msg_contents": "On 12/24/2009 05:12 PM, Richard Neill wrote:\n> Of course, with a server machine, it's nearly impossible to use mdadm\n> raid: you are usually compelled to use a hardware raid card.\n\nCould you expand on that?\n\n- Jeremy\n", "msg_date": "Thu, 24 Dec 2009 17:28:23 +0000", "msg_from": "Jeremy Harris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SATA drives performance" }, { "msg_contents": "\n\nJeremy Harris wrote:\n> On 12/24/2009 05:12 PM, Richard Neill wrote:\n>> Of course, with a server machine, it's nearly impossible to use mdadm\n>> raid: you are usually compelled to use a hardware raid card.\n> \n> Could you expand on that?\n\nBoth of the last machines I bought (an IBM X3550 and an HP DL380) come \nwith hardware raid solutions. These are an utter nuisance because:\n\n - they can only be configured from the BIOS (or with a\n bootable utility CD). 
Linux has very basic monitoring tools,\n but no way to reconfigure the array, or add disks to empty\n hot-swap slots while the system is running.\n\n - If there is a Linux raid config program, it's not part of the\n main packaged distro, but usually a pre-built binary, available\n for only one release/kernel of the wrong distro.\n\n - the IBM one had dodgy firmware, which, until updated, caused the\n disk to totally fail after a few days.\n\n - you pay a lot of money for something effectively pointless, and\n have less control and less flexibility.\n\nAfter my experience with the X3550, I hunted for any server that would \nship without hardware raid, i.e. connect the 8 SATA hotswap slots direct \nto the motherboard, or where the hardware raid could be de-activated \ncompletely, and put into pass-through mode. Neither HP nor IBM make such \na thing.\n\nRichard\n\n\n\n\n> \n> - Jeremy\n> \n", "msg_date": "Thu, 24 Dec 2009 18:09:46 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SATA drives performance" }, { "msg_contents": "On Thu, Dec 24, 2009 at 11:09 AM, Richard Neill <[email protected]> wrote:\n>\n>\n> Jeremy Harris wrote:\n>>\n>> On 12/24/2009 05:12 PM, Richard Neill wrote:\n>>>\n>>> Of course, with a server machine, it's nearly impossible to use mdadm\n>>> raid: you are usually compelled to use a hardware raid card.\n>>\n>> Could you expand on that?\n>\n> Both of the last machines I bought (an IBM X3550 and an HP DL380) come with\n> hardware raid solutions. These are an utter nuisance because:\n>\n>  - they can only be configured from the BIOS (or with a\n>    bootable utility CD). Linux has very basic monitoring tools,\n>    but no way to reconfigure the array, or add disks to empty\n>    hot-swap slots while the system is running.\n>\n>  - If there is a Linux raid config program, it's not part of the\n>    main packaged distro, but usually a pre-built binary, available\n>    for only one release/kernel of the wrong distro.\n>\n>  - the IBM one had dodgy firmware, which, until updated, caused the\n>    disk to totally fail after a few days.\n>\n>  - you pay a lot of money for something effectively pointless, and\n>    have less control and less flexibility.\n>\n> After my experience with the X3550, I hunted for any server that would ship\n> without hardware raid, i.e. connect the 8 SATA hotswap slots direct to the\n> motherboard, or where the hardware raid could be de-activated completely,\n> and put into pass-through mode. Neither HP nor IBM make such a thing.\n\nYep. And that's why I never order servers from them. There are\ndozens of reputable white box builders (I use Aberdeen who give me a 5\nyear all parts warranty and incredible customer service, but there are\nplenty to choose from) and they build the machine I ask them to build.\n For hardware RAID I use Areca 1680 series, and they also provide me\nwith machines with software RAID for lighter loads (slave dbs,\nreporting dbs, and stats dbs)\n", "msg_date": "Thu, 24 Dec 2009 11:32:15 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SATA drives performance" }, { "msg_contents": "Richard and others, thank you all for your answers.\n\nMy comments inline.\n\nRichard Neill wrote:\n > 2. Also, for reads, the more RAM you have, the better (for caching). 
I'd\n > suspect that another 8GB of RAM is a better expenditure than a 2nd drive\n > in many cases.\n\nThe size of the RAM is already four times the size of the database, so I \nbelieve I won't get any more benefit if it is increased. The number of \nsimultaneous connections to the database is small -- around 5.\n\nWhat I'm trying to do with the hard disk configuration is to increase \nthe write speed.\n\n\n > 3. RAID 0 is twice as unreliable as no raid. I'd recommend using RAID 1\n > intead. If you use the Linux software mdraid, remote admin is easy.\n\nNo, actually it is an HP ML series server with HW RAID. I don't have too \nmuch experience with it, but I believe that the remote administration \nmight be hard. And that was the main reason I was avoiding RAID 1.\n\n\n > 5. For a 2-disk setup, I think that main DB on one, with WAL on the\n > other will beat having everything on a single RAID0.\n >\n > 6. The WAL is relatively small: you might consider a (cheap) solid-state\n > disk for it.\n\nThese are exactly the things I was also considering -- but I needed advice \nfrom people who have tried it already.\n\nRegards,\nOgnjen\n\n\n", "msg_date": "Thu, 24 Dec 2009 22:09:41 +0100", "msg_from": "Ognjen Blagojevic <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SATA drives performance" } ]
[ { "msg_contents": "Hi,\nWe currently have a large table (9 million rows) of which only the last\ncouple of days worth of data is queried on a regular basis.\nTo improve performance we are thinking of partitioning the table.\n\nOne idea is:\nCurrent_data = last days worth\narchive_data < today (goes back to 2005)\n\nThe idea being proposed at work is:\ncurrent_data = today's data\nprior years data - be broken down into one table per day\narchive_data - data older than a year.\n\nMy question is:\na) Does postgres suffer a performance hit say if there are 200 child tables.\nb) What about aggregation between dates in the last year. eg total sales for\nfirm a for the last year. It will need to look up n number of tables.\n\nAny ideas, tips, gotchas in implementing partitioning would be welcome. It\nis a somewhat mission critical (not trading, so not as mission critical)\nsystem.\n\nHow expensive is maintaining so many partitions both in terms of my writing\n/ maintaining scripts and performance.\n\nThanks in advance.\nRadhika\n\nHi,We currently have a large table (9 million rows) of which only the last couple of days worth of data is queried on a regular basis.To improve performance we are thinking of partitioning the table.One idea is:\nCurrent_data = last days wortharchive_data < today (goes back to 2005)The idea being proposed at work is:current_data = today's dataprior years data - be broken down into one table per dayarchive_data - data older than a year.\nMy question is:a) Does postgres suffer a performance hit say if there are 200 child tables.b) What about aggregation between dates in the last year. eg total sales for firm a  for the last year. It will need to look up n number of tables.\nAny ideas, tips, gotchas in implementing partitioning would be welcome. It is a somewhat mission critical (not trading, so not as mission critical) system. How expensive is maintaining so many partitions both in terms of my writing / maintaining scripts and performance.\nThanks in advance.Radhika", "msg_date": "Thu, 24 Dec 2009 09:42:25 -0500", "msg_from": "Radhika S <[email protected]>", "msg_from_op": true, "msg_subject": "Performance with partitions/inheritance and multiple tables" }, { "msg_contents": "Radhika,\n\nIf the data is 9 million rows, then I would suggest that you leave it as it is, unless the server configuration and the number of users firing queries simultaneously is a matter of concern.\n\nTry creating indexes on often used fields and use EXPLAIN to speed performance of the queries ... and of course proper configuration of autovacuum. I have seen query results within a few ms. 
on similar amount of data on a 2GB RHEL RAID 5 system, so it should not have been an issue.\n\nHTH,\n\nShrirang Chitnis\n------------------------------------------------------------------------------------------------------\n\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Radhika S\nSent: Thursday, December 24, 2009 8:12 PM\nTo: [email protected]\nSubject: [PERFORM] Performance with partitions/inheritance and multiple tables\n\nHi,\nWe currently have a large table (9 million rows) of which only the last couple of days worth of data is queried on a regular basis.\nTo improve performance we are thinking of partitioning the table.\n\nOne idea is:\nCurrent_data = last days worth\narchive_data < today (goes back to 2005)\n\nThe idea being proposed at work is:\ncurrent_data = today's data\nprior years data - be broken down into one table per day\narchive_data - data older than a year.\n\nMy question is:\na) Does postgres suffer a performance hit say if there are 200 child tables.\nb) What about aggregation between dates in the last year. eg total sales for firm a for the last year. It will need to look up n number of tables.\n\nAny ideas, tips, gotchas in implementing partitioning would be welcome. It is a somewhat mission critical (not trading, so not as mission critical) system.\n\nHow expensive is maintaining so many partitions both in terms of my writing / maintaining scripts and performance.\n\nThanks in advance.\nRadhika\n\nThe information contained in this message, including any attachments, is attorney privileged and/or confidential information intended only for the use of the individual or entity named as addressee. The review, dissemination, distribution or copying of this communication by or to anyone other than the intended addressee is strictly prohibited. If you have received this communication in error, please immediately notify the sender by replying to the message and destroy all copies of the original message.\n", "msg_date": "Thu, 24 Dec 2009 10:46:49 -0500", "msg_from": "Shrirang Chitnis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance with partitions/inheritance and multiple\n tables" }, { "msg_contents": "The recommended partitioning guideline is if your table exceeds 2G\n\nPartitioning benefits:\n\n1. Purging old data very quickly (this is one of the biggest\nbenefits...especially if you have to purge very often...dont even\nthink of using DELETE)\n\n2. Performance for certain types of queries where full table scans\nbenefit from a smaller table size (and hence the smaller partitio will\nperform better)\n\nDisadvantages:\n\nYou have to maintain scripts to drop/create partitions. Partitions are\nnot first-class objects in Postgres yet (hopefully in a future\nversion)\n\nIf you are not sure about how large your tables will get...bite the\nbullet and partition your data. 
You will be glad you did so.\n\nOn Thu, Dec 24, 2009 at 6:42 AM, Radhika S <[email protected]> wrote:\n> Hi,\n> We currently have a large table (9 million rows) of which only the last\n> couple of days worth of data is queried on a regular basis.\n> To improve performance we are thinking of partitioning the table.\n>\n> One idea is:\n> Current_data = last days worth\n> archive_data < today (goes back to 2005)\n>\n> The idea being proposed at work is:\n> current_data = today's data\n> prior years data - be broken down into one table per day\n> archive_data - data older than a year.\n>\n> My question is:\n> a) Does postgres suffer a performance hit say if there are 200 child tables.\n> b) What about aggregation between dates in the last year. eg total sales for\n> firm a  for the last year. It will need to look up n number of tables.\n>\n> Any ideas, tips, gotchas in implementing partitioning would be welcome. It\n> is a somewhat mission critical (not trading, so not as mission critical)\n> system.\n>\n> How expensive is maintaining so many partitions both in terms of my writing\n> / maintaining scripts and performance.\n>\n> Thanks in advance.\n> Radhika\n>\n", "msg_date": "Tue, 29 Dec 2009 13:37:13 -0800", "msg_from": "Anj Adu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance with partitions/inheritance and multiple\n\ttables" } ]
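The thread above keeps the partitioning advice at the level of strategy, so a minimal sketch of what date-based partitioning actually looks like on the 8.3/8.4-era releases under discussion may help. It uses the inheritance-plus-constraint-exclusion mechanism of that period; all names here (sales, trade_date, sales_2009_12) are invented for illustration rather than taken from the poster's schema, and a real deployment would still need the maintenance script the poster worries about, to create the next partition and drop expired ones.

    -- Parent table: defines the columns, holds no rows of its own.
    CREATE TABLE sales (
        id         bigint        NOT NULL,
        firm       text          NOT NULL,
        trade_date date          NOT NULL,
        amount     numeric(12,2)
    );

    -- One child per period; the CHECK constraint is what allows
    -- constraint exclusion to skip partitions a query cannot touch.
    CREATE TABLE sales_2009_12 (
        CHECK (trade_date >= DATE '2009-12-01'
           AND trade_date <  DATE '2010-01-01')
    ) INHERITS (sales);

    CREATE INDEX sales_2009_12_trade_date_idx ON sales_2009_12 (trade_date);

    -- Route inserts against the parent into the right child.
    CREATE OR REPLACE FUNCTION sales_insert_trigger() RETURNS trigger AS $$
    BEGIN
        IF NEW.trade_date >= DATE '2009-12-01'
           AND NEW.trade_date < DATE '2010-01-01' THEN
            INSERT INTO sales_2009_12 VALUES (NEW.*);
        ELSE
            RAISE EXCEPTION 'no partition for trade_date %', NEW.trade_date;
        END IF;
        RETURN NULL;   -- the row is already in the child; skip the parent
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER sales_insert_trg
        BEFORE INSERT ON sales
        FOR EACH ROW EXECUTE PROCEDURE sales_insert_trigger();

    -- With constraint_exclusion enabled (SET constraint_exclusion = on;
    -- the 8.4 default of 'partition' also covers this case), a query such as
    --     SELECT sum(amount) FROM sales WHERE trade_date >= DATE '2009-12-24';
    -- scans only the partitions whose CHECK constraints can match, and
    -- purging a whole period is simply DROP TABLE sales_2009_12;

On question (b) in the original post: an aggregate spanning a full year still has to visit every partition in that range, so very fine-grained daily partitions mostly pay off for purging and for queries confined to a day or two, while wider (e.g. monthly) partitions keep the number of child tables -- and the planning overhead -- down.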
[ { "msg_contents": "This isn't true. IBMs IPS series controllers can the checked and configured via the ipssend utility that works very well in 2.6.x LINUX.\n\n\"Scott Marlowe\" <[email protected]> wrote:\n\n>On Thu, Dec 24, 2009 at 11:09 AM, Richard Neill <[email protected]> wrote:\r\n>>\r\n>>\r\n>> Jeremy Harris wrote:\r\n>>>\r\n>>> On 12/24/2009 05:12 PM, Richard Neill wrote:\r\n>>>>\r\n>>>> Of course, with a server machine, it's nearly impossible to use mdadm\r\n>>>> raid: you are usually compelled to use a hardware raid card.\r\n>>>\r\n>>> Could you expand on that?\r\n>>\r\n>> Both of the last machines I bought (an IBM X3550 and an HP DL380) come with\r\n>> hardware raid solutions. These are an utter nuisance because:\r\n>>\r\n>>  - they can only be configured from the BIOS (or with a\r\n>>    bootable utility CD). Linux has very basic monitoring tools,\r\n>>    but no way to reconfigure the array, or add disks to empty\r\n>>    hot-swap slots while the system is running.\r\n>>\r\n>>  - If there is a Linux raid config program, it's not part of the\r\n>>    main packaged distro, but usually a pre-built binary, available\r\n>>    for only one release/kernel of the wrong distro.\r\n>>\r\n>>  - the IBM one had dodgy firmware, which, until updated, caused the\r\n>>    disk to totally fail after a few days.\r\n>>\r\n>>  - you pay a lot of money for something effectively pointless, and\r\n>>    have less control and less flexibility.\r\n>>\r\n>> After my experience with the X3550, I hunted for any server that would ship\r\n>> without hardware raid, i.e. connect the 8 SATA hotswap slots direct to the\r\n>> motherboard, or where the hardware raid could be de-activated completely,\r\n>> and put into pass-through mode. Neither HP nor IBM make such a thing.\r\n>\r\n>Yep. And that's why I never order servers from them. There are\r\n>dozens of reputable white box builders (I use Aberdeen who give me a 5\r\n>year all parts warranty and incredible customer service, but there are\r\n>plenty to choose from) and they build the machine I ask them to build.\r\n> For hardware RAID I use Areca 1680 series, and they also provide me\r\n>with machines with software RAID for lighter loads (slave dbs,\r\n>reporting dbs, and stats dbs)\r\n>\r\n>-- \r\n>Sent via pgsql-performance mailing list ([email protected])\r\n>To make changes to your subscription:\r\n>http://www.postgresql.org/mailpref/pgsql-performance\r\n\n--\nMessage composed using K-9 mail on Android.\nApologies for improper reply quoting (not supported) by client.", "msg_date": "Thu, 24 Dec 2009 16:18:44 -0600", "msg_from": "Adam Tauno Williams <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SATA drives performance" }, { "msg_contents": "\n\nAdam Tauno Williams wrote:\n> This isn't true. IBMs IPS series controllers can the checked and configured via the ipssend utility that works very well in 2.6.x LINUX.\n> \n\nUnfortunately, what we got (in the IBM) was the garbage ServeRaid 8kl \ncard. This one is atrocious - it shipped with a hideous firmware bug. 
\nAnd there is no way to bypass it.\n\nThe HP have the P400 cards, which are decent in themselves, just not as \ngood as software raid.\n\nRichard\n\n\n> \"Scott Marlowe\" <[email protected]> wrote:\n> \n>> On Thu, Dec 24, 2009 at 11:09 AM, Richard Neill <[email protected]> wrote:\n>>>\n>>> Jeremy Harris wrote:\n>>>> On 12/24/2009 05:12 PM, Richard Neill wrote:\n>>>>> Of course, with a server machine, it's nearly impossible to use mdadm\n>>>>> raid: you are usually compelled to use a hardware raid card.\n>>>> Could you expand on that?\n>>> Both of the last machines I bought (an IBM X3550 and an HP DL380) come with\n>>> hardware raid solutions. These are an utter nuisance because:\n>>>\n>>> - they can only be configured from the BIOS (or with a\n>>> bootable utility CD). Linux has very basic monitoring tools,\n>>> but no way to reconfigure the array, or add disks to empty\n>>> hot-swap slots while the system is running.\n>>>\n>>> - If there is a Linux raid config program, it's not part of the\n>>> main packaged distro, but usually a pre-built binary, available\n>>> for only one release/kernel of the wrong distro.\n>>>\n>>> - the IBM one had dodgy firmware, which, until updated, caused the\n>>> disk to totally fail after a few days.\n>>>\n>>> - you pay a lot of money for something effectively pointless, and\n>>> have less control and less flexibility.\n>>>\n>>> After my experience with the X3550, I hunted for any server that would ship\n>>> without hardware raid, i.e. connect the 8 SATA hotswap slots direct to the\n>>> motherboard, or where the hardware raid could be de-activated completely,\n>>> and put into pass-through mode. Neither HP nor IBM make such a thing.\n>> Yep. And that's why I never order servers from them. There are\n>> dozens of reputable white box builders (I use Aberdeen who give me a 5\n>> year all parts warranty and incredible customer service, but there are\n>> plenty to choose from) and they build the machine I ask them to build.\n>> For hardware RAID I use Areca 1680 series, and they also provide me\n>> with machines with software RAID for lighter loads (slave dbs,\n>> reporting dbs, and stats dbs)\n>>\n>> -- \n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> --\n> Message composed using K-9 mail on Android.\n> Apologies for improper reply quoting (not supported) by client.\n", "msg_date": "Thu, 24 Dec 2009 22:51:13 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SATA drives performance" }, { "msg_contents": "On Thu, Dec 24, 2009 at 3:51 PM, Richard Neill <[email protected]> wrote:\n>\n>\n> Adam Tauno Williams wrote:\n>>\n>> This isn't true.  IBMs IPS series controllers can the checked and\n>> configured via the ipssend utility that works very well in 2.6.x LINUX.\n>>\n>\n> Unfortunately, what we got (in the IBM) was the garbage ServeRaid 8kl card.\n> This one is atrocious - it shipped with a hideous firmware bug. And there is\n> no way to bypass it.\n>\n> The HP have the P400 cards, which are decent in themselves, just not as good\n> as software raid.\n\nYeah, the HP400 gets pretty meh reviews here on the lists. 
The P600\nis adequate and the P800 seems to be a good performer.\n\nCan you replace the IBM RAID controller with some other controller?\nEven just a simple 4 or 8 port SATA card with no RAID capability would\nbe better than something that locks up.\n\nPersonally I'd call my rep and ask him to come pick up his crap server\nand give me a check to replace it if it was that bad.\n", "msg_date": "Thu, 24 Dec 2009 15:57:38 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SATA drives performance" }, { "msg_contents": "\n\nScott Marlowe wrote:\n> On Thu, Dec 24, 2009 at 3:51 PM, Richard Neill <[email protected]> wrote:\n>>\n>> Adam Tauno Williams wrote:\n>>> This isn't true. IBMs IPS series controllers can the checked and\n>>> configured via the ipssend utility that works very well in 2.6.x LINUX.\n>>>\n>> Unfortunately, what we got (in the IBM) was the garbage ServeRaid 8kl card.\n>> This one is atrocious - it shipped with a hideous firmware bug. And there is\n>> no way to bypass it.\n>>\n> Can you replace the IBM RAID controller with some other controller?\n> Even just a simple 4 or 8 port SATA card with no RAID capability would\n> be better than something that locks up.\n\nA replacement would have been nice, however the 8kl is very tightly \nintegrated with the motherboard and the backplane. We'd have had to buy \na PCI-X card, and then get out the soldering iron to fix the cables.\n\nTo be fair, the 8kl is now working OK; also there was a note in the box \nmentioning that firmware updates should be applied if available. What I \nfound unbelievable was that IBM shipped the server to me in a state with \nknown crashing firmware (a sufficiently bad bug imho to merit a product \nrecall), and hadn't bothered to flash it themselves in the factory. \nUsually BIOS updates are only applied by the end user if there is a \nspecific issue to fix, and if the product line has been out for years, \nbut that particular server was only assembled 3 weeks ago, why would one \nexpect a company of IBM's standing to ship it in that state.\n\nRichard\n", "msg_date": "Fri, 25 Dec 2009 00:15:11 +0000", "msg_from": "Richard Neill <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SATA drives performance" }, { "msg_contents": "On Thu, Dec 24, 2009 at 5:15 PM, Richard Neill <[email protected]> wrote:\n>\n>\n> Scott Marlowe wrote:\n>>\n>> On Thu, Dec 24, 2009 at 3:51 PM, Richard Neill <[email protected]> wrote:\n>>>\n>>> Adam Tauno Williams wrote:\n>>>>\n>>>> This isn't true.  IBMs IPS series controllers can the checked and\n>>>> configured via the ipssend utility that works very well in 2.6.x LINUX.\n>>>>\n>>> Unfortunately, what we got (in the IBM) was the garbage ServeRaid 8kl\n>>> card.\n>>> This one is atrocious - it shipped with a hideous firmware bug. And there\n>>> is\n>>> no way to bypass it.\n>>>\n>> Can you replace the IBM RAID controller with some other controller?\n>> Even just a simple 4 or 8 port SATA card with no RAID capability would\n>> be better than something that locks up.\n>\n> A replacement would have been nice, however the 8kl is very tightly\n> integrated with the motherboard and the backplane. We'd have had to buy a\n> PCI-X card, and then get out the soldering iron to fix the cables.\n>\n> To be fair, the 8kl is now working OK; also there was a note in the box\n> mentioning that firmware updates should be applied if available. 
What I\n> found unbelievable was that IBM shipped the server to me in a state with\n> known crashing firmware (a sufficiently bad bug imho to merit a product\n> recall), and hadn't bothered to flash it themselves in the factory. Usually\n> BIOS updates are only applied by the end user if there is a specific issue\n> to fix, and if the product line has been out for years, but that particular\n> server was only assembled 3 weeks ago, why would one expect a company of\n> IBM's standing to ship it in that state.\n\nIt does kind of knock the stuffing out of the argument that buying\nfrom the big vendors ensures good hardware experiences. I've had\nsimilar problems from all the big vendors in the past. I can't\nimagine getting treated that way by my current supplied. It's one\nthing for some obscure bug in a particular ubuntu kernel to interact\npoorly with a piece of equipment, but when a hardware RAID controller\narrives in a basically broken state, that's inexcusable. It's really\nnot too much to expect working hardware on arrival.\n", "msg_date": "Thu, 24 Dec 2009 17:52:22 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SATA drives performance" }, { "msg_contents": "--- On Fri, 25/12/09, Scott Marlowe <[email protected]> wrote:\n\n> It does kind of knock the stuffing out of the argument that\n> buying\n> from the big vendors ensures good hardware\n> experiences.  I've had\n> similar problems from all the big vendors in the\n> past.  I can't\n> imagine getting treated that way by my current\n> supplied.  It's one\n> thing for some obscure bug in a particular ubuntu kernel to\n> interact\n> poorly with a piece of equipment, but when a hardware RAID\n> controller\n> arrives in a basically broken state, that's\n> inexcusable.  It's really\n> not too much to expect working hardware on arrival.\n\nLast month I found myself taking a powerdrill to our new dell boxes in order to route cables to replacement raid cards. Having to do that made me feel really unprofessional and a total cowboy, but it was either that or shitty performance.\n\n\n\n\n \n", "msg_date": "Sun, 27 Dec 2009 15:36:00 +0000 (GMT)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SATA drives performance" }, { "msg_contents": "Glyn Astill wrote:\n> Last month I found myself taking a powerdrill to our new dell\n> boxes in order to route cables to replacement raid cards. Having\n> to do that made me feel really unprofessional and a total cowboy,\n> but it was either that or shitty performance.\n\nCan you be more specific? Which Dell server, which RAID card, what were the performance problems, and what did you buy to fix them?\n\nWe're thinking of expanding our servers, and so far have had no complaints about our Dell servers. My colleagues think we should buy more of the same, but if there's some new problem, I'd sure like to know about it.\n\nThanks!\nCraig\n", "msg_date": "Mon, 28 Dec 2009 10:37:12 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SATA drives performance" } ]
[ { "msg_contents": "Tom Lane wrote:\n \n> That does look weird. Do we have a self-contained test case?\n \nI've been tinkering with this and I now have a self-contained test\ncase (SQL statements and run results attached). I've debugged through\nit and things don't seem right in set_append_rel_pathlist, since\nchildrel->rows seems to contain the total rows in each table rather\nthan the number which meet the join conditions. That is reflected in\nthe output from OPTIMIZER_DEBUG logging, but not in the query plan\nfrom ANALYZE?\n \nI'm afraid I'm a bit stuck on getting farther. Hopefully this much\nwill be of use to someone. If you could point out where I should\nhave looked next, I'd be grateful. :-)\n \n-Kevin", "msg_date": "Sat, 26 Dec 2009 14:15:10 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Order by (for 15 rows) adds 30 seconds to query\n\t time" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Tom Lane wrote:\n>> That does look weird. Do we have a self-contained test case?\n \n> I've been tinkering with this and I now have a self-contained test\n> case (SQL statements and run results attached). I've debugged through\n> it and things don't seem right in set_append_rel_pathlist, since\n> childrel->rows seems to contain the total rows in each table rather\n> than the number which meet the join conditions.\n\nYeah, that is expected. Nestloop inner indexscans have a rowcount\nestimate that is different from that of the parent table --- the\nparent's rowcount is what would be applicable for another type of\njoin, such as merge or hash, where the join condition is applied at\nthe join node not in the relation scan.\n\nThe problem here boils down to the fact that examine_variable punts on\nappendrel variables:\n\n else if (rte->inh)\n {\n /*\n * XXX This means the Var represents a column of an append\n * relation. Later add code to look at the member relations and\n * try to derive some kind of combined statistics?\n */\n }\n\nThis means you get a default estimate for the selectivity of the join\ncondition, so the joinrel size estimate ends up being 0.005 * 1 * 40000.\nThat's set long before we ever generate indexscan plans, and I don't\nthink there's any clean way to correct the size estimate when we do.\n\nFixing this has been on the to-do list since forever. I don't think\nwe'll make much progress on it until we have an explicit notion of\npartitioned tables. The approach contemplated in the comment, of\nassembling some stats on-the-fly from the stats for individual child\ntables, doesn't seem real practical from a planning-time standpoint.\nThe thing that you really want to know here is that there will be only\none matching id value in the whole partitioned table; and that would be\nsomething trivial to know if we understood about partitioning keys,\nbut it's difficult to extract from independent sets of stats.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 27 Dec 2009 16:52:31 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Order by (for 15 rows) adds 30 seconds to query time " }, { "msg_contents": "Tom Lane <[email protected]> wrote:\n \n> Yeah, that is expected. 
Nestloop inner indexscans have a rowcount\n> estimate that is different from that of the parent table --- the\n> parent's rowcount is what would be applicable for another type of\n> join, such as merge or hash, where the join condition is applied\n> at the join node not in the relation scan.\n> \n> The problem here boils down to the fact that examine_variable\n> punts on appendrel variables:\n \n> * XXX This means the Var represents a column of an append\n> * relation. Later add code to look at the member relations and\n> * try to derive some kind of combined statistics?\n \n> This means you get a default estimate for the selectivity of the\n> join condition, so the joinrel size estimate ends up being 0.005 *\n> 1 * 40000. That's set long before we ever generate indexscan\n> plans, and I don't think there's any clean way to correct the size\n> estimate when we do.\n \nThanks for the explanation.\n \n> Fixing this has been on the to-do list since forever.\n \nWhich item? I looked and couldn't find one which seems to fit.\n(I was hoping to find a reference to a discussion thread.)\n \n> I don't think we'll make much progress on it until we have an\n> explicit notion of partitioned tables.\n \nI'm not clear that a generalized solution for partitioned tables\nwould solve the production query from the OP. The OP was using\ntable extension to model the properties of the data. In the actual\nproduction problem, the table structure involved, for example,\nmaterials -- some of which were containers (which had all the\nproperties of other materials, plus some unique to containers); so\nthe containers table extends the materials table to model that. In\nthe problem query, he wanted to find all the materials related to\nsome item. He was also joining to location, which might be (among\nother things) a waypoint or a container (both extending location). \nNote that a container is both a location and a material.\n \n> The approach contemplated in the comment, of assembling some stats\n> on-the-fly from the stats for individual child tables, doesn't\n> seem real practical from a planning-time standpoint.\n \nCan you give a thumbnail sketch of why that is?\n \n> The thing that you really want to know here is that there will be\n> only one matching id value in the whole partitioned table\n \nIt would seem to matter nearly as much if statistics indicated you\nwould get five rows out of twenty million.\n \n> it's difficult to extract from independent sets of stats.\n \nSince we make the attempt for most intermediate results, it's not\nimmediately clear to me why it's so hard here. Not that I'm\ndoubting your assertion that it *is* hard; I'm just trying to see\n*why* it is. 
Perhaps the code which generates such estimates for\neverything else could be made available here?\n \nThe usefulness of inheritance to model data would seem to be rather\nlimited without better optimizer support.\n \n-Kevin\n", "msg_date": "Mon, 28 Dec 2009 09:09:26 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Order by (for 15 rows) adds 30 seconds to query\n\t time" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Tom Lane <[email protected]> wrote:\n>> The approach contemplated in the comment, of assembling some stats\n>> on-the-fly from the stats for individual child tables, doesn't\n>> seem real practical from a planning-time standpoint.\n \n> Can you give a thumbnail sketch of why that is?\n\nWell, it would be expensive, and it's not even clear you can do it at\nall (merging histograms with overlapping bins seems like a mess for\ninstance).\n\nI think we have previously discussed the idea of generating and storing\nANALYZE stats for a whole inheritance tree, which'd solve the problem\nnicely from the planner's standpoint. I'm a bit worried about the\nlocking implications, but if it took just a SELECT lock on the child\ntables it probably wouldn't be too bad --- no worse than any other\nSELECT on the inheritance tree. Another thing that's hard to figure out\nis how autovacuum would know when to redo the stats. In a lot of common\nsituations, the inheritance parent table is empty and never changes, so\nno autovac or autoanalyze would ever get launched against it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 28 Dec 2009 12:41:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Order by (for 15 rows) adds 30 seconds to query time " } ]
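Because the self-contained test case referred to above went out as an attachment, it is not visible in the thread; the script below is a guessed-at reconstruction of its general shape, with invented table names, meant only to make the discussion easier to follow. The point it illustrates is the one Tom describes: per-child statistics exist, but examine_variable punts on the appendrel as a whole, so the join clause gets a default selectivity and the estimated row count should come out near 0.005 * 1 * 40000 = 200 rather than the single row that can actually match. On a larger schema, that kind of inflated estimate is what can push the planner away from the nestloop/inner-indexscan plan, as in the query that started the thread.

    -- Inheritance parent with two children and 40000 rows in total.
    CREATE TABLE parent  (id integer NOT NULL, val text);
    CREATE TABLE child_a () INHERITS (parent);
    CREATE TABLE child_b () INHERITS (parent);

    INSERT INTO child_a SELECT g, 'a' FROM generate_series(1, 20000) g;
    INSERT INTO child_b SELECT g, 'b' FROM generate_series(20001, 40000) g;

    CREATE INDEX child_a_id_idx ON child_a (id);
    CREATE INDEX child_b_id_idx ON child_b (id);

    ANALYZE child_a;
    ANALYZE child_b;

    -- A one-row driving table joined on the inherited key column.
    CREATE TABLE pick (id integer NOT NULL);
    INSERT INTO pick VALUES (123);
    ANALYZE pick;

    -- Each child's statistics say id is unique, but no statistics are
    -- kept for the inheritance tree as a whole, so the join selectivity
    -- falls back to the default and the join size is overestimated.
    EXPLAIN ANALYZE
    SELECT c.val
    FROM   pick p
    JOIN   parent c ON c.id = p.id;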